Mean Reciprocal Rank (MRR) is a common evaluation metric for NLP retrieval and ranking tasks where each query has exactly one correct answer. You’ll compute MRR from ranked prediction lists by finding the rank position of the first relevant item per query.
Implement the function described below.

Rules:

- For each query q, find the 1-based index of targets[q] inside rankings[q], then compute its reciprocal.
- If targets[q] does not appear in rankings[q], its reciprocal rank is 0.0.
- Do not use library ranking metrics (e.g., sklearn or torchmetrics).
| Argument | Type |
|---|---|
| targets | np.ndarray |
| rankings | list[np.ndarray] |
Output:

| Return Name | Type |
|---|---|
| value | float |
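Taken together, the tables pin down the argument and return types. A minimal stub consistent with them is sketched below; the function name `mean_reciprocal_rank` is an assumption, since the problem statement does not fix one.

```python
import numpy as np

# Name "mean_reciprocal_rank" is assumed; only the argument/return types are specified.
def mean_reciprocal_rank(targets: np.ndarray, rankings: list[np.ndarray]) -> float:
    """Return the mean reciprocal rank (value) over all queries."""
    ...
```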
- Input rankings is a list of NumPy arrays; targets is a NumPy array.
- No sklearn/torchmetrics ranking functions.
- A missing target contributes a reciprocal rank of 0.0.
MRR is the average over queries of 1 / (1-based rank of the target); if the target isn't in a query's ranking list, that query contributes 0.
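Written as a formula, with Q the number of queries and rank_q the 1-based position of targets[q] in rankings[q]:

$$
\mathrm{MRR} = \frac{1}{Q} \sum_{q=1}^{Q} r_q,
\qquad
r_q =
\begin{cases}
\dfrac{1}{\mathrm{rank}_q} & \text{if } \texttt{targets}[q] \in \texttt{rankings}[q] \\
0 & \text{otherwise}
\end{cases}
$$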
For each (rank_list, target), find the target’s index. In Python you can use list.index with a try/except, or with NumPy use np.where(rank_list == target).
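A small sketch of both lookup options on a single (rank_list, target) pair; the example values are illustrative only.

```python
import numpy as np

rank_list = np.array([7, 3, 9, 2])
target = 9

# Option 1: np.where returns the (0-based) indices of every match.
matches = np.where(rank_list == target)[0]
recip = 1.0 / (matches[0] + 1) if matches.size > 0 else 0.0  # 9 is 3rd -> 1/3

# Option 2: list.index raises ValueError on a miss, so wrap it in try/except.
try:
    idx = rank_list.tolist().index(target)
    recip = 1.0 / (idx + 1)
except ValueError:
    recip = 0.0
```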
Implementation pattern: loop with zip(rankings, targets); for each pair, set recip = 0.0 if there is no match, else 1.0/(idx+1); append it to recip_ranks; finally return float(np.mean(recip_ranks)).
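Following that pattern, a full sketch might look like the code below (the name `mean_reciprocal_rank` is still an assumption).

```python
import numpy as np

def mean_reciprocal_rank(targets: np.ndarray, rankings: list[np.ndarray]) -> float:
    """Average of 1 / (1-based rank of targets[q] in rankings[q]); misses count as 0.0."""
    recip_ranks = []
    for rank_list, target in zip(rankings, targets):
        matches = np.where(rank_list == target)[0]
        # Convert the 0-based index to a 1-based rank; a missing target contributes 0.0.
        recip = 0.0 if matches.size == 0 else 1.0 / (matches[0] + 1)
        recip_ranks.append(recip)
    return float(np.mean(recip_ranks))
```

For example, with targets = np.array([9, 5]) and rankings = [np.array([7, 3, 9, 2]), np.array([1, 2, 3])], the first query contributes 1/3 and the second contributes 0.0, so the function returns 1/6 ≈ 0.167.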