Build a Precision–Recall (PR) curve for a binary classifier so you can visualize the trade-off between precision and recall across different decision thresholds. You’ll compute precision/recall points by sweeping thresholds over predicted probabilities and return the curve data for evaluation.
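As a concrete example of a single point on the curve (the data and the threshold value here are illustrative, not from the problem):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1])
y_score = np.array([0.9, 0.8, 0.7, 0.6])
t = 0.7  # one decision threshold

# Everything scoring at or above the threshold is predicted positive.
y_pred = (y_score >= t).astype(int)          # [1, 1, 1, 0]
tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives: 2
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives: 1

precision = tp / (tp + fp)    # 2/3
recall = tp / np.sum(y_true)  # 2/3
```

Sweeping `t` over the distinct values of `y_score` produces one (precision, recall) pair per threshold, which is exactly the curve data this exercise asks for.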
Implement a function with the input/output contract below.
Rules:

- Thresholds are the unique values of `y_score`, sorted in descending order.
- Return `precisions`, `recalls`, and `thresholds`.

Input:

| Argument | Type |
|---|---|
| y_true | np.ndarray |
| y_score | np.ndarray |

Output:

| Return Name | Type |
|---|---|
| value | tuple |
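The problem does not give the function a name; one signature consistent with the tables above (the name `precision_recall_curve` is assumed here) would be:

```python
import numpy as np

def precision_recall_curve(y_true: np.ndarray, y_score: np.ndarray) -> tuple:
    """Return (precisions, recalls, thresholds), each a NumPy array.

    Stub only -- the body is the exercise.
    """
    raise NotImplementedError
```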
Constraints:

- No sklearn PR-curve utilities.
- Return three NumPy arrays.
- Use vectorized operations (O(N log N) time).
Hints:

- Use `np.argsort` on `-y_score` to sort scores in descending order.
- Use `np.cumsum` on the sorted labels to compute TP counts efficiently; for the top-k predictions, FP = k - TP.
- Identify the indices where `y_score` changes to extract only the unique threshold points.
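Putting the hints together, a minimal vectorized sketch (the function name `precision_recall_curve` is assumed; the problem does not name it):

```python
import numpy as np

def precision_recall_curve(y_true: np.ndarray, y_score: np.ndarray) -> tuple:
    """Return (precisions, recalls, thresholds) as NumPy arrays.

    Assumes y_true contains at least one positive label.
    """
    # Sort predictions by score, descending (stable to keep ties deterministic).
    order = np.argsort(-y_score, kind="stable")
    y_true_sorted = y_true[order]
    y_score_sorted = y_score[order]

    # TP among the top-k predictions via a cumulative sum of sorted labels;
    # the rest of the top-k are false positives: FP = k - TP.
    tp = np.cumsum(y_true_sorted)
    k = np.arange(1, len(y_true_sorted) + 1)
    fp = k - tp

    # Keep only the last index of each run of equal scores, so every
    # distinct threshold contributes exactly one curve point.
    distinct = np.where(np.diff(y_score_sorted))[0]
    idx = np.r_[distinct, len(y_score_sorted) - 1]

    precisions = tp[idx] / (tp[idx] + fp[idx])
    recalls = tp[idx] / tp[-1]          # tp[-1] == total number of positives
    thresholds = y_score_sorted[idx]
    return precisions, recalls, thresholds
```

Note this sketch returns one point per distinct threshold only; sklearn's utility additionally appends a final (precision = 1, recall = 0) endpoint, so add that yourself if you want the curve to terminate at recall 0.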