Build a confusion matrix for a classification model to summarize its prediction performance. For \(K\) classes, the confusion matrix entry \(C_{i,j}\) counts how often the true label is \(i\) while the predicted label is \(j\).
Implement the function with the following arguments:

| Argument | Type |
|---|---|
| y_pred | np.ndarray |
| y_true | np.ndarray |
| num_classes | int |

Rules:

- Return a NumPy array.
- Use NumPy and built-ins only.
- Rows = true labels, columns = predicted labels.

Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |
Start with a \(K \times K\) matrix of zeros where rows = true labels and columns = predicted labels.
Use a single loop over samples (or vectorize) to increment matrix[y_true[i]][y_pred[i]].
Avoid the per-sample loop by flattening pair indices: flat = y_true*K + y_pred, then use np.add.at (or np.bincount(flat, minlength=K*K)) and reshape back to (K, K).
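The steps above can be sketched as follows. This is a minimal reference implementation, not the official solution; the function name `confusion_matrix` is an assumption, since the prompt does not state one.

```python
import numpy as np

def confusion_matrix(y_true: np.ndarray, y_pred: np.ndarray,
                     num_classes: int) -> np.ndarray:
    """Return a K x K matrix C where C[i, j] counts samples whose
    true label is i and predicted label is j (rows = true, cols = predicted)."""
    # Encode each (true, pred) pair as a single flat index in [0, K*K).
    flat = y_true * num_classes + y_pred
    # Count occurrences of every flat index, then reshape back to K x K.
    counts = np.bincount(flat, minlength=num_classes * num_classes)
    return counts.reshape(num_classes, num_classes)
```

An equivalent (slower) variant initializes `np.zeros((K, K), dtype=int)` and increments `matrix[y_true[i], y_pred[i]]` in a single loop over samples; the `np.bincount` form trades that Python loop for one vectorized pass.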