Bias–variance decomposition explains why a machine learning algorithm makes prediction errors by splitting its expected error into a part due to underfitting (bias), a part due to overfitting (variance), and irreducible noise. In this question, you'll compute bias, variance, and noise from repeated model predictions on the same inputs.
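For reference, this is the standard decomposition for squared-error loss (not stated explicitly in the problem, but it is what the quantities below estimate). For a model $\hat f$ trained on a random training set and a target $y = f(x) + \varepsilon$:

$$
\mathbb{E}\big[(y - \hat f(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat f(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{noise}},
$$

where the expectation is over independent training runs and $\sigma^2$ is the variance of $\varepsilon$.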
Implement the function.

Rules:

- Treat each row of predictions as an independent run (e.g., a different training set, bootstrap sample, or random seed) of the same ml_algorithm.
- Report bias_squared, variance, and noise as averages across all n inputs.

Input:
| Argument | Type |
|---|---|
| y_true | np.ndarray |
| predictions | np.ndarray |
Output:

| Return Name | Type |
|---|---|
| value | list of three floats: [bias_squared, variance, noise] |
Constraints:

- Use only NumPy; no ML libraries.
- predictions shape must be (m, n): m independent runs over the same n inputs (see the example after this list).
- Return global averages over all n inputs.
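For instance, a valid input with m = 2 runs over n = 3 inputs might look like the following (the values themselves are made up for illustration):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])         # true targets, shape (n,) = (3,)
predictions = np.array([[1.1, 2.0, 2.9],   # run 1
                        [0.9, 2.2, 3.1]])  # run 2 -> shape (m, n) = (2, 3)
```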
Steps:

1. Convert y_true and predictions to NumPy arrays and check shapes: y is (n,), preds is (m, n).
2. Compute the per-input mean prediction mu = preds.mean(axis=0); bias² is then mean((mu - y)**2) over inputs.
3. Variance is the average over inputs of mean((preds - mu)**2, axis=0).
4. Compute the global MSE as mean((preds - y)**2), averaging over runs and inputs; noise is the remainder: noise = mse - bias2 - var.
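A minimal NumPy sketch of these steps; the function name bias_variance_decomposition is assumed here, since the statement does not give one:

```python
import numpy as np

def bias_variance_decomposition(y_true, predictions):
    # Hypothetical name; the problem statement does not specify one.
    y = np.asarray(y_true, dtype=float)           # shape (n,)
    preds = np.asarray(predictions, dtype=float)  # shape (m, n)
    assert y.ndim == 1 and preds.ndim == 2 and preds.shape[1] == y.shape[0]

    mu = preds.mean(axis=0)                 # per-input mean prediction, shape (n,)
    bias_squared = np.mean((mu - y) ** 2)   # average squared bias over inputs
    variance = np.mean((preds - mu) ** 2)   # average over runs and inputs
    mse = np.mean((preds - y) ** 2)         # global MSE over runs and inputs
    noise = mse - bias_squared - variance   # remainder of the decomposition
    return [float(bias_squared), float(variance), float(noise)]
```

Called on the illustrative arrays above, bias_variance_decomposition(y_true, predictions) returns a three-element list matching the declared output. Note that np.mean over the full (m, n) array equals the average over inputs of the per-input means, since every input has the same number of runs.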