Implement the inference (forward pass) of a simple Multi-Layer Perceptron (MLP) so you can turn an input vector into an output prediction. You’ll compute two linear layers with a ReLU activation in between, using the standard MLP formula:

z1 = W1 · x + b1
a1 = ReLU(z1)
y = W2 · a1 + b2
Implement the function with the arguments below.

| Argument | Type |
|---|---|
| x | np.ndarray |
| W1 | np.ndarray |
| W2 | np.ndarray |
| b1 | np.ndarray |
| b2 | np.ndarray |

Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |

Rules:

- Return a NumPy array.
- Use only NumPy and Python built-ins.
- Apply ReLU between the two linear layers.
Hints:

- Use np.dot(W, x) for matrix-vector multiplication.
- For ReLU, use np.maximum(0, z1) elementwise.
- Double-check dimensions: W1 has shape (d_hidden, d_in), W2 has shape (d_out, d_hidden).
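For reference, here is a minimal sketch of one way to write the forward pass; the function name mlp_forward and the example values are assumptions, since the statement does not fix them:

```python
import numpy as np

def mlp_forward(x, W1, W2, b1, b2):
    # Function name is an assumption; the statement does not specify one.
    # First linear layer: z1 = W1 · x + b1, shape (d_hidden,)
    z1 = np.dot(W1, x) + b1
    # Elementwise ReLU activation
    a1 = np.maximum(0, z1)
    # Second linear layer: y = W2 · a1 + b2, shape (d_out,)
    value = np.dot(W2, a1) + b2
    return value

# Illustrative example with d_in=2, d_hidden=2, d_out=1 (values chosen arbitrarily)
x = np.array([1.0, -2.0])
W1 = np.array([[0.5, -0.5],
               [1.0,  1.0]])   # shape (d_hidden, d_in)
b1 = np.zeros(2)
W2 = np.array([[1.0, -1.0]])   # shape (d_out, d_hidden)
b2 = np.zeros(1)
print(mlp_forward(x, W1, W2, b1, b2))  # [1.5]
```

In this example, W1 · x + b1 gives [1.5, -1.0], ReLU zeroes out the negative entry to give [1.5, 0.0], and the second linear layer produces [1.5].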