Gradient checking is a simple way to verify your backprop gradients by comparing them to numerical gradients from finite differences. In this task, you'll implement gradient checking for a tiny neural network layer so you can catch sign/shape mistakes early.
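To make the comparison concrete before the spec below, here is a tiny toy sketch of my own (not part of the required function): a centered finite difference against the known derivative of f(w) = w**2, using the same relative-error formula that appears in the hints further down.

```python
# Toy example: f(w) = w**2 has exact derivative 2*w.
def f(w):
    return w ** 2

w, eps = 3.0, 1e-5
g_num = (f(w + eps) - f(w - eps)) / (2 * eps)        # centered finite difference
g_anal = 2 * w                                       # analytical derivative
rel_err = abs(g_anal - g_num) / max(1e-8, abs(g_anal) + abs(g_num))
print(g_num, g_anal, rel_err)                        # rel_err should be tiny (~1e-10 or less)
```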
Implement the function specified below.

Rules:

- Use NumPy only; no autograd frameworks.
- Return the relative error as a (d,1) np.ndarray.
- Use a numerically stable sigmoid, and avoid log(0) by clipping probabilities (e.g., to [1e-12, 1-1e-12]); a sketch of both helpers follows the tables below.

Input:
| Argument | Type |
|---|---|
| W | np.ndarray |
| X | np.ndarray |
| b | np.ndarray |
| y | np.ndarray |
| eps | float |
Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |
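The stability rules above (stable sigmoid, clipped probabilities) can be handled with helpers along these lines; `sigmoid` and `bce_loss` are illustrative names of my own, not required by the spec.

```python
import numpy as np

def sigmoid(z):
    # Numerically stable sigmoid: branch on the sign of z so exp never overflows.
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def bce_loss(a, y):
    # Mean binary cross-entropy; clipping keeps log() away from 0 and 1.
    a = np.clip(a, 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(a) + (1.0 - y) * np.log(1.0 - a))
```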
Convert inputs to NumPy arrays with consistent shapes: X:(n,d), y:(n,1), W:(d,1), b:(1,) so z = X @ W + b works without broadcasting surprises.
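A minimal sketch of that shape coercion (the helper name `to_consistent_shapes` is mine, not part of the spec):

```python
import numpy as np

def to_consistent_shapes(W, X, b, y):
    # Coerce raw inputs (lists or arrays) into the shapes assumed by z = X @ W + b.
    X = np.asarray(X, dtype=float)                  # (n, d)
    y = np.asarray(y, dtype=float).reshape(-1, 1)   # (n, 1) column of 0/1 labels
    W = np.asarray(W, dtype=float).reshape(-1, 1)   # (d, 1)
    b = np.asarray(b, dtype=float).reshape(1,)      # (1,) scalar bias
    return W, X, b, y

# X @ W then has shape (n, 1); adding b broadcasts over rows only.
```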
For sigmoid + binary cross-entropy, the gradient simplifies: dz = (a - y)/n where a = sigmoid(z). Then dW = X.T @ dz gives the analytical gradient.
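In code, assuming the shapes from the previous hint, the analytical gradient is only a few lines (`analytic_grad` is an illustrative name; in practice use the stable sigmoid sketched earlier):

```python
import numpy as np

def analytic_grad(W, X, b, y):
    # dL/dW for mean binary cross-entropy on top of a sigmoid.
    n = X.shape[0]
    a = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid(z); plain form for brevity
    dz = (a - y) / n                         # (n, 1): the sigmoid + BCE simplification
    return X.T @ dz                          # (d, 1): matches W's shape
```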
Write a forward_loss(W_local) helper that returns the scalar loss. For each weight W[i,0], perturb only that entry by ±eps and compute the centered difference g_num = (L(W+eps) - L(W-eps)) / (2*eps); the corresponding entry of the result is the relative error |g_anal - g_num| / max(1e-8, |g_anal| + |g_num|). Clip a to avoid log(0) and use a stable sigmoid, as in the rules above.
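Putting the hints together, a complete sketch might look like the following. The function name `gradient_check` and the helper names are assumptions (the task statement does not fix them), and this is one reasonable implementation rather than the official solution.

```python
import numpy as np

def gradient_check(W, X, b, y, eps=1e-5):
    # Hypothetical name for the required function; signature mirrors the argument table.
    # Coerce inputs into consistent shapes: X (n, d), y (n, 1), W (d, 1), b (1,).
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    W = np.asarray(W, dtype=float).reshape(-1, 1)
    b = np.asarray(b, dtype=float).reshape(1,)
    n, d = X.shape

    def sigmoid(z):
        # Stable sigmoid: branch on sign so exp never overflows.
        out = np.empty_like(z)
        pos = z >= 0
        out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
        ez = np.exp(z[~pos])
        out[~pos] = ez / (1.0 + ez)
        return out

    def forward_loss(W_local):
        # Scalar mean binary cross-entropy for a candidate weight vector.
        a = sigmoid(X @ W_local + b)
        a = np.clip(a, 1e-12, 1 - 1e-12)
        return -np.mean(y * np.log(a) + (1.0 - y) * np.log(1.0 - a))

    # Analytical gradient via the sigmoid + BCE simplification.
    a = sigmoid(X @ W + b)
    dz = (a - y) / n
    g_anal = X.T @ dz                                  # (d, 1)

    # Numerical gradient: perturb one weight at a time with centered differences.
    rel_err = np.zeros((d, 1))
    for i in range(d):
        W_plus, W_minus = W.copy(), W.copy()
        W_plus[i, 0] += eps
        W_minus[i, 0] -= eps
        g_num = (forward_loss(W_plus) - forward_loss(W_minus)) / (2.0 * eps)
        denom = max(1e-8, abs(g_anal[i, 0]) + abs(g_num))
        rel_err[i, 0] = abs(g_anal[i, 0] - g_num) / denom
    return rel_err                                     # (d, 1) relative errors
```

With eps around 1e-5, per-weight relative errors on the order of 1e-7 or smaller typically indicate that the analytical gradient is consistent with the numerical one.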