Implement a single SGD (Stochastic Gradient Descent) parameter update step for a deep learning model, given current weights and their gradients. This tests your understanding of how training minimizes loss by moving parameters in the negative gradient direction.
The SGD update rule is:

$$w_{\text{new}} = w - \eta \, \nabla_w L$$

where $\eta$ is the learning rate and $\nabla_w L$ is the gradient of the loss w.r.t. the weights.
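For instance (illustrative numbers), a single weight $w = 0.5$ with gradient $0.1$ and learning rate $\eta = 0.1$ is updated to

$$w_{\text{new}} = 0.5 - 0.1 \times 0.1 = 0.49,$$

a small step in the negative gradient direction.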
Implement a function that takes the current weights, their gradients grads, and the learning rate lr, and returns the updated weights.
Rules:

- Compute w_new = weights - lr * grads and return it as a new NumPy array.
- Do not modify the weights array in-place.
- Use only NumPy and Python built-ins.

Input:
| Argument | Type |
|---|---|
| lr | float |
| grads | np.ndarray |
| weights | np.ndarray |
Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |
Hint: use vectorized subtraction, w_new = weights - lr * grads; NumPy handles the element-wise operation automatically. The subtraction allocates a new array, so the original weights array isn't modified, and you can return the result directly.
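As a reference, here is a minimal sketch of one possible solution. The function name sgd_update is hypothetical (the problem does not fix a name); the arguments mirror the input table above.

```python
import numpy as np

def sgd_update(weights: np.ndarray, grads: np.ndarray, lr: float) -> np.ndarray:
    """Return the weights after one SGD step, leaving `weights` untouched."""
    # Vectorized subtraction allocates a new array, so the original
    # `weights` array is not modified in-place.
    return weights - lr * grads

# Example usage with a small parameter vector and its gradient.
w = np.array([0.5, -1.0, 2.0])
g = np.array([0.1, -0.2, 0.4])
w_new = sgd_update(w, g, lr=0.1)
print(w_new)  # -> approximately [ 0.49 -0.98  1.96]
print(w)      # original weights unchanged: [ 0.5 -1.   2. ]
```

Because the expression builds a fresh array, no explicit copy is needed to satisfy the no-in-place-modification rule.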