
37. SGD update step

easy · General · senior

Implement a single SGD (Stochastic Gradient Descent) parameter update step for a deep learning model, given current weights and their gradients. This tests your understanding of how training minimizes loss by moving parameters in the negative gradient direction.

The SGD update rule is:

w_{t+1} = w_t - \eta \, \nabla w_t

where \eta is the learning rate and \nabla w_t is the gradient w.r.t. the weights at step t.
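For instance, a single scalar weight w_t = 0.5 with learning rate \eta = 0.1 and gradient \nabla w_t = 0.2 is updated to

w_{t+1} = 0.5 - 0.1 \times 0.2 = 0.48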

Requirements

Implement the function.
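A minimal reference sketch, assuming the function is named `sgd_update` (the name and type hints here are assumptions; the argument and return types follow the signature tables further down):

```python
import numpy as np

def sgd_update(weights: np.ndarray, grads: np.ndarray, lr: float) -> np.ndarray:
    # One SGD step: move each parameter against its gradient,
    # scaled by the learning rate. The subtraction allocates a
    # new array, so the caller's `weights` is left untouched.
    return weights - lr * grads
```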

Rules:

  • Update using the vector formula w_new = weights - lr * grads.
  • Return the updated weights as a NumPy array.
  • Do not modify the input weights array in-place.
  • Use only NumPy and Python built-in libraries.
  • Do not use any optimizer utilities (e.g., from PyTorch/TensorFlow).

Example
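Applying the step to a small parameter vector with the `sgd_update` sketch above (input values chosen here purely for illustration):

```python
import numpy as np

weights = np.array([0.5, -1.0, 2.0])
grads = np.array([0.1, -0.2, 0.4])
lr = 0.1

print(sgd_update(weights, grads, lr))
```

Output:

```python
[ 0.49 -0.98  1.96]
```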
Input Signature

  Argument   Type
  lr         float
  grads      np.ndarray
  weights    np.ndarray

Output Signature

  Return Name   Type
  value         np.ndarray

Constraints

  • Return a NumPy array
  • Do not modify the weights array in-place
  • Use only NumPy + built-ins

Hint 1

Use vectorized subtraction: w_new = weights - lr * grads.

Hint 2

NumPy arrays handle element-wise operations automatically.

Hint 3

Vectorized subtraction creates a new array, so the original weights array isn't modified; return the result directly.
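To see the difference the last hint points at, compare the out-of-place and in-place forms (a small demonstration, not part of the required solution):

```python
import numpy as np

weights = np.array([1.0, 2.0])
grads = np.array([0.5, 0.5])
lr = 0.1

w_new = weights - lr * grads  # allocates a fresh array
print(weights)                # [1. 2.] -- caller's array is unchanged

weights -= lr * grads         # in-place: violates the problem's rules
print(weights)                # [0.95 1.95] -- original has been mutated
```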

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
sgd
gradient-descent
parameter-update
python-lists