

167. Backpropagation through activation

Difficulty: easy

Backpropagation through an activation function is a core step in training neural networks, where you compute how the loss changes with respect to the pre-activation values. In this problem you’ll implement the backward pass for a common activation so gradients can flow from later layers to earlier ones.

Requirements

Implement the function whose arguments and return type are given in the Input/Output Signature section below.
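A minimal stub consistent with that signature, assuming (hypothetically) the function is named activation_backward; the required name on the grader may differ:

```python
import numpy as np

def activation_backward(Z: np.ndarray, dA: np.ndarray, activation: str) -> np.ndarray:
    """Return dZ = dA * g'(Z) elementwise, for activation in {"relu", "sigmoid"}."""
    ...
```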
Rules (a sketch applying them follows this list):

  • Use the chain rule to compute \( dZ = dA \odot g'(Z) \), where \( g \) is the activation and \( \odot \) is elementwise multiplication.
  • If activation == "relu", use \( g'(z) = 1 \) when \( z > 0 \), else \( g'(z) = 0 \).
  • If activation == "sigmoid", use \( \sigma(z) = \frac{1}{1 + e^{-z}} \) and \( \sigma'(z) = \sigma(z)\,(1-\sigma(z)) \).
  • Return the result as a NumPy array.
  • Don’t call any deep learning frameworks (no PyTorch/TensorFlow); just NumPy + built-ins.
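A minimal sketch applying these rules, under the assumed activation_backward name from the stub above (one possible implementation, not necessarily the expected one):

```python
import numpy as np

def activation_backward(Z: np.ndarray, dA: np.ndarray, activation: str) -> np.ndarray:
    """Backward pass through an activation: dZ = dA * g'(Z), elementwise."""
    if activation == "relu":
        # ReLU derivative: 1 where Z > 0, else 0
        g_prime = (Z > 0).astype(float)
    elif activation == "sigmoid":
        # Sigmoid derivative: s * (1 - s), with s = sigmoid(Z)
        s = 1.0 / (1.0 + np.exp(-Z))
        g_prime = s * (1.0 - s)
    else:
        raise ValueError(f"unsupported activation: {activation!r}")
    # Elementwise product preserves the (m, n) shape of Z and dA
    return dA * g_prime
```

Note that the ReLU mask is built from the pre-activation Z, exactly as the rule above states; building it from the activation output would be equivalent for ReLU but is not what the rule asks for.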

Example

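An illustrative run with hypothetical inputs, using the activation_backward sketch above:

```python
import numpy as np

Z = np.array([[1.0, -2.0, 0.5]])
dA = np.array([[0.4, 0.7, -1.0]])

# ReLU backward: upstream gradient passes where Z > 0 and is zeroed elsewhere
dZ = activation_backward(Z, dA, "relu")
print(dZ)  # -> approximately [[0.4, 0.0, -1.0]]
```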
Input Signature

  • Z: np.ndarray
  • dA: np.ndarray
  • activation: str

Output Signature

  • value: np.ndarray

Constraints

  • Return an np.ndarray
  • Use NumPy only; no DL frameworks
  • Elementwise ops; preserve the (m, n) shape

Hint 1

Use the chain rule elementwise: dZ = dA * g_prime(Z); keep shapes (m, n) aligned.

Hint 2

For ReLU backward, build a mask from Z: derivative is 1 where Z > 0 else 0, then multiply by dA.

Hint 3

For sigmoid backward, compute s = 1/(1+exp(-Z)) first, then g'(Z) = s*(1-s).
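A central finite-difference check can confirm either branch numerically; the sketch below assumes the activation_backward implementation above and uses hypothetical inputs:

```python
import numpy as np

def grad_check(Z, dA, activation, eps=1e-6):
    """Compare activation_backward against central finite differences on a scalar loss."""
    def g(z):
        if activation == "relu":
            return np.maximum(z, 0.0)
        return 1.0 / (1.0 + np.exp(-z))  # sigmoid

    # Scalar loss whose exact gradient w.r.t. Z is dA * g'(Z)
    loss = lambda z: float(np.sum(dA * g(z)))

    numeric = np.zeros_like(Z, dtype=float)
    for idx in np.ndindex(*Z.shape):
        Zp, Zm = Z.copy(), Z.copy()
        Zp[idx] += eps
        Zm[idx] -= eps
        numeric[idx] = (loss(Zp) - loss(Zm)) / (2.0 * eps)

    analytic = activation_backward(Z, dA, activation)
    return float(np.max(np.abs(numeric - analytic)))

# Hypothetical inputs; sigmoid avoids the ReLU kink at Z == 0
Z = np.array([[0.2, -1.5], [3.0, 0.7]])
dA = np.array([[1.0, -0.5], [0.25, 2.0]])
print(grad_check(Z, dA, "sigmoid"))  # prints a very small number (near round-off)
```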

Roles: ML Engineer, AI Engineer
Companies: General
Levels: senior, entry
Tags: backpropagation, ReLU, sigmoid, numpy