
240. Batch normalization forward pass

medium

Implement the forward pass of Batch Normalization for a mini-batch, a core building block in deep learning that stabilizes activations during training. You’ll compute per-feature mean/variance over the batch, normalize, then apply learnable scale and shift.

Requirements

Implement the function with the input and output signatures listed below; a minimal sketch follows the rules.
Rules:

  • Normalize each feature (column) using the batch mean and batch variance, then apply $Y = \gamma \odot \hat{X} + \beta$.
  • Use the variance definition $\mathrm{var} = \frac{1}{m}\sum_{i=1}^{m}(x_i-\mu)^2$ (no Bessel correction).
  • Keep everything vectorized with NumPy (no per-element Python loops over m*d).
  • Do not use any prebuilt batch norm utilities (e.g., from PyTorch/TensorFlow).
  • Return a NumPy array.
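
A minimal vectorized sketch of the forward pass. The function name batch_norm_forward and the argument order are assumptions; the argument names follow the signature table below.

```python
import numpy as np

def batch_norm_forward(X: np.ndarray, eps: float, beta: np.ndarray, gamma: np.ndarray) -> np.ndarray:
    """Batch-norm forward pass over a mini-batch X of shape (m, d)."""
    # Per-feature statistics over the batch dimension (axis=0).
    mu = np.mean(X, axis=0)                 # shape (d,)
    var = np.mean((X - mu) ** 2, axis=0)    # biased variance (1/m), shape (d,)

    # Normalize each column; eps guards against division by zero.
    x_hat = (X - mu) / np.sqrt(var + eps)   # shape (m, d)

    # Learnable per-feature scale and shift, applied via broadcasting.
    return gamma * x_hat + beta
```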

Example
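
As an illustration (not the platform's original example), with unit scale and zero shift each feature normalizes to roughly ±1. This uses the sketched batch_norm_forward above.

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
gamma = np.ones(2)   # identity scale
beta = np.zeros(2)   # zero shift

Y = batch_norm_forward(X, eps=1e-5, beta=beta, gamma=gamma)
print(Y)
# Per-feature means are [2, 3] and variances are [1, 1],
# so Y is approximately [[-1, -1], [1, 1]].
```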
Input Signature

  • X: np.ndarray
  • eps: float
  • beta: np.ndarray
  • gamma: np.ndarray

Output Signature

  • value: np.ndarray

Constraints

  • Use NumPy vectorization; no elementwise Python loops
  • Variance uses 1/m (no Bessel correction)
  • Return a NumPy array

Hint 1

Use axis=0 to compute mean and variance per feature.
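
For instance, for an input of shape (m, d), reducing over axis=0 gives one statistic per feature:

```python
import numpy as np

X = np.random.randn(8, 3)             # m=8 samples, d=3 features
mu = np.mean(X, axis=0)               # shape (3,): per-feature mean
var = np.mean((X - mu) ** 2, axis=0)  # shape (3,): per-feature biased variance
```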

Hint 2

Compute the variance with no Bessel correction: var = np.mean((x - mu)**2, axis=0); then normalize with np.sqrt(var + eps).

Hint 3

Use broadcasting for the affine step: y = gamma * x_hat + beta.
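
Because gamma, beta, and the batch statistics all have shape (d,), they broadcast against x_hat of shape (m, d) row by row, for example:

```python
import numpy as np

m, d = 4, 3
x_hat = np.random.randn(m, d)  # already-normalized activations
gamma = np.full(d, 2.0)        # per-feature scale
beta = np.full(d, 0.5)         # per-feature shift

# (d,) broadcasts across the m rows of the (m, d) array.
y = gamma * x_hat + beta       # shape (m, d)
```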

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
batch-normalization
numpy-broadcasting
vectorization
numerical-stability