

159. Dropout forward pass

Difficulty: easy

Implement the forward pass of Dropout, a simple regularization technique that randomly zeroes some activations during training to reduce overfitting. You’ll apply inverted dropout, which scales kept activations so the expected output stays the same at train time.

Y = \frac{X \odot M}{1 - p}, \quad M_{ij} \sim \text{Bernoulli}(1 - p)

Requirements

Implement the function whose signature is shown below.
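
The code block from the original page did not survive extraction; the sketch below reconstructs the signature from the Input/Output Signature tables further down (the name `dropout_forward` is an assumption, since the original name was not preserved):

```python
import numpy as np

def dropout_forward(X: np.ndarray, p: float, seed: int, training: bool) -> np.ndarray:
    """Apply inverted dropout to X with drop probability p."""
    ...
```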

Rules (a reference sketch follows this list):

  • If training=True, sample a dropout mask M with keep probability 1 - p and apply inverted dropout scaling.
  • If training=False, return X unchanged (no mask, no scaling).
  • Use np.random.seed(seed) to make the mask deterministic.
  • Do not use any prebuilt dropout layers (e.g., from PyTorch/TensorFlow).
  • Return a NumPy array.
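
A minimal sketch satisfying the rules above, assuming the hypothetical `dropout_forward` signature from the Requirements section and 0 <= p < 1; it uses np.random.rand to draw the Bernoulli mask, though any NumPy sampler seeded the same way would be acceptable:

```python
import numpy as np

def dropout_forward(X: np.ndarray, p: float, seed: int, training: bool) -> np.ndarray:
    # Inference mode: dropout is a no-op, so return the input unchanged.
    if not training:
        return X.copy()

    keep_prob = 1.0 - p
    np.random.seed(seed)  # fixed seed -> deterministic mask

    # Bernoulli(keep_prob) mask: 1 = keep an activation, 0 = drop it.
    mask = (np.random.rand(*X.shape) < keep_prob).astype(X.dtype)

    # Inverted dropout: divide by keep_prob so the expected output
    # equals the input at train time.
    return X * mask / keep_prob
```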

Example

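The original example values were not preserved on this page. The run below is illustrative only, assumes the hypothetical `dropout_forward` sketch above, and checks properties rather than exact numbers, since the trained output depends on which sampler draws the mask:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Inference: the input comes back untouched.
out_eval = dropout_forward(X, p=0.5, seed=0, training=False)
assert np.array_equal(out_eval, X)

# Training: each entry is either dropped (0) or kept and scaled by 1/(1 - p).
out_train = dropout_forward(X, p=0.5, seed=0, training=True)
scaled = X / (1 - 0.5)
assert np.all((out_train == 0) | np.isclose(out_train, scaled))
```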
Input Signature

Argument    Type
X           np.ndarray
p           float
seed        int
training    bool

Output Signature

Return Name    Type
value          np.ndarray

Constraints

  • Return a NumPy array
  • Use np.random.seed(seed) for determinism
  • No framework dropout layers allowed

Hint 1

Remember inverted dropout: keep units with probability keep_prob = 1 - p, then scale kept activations by dividing by keep_prob so the expected value stays unchanged.
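
A quick sanity check of why the scaling works: since each mask entry has mean 1 - p,

\mathbb{E}[Y_{ij}] = \frac{\mathbb{E}[M_{ij}]\, X_{ij}}{1 - p} = \frac{(1 - p)\, X_{ij}}{1 - p} = X_{ij}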

Hint 2

When training=True, set np.random.seed(seed) and create a Bernoulli mask with the same shape as X (1 = keep, 0 = drop), then compute out = X * mask / keep_prob.
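
One way to realize this hint, differing from the earlier sketch only in using NumPy's binomial sampler for the Bernoulli draws (equally valid, though the same seed yields a different mask than the rand-based version):

```python
np.random.seed(seed)
keep_prob = 1 - p
# 1 = keep, 0 = drop; each entry is an independent Bernoulli(keep_prob) draw.
mask = np.random.binomial(1, keep_prob, size=X.shape)
out = X * mask / keep_prob
```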

Hint 3

If training=False, return X unchanged (preferably X.copy(), so the caller cannot mutate the input through the returned array).

Roles: ML Engineer, AI Engineer
Companies: General
Levels: senior, entry
Tags: dropout, numpy, bernoulli-mask, forward-pass