Implement the forward pass of Dropout, a simple regularization technique that randomly zeroes some activations during training to reduce overfitting. You’ll apply inverted dropout, which scales kept activations so the expected output stays the same at train time.
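For example, with p = 0.25 the keep probability is 0.75, so each surviving activation is divided by 0.75; since every unit is kept with probability 0.75, the expected value of each output element still equals the corresponding input element.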
Implement a function with the arguments and return value described below.
Rules:
- When `training=True`, sample a dropout mask `M` with keep probability `1 - p` and apply inverted dropout scaling.
- When `training=False`, return `X` unchanged (no mask, no scaling).
- Use `np.random.seed(seed)` to make the mask deterministic.
| Argument | Type |
|---|---|
| X | np.ndarray |
| p | float |
| seed | int |
| training | bool |
Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |
- Return a NumPy array
- Use `np.random.seed(seed)` for determinism
- No framework dropout layers allowed
- Remember inverted dropout: keep units with probability `keep_prob = 1 - p`, then scale kept activations by dividing by `keep_prob` so the expected value stays unchanged.
- When `training=True`, set `np.random.seed(seed)` and create a Bernoulli mask with the same shape as `X` (1 = keep, 0 = drop), then compute `out = X * mask / keep_prob`.
- If `training=False`, return `X` (preferably `X.copy()`).
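A minimal sketch of one possible solution, following the hints above. The function name `dropout_forward` and the use of `np.random.rand` to draw the Bernoulli mask are assumptions; a reference solution may expect a different name or a different RNG call (e.g., `np.random.binomial`), which would change exactly which elements get dropped for a given seed.

```python
import numpy as np

def dropout_forward(X: np.ndarray, p: float, seed: int, training: bool) -> np.ndarray:
    """Inverted-dropout forward pass (function name chosen for illustration)."""
    if not training:
        # Inference mode: no mask, no scaling; copy to avoid aliasing the input.
        return X.copy()

    keep_prob = 1.0 - p
    if keep_prob == 0.0:
        # p == 1 drops every unit; avoid dividing by zero.
        return np.zeros_like(X)

    np.random.seed(seed)  # makes the mask deterministic for a given seed
    # Bernoulli mask with the same shape as X: 1 = keep, 0 = drop.
    mask = (np.random.rand(*X.shape) < keep_prob).astype(X.dtype)
    # Inverted dropout: rescale the kept activations so the expected output equals X.
    return X * mask / keep_prob

# Example usage:
X = np.arange(6, dtype=float).reshape(2, 3)
print(dropout_forward(X, p=0.5, seed=42, training=True))   # roughly half the entries zeroed, the rest doubled
print(dropout_forward(X, p=0.5, seed=42, training=False))  # identical to X
```

Dividing by `keep_prob` at train time (rather than rescaling at test time) is what makes the inference path a plain pass-through: with `training=False` the function just returns `X` unchanged.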