Implement the Leaky ReLU activation function, a small twist on ReLU that keeps a tiny gradient for negative inputs to help training stay stable. You’ll apply it element-wise to a vector of inputs from a basic deep learning forward pass.
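Concretely, for a scalar input v and a small positive slope alpha (commonly around 0.01), the standard piecewise definition is:

$$\mathrm{LeakyReLU}(v) = \begin{cases} v, & v \ge 0 \\ \alpha v, & v < 0 \end{cases}$$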
Implement the function:

| Argument | Type |
|---|---|
| x | np.ndarray |
| alpha | float |

Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |

Rules:
- Return a NumPy array.
- Operate element-wise on a 1D array only.
- Use only NumPy and built-ins.
1. Start from the definition: for each scalar v, output max(v, alpha*v).
2. Translate the scalar rule into an element-wise operation over the 1D input using np.where.
3. Handle negatives with alpha*v while leaving v >= 0 unchanged, as in the sketch after this list.
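A minimal sketch of these steps, assuming the function is named `leaky_relu` and that `alpha` defaults to 0.01 (the prompt fixes neither the name nor a default, so both are assumptions):

```python
import numpy as np

def leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    # Element-wise rule: keep v where v >= 0, otherwise scale it by alpha.
    # For 0 < alpha < 1 this equals max(v, alpha * v) for each scalar v.
    x = np.asarray(x)
    return np.where(x >= 0, x, alpha * x)


# Usage on a small 1D input.
x = np.array([-2.0, -0.5, 0.0, 3.0])
out = leaky_relu(x, alpha=0.1)
# expected values: [-0.2, -0.05, 0.0, 3.0]
print(out)
```

Using `np.where` keeps the whole computation vectorized, so no explicit Python loop is needed and the output stays an `np.ndarray` of the same shape as the input.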