Implement the ReLU activation function, a common nonlinearity in deep learning that keeps positive values and zeroes out negative values.
ReLU is defined elementwise as ReLU(x_i) = max(0, x_i).
Implement a function that takes a NumPy array `x` and returns a new array with ReLU applied elementwise.

Rules:

- Return a NumPy array.
- Use NumPy vectorization; avoid deep learning libraries.
- No prints; no in-place mutation of `x`.

Input:

| Argument | Type |
|---|---|
| x | np.ndarray |

Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |
ReLU works elementwise: each output is max(0, x_i).
If you use NumPy, np.maximum(0.0, arr) applies ReLU to every element at once.
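Putting the hint together, here is a minimal sketch of a vectorized solution. The function name `relu` and the exact signature are assumptions for illustration; only the argument `x` and the NumPy return type come from the tables above.

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    # np.maximum broadcasts the scalar 0.0 against every element and
    # returns a new array, so the input is never mutated.
    return np.maximum(0.0, x)

# Quick sanity check on negative, zero, and positive inputs.
assert np.array_equal(relu(np.array([-2.0, 0.0, 3.5])),
                      np.array([0.0, 0.0, 3.5]))
```

Because `np.maximum` is a vectorized ufunc, this satisfies the rules directly: no Python-level loop, no printing, and a fresh output array rather than an in-place change to `x`.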