
184. Activation Function: Leaky ReLU

easy
General
senior

Implement the Leaky ReLU activation function, a small twist on ReLU that keeps a tiny gradient for negative inputs to help training stay stable. You’ll apply it element-wise to a vector of inputs from a basic deep learning forward pass.

f(x) = \begin{cases} x & \text{if } x \ge 0 \\ \alpha x & \text{if } x < 0 \end{cases}
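
For instance, with \alpha = 0.01 (a common default, used here purely for illustration), f(3) = 3 because 3 \ge 0, while f(-2) = 0.01 \cdot (-2) = -0.02 because -2 < 0.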

Requirements

Implement the function with the signature shown under Input Signature and Output Signature below.


Rules:

  • Apply Leaky ReLU element-wise using f(x) = \max(x, \alpha x) (a code sketch follows this list).
  • Use only NumPy and Python built-in libraries (no deep learning frameworks).
  • Do not use any prebuilt activation helpers (e.g., from PyTorch/TensorFlow).
  • Keep the function as a single forward-pass utility (no backprop needed).
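
A minimal sketch of the rule above, assuming the signature given under Input Signature and Output Signature below (x as an np.ndarray, alpha as a float, returning an np.ndarray); treat it as one possible shape rather than the reference solution:

```python
import numpy as np

def leaky_relu(x: np.ndarray, alpha: float) -> np.ndarray:
    """Apply Leaky ReLU element-wise: f(x) = max(x, alpha * x)."""
    x = np.asarray(x, dtype=float)  # tolerate plain Python lists as well as arrays
    # For 0 <= alpha <= 1, max(x, alpha * x) matches the piecewise definition:
    # non-negative entries stay as x, negative entries become alpha * x.
    return np.maximum(x, alpha * x)
```

np.maximum compares the two candidate values element-wise, so no Python loop or explicit branching is needed.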

Example

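A purely illustrative input/output pair (hypothetical values, not the graded test case), applying the rule with alpha = 0.01:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
alpha = 0.01
# Element-wise Leaky ReLU: negatives are scaled by alpha, the rest pass through.
# Expected result: array([-0.02, -0.005, 0.0, 1.5, 3.0])
```
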
Input Signature

  • x: np.ndarray
  • alpha: float

Output Signature

  • value: np.ndarray

Constraints

  • Return a NumPy array
  • Operate element-wise on a 1D array only
  • Use only NumPy and Python built-ins

Hint 1

Start from the definition: for each scalar v, output max(v, alpha*v).

Hint 2

Translate the scalar rule into an element-wise operation over the 1D input using np.where.

Hint 3

Handle negatives with alpha*v while leaving v >= 0 unchanged.
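
Putting the hints together, the same rule can be written with np.where; this is a sketch (the function name here is illustrative), equivalent to the max(x, \alpha x) formulation for 0 \le \alpha \le 1:

```python
import numpy as np

def leaky_relu_where(x: np.ndarray, alpha: float) -> np.ndarray:
    # Keep x where x >= 0 (Hint 3); otherwise scale negatives by alpha (Hint 2).
    return np.where(x >= 0, x, alpha * x)
```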

Roles: ML Engineer, AI Engineer
Companies: General
Levels: senior, entry
Tags: activation-function, leaky-relu, numpy, elementwise-operations