
139. Activation function tanh

easy
General
senior

Implement the tanh activation function for a layer output, which squashes real-valued inputs into the range (-1, 1). This is a common deep-learning building block used to add non-linearity to neural networks.

The activation is defined as:

\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}

Requirements

Implement the function

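A minimal signature sketch, assuming the argument and return types from the signature tables below (the name tanh_activation is illustrative, since the starter code is not shown):

```python
import numpy as np

def tanh_activation(x: np.ndarray) -> np.ndarray:
    """Apply tanh element-wise; the result has the same shape as x."""
    ...  # see the hints below for implementation approaches
```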

Rules:

  • Compute \tanh(x_i) for every element x_i in the input array x.
  • Return a NumPy array of the same shape as x.
  • Do not use any prebuilt tanh activation helpers (e.g., np.tanh, torch.tanh, jax.numpy.tanh).
  • Use vectorized NumPy operations (e.g., np.exp) for efficiency.
  • Keep the implementation numerically reasonable (e.g., avoid obvious overflow patterns when possible).

Example

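A representative call, assuming the illustrative tanh_activation sketch above (the original example values are not shown):

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0])
tanh_activation(x)
```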

Output:

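What a correct implementation returns for that input:

```python
array([-0.76159416,  0.        ,  0.76159416])
```

(tanh(0) = 0 and tanh(±1) ≈ ±0.7616, shown at NumPy's default print precision.)
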
Input Signature

  Argument      Type
  x             np.ndarray

Output Signature

  Return Name   Type
  value         np.ndarray

Constraints

  • Input and output must be NumPy arrays.
  • No built-in tanh helpers (np.tanh / torch.tanh).
  • Avoid overflow; use a stable exp-based formula.

Hint 1

Implement tanh element-wise using vectorized NumPy operations.

Hint 2

Avoid torch.tanh/np.tanh; use np.exp and the identity tanh(x) = (e^x - e^{-x})/(e^x + e^{-x}) (or an equivalent form).
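
A direct translation of that identity, as a sketch (the function name is illustrative):

```python
import numpy as np

def tanh_naive(x: np.ndarray) -> np.ndarray:
    # Literal form of the identity. np.exp(x) overflows for large positive x
    # (and np.exp(-x) for large negative x), which is what Hint 3 addresses.
    ex, enx = np.exp(x), np.exp(-x)
    return (ex - enx) / (ex + enx)
```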

Hint 3

For numerical stability, avoid computing both exp(x) and exp(-x) for large |x|. Use a stable rewrite with np.where: for x >= 0, compute (1 - exp(-2x)) / (1 + exp(-2x)); for x < 0, mirror it with exp(2x). A sketch follows.
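
One way to write that rewrite, assuming an illustrative function name; every argument passed to np.exp is kept non-positive, so it cannot overflow:

```python
import numpy as np

def tanh_stable(x: np.ndarray) -> np.ndarray:
    # Illustrative name, not necessarily the required one.
    x = np.asarray(x, dtype=float)
    pos = x >= 0
    # np.where evaluates both branches, so mask each exponent to 0 where it
    # is unused; that keeps every exponent <= 0 and np.exp overflow-free.
    ep = np.exp(-2.0 * np.where(pos, x, 0.0))   # exp(-2x), used where x >= 0
    en = np.exp(2.0 * np.where(pos, 0.0, x))    # exp(2x),  used where x < 0
    return np.where(pos, (1.0 - ep) / (1.0 + ep), (en - 1.0) / (en + 1.0))
```

Spot check: tanh_stable(np.array([-1000.0, 0.0, 1000.0])) returns array([-1., 0., 1.]) with no overflow warning, whereas the naive form above overflows at |x| = 1000.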

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
activation-function
numerical-stability
elementwise-computation
math-exp