
256. RNN forward pass

easy
General
senior

Implement the forward pass of a simple vanilla RNN to process a sequence and produce hidden states over time. You’ll compute the recurrent hidden update for each timestep using given weights and an activation function.

Requirements

Implement the function


Rules:

  • Use the recurrence h_t = tanh(Wx · x_t + Wh · h_{t-1} + b) for each timestep t = 0, …, T−1.
  • Return all hidden states in order (one per timestep), as a NumPy array.
  • Use only NumPy operations for math (no deep learning frameworks).
  • Don’t modify the input arrays in-place.
  • Keep it in a single function (no helper classes).
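Putting these rules together, and assuming the function takes the arguments listed under Input Signature below (X, Wx, Wh, b, h0; the exact signature in the editor may differ), one possible sketch is:

```python
import numpy as np

def rnn_forward(X, Wx, Wh, b, h0):
    """Vanilla RNN forward pass (sketch; argument names assumed from the signature table).

    X:  (T, D) input sequence, one row per timestep
    Wx: (H, D) input-to-hidden weights
    Wh: (H, H) hidden-to-hidden weights
    b:  (H,)   bias
    h0: (H,)   initial hidden state

    Returns a (T, H) array of hidden states, one per timestep.
    """
    h_prev = h0
    hidden_states = []
    for x_t in X:  # iterate over timesteps t = 0 .. T-1
        # h_t = tanh(Wx x_t + Wh h_{t-1} + b); builds a new array, inputs untouched
        h_t = np.tanh(Wx @ x_t + Wh @ h_prev + b)
        hidden_states.append(h_t)
        h_prev = h_t
    return np.array(hidden_states)
```

Note that `np.tanh(...)` allocates a fresh array each step, so the inputs are never modified in-place.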

Input Signature

  Argument    Type
  X           np.ndarray
  b           np.ndarray
  Wh          np.ndarray
  Wx          np.ndarray
  h0          np.ndarray

Output Signature

  Return Name    Type
  value          np.ndarray

Constraints

  • Use NumPy only; no deep learning frameworks.
  • Do not modify input arrays in-place.
  • Return a NumPy array of shape (T, H).

Hint 1

Inputs are already np.ndarray. Use @ for matrix multiplication and np.tanh for activation.

Hint 2

Maintain h_prev starting from h0, loop over timesteps t, and compute h_t = np.tanh(Wx @ x_t + Wh @ h_prev + b); append each h_t to an output list, then set h_prev = h_t before the next step.

Hint 3

Be careful with shapes: x_t is (D,), Wx is (H,D) so Wx @ x_t -> (H,); Wh is (H,H) so Wh @ h_prev -> (H,). Return np.array(hidden_states).
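The shape bookkeeping from Hint 3 can be checked on a single timestep in isolation (dimensions and zero-valued weights here are purely illustrative):

```python
import numpy as np

D, H = 3, 2            # input and hidden dimensions (illustrative)
x_t = np.ones(D)       # one timestep of input, shape (D,)
h_prev = np.zeros(H)   # previous hidden state, shape (H,)
Wx = np.zeros((H, D))  # input-to-hidden weights: Wx @ x_t -> (H,)
Wh = np.zeros((H, H))  # hidden-to-hidden weights: Wh @ h_prev -> (H,)
b = np.zeros(H)        # bias, shape (H,)

h_t = np.tanh(Wx @ x_t + Wh @ h_prev + b)
print(h_t.shape)  # (2,) — one hidden vector per timestep
```

Stacking T such vectors with np.array gives the required (T, H) result.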

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
vanilla-rnn
forward-pass
numpy-linear-algebra
tanh-activation