Implement the forward pass of a vanilla RNN that processes an input sequence and produces a hidden state at each timestep. For every timestep you compute the recurrent hidden update using the given weights, bias, and activation function.
Implement the function with the following arguments:
| Argument | Type |
|---|---|
| X | np.ndarray |
| b | np.ndarray |
| Wh | np.ndarray |
| Wx | np.ndarray |
| h0 | np.ndarray |
Output:

| Return Name | Type |
|---|---|
| value | np.ndarray |
Rules:

- Use NumPy only; no deep learning frameworks.
- Do not modify the input arrays in place.
- Return a NumPy array of shape (T, H).
- Inputs are already np.ndarray. Use @ for matrix multiplication and np.tanh for the activation.
Hints:

- Maintain h_prev starting from h0, loop over timesteps t, and compute h_t = np.tanh(Wx @ x_t + Wh @ h_prev + b); append each h_t to an output list (see the sketch below).
- Be careful with shapes: x_t is (D,), Wx is (H, D) so Wx @ x_t -> (H,); Wh is (H, H) so Wh @ h_prev -> (H,). Return np.array(hidden_states).
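Putting the hints together, here is a minimal sketch of the forward pass. The statement does not give a function name or argument order, so `rnn_forward` and its signature are assumptions:

```python
import numpy as np

def rnn_forward(X: np.ndarray, Wx: np.ndarray, Wh: np.ndarray,
                b: np.ndarray, h0: np.ndarray) -> np.ndarray:
    """Vanilla RNN forward pass.

    X:  (T, D) input sequence, one D-dimensional vector per timestep
    Wx: (H, D) input-to-hidden weights
    Wh: (H, H) hidden-to-hidden weights
    b:  (H,)   bias
    h0: (H,)   initial hidden state
    Returns a (T, H) array of hidden states, one per timestep.
    """
    hidden_states = []
    h_prev = h0  # inputs are never modified in place; each update allocates a new array
    for x_t in X:  # x_t has shape (D,)
        # Recurrent update: h_t = tanh(Wx @ x_t + Wh @ h_prev + b)
        h_t = np.tanh(Wx @ x_t + Wh @ h_prev + b)
        hidden_states.append(h_t)
        h_prev = h_t
    return np.array(hidden_states)  # shape (T, H)

if __name__ == "__main__":
    # Tiny smoke test with made-up sizes: T=3 timesteps, D=4 input dims, H=2 hidden units.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((3, 4))
    Wx = rng.standard_normal((2, 4))
    Wh = rng.standard_normal((2, 2))
    b = np.zeros(2)
    h0 = np.zeros(2)
    print(rnn_forward(X, Wx, Wh, b, h0).shape)  # (3, 2)
```

Collecting the per-step vectors in a Python list and calling np.array once at the end avoids repeated array concatenation inside the loop.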