175. Multi-Layer Perceptron Inference

easy
General
senior

Implement the inference (forward pass) of a simple Multi-Layer Perceptron (MLP) so you can turn an input vector into an output prediction. You’ll compute two linear layers with a ReLU activation in between, using the standard MLP formula:

\hat{y} = W_2 \,\text{ReLU}(W_1 x + b_1) + b_2

Requirements

Implement a function that takes x, W1, W2, b1, and b2 as NumPy arrays and returns the network output as a NumPy array (a sketch of the forward pass follows the rules below).

Rules:

  • Compute the first linear layer: \( z_1 = W_1 x + b_1 \).
  • Apply ReLU elementwise: \( h = \max(0, z_1) \).
  • Compute the output layer: \( \hat{y} = W_2 h + b_2 \).
  • Return the output as a NumPy array.
  • Use only NumPy and Python built-in libraries (no deep learning frameworks).
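
Putting the rules together, here is a minimal sketch of the forward pass. The function name mlp_forward is an assumption (the required signature is not preserved on this page); the argument names follow the Input Signature section.

```python
import numpy as np

def mlp_forward(x, W1, W2, b1, b2):
    # Hypothetical name; argument names follow the Input Signature section.
    z1 = np.dot(W1, x) + b1      # first linear layer: z1 = W1 x + b1
    h = np.maximum(0, z1)        # elementwise ReLU
    y_hat = np.dot(W2, h) + b2   # output layer: y_hat = W2 h + b2
    return np.asarray(y_hat)     # return the result as a NumPy array
```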

Example

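The original example values are not preserved on this page; the run below is purely illustrative, using the hypothetical mlp_forward sketch above with made-up inputs.

```python
import numpy as np

x  = np.array([1.0, 2.0])
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
b1 = np.array([0.0, 0.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

print(mlp_forward(x, W1, W2, b1, b2))
```

Output:

```
[3.]
```
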
Input Signature

  • x: np.ndarray
  • W1: np.ndarray
  • W2: np.ndarray
  • b1: np.ndarray
  • b2: np.ndarray

Output Signature

  • value: np.ndarray

Constraints

  • Return a NumPy array

  • Use only NumPy and Python built-ins

  • Apply ReLU between the two linear layers

Hint 1

Use np.dot(W, x) for matrix-vector multiplication.

Hint 2

For ReLU, use np.maximum(0, z1) elementwise.

Hint 3

Double-check dimensions: W1 shape (d_hidden, d_in), W2 shape (d_out, d_hidden).
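
As a quick check on Hint 3, the expected shapes can be verified before running the forward pass; this is a small sketch assuming the same argument names as above.

```python
def check_shapes(x, W1, W2, b1, b2):
    # W1 has shape (d_hidden, d_in) and must match x of shape (d_in,)
    assert W1.shape[1] == x.shape[0]
    # b1 matches the hidden size (d_hidden,)
    assert b1.shape[0] == W1.shape[0]
    # W2 has shape (d_out, d_hidden) and must match the hidden vector
    assert W2.shape[1] == W1.shape[0]
    # b2 matches the output size (d_out,)
    assert b2.shape[0] == W2.shape[0]
```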

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
forward-pass
matrix-vector-multiplication
ReLU
MLP