

206. Max pooling of embeddings

easy
General
senior

Implement max pooling over a set of embedding vectors to produce a single fixed-size representation, a common step in embeddings-and-retrieval pipelines. Given multiple embeddings (e.g., from tokens, passages, or chunks), you’ll return a pooled embedding by taking the maximum value per dimension.

The max-pooled embedding is defined as:

$$p_j = \max_{i \in \{1,\dots,n\}} E_{i,j}$$

where $E$ is an $n \times d$ matrix of embeddings and $p$ is a length-$d$ vector.

Requirements

Implement the function


Rules:

  • Perform element-wise max across all input embeddings (pool over the first dimension).
  • Return the pooled embedding as a NumPy array.
  • Do not use any prebuilt pooling helpers beyond basic NumPy ops (e.g., no deep learning frameworks).
  • Keep the implementation in a single Python function.
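A minimal sketch that satisfies these rules. The function name `max_pool_embeddings` is an assumption — the page does not show the original signature — but the argument and return types follow the signature tables below.

```python
import numpy as np

def max_pool_embeddings(embeddings: np.ndarray) -> np.ndarray:
    """Max-pool an (n, d) matrix of embeddings into a single length-d vector."""
    # axis=0 takes the element-wise maximum across the n embeddings,
    # leaving one value per dimension.
    return np.max(embeddings, axis=0)
```

Pooling over `axis=0` collapses the first (row) dimension, which is exactly the per-dimension maximum the formula above describes.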

Example

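The page's original example code was not preserved; the following is an illustrative stand-in with made-up input values, using the assumed function name `max_pool_embeddings`.

```python
import numpy as np

def max_pool_embeddings(embeddings: np.ndarray) -> np.ndarray:
    # per-dimension maximum across all rows
    return np.max(embeddings, axis=0)

# three 3-dimensional embeddings stacked into a (3, 3) matrix
embeddings = np.array([
    [0.1, 0.9, 0.3],
    [0.5, 0.2, 0.8],
    [0.4, 0.7, 0.1],
])

print(max_pool_embeddings(embeddings))  # [0.5 0.9 0.8]
```

Each output dimension is the maximum of that column: 0.5 from row 2, 0.9 from row 1, and 0.8 from row 2.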
Input Signature

  • embeddings: np.ndarray

Output Signature

  • value: np.ndarray

Constraints

  • Input is a NumPy array
  • Use np.max with axis=0
  • Return a NumPy array

Hint 1

The input is already a 2D NumPy array, so you can pool directly without any reshaping.

Hint 2

Pooling is element-wise: take the maximum per dimension across all embeddings (use axis=0).

Hint 3

Compute np.max(arr, axis=0) to get the max-pooled embedding.

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
numpy
max-pooling
embeddings
vectorization