

254. Hard vs soft voting

easy
General
senior

Compare hard vs. soft voting in an ensemble to see how combining multiple classifiers can change final predictions. You’ll implement both voting strategies from a list of model outputs and return the final predicted class for each sample.

Requirements

Implement the function

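A stub consistent with the input/output signature below might look like this (the name `ensemble_vote` and the array shapes are assumptions; the shapes follow Hint 3):

```python
import numpy as np

# Hypothetical name; the page does not specify it.
def ensemble_vote(pred_labels: np.ndarray, pred_probas: np.ndarray) -> tuple:
    """Return (hard_preds, soft_preds) as 1D integer arrays.

    pred_labels: (num_models, num_samples) integer class labels.
    pred_probas: (num_models, num_samples, num_classes) probabilities.
    """
    ...
```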

Rules:

  • Hard voting: for each sample \(i\), return \( \arg\max_c \mathrm{count}(c) \) across the \(M\) models' labels.
  • Soft voting: for each sample \(i\), compute the mean probabilities \( \bar{p}_c = \frac{1}{M}\sum_{m=1}^{M} p_{m,c} \) and return \( \arg\max_c \bar{p}_c \).
  • If there’s a tie in hard voting, break ties by choosing the smallest class index.
  • Don’t use any prebuilt ensemble/voting utilities (e.g., scikit-learn voting classifiers); use only NumPy and Python built-ins.
  • Return a tuple of two 1D NumPy arrays (integers).
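A minimal sketch that satisfies these rules, assuming the shapes above and reusing the assumed name `ensemble_vote`:

```python
import numpy as np

def ensemble_vote(pred_labels: np.ndarray, pred_probas: np.ndarray) -> tuple:
    labels = np.asarray(pred_labels)    # (num_models, num_samples)
    probas = np.asarray(pred_probas)    # (num_models, num_samples, num_classes)
    num_classes = probas.shape[2]

    # Hard voting: per-sample vote counts via bincount; np.argmax returns
    # the smallest class index among ties, matching the tie-break rule.
    hard_preds = np.array([
        np.argmax(np.bincount(labels[:, i], minlength=num_classes))
        for i in range(labels.shape[1])
    ], dtype=int)

    # Soft voting: average probabilities across models, then argmax per sample.
    soft_preds = np.argmax(probas.mean(axis=0), axis=1).astype(int)

    return hard_preds, soft_preds
```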

Example

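An illustrative input with 3 models, 2 samples, and 3 classes (these values are made up for demonstration):

```python
import numpy as np

pred_labels = np.array([
    [0, 1],   # model 0's predicted class for samples 0 and 1
    [2, 1],   # model 1
    [0, 2],   # model 2
])

pred_probas = np.array([
    [[0.6, 0.1, 0.3], [0.10, 0.50, 0.40]],  # model 0
    [[0.3, 0.1, 0.6], [0.15, 0.45, 0.40]],  # model 1
    [[0.5, 0.2, 0.3], [0.00, 0.10, 0.90]],  # model 2
])

hard_preds, soft_preds = ensemble_vote(pred_labels, pred_probas)
```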

Output:

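For the illustrative input above, hard voting follows the 2-to-1 majorities (class 0 for sample 0, class 1 for sample 1), while soft voting flips sample 1 to class 2 because model 2's 0.90 dominates the averaged probabilities:

```python
(array([0, 1]), array([0, 2]))
```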

Input Signature

  Argument       Type
  pred_labels    np.ndarray
  pred_probas    np.ndarray

Output Signature

  Return Name    Type
  value          tuple

Constraints

  • Only NumPy + Python built-ins
  • Tie: pick smallest class index
  • Return a tuple of np.ndarrays

Hint 1

For hard voting, process one sample at a time and count how many models voted for each class; pick the class with the largest count.
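For example, a tally for a single sample using only built-ins (the votes here are illustrative):

```python
votes = [0, 2, 0]                      # each model's label for one sample
counts = {}
for v in votes:
    counts[v] = counts.get(v, 0) + 1   # tally votes per class
# Most votes wins; on a tie, -c prefers the smaller class index.
winner = max(counts, key=lambda c: (counts[c], -c))  # -> 0
```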

Hint 2

To handle the tie-break, build a count array indexed by class (size = num_classes) and use argmax, which returns the smallest index among ties.
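A sketch of that count-array idea; np.argmax scans left to right, so the tie between classes 0 and 1 resolves to 0:

```python
import numpy as np

votes = np.array([1, 0, 0, 1])            # two-way tie between classes 0 and 1
counts = np.bincount(votes, minlength=3)  # -> array([2, 2, 0])
print(np.argmax(counts))                  # 0: smallest index wins the tie
```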

Hint 3

For soft voting, convert pred_probas to a NumPy array and compute mean_probas = probas.mean(axis=0) (shape (num_samples, num_classes)), then argmax across classes.
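A sketch of that step with a toy (2 models, 2 samples, 2 classes) input:

```python
import numpy as np

pred_probas = [  # (num_models, num_samples, num_classes)
    [[0.6, 0.4], [0.2, 0.8]],
    [[0.5, 0.5], [0.7, 0.3]],
]
probas = np.asarray(pred_probas)
mean_probas = probas.mean(axis=0)            # shape (num_samples, num_classes)
soft_preds = np.argmax(mean_probas, axis=1)  # -> array([0, 1])
```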

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
ensemble-voting
numpy
multiclass-classification
argmax-tie-breaking