Compare hard vs. soft voting in an ensemble to see how combining multiple classifiers can change final predictions. You’ll implement both voting strategies from a list of model outputs and return the final predicted class for each sample.
Implement the function
Arguments:

| Argument | Type |
|---|---|
| pred_labels | np.ndarray |
| pred_probas | np.ndarray |

Output:

| Return Name | Type |
|---|---|
| value | tuple |

Rules:
- Only NumPy and Python built-ins.
- On a tie, pick the smallest class index.
- Return a tuple of np.ndarrays.
For hard voting, process one sample at a time and count how many models voted for each class; pick the class with the largest count.
To handle the tie-break, build a count array indexed by class (size = num_classes) and take its argmax, which returns the smallest index among tied counts.
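A minimal sketch of the hard-voting step described above, assuming `pred_labels` has shape `(num_models, num_samples)` with integer class votes (the helper name `hard_vote` and the explicit `num_classes` parameter are assumptions for illustration):

```python
import numpy as np

def hard_vote(pred_labels, num_classes):
    """Majority vote per sample; ties resolve to the smallest class index."""
    num_models, num_samples = pred_labels.shape
    final = np.empty(num_samples, dtype=int)
    for s in range(num_samples):
        # Count votes per class for this sample; minlength keeps the
        # count array sized num_classes even if high classes got no votes.
        counts = np.bincount(pred_labels[:, s], minlength=num_classes)
        # argmax returns the first (smallest) index among tied maxima.
        final[s] = counts.argmax()
    return final
```

With three models all voting differently, `counts` is all ones and `argmax` falls back to class 0, matching the tie rule.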
For soft voting, convert pred_probas to a NumPy array probas and compute mean_probas = probas.mean(axis=0) (shape (num_samples, num_classes)), then take the argmax across classes.
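The soft-voting step can be sketched the same way, assuming `pred_probas` has shape `(num_models, num_samples, num_classes)` (the helper name `soft_vote` is an assumption):

```python
import numpy as np

def soft_vote(pred_probas):
    """Average class probabilities across models, then pick the top class."""
    probas = np.asarray(pred_probas)        # (num_models, num_samples, num_classes)
    mean_probas = probas.mean(axis=0)       # (num_samples, num_classes)
    # argmax over the class axis; ties resolve to the smallest class index.
    return mean_probas.argmax(axis=1)
```

Because `argmax` scans left to right, a sample whose averaged probabilities tie exactly still resolves to the smallest class index, consistent with the tie rule above.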