Design an early stopping check that decides when to halt training based on validation loss over epochs. You’ll implement a small utility that returns the first epoch where training should stop and the epoch with the best validation loss.
An epoch `t` is considered an improvement if its validation loss beats the best loss seen so far by more than `min_delta`, i.e. `val_losses[t] < best_loss - min_delta`.
Implement a function that takes the arguments below and returns a tuple.

Rules:

- An epoch counts as an improvement only if its loss is below the best loss so far by more than `min_delta`.
- Training stops after `patience` consecutive epochs without improvement.
- Report `stop_epoch` as the epoch where the patience counter first reaches `patience`.

Input:

| Argument | Type |
|---|---|
| patience | int |
| min_delta | float |
| val_losses | np.ndarray |

Output:

| Return Name | Type |
|---|---|
| value | tuple |
- Single pass over `val_losses` (O(n))
- Use NumPy arrays for input
- Return `(-1, best_epoch)` if training never stops
- Keep three variables while scanning epochs: `best_loss`, `best_epoch`, and `no_improve` (the count of consecutive non-improving epochs).
- An epoch counts as an improvement only if `loss < best_loss - min_delta`; on improvement reset `no_improve = 0`, otherwise increment it.
- Early stopping triggers at the first epoch where `no_improve` reaches `patience`; set `stop_epoch` to that epoch and break. Because only strict improvements update `best_loss`, ties keep the earliest `best_epoch`.
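The hints above can be sketched as follows. This is one possible solution, not the official one; the function name `early_stopping` and the choice to initialize `best_loss` to infinity (so epoch 0 always counts as an improvement) are assumptions not fixed by the problem statement.

```python
import numpy as np


def early_stopping(patience: int, min_delta: float,
                   val_losses: np.ndarray) -> tuple[int, int]:
    """Return (stop_epoch, best_epoch); stop_epoch is -1 if training never stops."""
    best_loss = np.inf   # assumption: inf start makes epoch 0 an improvement
    best_epoch = -1
    no_improve = 0       # consecutive epochs without improvement
    stop_epoch = -1

    # Single pass over the losses: O(n)
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss - min_delta:
            # Strict improvement: record it and reset the patience counter
            best_loss = loss
            best_epoch = epoch
            no_improve = 0
        else:
            no_improve += 1
            if no_improve >= patience:
                # Patience exhausted: stop at the current epoch
                stop_epoch = epoch
                break

    return stop_epoch, best_epoch
```

For example, with `patience=2`, `min_delta=0.0`, and losses `[0.5, 0.4, 0.42, 0.41, 0.43]`, epochs 2 and 3 fail to improve on the best loss `0.4`, so the counter reaches 2 at epoch 3 and the function returns `(3, 1)`.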