Compute perplexity for a simple NLP language model. Perplexity measures how surprised the model is by a sequence of tokens. You'll be given the token-level probabilities the model assigned to the true next token at each position and should return a single perplexity score.

Implement the function to satisfy the rules below.
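A possible starter signature (the original stub isn't shown, so the name `compute_perplexity` is an assumption):

```python
import numpy as np

def compute_perplexity(next_token_probs: np.ndarray) -> float:
    # next_token_probs: probability the model assigned to the true
    # next token at each position in the sequence
    ...
```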
Rules:

- Use the natural logarithm (np.log) and the natural exponential (np.exp).
- Return a single float, not a list.
- Do not use pretrained NLP evaluation utilities.

Input:
| Argument | Type |
|---|---|
| next_token_probs | np.ndarray |
Output:

| Return Name | Type |
|---|---|
| value | float |
Hints:

- Perplexity is the exponential of the average negative log probability: compute mean(-log(p_t)) over the array, then exponentiate it (see the sketch after this list).
- Use np.mean on np.log(next_token_probs).
- Watch edge cases: each p_t must lie in (0, 1], since log(0) diverges.
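A minimal sketch putting the hints together, reusing the assumed name `compute_perplexity` from the stub above (the validation style is also an assumption, not part of the prompt):

```python
import numpy as np

def compute_perplexity(next_token_probs: np.ndarray) -> float:
    # Edge case from the hints: every p_t must lie in (0, 1],
    # otherwise np.log produces -inf or nan.
    if np.any(next_token_probs <= 0) or np.any(next_token_probs > 1):
        raise ValueError("next_token_probs must lie in (0, 1]")
    # Perplexity = exp(mean(-log(p_t))), using natural log/exp as required.
    avg_neg_log_prob = -np.mean(np.log(next_token_probs))
    return float(np.exp(avg_neg_log_prob))  # a single float, not a list
```

For example, probabilities `[0.5, 0.25, 0.25]` give exp(mean(-log p)) = 32 ** (1/3) ≈ 3.1748, i.e. the model is about as surprised as if it were choosing uniformly among roughly 3.17 tokens.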