Transfer learning lets you reuse a pre-trained neural network and adapt it to a new task by training only part of the model. In this problem, you'll implement layer freezing for a simple MLP so that only the last k layers update during one gradient step.
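Concretely, for a network with L layers indexed 0 through L - 1, the frozen update described by the rules below amounts to:

$$
W_i \leftarrow \begin{cases} W_i, & i < L - k \quad \text{(frozen)} \\ W_i - \text{lr} \cdot G_i, & i \ge L - k \quad \text{(trainable)} \end{cases}
$$

where $W_i$ is weights[i] and $G_i$ is grads[i].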
Implement a function that takes the arguments listed below and returns the updated weights.
Rules:

- weights and grads are lists of length L, where L is the number of layers.
- weights[i] is the weight matrix (a list of lists) for layer i.

Output: the updated weights, in the same format as weights.
| Argument | Type |
|---|---|
| k | int |
| lr | float |
| grads | list |
| weights | list |
| Return Name | Type |
|---|---|
| value | list |
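For reference, a Python stub matching these tables might look like the following. The problem statement doesn't fix a function name, so update_last_k_layers is only a placeholder, as is the parameter order:

```python
from typing import List

Matrix = List[List[float]]  # one layer's weight (or gradient) matrix

# Placeholder name and parameter order -- only the argument names
# (weights, grads, lr, k) and the list return type come from the problem.
def update_last_k_layers(
    weights: List[Matrix],  # weights[i] is the matrix for layer i
    grads: List[Matrix],    # grads[i] has the same shape as weights[i]
    lr: float,              # learning rate
    k: int,                 # number of trailing layers to update
) -> List[Matrix]:
    """Return a new weights list; only the last k layers change."""
    raise NotImplementedError
```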
Requirements:

- Use NumPy for the math.
- Input and output are lists of lists.
- Do not modify the input in place.
- The first L - k layers should be copied as-is.
- The last k layers should be updated using weights[i] - lr * grads[i] (see the sketch below).

Hint: use np.array(weights[i]) to convert each layer to NumPy for easy subtraction, then convert back to a list with .tolist() if you want.
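Putting the requirements and the hint together, a minimal sketch (using the same placeholder name as in the stub above) could look like this:

```python
import numpy as np

def update_last_k_layers(weights, grads, lr, k):
    L = len(weights)
    updated = []
    for i in range(L):
        if i < L - k:
            # Frozen layer: copy as-is, row by row, so the result
            # doesn't share list objects with the input.
            updated.append([row[:] for row in weights[i]])
        else:
            # Trainable layer: one gradient step, weights[i] - lr * grads[i].
            new_w = np.array(weights[i]) - lr * np.array(grads[i])
            updated.append(new_w.tolist())
    return updated
```

For example, with two layers and k = 1, only the second layer moves:

```python
weights = [[[1.0, 2.0], [3.0, 4.0]], [[0.5, 0.5], [0.5, 0.5]]]
grads   = [[[0.1, 0.1], [0.1, 0.1]], [[0.2, 0.2], [0.2, 0.2]]]

print(update_last_k_layers(weights, grads, lr=0.1, k=1))
# [[[1.0, 2.0], [3.0, 4.0]], [[0.48, 0.48], [0.48, 0.48]]]  (up to float rounding)
```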