

172. Transfer learning layer freezing

Difficulty: easy · Company: General · Level: senior

Transfer learning lets you reuse a pre-trained neural network and adapt it to a new task by training only part of the model. In this problem, you'll implement layer freezing for a simple MLP so that only the last k layers update during one gradient step.

Requirements

Implement the function described by the Input Signature and Output Signature tables below.
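A plausible stub is sketched here; the name freeze_and_update and the argument order are assumptions, while the argument names and types come from the signature tables:

```python
def freeze_and_update(k: int, lr: float, grads: list, weights: list) -> list:
    """Apply one SGD step to the last k layers only; return a new weights list."""
    ...
```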

Rules:

  • weights and grads are lists of length L.
  • Each element weights[i] is a matrix (list of lists) for layer i.
  • Layers 0 to L − k − 1 are frozen (do not update).
  • Layers L − k to L − 1 are trainable (update using W_{new} = W_{old} - \text{lr} \times \text{grad}).
  • Return a new list of weights. Do not modify the input lists in-place.
  • Use NumPy for the matrix arithmetic.

Example

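A small illustrative case (the values below are hypothetical, not the original sample; assume three layers with k = 1 and lr = 0.1):

```python
weights = [[[1.0, 2.0]], [[3.0, 4.0]], [[5.0, 6.0]]]
grads   = [[[0.5, 0.5]], [[0.5, 0.5]], [[1.0, 1.0]]]
k = 1
lr = 0.1

# Layers 0 and 1 are frozen; only layer 2 (the last k = 1 layers) updates:
#   5.0 - 0.1 * 1.0 = 4.9    6.0 - 0.1 * 1.0 = 5.9
# Expected result:
# [[[1.0, 2.0]], [[3.0, 4.0]], [[4.9, 5.9]]]
```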
Input Signature

  Argument   Type
  k          int
  lr         float
  grads      list
  weights    list

Output Signature

  Return Name   Type
  value         list

Constraints

  • Use NumPy for math

  • Input/Output are lists of lists

  • Do not modify input in-place

Hint 1

The first L - k layers should be copied as-is.

Hint 2

The last k layers should be updated using weights[i] - lr * grads[i].

Hint 3

Use np.array(weights[i]) to convert each layer to NumPy for easy subtraction, then convert back to list using .tolist() if you want.
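
Putting the hints together, a minimal sketch of one possible solution (the function name and argument order follow the assumed stub above, not anything specified by the problem):

```python
import numpy as np

def freeze_and_update(k: int, lr: float, grads: list, weights: list) -> list:
    """Return new weights where only the last k layers take one SGD step."""
    L = len(weights)
    new_weights = []
    for i in range(L):
        W = np.array(weights[i], dtype=float)
        if i < L - k:
            # Frozen layer: copy as-is (a fresh list, so the input is not modified).
            new_weights.append(W.tolist())
        else:
            # Trainable layer: W_new = W_old - lr * grad
            G = np.array(grads[i], dtype=float)
            new_weights.append((W - lr * G).tolist())
    return new_weights
```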

Roles
ML Engineer
AI Engineer
Companies
General
Levels
senior
entry
Tags
transfer-learning
layer-freezing
sgd-update
nested-lists