Linear Algebra Interview Questions

Dan Lee, Data & AI Lead
Last updated March 13, 2026

Linear algebra questions dominate quantitative researcher interviews at top-tier firms like Jane Street, Citadel, Two Sigma, and DE Shaw. These aren't abstract math problems: interviewers want to see how you'll handle covariance matrices with 500 assets, diagnose numerical instability in real-time pricing engines, and debug factor models when eigenvalues go negative after a data update.

What makes linear algebra interviews brutal is the jump from textbook theory to messy financial reality. You might nail the definition of positive definiteness, then get stumped when asked why your mean-variance optimizer is producing absurd portfolio weights. Or you'll correctly compute an eigenvalue by hand, but miss that a condition number of 10^15 makes your colleague's 'small residual' completely meaningless.

Here are the top 32 linear algebra questions organized by mathematical concept, from core matrix operations to real-world applications in portfolio optimization and factor modeling.



Vectors, Matrices & Core Operations

Interviewers start with vectors and matrices to separate candidates who memorized formulas from those who truly understand the geometry. Most candidates fail because they can't connect computational steps to geometric intuition, especially when asked to construct examples or explain what properties mean in practice.

The key insight here is that matrix multiplication creates dependencies between spaces: when AB = 0, the column space of B must live entirely in the null space of A. Master this connection between algebraic operations and geometric relationships, because interviewers will push you to explain not just how to compute, but why the computation matters.


Before anything else, interviewers will probe whether you truly understand the mechanics of vector spaces, matrix multiplication, rank, and null spaces. You might be surprised how often candidates stumble when asked to explain why matrix multiplication is not commutative or to compute a projection by hand under time pressure.

Given two 3x3 matrices A and B where AB = 0 but neither A nor B is the zero matrix, what can you conclude about the rank and null space of A and B? Construct a concrete example.

Jane Street · Medium · Vectors, Matrices & Core Operations

Sample Answer

Most candidates default to assuming AB = 0 implies A = 0 or B = 0, but that reasoning fails here because the ring of matrices is not an integral domain. The key insight is that the columns of $B$ must lie in the null space of $A$, so $\text{rank}(A) < 3$ and $\text{nullity}(A) \geq \text{rank}(B)$; equivalently, $\text{rank}(A) + \text{rank}(B) \leq 3$ by rank-nullity. A concrete example: let $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. Here $\text{rank}(A) = 1$, $\text{nullity}(A) = 2$, and the two nonzero columns of $B$ span a subspace of $\text{null}(A)$, which confirms the relationship $\text{rank}(B) \leq \text{nullity}(A)$.
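A quick NumPy sanity check of this construction (an illustrative sketch, not part of the original answer):

```python
import numpy as np

# The concrete example from the answer: AB = 0 with A, B both nonzero.
A = np.array([[1, 0, 0],
              [0, 0, 0],
              [0, 0, 0]], dtype=float)
B = np.array([[0, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)

assert not np.any(A @ B)             # AB is the zero matrix
rank_A = np.linalg.matrix_rank(A)    # 1
rank_B = np.linalg.matrix_rank(B)    # 2
nullity_A = 3 - rank_A               # rank-nullity theorem
assert rank_B <= nullity_A           # columns of B live in null(A)
```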


Systems of Linear Equations & Matrix Inverses

Questions about solving linear systems reveal whether you understand the difference between mathematical existence and numerical stability. Candidates consistently stumble when interviewers introduce conditioning problems or ask about computational trade-offs in realistic scenarios with thousands of equations.

Here's what separates strong candidates: recognizing that residual size and solution quality are completely different things when matrices are poorly conditioned. A tiny residual can hide massive errors in your solution, which is why experienced quants never trust solutions from ill-conditioned systems without additional verification.


Firms like Jane Street and Citadel love asking you to solve or reason about linear systems, especially when the system is underdetermined or ill-conditioned. This section tests your ability to connect concepts like invertibility, determinants, and conditioning to practical scenarios where numerical stability matters.

You have a linear system $Ax = b$ where $A$ is a 100x100 matrix with a condition number of $10^{15}$. Your colleague says the solution looks fine because the residual $\|Ax - b\|$ is small. Why should you be skeptical?

Jane Street · Medium · Systems of Linear Equations & Matrix Inverses

Sample Answer

A small residual does not guarantee an accurate solution when the matrix is ill-conditioned. The relative error in $x$ can be amplified by the condition number: $\frac{\|\delta x\|}{\|x\|} \leq \kappa(A) \frac{\|\delta b\|}{\|b\|}$, so with $\kappa(A) \approx 10^{15}$, even rounding errors at machine epsilon ($\approx 10^{-16}$) can produce a solution with no correct digits. You should check the forward error directly or use a more stable formulation, such as regularization or iterative refinement, rather than trusting the residual alone.
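The amplification is easy to reproduce. The sketch below uses a hypothetical setup (a random matrix constructed to have singular values spanning 15 orders of magnitude, so $\kappa(A) \approx 10^{15}$) to show a residual near machine epsilon coexisting with a large forward error:

```python
import numpy as np

# Illustrative setup (not from the article): build a 12x12 matrix whose
# singular values span 15 orders of magnitude, so kappa(A) ~ 1e15.
rng = np.random.default_rng(0)
n = 12
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -15, n)) @ V.T

x_true = np.ones(n)
b = A @ x_true
x_hat = np.linalg.solve(A, b)    # backward stable, so the residual is tiny

residual = np.linalg.norm(A @ x_hat - b)
error = np.linalg.norm(x_hat - x_true)
print(f"kappa(A) ~ {np.linalg.cond(A):.1e}")
print(f"residual = {residual:.1e}, forward error = {error:.1e}")
```

The residual lands near $10^{-15}$ while the forward error is many orders of magnitude larger, exactly the gap the condition-number bound predicts.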


Eigenvalues & Eigenvectors

Eigenvalue questions test your ability to extract financial meaning from mathematical structure, particularly with covariance matrices and Markov chains. The biggest mistake candidates make is treating eigenvalues as pure numbers rather than understanding what they reveal about risk concentration, dimensionality, and long-term behavior.

Smart interviewers focus on edge cases: what happens when eigenvalues are zero, negative, or repeated? These aren't pathological cases in finance, they're signals that your data has rank deficiencies, your model assumptions are violated, or your risk estimates are unstable.


Understanding eigenvalues and eigenvectors is non-negotiable for quant roles, yet many candidates can only recite definitions without applying them. You will face questions that require you to interpret eigenvalues in the context of covariance matrices, stability analysis, or Markov chains, so be ready to go well beyond textbook computation.

You have a 2x2 covariance matrix of daily returns for two correlated assets. Without computing anything, what do the eigenvalues tell you about the portfolio, and what happens to the smaller eigenvalue as the correlation approaches 1?

AQR · Easy · Eigenvalues & Eigenvectors

Sample Answer

You could interpret the eigenvalues as variances along the principal axes or think of them as scaling factors of the matrix. The first interpretation wins here because it directly maps to portfolio risk: the eigenvalues of a covariance matrix give you the variance of returns along each eigenvector (principal component) direction. The larger eigenvalue captures the dominant risk factor, while the smaller one captures the residual independent risk. As correlation approaches 1, the two assets become linearly dependent, the matrix approaches rank 1, and the smaller eigenvalue approaches 0, meaning nearly all portfolio variance is explained by a single factor.
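For unit-variance assets this is easy to verify numerically, since the correlation matrix $\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$ has eigenvalues $1 \pm \rho$ (a minimal check, not part of the sample answer):

```python
import numpy as np

# Correlation matrix of two unit-variance assets: eigenvalues are 1 ± rho,
# so the smaller one collapses to 0 as the correlation approaches 1.
for rho in (0.0, 0.5, 0.9, 0.999):
    C = np.array([[1.0, rho], [rho, 1.0]])
    lam = np.sort(np.linalg.eigvalsh(C))    # ascending order
    assert np.allclose(lam, [1 - rho, 1 + rho])
    print(f"rho = {rho}: eigenvalues = {lam}")
```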


Matrix Decompositions

Matrix decomposition questions separate candidates who can choose the right tool for each job from those who apply SVD to everything. Interviewers want to see that you understand computational cost, numerical stability, and which decompositions preserve which properties under different conditions.

The critical insight is that decomposition choice depends heavily on what you do next: if you're solving the same system repeatedly with different right-hand sides, LU factorization wins. If you need robust solutions with rank-deficient matrices, SVD is your only reliable option. Know when each decomposition breaks down.


Interviewers at Two Sigma, DE Shaw, and similar firms frequently test whether you can distinguish between SVD, Cholesky, QR, and LU decompositions and know when each one is appropriate. You need to articulate not just the factorizations themselves but also their computational costs, numerical properties, and real-world use cases in portfolio optimization or regression.

You have a covariance matrix of asset returns and need to simulate correlated random samples for a Monte Carlo risk engine. Which decomposition do you use, and what happens if the covariance matrix is only positive semi-definite rather than positive definite?

Two Sigma · Medium · Matrix Decompositions

Sample Answer

Reason through it: You need a matrix $L$ such that $\Sigma = LL^T$, so you can generate correlated samples as $L z$ where $z \sim N(0, I)$. The natural choice is Cholesky decomposition because it costs about $n^3/3$ flops, roughly half the cost of a general LU, and it directly exploits the symmetry and positive definiteness of $\Sigma$. Now, if $\Sigma$ is only positive semi-definite, Cholesky will fail because you will hit a zero or negative pivot under the square root during factorization. In that case, you fall back to the eigendecomposition $\Sigma = Q \Lambda Q^T$, zero out or clamp tiny negative eigenvalues, and form $L = Q \Lambda^{1/2}$. This is a common practical issue when your number of assets exceeds your number of return observations.
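One way to sketch this try-Cholesky-then-fall-back logic in NumPy (the helper name `correlated_sample_factor` is hypothetical, not a library API):

```python
import numpy as np

def correlated_sample_factor(Sigma):
    """Return L with Sigma ≈ L @ L.T for simulating correlated normals.

    Tries Cholesky first; falls back to an eigendecomposition with
    negative eigenvalues clamped at zero when Sigma is only PSD.
    """
    try:
        return np.linalg.cholesky(Sigma)
    except np.linalg.LinAlgError:
        lam, Q = np.linalg.eigh(Sigma)
        lam = np.clip(lam, 0.0, None)    # clamp tiny negatives to 0
        return Q * np.sqrt(lam)          # same as Q @ diag(sqrt(lam))

# Rank-1 (perfectly correlated) covariance: plain Cholesky rejects it,
# but the eigendecomposition fallback still produces a valid factor.
Sigma = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
L = correlated_sample_factor(Sigma)
assert np.allclose(L @ L.T, Sigma)
```

Correlated draws are then `L @ rng.standard_normal(n)` inside the Monte Carlo loop.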


Linear Transformations & Quadratic Forms

Linear transformations and quadratic forms questions probe your geometric intuition about how matrices reshape space and preserve or destroy properties. Candidates often get lost because they try to compute everything instead of visualizing what the transformation actually does to vectors.

Pay special attention to quadratic forms with covariance matrices: when eigenvalues go negative, your optimization problem becomes non-convex and standard mean-variance approaches fail catastrophically. This isn't a theoretical curiosity, it's a daily reality when working with estimated covariance matrices that have sampling noise.


This section challenges you to think geometrically about what matrices do to spaces: rotations, reflections, projections, and changes of basis. Candidates often struggle here because quant interviews expect you to connect abstract transformation properties to concrete problems like positive definiteness checks in risk models or orthogonal projections in least squares estimation.

You have a covariance matrix for a portfolio of 500 assets, and after a data update one eigenvalue comes back slightly negative. Your PM asks if the matrix is still usable for mean-variance optimization. What do you tell them, and how do you fix it?

Two Sigma · Medium · Linear Transformations & Quadratic Forms

Sample Answer

This question is checking whether you can connect positive definiteness to the practical requirement that portfolio variance $\mathbf{w}^T \Sigma \mathbf{w} > 0$ for all nonzero weight vectors. A negative eigenvalue means the quadratic form can go negative, so your optimizer could produce a portfolio with 'negative variance,' which is nonsensical and will blow up your risk estimates. The standard fix is spectral clipping: decompose $\Sigma = Q \Lambda Q^T$, replace any negative eigenvalues in $\Lambda$ with a small positive floor (or zero), and reconstruct. You should mention that this is equivalent to projecting onto the cone of positive semidefinite matrices in the Frobenius norm, and note the tradeoff that aggressive clipping distorts correlations.
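Spectral clipping fits in a few lines of NumPy. This is an illustrative sketch (`clip_to_psd` is a hypothetical name), using a small indefinite matrix whose inconsistent correlations produce a negative eigenvalue:

```python
import numpy as np

def clip_to_psd(Sigma, floor=0.0):
    """Project a symmetric matrix onto the PSD cone by spectral clipping.

    With floor=0 this yields the Frobenius-norm-nearest PSD matrix
    to a symmetric Sigma. Sketch, not production code.
    """
    Sigma = (Sigma + Sigma.T) / 2    # symmetrize first
    lam, Q = np.linalg.eigh(Sigma)
    return Q @ np.diag(np.maximum(lam, floor)) @ Q.T

# A 'covariance' whose inconsistent correlations make it indefinite.
S = np.array([[ 1.0,  0.9, -0.9],
              [ 0.9,  1.0,  0.9],
              [-0.9,  0.9,  1.0]])
assert np.linalg.eigvalsh(S).min() < 0       # indefinite before clipping
S_psd = clip_to_psd(S)
assert np.linalg.eigvalsh(S_psd).min() >= -1e-10
```

Note that clipping perturbs the diagonal, so for a correlation matrix production code would typically rescale back to unit variances afterward.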


Applications in Finance & Machine Learning

Application questions are where mathematical rubber meets financial road, testing whether you can diagnose real problems using linear algebra tools. The trap here is focusing on perfect textbook scenarios instead of messy realities like overfitted covariance matrices, rank-deficient data, and optimization problems that blow up.

Successful candidates think like debugging engineers: when portfolio weights are extreme, they immediately check condition numbers and eigenvalue spreads. When PCA gives unexpected results, they examine the data matrix rank and time period stability. Learn to use linear algebra as a diagnostic toolkit, not just a computational engine.


Knowing the theory is only half the battle: top firms want to see you apply linear algebra to PCA for dimensionality reduction, factor models, mean-variance optimization, and regression diagnostics. You should be prepared to walk through how spectral properties of a covariance matrix drive portfolio construction or how low-rank approximations improve signal extraction from noisy financial data.

You have a covariance matrix estimated from 500 daily returns of 200 assets. Walk me through how you would use PCA to build a factor model, and explain why the raw eigenvalues might mislead you here.

Two Sigma · Medium · Applications in Finance & Machine Learning

Sample Answer

The standard move is to eigendecompose the sample covariance matrix $\hat{\Sigma}$ and retain the top $k$ eigenvectors as factor loadings, choosing $k$ where the eigenvalue scree flattens. With $T = 500$ observations and $N = 200$ assets you have $T > N$, so $\hat{\Sigma}$ is full rank, but the ratio $T/N = 2.5$ is still small enough that Marchenko-Pastur theory tells you the bulk of your eigenvalues are inflated by noise. You should compare your empirical eigenvalue distribution against the Marchenko-Pastur bound $\lambda_{+} = \sigma^2(1 + \sqrt{N/T})^2$ and only trust eigenvalues that exceed it. In practice this means you might keep 5 to 15 factors instead of the 50 that naive variance-explained thresholds would suggest, and you should consider shrinkage estimators like Ledoit-Wolf to regularize the covariance before downstream portfolio construction.
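The Marchenko-Pastur edge takes one line to compute. This hypothetical pure-noise experiment (iid returns with $\sigma^2 = 1$, same $T$ and $N$ as the question) shows that with $T/N = 2.5$, essentially every eigenvalue of a noise-only sample covariance sits below $\lambda_+$, so anything above it is a candidate signal:

```python
import numpy as np

# Pure-noise baseline: at T/N = 2.5 the sample eigenvalues spread widely
# even with no structure, but stay (almost surely) below the MP edge.
rng = np.random.default_rng(42)
T, N = 500, 200                         # observations x assets
X = rng.standard_normal((T, N))         # iid noise, sigma^2 = 1
lam = np.linalg.eigvalsh(X.T @ X / T)   # sample covariance spectrum

mp_edge = (1 + np.sqrt(N / T)) ** 2     # lambda_+ for sigma^2 = 1
n_above = int((lam > mp_edge).sum())
print(f"MP edge = {mp_edge:.3f}, max eigenvalue = {lam.max():.3f}, "
      f"eigenvalues above edge: {n_above}")
```

With real returns you would plug in an estimate of the noise variance and count how many empirical eigenvalues clear the edge; that count is a principled choice of $k$.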


How to Prepare for Linear Algebra Interviews

Visualize Matrix Operations Geometrically

Draw simple 2D examples for every matrix concept you study. When you see AB = 0, sketch how B's columns must lie in A's null space. This geometric intuition will save you when interviewers ask for examples or explanations under pressure.

Practice Constructing Concrete Examples

For every theorem or property, build a small numerical example that demonstrates it. If you claim a matrix is rank-deficient, construct a 2x2 example with specific numbers. Interviewers love asking 'show me an example' to test real understanding.

Connect Every Concept to Numerical Stability

Ask yourself: when does this computation become unreliable? Study condition numbers, iterative refinement, and how small perturbations in data can destroy solutions. Financial data is always noisy, so numerical robustness matters more than theoretical elegance.

Master the Economics of Matrix Computations

Know the operation counts for different approaches: O(n³) for matrix inversion, O(n²k) for k right-hand sides with pre-computed LU factorization. In high-frequency environments, algorithmic complexity directly impacts profitability.
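A concrete illustration of amortizing the factorization: NumPy's `solve` accepts a matrix right-hand side, factoring $A$ once via LAPACK and back-substituting for every column, rather than refactoring per solve.

```python
import numpy as np

# Batching k right-hand sides into one call costs O(n^3 + k n^2):
# one LU factorization, then k cheap triangular back-substitutions.
rng = np.random.default_rng(1)
n, k = 50, 200
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, k))

X = np.linalg.solve(A, B)    # one factorization, k triangular solves
assert np.allclose(A @ X, B)
```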

Build Intuition for Degenerate Cases

Spend extra time on rank-deficient matrices, repeated eigenvalues, and nearly-singular systems. These edge cases appear constantly in real financial data, and interviewers use them to separate candidates who only know the happy path from those ready for production systems.


Frequently Asked Questions

How deep does my linear algebra knowledge need to be for a Quantitative Researcher interview?

You should be comfortable well beyond introductory coursework. Expect questions on eigendecomposition, singular value decomposition, positive definiteness, matrix calculus, and numerical stability. Firms often probe your intuition around why certain decompositions matter in practice, such as how PCA relates to the spectral theorem or why condition numbers affect regression results.

Which companies ask the most linear algebra questions for Quantitative Researcher roles?

Top quantitative trading firms like Jane Street, Two Sigma, Citadel, DE Shaw, and Jump Trading are well known for asking rigorous linear algebra questions. Renaissance Technologies and Hudson River Trading also emphasize mathematical fundamentals heavily. Even large banks with quantitative research desks will test core linear algebra, though typically with less depth than dedicated quant firms.

Will I need to code linear algebra solutions during the interview, or is it purely theoretical?

Many Quantitative Researcher interviews blend theory with implementation. You may be asked to code a matrix decomposition, implement least squares from scratch, or debug a numerical routine in Python or C++. Practicing both pen-and-paper derivations and coding implementations will prepare you for either format. You can sharpen your coding skills with problems at datainterview.com/coding.

How do linear algebra interview questions differ across Quantitative Researcher sub-roles?

For alpha research roles, expect questions tied to dimensionality reduction, covariance estimation, and factor models. For roles closer to statistical modeling or machine learning, you will see more emphasis on kernel methods, matrix calculus, and optimization. Execution-focused quant roles may lean toward numerical linear algebra topics like sparse solvers and iterative methods.

How should I prepare for linear algebra interviews if I lack real-world quantitative research experience?

Start by deeply reviewing a rigorous textbook such as Strang's 'Linear Algebra and Its Applications' or Axler's 'Linear Algebra Done Right,' then connect each concept to practical applications like portfolio optimization or PCA. Work through applied problems that simulate real scenarios, and practice explaining your reasoning out loud. You can find targeted interview questions at datainterview.com/questions to bridge the gap between theory and application.

What are the most common mistakes candidates make on linear algebra interview questions?

The biggest mistake is memorizing formulas without understanding geometric or statistical intuition. For example, many candidates can state the SVD formula but cannot explain what the singular vectors represent in a data context. Other common errors include confusing rank with dimension, ignoring numerical stability when discussing algorithms, and failing to connect linear algebra concepts to the firm's actual problems like risk decomposition or signal extraction.


Written by

Dan Lee

Data & AI Lead

Dan is a seasoned data scientist and ML coach with 10+ years of experience at Google, PayPal, and startups. He has helped candidates land top-paying roles and offers personalized guidance to accelerate your data career.
