Business case questions are the make-or-break moment in data analyst interviews at top tech companies and consulting firms. Meta will ask you to diagnose why Reels engagement dropped 15% in a week. Amazon wants you to build unit economics for a new Prime benefit. McKinsey expects you to size a market with zero data in 10 minutes. These aren't just analytical puzzles: they test whether you can think like a business owner, not just a data processor.
What makes business cases brutally hard is that there's no single right answer, but there are definitely wrong approaches. You might nail the math but completely miss that your recommendation would cannibalize the company's core revenue stream. Or you could propose a brilliant strategy that's impossible to measure, leaving stakeholders with no way to track success. The worst mistake? Jumping straight into analysis without framing the problem, which signals you don't understand how business decisions actually get made.
Here are the top 27 business case questions organized by the core skills you need to master.
Business Case Interview Questions
Problem Framing and Assumptions
Most candidates fail problem framing questions because they treat them like math problems instead of business conversations. You hear 'engagement is down' and immediately start listing potential causes, when smart interviewers want to see you clarify what engagement actually means, what timeframe matters, and whether this connects to revenue impact.
The key insight here: your first job isn't to solve the problem, it's to define what problem you're actually solving. A Director at Uber told me they automatically reject candidates who dive into analysis without asking what metric defines 'driver earnings' or whether they care about gross earnings, net earnings, or earnings per hour.
Start by turning an ambiguous prompt into a crisp objective, constraints, and success metric. You are tested on structured thinking and assumption hygiene, and you may struggle if you jump into math before aligning on scope and definitions.
Meta asks you: "Engagement is down on Reels this week. What should we do?" Before you analyze data, what objective, metric, and scope questions do you ask to make this a solvable problem?
Sample Answer
Most candidates default to jumping into a dashboard deep dive, but that fails here because you can optimize the wrong outcome or time window. You first pin down the objective, for example restore Reels consumption without hurting creator supply or session quality. You define the primary metric precisely, for example Reels watch time per DAU, plus guardrails like retention, hides, reports, and creator uploads. You align scope and segments, for example which geos, platforms, new versus existing users, and whether the drop is absolute or relative to seasonality and experiments.
At Amazon, you are told: "Prime churn increased last month." What are the minimum assumptions and definitions you need to agree on before proposing an analysis plan?
Uber says: "Drivers are complaining about earnings in Chicago." How do you frame the problem, pick a success metric, and state assumptions so you can diagnose whether this is a perception issue or a real earnings drop?
Airbnb asks: "We want to grow bookings in a new city." What clarifying questions would you ask to convert this into a measurable objective with constraints, including supply, quality, and budget?
BCG gives you: "Fix our food delivery marketplace profitability." What assumptions must you make explicit about unit economics, customer cohorts, and time horizon before you build any model?
Market Sizing and Growth Drivers
Market sizing questions separate strong business thinkers from calculator operators. The challenge isn't getting the exact right number (impossible with limited data), but showing you can break down complex markets into logical, defensible segments.
Your success here depends on choosing the right segmentation approach and pressure testing your own assumptions before the interviewer does. Top candidates segment by behavior or willingness to pay, not just demographics. They also build multiple approaches and triangulate to avoid the classic mistake of getting anchored on one methodology that could be completely wrong.
In market sizing, you show you can estimate demand with defensible decompositions and sanity checks. You are evaluated on whether you can pick the right segmentation and avoid compounding shaky assumptions under time pressure.
Meta is considering launching a paid subscription for creators in the US. Size the annual revenue opportunity in year 1, using a defensible segmentation and at least one sanity check.
Sample Answer
A reasonable year 1 US opportunity is about $300M in subscription revenue. Start with US monthly active creators, segment into professional or semi-pro creators versus casual posters, then estimate the share willing to pay for tools and support. Multiply $\text{paying creators} \times \text{monthly price} \times 12$, for example $2M \times \$12 \times 12 \approx \$288M$. Sanity check by comparing to plausible creator tool spend per creator per year and ensuring adoption is not higher than the share who already monetize.
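The sizing above can be written as a few lines of arithmetic. Every input below is an illustrative assumption chosen to reproduce the roughly $288M figure, not real Meta data:

```python
# Hypothetical year-1 sizing for a US creator subscription.
# All inputs are illustrative assumptions, not real Meta figures.
us_monthly_active_creators = 20_000_000  # assumed US creators posting monthly
serious_creator_share = 0.20             # assumed professional / semi-pro share
willing_to_pay_share = 0.50              # assumed adoption within that segment
monthly_price = 12                       # dollars, assumed price point

paying_creators = (us_monthly_active_creators
                   * serious_creator_share
                   * willing_to_pay_share)
annual_revenue = paying_creators * monthly_price * 12

print(f"Paying creators: {paying_creators:,.0f}")  # 2,000,000
print(f"Year-1 revenue: ${annual_revenue:,.0f}")   # $288,000,000
```

The sanity check from the answer translates to about $144 per paying creator per year, which you would compare against plausible creator tool spend.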
DoorDash wants to estimate the total addressable market for grocery delivery in a mid sized US metro. How would you size monthly orders, and what growth drivers would you isolate first?
Uber is launching in a new international city with limited data. Estimate weekly ride demand, and call out which assumptions you would pressure test first under time pressure.
Amazon is evaluating whether to expand same day delivery to a new region. Size the incremental annual package volume that would shift from one-to-two-day shipping to same day, and identify the primary growth drivers.
Google is exploring a new AI assisted customer support product for small businesses. Size the US market in annual recurring revenue, and explain how you would sanity check your willingness to pay assumptions.
McKinsey is advising an airport on monetizing parking with dynamic pricing. Estimate the annual revenue upside from dynamic pricing, and identify the 2 to 3 growth drivers you would test with data first.
Unit Economics and Revenue Modeling
Unit economics questions test whether you understand how digital businesses actually make money, beyond surface-level metrics. You need to connect user actions to cash flows, accounting for timing, churn, and variable costs that many candidates forget.
The make-or-break skill is knowing which costs are truly variable versus fixed, and how retention curves actually behave over time. I've seen brilliant candidates build beautiful LTV models that assume linear retention, which would bankrupt most subscription businesses within six months.
You will often need to build a simple revenue model from funnels, pricing, take rates, and retention. You can get tripped up by mixing stock and flow metrics, or by not stating how your model ties to measurable product events.
Uber Eats is piloting a new city. You are given 200,000 monthly app sessions, a 6% session-to-order conversion, $32 average basket size, 18% take rate, and $2 average promo per order. Build a monthly net revenue model and call out the key product events that back each input.
Sample Answer
You could model top down from sessions or bottom up from orders. Top down wins here because the interviewer gave you funnel inputs that map cleanly to events: session, checkout, order completed. Compute orders as $200{,}000\times 0.06=12{,}000$, then gross bookings as $12{,}000\times 32=384{,}000$. Platform revenue is $384{,}000\times 0.18=69{,}120$, then net out promos: $12{,}000\times 2=24{,}000$, so net is $69{,}120-24{,}000=45{,}120$. Tie each metric to instrumentation, sessions from app_open, conversion from order_completed divided by sessions, basket from order_subtotal, take rate from fee ledger, promo from applied_promo_amount.
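The funnel arithmetic in the answer is simple enough to lay out as a tiny model; the inputs are exactly the ones stated in the prompt, with each comment noting the instrumentation that backs it:

```python
# Monthly net revenue model for the Uber Eats pilot city.
sessions = 200_000            # from app_open events
session_to_order = 0.06       # order_completed / sessions
avg_basket = 32               # dollars, from order_subtotal
take_rate = 0.18              # from the fee ledger
promo_per_order = 2           # dollars, from applied_promo_amount

orders = sessions * session_to_order            # 12,000
gross_bookings = orders * avg_basket            # $384,000
platform_revenue = gross_bookings * take_rate   # $69,120
promo_spend = orders * promo_per_order          # $24,000
net_revenue = platform_revenue - promo_spend    # $45,120

print(f"Monthly net revenue: ${net_revenue:,.0f}")
```

Writing it this way makes the sensitivity structure obvious: conversion and take rate are the levers worth stress testing first.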
Amazon is considering lowering Prime free shipping threshold from $35 to $25. You are told 1 million weekly browsing sessions, 8% add-to-cart rate, 50% cart-to-purchase rate, $42 current AOV, and a forecast that AOV drops to $36 but purchase rate rises by 10% relative. Estimate the weekly incremental gross profit given 25% gross margin and $1.50 incremental shipping cost per order.
DoorDash wants to estimate LTV per new consumer for a new promo. You are given: $8 contribution margin per order after variable costs, 2.2 orders per active month, 55% month-1 retention to month-2, and then 80% monthly retention thereafter. Build a simple LTV model and state where stock versus flow confusion can break it.
Meta is testing a new ad format in Reels. You have 50 million daily Reels impressions, a 1.2% ad load today, the test increases ad load to 1.5% but decreases time spent so total impressions drop by 4%. If eCPM is $6 and fill rate is 95%, estimate daily revenue impact and name the key measurement pitfalls.
Airbnb sees that adding a new host onboarding step reduces host signup conversion by 5% relative, but increases host activation rate by 12% relative and increases average nights booked per active host by 0.3 per month. Given baseline 20,000 host signup attempts per month, 40% signup conversion, 50% activation, 6 nights per active host per month, $140 ADR, and 14% take rate, estimate monthly revenue impact.
BCG is advising a subscription product that has both monthly and annual plans. You are given: 70% choose monthly at $12, 30% choose annual at $120, monthly churn 6%, annual renewal rate 75%, and payment processing fees of 3% of revenue. Build a 12 month expected revenue per new subscriber model and explain how you would validate each assumption with product events and logs.
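For the DoorDash LTV prompt above, one common sketch treats post-month-2 retention as geometric decay. The code below simply encodes the numbers given in the question; whether retention really stays flat at 80% is the assumption to pressure test:

```python
# Simple LTV per new consumer, using only the prompt's inputs.
# The stock/flow trap: contribution is a flow PER ACTIVE MONTH, so you
# multiply by expected active months, not by calendar months.
margin_per_order = 8        # dollars of contribution per order, given
orders_per_month = 2.2      # orders per active month, given
m1_to_m2_retention = 0.55   # given
ongoing_retention = 0.80    # monthly retention from month 2 on, given

monthly_contribution = margin_per_order * orders_per_month  # 17.6

# Expected active months = 1 (month 1) + 0.55 + 0.55*0.8 + 0.55*0.8^2 + ...
#                        = 1 + 0.55 / (1 - 0.8) = 3.75
expected_active_months = 1 + m1_to_m2_retention / (1 - ongoing_retention)
ltv = monthly_contribution * expected_active_months

print(f"LTV ≈ ${ltv:.2f}")  # prints "LTV ≈ $66.00"
```

Note this sums the geometric tail to infinity; in practice you would cap the horizon at 12 or 24 months and discount.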
Cost Structure and Cost-Benefit Tradeoffs
Cost-benefit tradeoffs reveal whether you can think like an executive making resource allocation decisions under uncertainty. These questions have no clean answers: you're choosing between spending on growth versus retention, short-term gains versus long-term sustainability.
What trips up most candidates is trying to optimize for a single metric instead of acknowledging inherent tradeoffs. Strong answers explicitly call out what you're giving up, quantify uncertainty ranges, and propose experiments to reduce risk rather than pretending you can predict the future with precision.
This section asks you to weigh costs like compute, incentives, support, and ops against incremental benefit. You are assessed on whether you can quantify tradeoffs, surface second order effects, and choose the right time horizon for payback.
At Meta, your feed ranking team wants to add a new feature that increases CTR by 0.2%, but adds 15% more model inference compute per request. How do you decide if it is worth shipping, and what payback horizon do you use?
Sample Answer
Reason through it: you first translate CTR lift into incremental value per impression, for example $$\Delta \text{Value} = \Delta \text{CTR} \times \text{Impressions} \times \text{Value per click}$$ and compare it to incremental compute cost $$\Delta \text{Cost} = \Delta \text{Compute per req} \times \text{Requests} \times \text{Unit cost}$$. You sanity check units and time scale, daily is usually easiest, then you run sensitivity on value per click and the true lift after regression to the mean. Next you surface second order effects: extra latency can reduce CTR, higher spend can force budget cuts elsewhere, and reliability risk may create incident costs. For horizon, you use the model refresh cycle and infra contracts, if you can roll back quickly you can accept a shorter payback, if you need capex or long term capacity reservations you require a longer payback with downside protection.
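A back-of-envelope version of that comparison helps keep units and time scale straight. Every input here is an illustrative assumption rather than a Meta figure, and the 0.2% CTR lift is read as a relative lift:

```python
# Hypothetical daily cost-benefit check for the ranking feature.
# All numbers are illustrative assumptions.
daily_requests = 1_000_000_000      # assumed ranking requests per day
impressions_per_request = 5         # assumed
baseline_ctr = 0.01                 # assumed 1% click-through rate
relative_ctr_lift = 0.002           # the prompt's 0.2%, read as relative
value_per_click = 0.05              # dollars, assumed

delta_value = (daily_requests * impressions_per_request
               * baseline_ctr * relative_ctr_lift * value_per_click)

incremental_compute = 0.15          # 15% more inference per request
compute_cost_per_request = 0.00002  # dollars, assumed fully loaded cost

delta_cost = daily_requests * compute_cost_per_request * incremental_compute

print(f"Daily incremental value: ${delta_value:,.0f}")  # ~$5,000
print(f"Daily incremental cost:  ${delta_cost:,.0f}")   # ~$3,000
print("Positive case" if delta_value > delta_cost else "Needs a stronger case")
```

Rerunning the same calculation across a range of value-per-click and post-launch lift estimates is the sensitivity check the answer describes.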
At Uber, ops proposes increasing driver support headcount by 20% to cut average ticket resolution time from 12 hours to 2 hours. How do you build a cost benefit case and decide if the spend is justified?
At Amazon, a team can either send a 10% off coupon to win back churned customers or invest in a recommendation model that improves repeat purchase by 1% but requires ongoing ML ops. How do you choose between these, and what tradeoffs do you highlight?
At DoorDash, you are considering a new dasher incentive that costs an extra $2 per delivery in a city to reduce late deliveries by 15%. What data do you need to decide if the incentive should run, and how do you avoid overpaying?
At Google, your data pipeline team wants to move a batch job to near real time streaming, increasing infra cost by $400K per year, to reduce metric latency from 24 hours to 5 minutes for a critical product dashboard. When is this worth it, and how do you quantify benefits that are not directly revenue tied?
Strategic Recommendations and Stakeholder Communication
Strategic recommendation questions are where analytical skills meet business judgment. You have complete information but limited time, and you need to distill complex tradeoffs into clear action items that different stakeholders can actually execute.
The crucial insight: your recommendation needs to work for the organization, not just the data. That means acknowledging political realities, resource constraints, and measurement challenges. The best answers include a decision framework that stakeholders can apply to similar decisions in the future, not just a one-off recommendation.
To close the case, you must synthesize analysis into a recommendation, risks, and next steps tailored to executives and cross-functional partners. You may struggle if you present numbers without a decision, or if you fail to anticipate stakeholder objections and measurement plans.
You analyzed an experiment showing a +0.6% lift in conversion but a -1.8% drop in retention for a new onboarding flow. You have 2 minutes with a VP: what do you recommend, and how do you defend it against a growth objection and a product quality objection?
Sample Answer
This question is checking whether you can turn mixed metrics into a decision, handle tradeoffs, and communicate crisply to executives. You should recommend either ship, iterate, or stop, then anchor on the company goal and the metric hierarchy, for example, protect retention if it is a North Star and treat short-term conversion as secondary. Quantify the tradeoff in units leaders care about, for example projected net $\Delta$ active users or $\Delta$ revenue, then state your confidence level and the key assumption driving it. Preempt objections by proposing a mitigation, like ramping by cohort, adding guardrails on retention, and running a follow-up test to isolate the retention drop mechanism.
You built a pricing elasticity model that suggests raising delivery fees by $\$0.50$ increases contribution margin, but churn risk is unclear. How would you present a recommendation to Finance, Product, and Ops, and what decision framework do you use to align them?
An exec asks you to recommend whether to launch a new driver incentive in 3 cities based on a quick analysis that shows shorter ETAs but higher cost per trip. What do you recommend, what risks do you flag, and what is your measurement plan for the first 2 weeks?
Your analysis shows the search ranking change improves click-through rate but increases customer support tickets for wrong matches. You need to write a one-page update for leadership and a separate message for customer support leads. What do you include in each, and how do you tailor the ask?
A partner team disputes your metric definition and says your recommendation would hurt their OKRs. In a live review, how do you de-escalate, reframe the decision, and still land on a next step that keeps the launch on schedule?
How to Prepare for Business Case Interviews
Start Every Case With Definition Questions
Before touching any numbers, ask what success looks like, what timeframe matters, and what constraints you're working within. Practice turning vague prompts like 'growth is slowing' into specific, measurable problems you can actually solve.
Build Multiple Models and Triangulate
For market sizing and revenue modeling, always create at least two different approaches and see if they're in the same ballpark. If your top-down and bottom-up estimates differ by 10x, you've found an assumption that needs pressure testing.
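As a toy illustration of triangulation (all numbers invented), a top-down estimate from population and a bottom-up estimate from the target user base should land within the same order of magnitude:

```python
# Two independent estimates of the same annual market, made-up inputs.
top_down = 330_000_000 * 0.10 * 50  # US population * user share * $/user/yr
bottom_up = 25_000_000 * 60         # estimated target users * $/user/yr

ratio = max(top_down, bottom_up) / min(top_down, bottom_up)
print(f"Top-down ${top_down:,.0f} vs bottom-up ${bottom_up:,.0f}, "
      f"{ratio:.1f}x apart")
if ratio > 10:
    print("A 10x gap means an assumption needs pressure testing")
```

Here the two land about 1.1x apart, which is close enough to present either with stated ranges; a 10x gap sends you back to the assumptions.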
Practice Unit Economics With Real Company Data
Find actual conversion rates, retention curves, and margins from company earnings calls or public S-1 filings. Build models using realistic numbers so you develop intuition for what healthy unit economics actually look like across different business models.
Always Include a Measurement Plan
Every recommendation should end with how you'll know if it's working within 2-4 weeks. Specify leading indicators, sample sizes, and what would make you change course. This shows you think beyond analysis to actual business execution.
Call Out Your Biggest Assumption
In every answer, explicitly state the assumption you're least confident about and how it would change your conclusion. Interviewers love candidates who proactively identify risks rather than defending shaky foundations.
Frequently Asked Questions
How much business depth and domain knowledge do I need for a Data Analyst business case interview?
You usually do not need deep industry expertise; you need structured thinking and clear assumptions. Focus on defining the objective, choosing the right metrics, sizing impact, and outlining a data backed recommendation. If you use domain examples, keep them simple and explain your logic.
Which companies ask business case interview questions most often for Data Analyst roles?
You will see them frequently at product and tech companies, consulting firms, and fast growing startups that want analysts to influence decisions. Big tech, consumer internet, fintech, and marketplaces commonly use case style prompts like diagnosing a metric drop or evaluating a new feature. Hiring managers in analytics heavy teams also use them to test how you turn ambiguous problems into an analysis plan.
Is coding required in a business case interview for a Data Analyst?
Sometimes, but it is usually light and used to validate your approach, for example writing SQL to pull cohorts or compute KPIs. Many cases are primarily about framing the problem, selecting metrics, and interpreting results, with only small snippets of SQL or pseudo code. If you want to practice the coding portions, use datainterview.com/coding.
How do business case interviews differ across Data Analyst and other analytics roles?
For Data Analysts, cases often emphasize KPI definitions, dashboard logic, experiment analysis, and root cause investigation. For Data Scientists, cases typically add modeling choices, tradeoffs, and evaluation strategy. For Analytics Engineers or BI focused roles, you may be pushed more on data quality, metric governance, and how you would build reliable pipelines to support the analysis.
How can I prepare for business case interviews if I have no real world analytics experience?
Practice with mock cases that mimic real prompts, like investigating a retention drop or estimating the ROI of a feature. Build a repeatable framework: clarify goal, define success metrics, list hypotheses, specify needed data, and describe how you would decide. Use datainterview.com/questions to drill case style prompts and get comfortable talking through assumptions.
What are common mistakes to avoid in Data Analyst business case interviews?
Do not jump into analysis without clarifying the business goal, time window, and the exact metric definition. Avoid vague answers: state assumptions, prioritize hypotheses, and explain what data would confirm or reject them. Also avoid overcomplicating with advanced methods when a simple segmentation, funnel, or cohort view would answer the question.
