Why This Matters
A candidate is designing a chat application. The interviewer asks, "So how much storage do we need?" The candidate freezes for ten seconds, scribbles some multiplication on the whiteboard, second-guesses every number, erases half of it, and finally mumbles "maybe a few terabytes?" with zero confidence. The interviewer moves on, but the damage is done. Not because the number was wrong, but because the candidate couldn't reason through it. That single moment of hesitation signaled something the interviewer can't ignore: this person can't make engineering decisions when exact data isn't available.
And that's the thing. Estimation questions aren't math tests. The interviewer already knows the answer (roughly). What they're watching is whether you can distinguish a problem that fits on one machine from one that needs five hundred. Whether you can anchor on reasonable assumptions, do simple multiplication without drowning in decimals, and land within an order of magnitude of reality. Getting the number "right" doesn't matter. Showing that you have a process for getting there does. Because in real engineering, you'll face this exact situation every week: incomplete data, a decision that needs to be made, and no time to run a benchmark.
This guide gives you a repeatable five-step framework you can apply to any estimation question, whether it's about storage, QPS, bandwidth, or memory. You'll walk into your interview with the key numbers already memorized, a method for chaining them together on a whiteboard, and the confidence to say your assumptions out loud instead of doing silent arithmetic that nobody can follow. Every system design interview has at least one moment where an estimation unlocks (or blocks) your next architectural decision. Tomorrow, you won't be the candidate who freezes.
The Framework
Five phases. A few minutes total. That's the target for an estimation in an interview. Some take two minutes flat; others stretch to four or five when the interviewer jumps in with questions or asks you to explore a different scenario. Both are fine. The goal isn't speed. It's structured thinking that the interviewer can follow.
Memorize this structure. It's the skeleton you'll hang every estimation on, whether you're sizing a database, estimating QPS, or figuring out how much bandwidth a video streaming service needs.
| Phase | Goal |
|---|---|
| 1. Clarify the goal | Name the quantity and the decision it drives |
| 2. State assumptions | Anchor on a handful of known quantities, round aggressively |
| 3. Decompose | Build a multiplication chain where each factor maps to one assumption |
| 4. Compute | Powers-of-10 arithmetic, visible to the interviewer |
| 5. Sanity check | Compare against a real-world benchmark, then make the call |
These phases are your default order, but interviews aren't scripted. The interviewer might challenge an assumption in Step 2 that forces you to revisit your goal. They might say "what if the DAU is 10x higher?" after you've already finished computing. That's not a disruption. That's the interview working as intended. Run through the framework once, then be ready to re-enter at any phase when the conversation shifts.

Step 1: Clarify the Goal
Never start multiplying numbers until you've said one sentence out loud: what you're estimating and what architectural decision hangs on the result.
What to do: 1. Name the specific quantity (storage in bytes, requests per second, network bandwidth in MB/s). 2. Name the time horizon (per second, per day, over 5 years). 3. Connect it to a design decision. This is the part most candidates skip, and it's the part that matters most.
What to say:
"Before I start designing the storage layer, let me estimate total storage we'd need over 3 years. That'll tell us whether this fits on a single machine or if we need to think about sharding."
"I want to estimate peak QPS for reads so we can decide if a single database instance is sufficient or if we need caching and replicas."
The interviewer is checking one thing here: does this person estimate with purpose, or do they just crunch numbers for the sake of it? A candidate who ties the estimation to a decision immediately signals senior-level thinking. A candidate who says "let me estimate some numbers" and starts scribbling is already behind.
Do this: Always finish your opening sentence with "so that we can decide whether to X or Y." Force yourself into this pattern. It frames the entire estimation as a tool, not a math exercise.
Step 2: State Your Assumptions
This is where you earn partial credit even if your final number is off. Say every assumption out loud before you use it.
What to do: 1. Think ahead to your decomposition. What multiplication chain will you need? Pick only the base quantities that will appear as factors in that chain (DAU, average payload size, retention period, etc.). If an assumption won't plug directly into your formula, you probably don't need it yet. 2. For each one, state a rounded number and briefly justify it. One sentence per assumption, max. 3. Round everything to the nearest power of 10 or a clean number. 317 million becomes 300 million. 86,400 seconds becomes 100,000. Nobody on the interview panel cares about the 14% difference.
Aim for 3-5 assumptions. That number isn't arbitrary. Your decomposition in the next step will be a chain of 3-5 multiplications, and each factor should map to exactly one assumption you stated here. If you find yourself listing six or seven assumptions, pause and ask yourself which ones actually feed the formula. Drop the rest.
What to say:
"I'll assume 300 million DAU, which is roughly Twitter-scale. Each user creates about 0.5 tweets per day on average, so that's 150 million new tweets daily. For tweet size, I'll use 300 bytes including metadata but not media, since media would be stored separately in blob storage."
Notice the rhythm: number, source or justification, round. Number, source, round.
The interviewer is evaluating whether your assumptions are reasonable, not whether they're exact. They're also watching to see if you can be corrected. If they say "actually, assume 500 million DAU," you should say "great, let me adjust" and move on. Don't argue. Don't get flustered. Absorb the new number and keep going. This happens constantly, and it's not a trap. The interviewer wants to see how you incorporate new information without losing your thread.
Sometimes they'll push harder: "What if DAU is 10x what you assumed?" That's an invitation to re-run your chain with a different input, not to start over from scratch. Swap the one number, recompute, and talk about how the design implication changes. This is where the framework pays off. Because your structure is clean, changing one variable is trivial.
Don't do this: Silently write numbers on your workspace without explaining where they came from. The interviewer sees unexplained numbers and thinks you're guessing. Even if you ARE guessing, narrating your reasoning ("I don't know the exact number, but I'd estimate around X because...") turns a guess into an assumption.
"Alright, with those assumptions pinned down, let me break this into a multiplication chain."
Step 3: Decompose
Your goal is to turn one big, scary question ("how much storage does Twitter need?") into a chain of small multiplications where each factor is a number you just stated as an assumption.
What to do: 1. Write the multiplication chain on your visible workspace (whiteboard, shared doc, virtual notepad) as a formula BEFORE plugging in numbers. Let the interviewer see the structure. 2. Check that each factor in the chain maps back to exactly one assumption from Step 2, and that every assumption you stated has a home in the formula. If something is orphaned on either side, you've got a mismatch. Either drop the extra assumption or add the missing factor. 3. Keep it to 3-5 factors. If you have more than that, you're overcomplicating it.
What to say:
"Total storage per year equals DAU times tweets per user per day times bytes per tweet times 365 days. Let me write that out."
Then on your workspace:
Storage/year = DAU × tweets/user/day × bytes/tweet × days/year
This is the step where you prove you can think in systems, not just in arithmetic. The interviewer is looking at your decomposition to see if you've captured the right variables. Did you forget about metadata? Did you account for the right time period? A clean formula visible to both of you invites the interviewer to check your logic before you compute, which is exactly what you want. It's much better to catch a missing factor now than to redo the whole calculation.
Key insight: The decomposition IS the estimation. The arithmetic afterward is just mechanical. If your chain of multiplications captures the right factors, your answer will be in the right ballpark even with rough numbers. If you're missing a factor or have the wrong one, no amount of precise math will save you.
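The factor-to-assumption mapping can be made literal. Here's a minimal Python sketch of the idea (function and parameter names are illustrative, not from any real codebase) where each parameter is exactly one Step 2 assumption:

```python
# Step 3 as code: write the chain as a function BEFORE plugging in numbers.
# Each parameter maps to exactly one Step 2 assumption.
def storage_per_year(dau, tweets_per_user_per_day, bytes_per_tweet, days_per_year=365):
    """Total bytes stored per year for a Twitter-like service."""
    return dau * tweets_per_user_per_day * bytes_per_tweet * days_per_year

# An assumption with no parameter here (or a parameter with no stated
# assumption) means the chain and the assumptions are mismatched.
print(storage_per_year(3e8, 0.5, 300))  # ~1.6e13 bytes, i.e. roughly 16 TB
```

If the interviewer swaps an assumption ("make DAU 500 million"), you change one argument and re-run the chain; the structure never moves.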
"Okay, let me plug in the numbers and crunch this."
Step 4: Compute
This is where most candidates lose the interviewer's attention. They hunch over doing long multiplication in silence. Don't be that person.
What to do: 1. Convert every number to scientific notation. 300 million becomes 3 × 10⁸. 365 days becomes ~4 × 10². This turns multiplication into addition of exponents. 2. Write each step where the interviewer can see it. Separate the coefficients from the powers of 10. 3. Narrate as you go. Not every arithmetic step, but the key moves.
What to say:
"300 million is 3 times 10 to the 8th. Times 0.5 tweets per user gives us 1.5 times 10 to the 8th tweets per day. Times 300 bytes, that's about 4.5 times 10 to the 10th bytes per day. Times 365, call it 400, so roughly 1.8 times 10 to the 13th bytes per year. That's about 18 terabytes."
On your workspace it should look something like:
3×10⁸ × 0.5 = 1.5×10⁸ tweets/day
× 3×10² bytes = 4.5×10¹⁰ bytes/day
× 4×10² days = 1.8×10¹³ bytes/year
≈ 18 TB/year
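The coefficient/exponent split on the workspace above can be checked mechanically: multiply the coefficients, add the exponents. A quick sketch, using the running Twitter-scale assumptions:

```python
# Powers-of-10 arithmetic: multiply coefficients, add exponents.
# Factors: 3x10^8 users, 0.5 tweets/user, 3x10^2 bytes, 4x10^2 days.
coefficients = [3, 0.5, 3, 4]
exponents = [8, 0, 2, 2]

coeff = 1.0
for c in coefficients:
    coeff *= c                  # 3 * 0.5 * 3 * 4 = 18.0
exp = sum(exponents)            # 8 + 0 + 2 + 2 = 12

print(f"{coeff} x 10^{exp}")    # 18.0 x 10^12 = 1.8 x 10^13 bytes, ~18 TB
```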
The interviewer is checking two things: can you do order-of-magnitude math without a calculator, and can you keep the work visible so they can follow along? Speed matters less than clarity. If you write it cleanly, the interviewer can glance at your exponents and immediately tell if you're in the right ballpark.
Don't do this: Multiply 300,000,000 × 0.5 × 300 × 365 longhand. You will make an arithmetic error, you will lose track of zeros, and the interviewer will zone out. Powers of 10 exist to prevent exactly this.
If you're in a virtual interview without a shared visual workspace, narrate even more deliberately. Say the intermediate results out loud and pause briefly so the interviewer can track your math mentally. "That gives me about 4.5 times 10 to the 10th bytes per day. Does that seem right so far?" Checking in like this replaces the whiteboard's role as a shared reference point.
"Let me gut-check that number before I use it to make a decision."
Step 5: Sanity Check
You've got a number. Before you move on, spend a moment asking: does this pass the smell test?
What to do: 1. Compare your result against one concrete, real-world benchmark. Not a vague feeling. A specific reference point. 2. If the number seems off by more than 10x from what you'd expect, go back and check your assumptions. Usually it's a unit error or a missing/extra factor. 3. State the design implication out loud. This is the payoff. This is why you did the estimation.
What to say:
"18 TB per year. A single commodity server with a few SSDs can hold a couple terabytes comfortably, so one year of tweets fits on one machine, but five years at 90 TB would not. That tells me we should plan for sharding the tweet store from day one."
Or if the number feels wrong:
"Hmm, that gives me 500 petabytes, which would be one of the largest storage systems on Earth. Let me double-check my units... ah, I mixed up bits and bytes. Let me divide by 8."
The interviewer is watching for two things in this final phase. First, do you have enough real-world intuition to know when a number is absurd? Second, and more importantly, do you convert the number into an engineering decision? An estimation that ends with "so it's about 18 TB" is incomplete. An estimation that ends with "so we need to shard" is the whole point.
This is also the natural moment for the interviewer to throw a curveball. "What if we add image uploads?" or "Now assume 10x growth over three years." When that happens, don't panic and don't restart. Go back to Step 2, update the relevant assumption, re-run the multiplication chain with the new number, and land on a new design implication. Your framework stays the same. Only the inputs change. Being able to iterate quickly on a changed assumption is one of the strongest signals you can send.
Do this: Keep two or three benchmark numbers loaded and ready. "A single SSD is about 1 TB." "A single MySQL instance handles roughly 1K-5K writes per second." "A single server can handle 10K-50K simple read QPS." These are your sanity-check anchors. If your estimate says you need 200 servers and the benchmark says one server handles the load, something is wrong. If your estimate says one server is fine but you're designing for Netflix-scale traffic, something is also wrong.
Putting It Into Practice
The framework only matters if you can run it live, under pressure, while someone watches you do math on a whiteboard. So let's do exactly that. Two worked examples, then a full mock dialogue showing how this actually sounds in a room.
Worked Example 1: Twitter's Tweet Storage Over One Year
You're designing a Twitter-like system and need to figure out whether tweets fit on a single machine or require distributed storage. Here's the multiplication chain:
- DAU: 300 million (~3 × 10⁸)
- Tweets per user per day: not every user tweets. Most are readers. Call it 0.5 tweets per active user per day.
- Total tweets per day: 3 × 10⁸ × 0.5 = 1.5 × 10⁸ (~150M tweets/day)
- Average tweet size: 140 characters of text is small, but add metadata (user ID, timestamp, geo, indices) and you're at roughly 300 bytes per tweet.
- Daily storage: 1.5 × 10⁸ × 300 bytes = 4.5 × 10¹⁰ bytes ≈ 45 GB/day
- Annual storage: 45 GB × 365 ≈ 16.4 TB/year. If you want even faster mental math, round 365 up to 400: 45 × 400 = 18 TB. Either way, you're in the 16–18 TB range.
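A quick check of how much the 365 → 400 rounding actually moves the result (inputs are the assumed figures above):

```python
# Sensitivity of the annual total to rounding 365 days up to 400.
daily_bytes = 1.5e8 * 300               # 150M tweets/day x 300 B = 4.5e10 (~45 GB)
exact_tb   = daily_bytes * 365 / 1e12   # ~16.4 TB
rounded_tb = daily_bytes * 400 / 1e12   # 18.0 TB
print(exact_tb, rounded_tb)             # same ballpark, same design decision
```

A 10% fudge on one factor moves the answer by 10%; it never moves it by an order of magnitude, which is the only scale that changes the architecture.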
Roughly 16 to 18 terabytes per year. A single high-end server with SSDs can hold that. But you'd never put all your eggs in one machine for a service at Twitter's scale, so this tells you the data volume itself isn't the hard problem. The hard problem is the read throughput and availability requirements, which is a completely different estimation.
Do this: Notice how the estimation ended with an architectural insight, not just a number. "16–18 TB fits on one box, so storage volume isn't the bottleneck" is the sentence that earns you points.
Worked Example 2: URL Shortener QPS
Different question, different target number. Here you care about requests per second, because that determines whether you need caching, read replicas, or load balancing.
- New short URLs created per day: 100 million (10⁸). This is your write load.
- Writes per second: 10⁸ / 10⁵ (seconds in a day) = 1,000 writes/sec. Round up to ~1,200 to give yourself headroom.
- Read-to-write ratio: URL shorteners are massively read-heavy. For every URL created, it gets clicked maybe 100 times. So 100:1.
- Reads per second: 1,200 × 100 = 120,000 reads/sec.
Now the design decision writes itself. A single MySQL instance handles maybe 1K-5K writes per second and 10K-50K simple reads per second. At 120K reads/sec, you're well beyond what one database can serve. You need a caching layer (Redis or Memcached) in front of the database, and probably multiple read replicas behind it.
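The whole chain, with the cheat-sheet anchor baked in as the comparison. (All inputs are the assumptions above; the text rounds writes up to ~1,200 for peak headroom, which this raw chain omits.)

```python
# URL-shortener QPS chain with a throughput anchor as the sanity check.
urls_per_day = 1e8            # 100M new short URLs/day (write load)
seconds_per_day = 1e5         # 86,400 rounded for clean math
read_write_ratio = 100        # read-heavy: ~100 clicks per created URL

writes_per_sec = urls_per_day / seconds_per_day     # 1,000
reads_per_sec = writes_per_sec * read_write_ratio   # 100,000

SINGLE_DB_READ_QPS = 50_000   # upper end of the 10K-50K anchor
print(reads_per_sec > SINGLE_DB_READ_QPS)           # True -> we need a cache
```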
That's the payoff. The interviewer didn't want to hear "120,000." They wanted to hear what you'd do about 120,000.
The Mock Dialogue: URL Shortener Estimation, Live
This is what a strong estimation actually sounds like in an interview. Pay attention to the moments where the interviewer pushes back. Those aren't traps. They're invitations to show your thinking.
Interviewer: Let's design a URL shortening service. Before we get into architecture, can you give me a sense of the scale we're dealing with?
You: Sure. Let me start by estimating QPS, since that'll drive most of our infrastructure decisions. Can I assume we're building something at the scale of Bitly? Roughly 100 million new short URLs created per day?
Interviewer: That's reasonable. Go with that.
Do this: The candidate anchored on a real product to justify the assumption. This is way better than pulling a number from thin air.
You: OK, so 100 million writes per day. To get per-second, I divide by about 10⁵, which is roughly the number of seconds in a day. That gives me about 1,000 writes per second. I'll round up to 1,200 to account for peak traffic being higher than the average.
Interviewer: Why 10⁵? A day has 86,400 seconds.
You: Right, 86,400. I round to 10⁵ for easier math. It's within 15%, which is fine for an order-of-magnitude estimate. If anything, it makes my QPS estimate slightly conservative, which I'd rather have than the other way around.
Do this: The interviewer tested whether you knew you were rounding. Don't get defensive. Acknowledge the real number, explain why the approximation is acceptable, and keep moving.
Interviewer: OK. What about reads?
You: URL shorteners are extremely read-heavy. Every link gets created once but clicked many times. I'll assume a 100:1 read-to-write ratio, so that's 1,200 times 100, giving us about 120,000 read requests per second.
Interviewer: 100:1 feels high. Where does that come from?
You: Fair question. Some links go viral and get millions of clicks, most links get almost none. The 100:1 is an average across the whole system. If you think it's too aggressive, I could drop it to 50:1 and we'd still be at 60K reads per second, which doesn't really change the architectural conclusion. Either way, we're past what a single database can handle.
Interviewer: What is that conclusion?
You: A single MySQL instance tops out around 10K to 50K simple reads per second, depending on the query. At 60K to 120K, we need a caching layer. Redis can handle 100K+ ops per second on a single instance, so one or two Redis nodes in front of the database would absorb most of the read traffic. For writes, 1,200 per second is well within MySQL's range, so we probably don't need to shard the write path yet. Though we'd want replication for availability.
Do this: The candidate didn't just survive the pushback on the 100:1 ratio. They showed that even cutting the assumption in half doesn't change the design decision. That's a senior-level move. It tells the interviewer you understand which assumptions are load-bearing and which ones aren't.
Interviewer: Great. Let's also think about storage. How much data are we storing over, say, five years?
You: Each shortened URL entry is small. The short code is maybe 7 characters, the original URL averages around 200 bytes, plus metadata like creation timestamp, user ID, expiration. Call it 500 bytes per entry. At 100 million new entries per day, that's 5 × 10¹⁰ bytes per day, or 50 GB per day. Over five years, that's 50 × 365 × 5... roughly 50 × 2,000... about 100 TB.
Interviewer: Can one machine hold that?
You: Not comfortably. A large SSD is 1-4 TB, and even with a RAID setup you'd be looking at a machine with dozens of drives. More practically, at 100 TB we'd want to shard across multiple database nodes. We could shard by the hash of the short code, which distributes evenly and makes lookups straightforward since every read request already contains the short code.
That's the whole arc. Clarify the goal, state assumptions, chain the multiplications, sanity-check against known benchmarks, and land on a design decision. The entire estimation took maybe three minutes of interview time.
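For completeness, the storage chain from the dialogue, re-run without the whiteboard rounding (inputs as assumed in the conversation):

```python
# Five-year storage for the URL shortener, exact vs. whiteboard-rounded.
bytes_per_entry = 500
entries_per_day = 1e8
days = 365 * 5                            # 1,825 (the dialogue rounds to 2,000)

total_tb = bytes_per_entry * entries_per_day * days / 1e12
print(total_tb)                           # ~91 TB; "about 100 TB" is close enough
```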
Numbers You Should Have Memorized
You don't need to memorize hundreds of figures. You need these, and you need them cold.
| Category | Reference Point | Value |
|---|---|---|
| Time conversions | Seconds in a day | ~10⁵ (86,400) |
| | Seconds in a month | ~2.5 × 10⁶ |
| | Seconds in a year | ~3 × 10⁷ |
| | 1M requests/day → QPS | ~12 |
| Latency | L1 cache reference | ~1 ns |
| | RAM reference | ~100 ns |
| | SSD random read | ~100 μs |
| | HDD seek | ~10 ms |
| | Same-datacenter round trip | ~0.5 ms |
| | Cross-continent round trip | ~150 ms |
| Throughput | Single server, simple reads | 10K–50K QPS |
| | Single MySQL, writes | 1K–5K TPS |
| | Single Redis instance | ~100K ops/sec |
| | 1 Gbps network link | ~125 MB/s |
| Storage | Single SSD | 1–4 TB |
| | Single HDD | 4–16 TB |
Two conversion tricks that will save you every time: to go from daily volume to per-second, divide by 10⁵. To go from per-second to per-month, multiply by 2.5 × 10⁶. Drill these until they're reflexive.
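The two tricks as throwaway helpers (the constants are the rounded approximations from the table, not exact values):

```python
# Time-scale conversion helpers using the rounded constants.
SECONDS_PER_DAY = 1e5        # ~86,400
SECONDS_PER_MONTH = 2.5e6    # ~30 days

def daily_to_per_second(volume_per_day):
    return volume_per_day / SECONDS_PER_DAY

def per_second_to_monthly(rate_per_sec):
    return rate_per_sec * SECONDS_PER_MONTH

print(daily_to_per_second(1e6))      # 1M/day -> ~10/sec
print(per_second_to_monthly(1000))   # 1K/sec -> ~2.5B/month
```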
Common mistake: Candidates memorize these numbers but never connect them to decisions. Knowing that Redis handles 100K ops/sec is only useful if you can say, "Our read QPS is 120K, so a single Redis node gets us most of the way there." The number without the implication is trivia. The number with the implication is engineering.
Common Mistakes
You can nail the framework, memorize every number on the cheat sheet, and still bomb the estimation portion of your interview. These are the mistakes that actually sink candidates, and most of them have nothing to do with math ability.
Doing Real Arithmetic on the Whiteboard
You're standing at the whiteboard writing out 317 × 86,400 and trying to long-multiply it in front of a stranger. Your hand is shaking. You carry the wrong digit. Thirty seconds pass in silence. The interviewer checks their phone.
This is what it looks like when a candidate forgets that estimation means estimating. The interviewer doesn't want the exact answer. They want to see that you can get to the right order of magnitude quickly and keep moving. When you burn two minutes on precise multiplication, you signal that you don't understand what this exercise is actually testing.
Don't do this: "317 million users times 86,400 seconds per day... let me work this out..."
Do this: "Roughly 300 million users, about 10^5 seconds in a day, so that's 3 × 10^13." Done. Move on.
The fix: Round every number to one significant digit and a power of 10 before you multiply anything.
Silent Math
The candidate stares at the whiteboard, scribbles numbers, crosses things out, scribbles more numbers, then turns around and announces: "So we need about 20TB."
The interviewer has no idea how you got there. They don't know what you assumed for user count, payload size, or retention period. They can't tell if you made a reasonable assumption and a math error, or a wild assumption and got lucky. So they can't give you credit for any of it.
This is the estimation equivalent of writing code without explaining your approach. It's a red flag because senior engineers need to make their reasoning legible to other people. If the interviewer wanted a calculator, they'd use one.
Key insight: Stating assumptions out loud isn't just good communication. It's your safety net. If you say "I'm assuming 500 bytes per record" and the interviewer thinks it should be 5KB, they'll tell you. That redirect saves you from being off by an order of magnitude.
The fix: Say every assumption before you write the number, and pause long enough for the interviewer to nod or correct you.
The Orphan Number
"So we'll need about 50 terabytes of storage per year." Then silence. Or worse, the candidate moves on to a completely different topic.
50TB of storage is not an insight. It's a number floating in space. The entire reason you estimated it was to make an architectural decision, and you just... didn't. The interviewer is sitting there waiting for the punchline that never comes.
This mistake is especially painful because you did all the hard work. You stated assumptions, you chained the multiplication, you got a reasonable answer. Then you stopped one sentence short of the payoff.
Don't do this: Announce a number and move on.
Do this: Always land the estimation with "...which means [specific design decision]." For example: "50TB per year means a single machine won't hold even two years of data, so we need to partition across multiple nodes, and I'd shard by user ID."
The fix: Treat every estimation as an incomplete sentence until you've connected it to a design choice.
Unit Confusion
A candidate estimates needing 500 MB/s of bandwidth, then provisions a 500 Mbps network link and declares it sufficient. Except megabits and megabytes differ by 8x: that link delivers only 62.5 MB/s, and they just under-provisioned by nearly an order of magnitude.
This one is sneaky because the math itself can be perfect. The logic can be airtight. And a single unit mismatch at the end invalidates everything. Interviewers watch for this specifically because it's the kind of mistake that causes real production incidents.
The most common confusions: - Bytes vs. bits (off by 8x) - Per-second vs. per-day (off by 10^5) - MB vs. MiB (usually doesn't matter in interviews, but bytes vs. bits absolutely does)
Do this: Write the unit next to every single number on the whiteboard. Not just the final answer. Every intermediate step. "300M users/day × 2 requests/user = 600M requests/day ÷ 10^5 sec/day = 6000 requests/sec." When units are visible, mismatches become obvious.
The fix: Treat units like types in a programming language. If they don't match, the expression doesn't compile.
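One way to make the "units as types" discipline concrete: encode the unit in the variable name and force every conversion to be an explicit, visible step. The numbers below are hypothetical:

```python
# A bits-vs-bytes mix-up made visible by explicit unit suffixes.
needed_mb_per_sec = 500       # requirement: 500 megaBYTES per second
link_mbit_per_sec = 500       # provisioned: 500 megaBITS per second

link_mb_per_sec = link_mbit_per_sec / 8      # bits -> bytes: divide by 8
print(link_mb_per_sec)                       # 62.5 -- the link is 8x too small
print(link_mb_per_sec >= needed_mb_per_sec)  # False: under-provisioned
```

The moment both numbers carry their units, the comparison that looked fine ("500 vs 500") is obviously broken.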
Over-Engineering the Estimation
"OK so we have 10TB of raw data, but with a replication factor of 3 that's 30TB, plus we need B-tree indexes which add roughly 20% overhead so that's 36TB, and if we account for write-ahead logs and compaction headroom we should budget 1.5x so really it's 54TB, and then with snappy compression at a 2:1 ratio..."
Stop. The interviewer's eyes glazed over at "B-tree indexes."
When you layer on five correction factors, two things happen. First, each factor introduces a new assumption that might be wrong, so your error compounds instead of canceling out. Second, you burn time and attention on refinements that don't change the architectural decision. Whether it's 30TB or 54TB, you're still sharding. The answer is the same.
Common mistake: Candidates add complexity to seem thorough. It backfires. It makes you look like someone who can't distinguish between what matters and what doesn't, which is the opposite of what a senior engineer does.
The fix: Start with the simplest possible estimation, state your result, then ask the interviewer: "Would you like me to refine this further?" Let them decide if replication overhead matters for the discussion.
Picking the Wrong Thing to Estimate
This one is subtle. The interviewer asks you to design a chat application, and you spend three minutes estimating the storage needed for user profile photos. Meanwhile, the actual bottleneck is message delivery throughput and connection management for millions of concurrent WebSocket connections.
Estimating the wrong quantity isn't just wasted time. It tells the interviewer you can't identify the hard part of the problem. You're optimizing a dimension that won't influence any meaningful design decision.
Do this: Before you start any estimation, spend five seconds asking yourself: "What's the architectural decision this number will drive?" If you can't name one, you're estimating the wrong thing.
The fix: Always estimate the number that's closest to the system's bottleneck, not the number that's easiest to calculate.
Quick Reference
Print this out. Screenshot it. Scribble it on a napkin before you walk in. These are the numbers and shortcuts that let you estimate anything under pressure without stalling.
Numbers You Must Have Memorized
Powers of 2:
| Power | Value | Label |
|---|---|---|
| 2^10 | ~1,000 | 1 Thousand (1 KB) |
| 2^20 | ~1,000,000 | 1 Million (1 MB) |
| 2^30 | ~1,000,000,000 | 1 Billion (1 GB) |
| 2^40 | ~1,000,000,000,000 | 1 Trillion (1 TB) |
Time conversions:
| Period | Seconds |
|---|---|
| 1 day | ~10^5 (86,400) |
| 1 month | ~2.5 × 10^6 |
| 1 year | ~3 × 10^7 |
You'll use the "1 day = 10^5 seconds" conversion constantly. It's the single most useful number in estimation interviews.
Latency Cheat Sheet
| Operation | Latency |
|---|---|
| L1 cache reference | ~1 ns |
| RAM access | ~100 ns |
| SSD random read | ~100 μs |
| HDD seek | ~10 ms |
| Same-datacenter round trip | ~0.5 ms |
| Cross-continent round trip | ~150 ms |
The gap between RAM (100 ns) and SSD (100 μs) is 1,000x. The gap between SSD and HDD seek is 100x. When an interviewer asks "why add a cache here?", these ratios are your answer.
Throughput Anchors
| Resource | Throughput |
|---|---|
| Single server, simple read QPS | 10K–50K |
| Single MySQL, write TPS | 1K–5K |
| Single Redis instance | ~100K ops/sec |
| 1 Gbps network link | ~125 MB/s |
If your estimation lands at 120K writes/sec hitting a single MySQL instance, you already know you need sharding or a different storage engine. That's the whole point of memorizing these.
Two Conversion Tricks to Burn Into Your Brain
Daily to per-second: divide by 10^5. So 1 million requests/day is about 10 requests/sec. 100 million/day is about 1,000/sec.
Per-second to per-month: multiply by 2.5 × 10^6. So 1,000 writes/sec becomes about 2.5 billion writes/month.
These two conversions, chained together, handle 90% of the time-scale math you'll encounter.
Framework Phases at a Glance
| Phase | Time to Spend | What You Do |
|---|---|---|
| 1. Clarify the goal | 10 seconds | Say what you're estimating and what decision it drives |
| 2. State assumptions | 20 seconds | Anchor on DAU, payload size, ratios. Round aggressively. Say them out loud |
| 3. Decompose | 15 seconds | Write the multiplication chain: users × actions × size × time |
| 4. Compute | 20 seconds | Convert to powers of 10, add exponents, show work visibly |
| 5. Sanity check | 15 seconds | Compare against a known benchmark, then state the design implication |
The uninterrupted run should take about 90 seconds; interviewer questions and follow-ups are what stretch it to the few minutes discussed earlier, and that's fine. But if your own math is eating more than two minutes, you're going too deep.
Phrases to Use
- Starting the estimation: "Let me estimate [X] so we can decide whether we need [Y architectural choice]."
- Stating an assumption: "I'll assume roughly 300 million DAU. Happy to adjust if you have a different number in mind."
- Rounding: "I'm going to round 86,400 to 10^5 to keep the math clean."
- Showing your chain: "So that's 3 × 10^8 users, times 10 actions each, times 500 bytes per action, which gives us about 1.5 × 10^12 bytes, so roughly 1.5 TB per day."
- Sanity checking: "A single SSD holds about 1 TB, so we'd fill one disk every day. That tells me we definitely need distributed storage."
- Recovering from a mistake: "Actually, let me revisit that assumption. If we account for media URLs, the average size is closer to 1 KB, which doubles our estimate."
Red Flags to Avoid
- Doing multiplication in silence for more than five seconds. Talk through every step.
- Writing numbers without units next to them. Bytes? Bits? Per second? Per day? Label everything.
- Landing on a number and moving on without connecting it to a design decision.
- Trying to be precise. If you're computing 317 × 86,400, stop and round.
- Adding complexity nobody asked for (compression ratios, replication factors, index overhead) before nailing the base estimate.
Key takeaway: The interviewer isn't testing your arithmetic; they're testing whether you can turn rough numbers into confident engineering decisions, so memorize the anchors, round everything, and always end with "...and that's why we need X."
