Explain Pre-computation vs On-Demand.
Pre-computation vs. on-demand refers to the choice between computing results in advance (and storing them for fast retrieval) and computing them dynamically at request time.
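As a minimal sketch of the two approaches (all names here are illustrative, not a real API), the same expensive function can be served either way:

```python
import time

def expensive_report(user_id: int) -> str:
    # Stand-in for a slow computation (e.g., aggregating analytics).
    time.sleep(0.01)
    return f"report-for-{user_id}"

# Pre-computation: do the work ahead of time, serve from storage.
precomputed = {uid: expensive_report(uid) for uid in range(100)}

def serve_precomputed(user_id: int) -> str:
    return precomputed[user_id]       # fast lookup, but may go stale

def serve_on_demand(user_id: int) -> str:
    return expensive_report(user_id)  # always fresh, pays full cost per request
```

The pre-computed path trades storage (100 stored reports) for latency; the on-demand path trades latency for freshness.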
When to Use
Use pre-computation for repetitive, high-latency operations where low response time is critical—like pre-rendering dashboards or caching popular queries. Use on-demand when results change frequently or data freshness outweighs latency—like user-specific analytics or ad recommendations.
Example
Netflix pre-encodes videos in multiple formats (pre-computation) for instant playback. Encoding on demand during playback would save storage but introduce unacceptable startup latency.
If you’re preparing for interviews, explore Grokking System Design Fundamentals, Grokking the Coding Interview, and practice with Mock Interviews with ex-FAANG engineers for hands-on system design prep.
Why Is It Important
Choosing between the two affects latency, cost, and data freshness. Pre-computation improves speed and user experience, while on-demand ensures real-time accuracy with less storage overhead.
Interview Tips
In interviews, frame this as a compute vs storage trade-off.
Mention caching, freshness, and workload type. Use examples like materialized views or newsfeed generation to show practical understanding.
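One way to make the newsfeed example concrete is a toy fan-out-on-write feed, where each user's feed is precomputed at post time instead of assembled at read time (the data and function names are illustrative):

```python
from collections import defaultdict

followers = {"alice": ["bob", "carol"]}  # who follows whom (toy data)
feeds = defaultdict(list)                # precomputed per-user feeds

def post(author: str, text: str) -> None:
    # Fan-out on write: precompute each follower's feed now,
    # so reads later are a cheap lookup.
    for follower in followers.get(author, []):
        feeds[follower].append((author, text))

def read_feed(user: str) -> list:
    # No joins or sorting at read time; the work already happened on write.
    return feeds[user]

post("alice", "hello")
```

The on-demand alternative (fan-out on read) would scan every followed user's posts at read time: cheaper writes, slower reads, always fresh.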
Trade-offs
- Pre-computation: + low latency, – higher storage, possible staleness.
- On-demand: + real-time accuracy, – higher compute cost, slower response.
Pitfalls
- Over-precomputing rarely used data wastes resources.
- On-demand under load can cause latency spikes.
- Failing to refresh precomputed caches can lead to stale results.
For deeper learning, check out Grokking the System Design Interview or Grokking Database Fundamentals for Tech Interviews.