# Latency Numbers Everyone Should Know

## The Orders of Magnitude
Derived from Jeff Dean's seminal talk, these numbers reveal the staggering performance gaps in systems. A main memory reference is not just "slower" than an L1 cache reference—it is 200x slower.
## Humanized Scale

If 1 CPU cycle (0.3 ns) were 1 second...
| Operation | Actual Time | "Human" Time | Distance / Comparison |
|---|---|---|---|
| L1 Cache Reference | 0.5 ns | ~1.7 sec | Picking up a pen |
| L2 Cache Reference | 7 ns | ~23 sec | Reading a paragraph |
| Main Memory Ref | 100 ns | ~5.5 min | Making coffee |
| Send 2K bytes over 1Gbps | 20,000 ns | ~18.5 hours | A full work day |
| SSD Random Read | 150,000 ns | ~5.8 days | A vacation |
| Roundtrip same datacenter | 500,000 ns | ~19 days | Month-long project |
| Disk Seek | 10,000,000 ns | ~1 year | A sabbatical |
| Ping CA -> Netherlands | 150,000,000 ns | ~15 years | Raising a child |
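The "Human" column can be reproduced by scaling every latency with the same factor: one 0.3 ns CPU cycle maps to one "human" second. A minimal sketch of that conversion, using the latencies from the table above (the `humanize` helper is illustrative, not from the original talk):

```python
# Scale real latencies so that one 0.3 ns CPU cycle lasts one "human" second.
CYCLE_NS = 0.3
SCALE = 1.0 / CYCLE_NS  # human seconds per real nanosecond

LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "L2 cache reference": 7,
    "Main memory reference": 100,
    "Send 2K bytes over 1 Gbps": 20_000,
    "SSD random read": 150_000,
    "Roundtrip same datacenter": 500_000,
    "Disk seek": 10_000_000,
    "Ping CA -> Netherlands": 150_000_000,
}

def humanize(seconds: float) -> str:
    """Render a duration in the largest unit it exceeds."""
    for unit, size in [("years", 365 * 86400), ("days", 86400),
                       ("hours", 3600), ("min", 60)]:
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.1f} sec"

for op, ns in LATENCIES_NS.items():
    print(f"{op:28s} {humanize(ns * SCALE)}")
```

Running this reproduces the table's third column to within rounding (e.g. a main memory reference comes out at roughly 5.6 minutes).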
## Visual Scale

Logarithmic visualization of latency (illustrative, log-ish scale).
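One way to make the log scale concrete is to print each latency as a bar whose length tracks log10 of its nanosecond count, so every extra character is another order of magnitude. A quick sketch (the operations and values are taken from the table above; the +1 shift is just to keep sub-nanosecond bars visible):

```python
import math

# Bar length ~ log10(latency in ns); each extra '#' is one order of magnitude.
LATENCIES_NS = [
    ("L1 cache reference", 0.5),
    ("Main memory reference", 100),
    ("SSD random read", 150_000),
    ("Disk seek", 10_000_000),
    ("Ping CA -> Netherlands", 150_000_000),
]

for op, ns in LATENCIES_NS:
    # +1 so the sub-nanosecond L1 latency still gets a non-negative bar.
    bar = "#" * max(1, round(math.log10(ns) + 1))
    print(f"{op:25s} {bar} ({ns:,} ns)")
```

On a linear scale the first four bars would be invisible next to the transatlantic ping; the log scale is what makes the chart readable at all.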
## Implications for Design

- Avoid Disk Seeks: Sequential reads are ~50-100x faster than random seeks.
- In-Memory Caching: Saving a roundtrip to the DB (SSD/Disk) is worth thousands of operations.
- Data Locality: Cross-datacenter calls are incredibly expensive. Keep data close to compute.