
Data Read Visualizer

Visualize the massive speed difference between CPU cache, RAM, and storage. Watch as orbs loop continuously to demonstrate relative read speeds.

Data sizes: 1 GB, 10 GB, 25 GB, 100 GB, 1 TB

Read speeds compared:
L1 Cache: 2,000,000 MB/s (too fast to show)
L3 Cache: 500,000 MB/s (too fast to show)
DDR5 RAM: 60,000 MB/s
DDR4 RAM: 25,600 MB/s
NVMe Gen 5: 12,000 MB/s
NVMe Gen 4: 7,000 MB/s
NVMe Gen 3: 3,500 MB/s
SATA SSD: 550 MB/s
HDD: 160 MB/s

About this tool

The Data Read Visualizer animates how long it takes to read from each tier of the modern memory hierarchy, from L1 cache up to a spinning HDD, on the same time axis. Reading tables of nanoseconds and microseconds makes the gaps abstract; watching the bars grow in real time makes them visceral. A ~1 ns L1 hit completes while the HDD bar is barely starting to move.
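The animation's core arithmetic is just data size divided by sustained read rate. A minimal sketch in Python, using the MB/s figures from the list above (the tier names and rates are those shown by the tool; the function name is ours):

```python
# Time to stream a given data size at each tier's sustained read rate.
# Rates (MB/s) are the figures displayed by the tool.
TIER_MB_PER_S = {
    "L1 Cache": 2_000_000,
    "L3 Cache": 500_000,
    "DDR5 RAM": 60_000,
    "DDR4 RAM": 25_600,
    "NVMe Gen 5": 12_000,
    "NVMe Gen 4": 7_000,
    "NVMe Gen 3": 3_500,
    "SATA SSD": 550,
    "HDD": 160,
}

def read_time_seconds(size_gb: float, mb_per_s: float) -> float:
    """Seconds to read size_gb gigabytes at a sustained rate of mb_per_s."""
    return (size_gb * 1024) / mb_per_s

for tier, rate in TIER_MB_PER_S.items():
    print(f"{tier:>10}: {read_time_seconds(25, rate):10.4f} s for 25 GB")
```

At the default 25 GB, the HDD bar runs for 160 seconds while the L1 bar finishes in about 13 milliseconds, which is why the cache tiers are marked "too fast to show".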

Use it to teach, debate, or justify architectural decisions: why keeping working sets cache-resident matters, why swapping to disk is catastrophic, or why a memory upgrade can sometimes beat an SSD upgrade for a specific workload. The visualizer uses real numbers from modern hardware (current-gen Intel/AMD cache latencies, DDR5 timings, PCIe Gen 3-5 NVMe) so the relative gaps are accurate.

Typical latencies

L1 cache: ~1 ns
L2 cache: ~3-5 ns
L3 cache: ~10-15 ns
DDR5 RAM: 50-80 ns
NVMe Gen 4 SSD (random read): 20-100 µs
SATA SSD: 100-200 µs
7200 RPM HDD: 5-15 ms

Each step up is typically 5-100x slower than the last, and the HDD-to-L1 gap is roughly ten million to one.
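To sanity-check those ratios, here's a quick sketch dividing each tier's latency by L1's. The midpoints are our reading of the ranges above, not numbers stated by the tool:

```python
# Midpoint latencies in nanoseconds, taken from the ranges above.
LATENCY_NS = {
    "L1 cache": 1,
    "L2 cache": 4,               # midpoint of 3-5 ns
    "L3 cache": 12.5,            # midpoint of 10-15 ns
    "DDR5 RAM": 65,              # midpoint of 50-80 ns
    "NVMe Gen 4 (random)": 60_000,    # midpoint of 20-100 us
    "SATA SSD": 150_000,         # midpoint of 100-200 us
    "7200 RPM HDD": 10_000_000,  # midpoint of 5-15 ms
}

for tier, ns in LATENCY_NS.items():
    print(f"{tier:>20}: {ns / LATENCY_NS['L1 cache']:>14,.1f}x slower than L1")
```

The last line is the ten-million-to-one HDD-to-L1 figure.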

When to use it

Perfect for explaining memory hierarchies to new engineers, settling arguments about "is an SSD basically RAM now?" (no), or demonstrating why algorithm-level cache optimisation matters. Pair with the RAM Latency Calculator to dig into exactly what your DIMM contributes, and the RAID Calculator for disk-tier planning.

Frequently asked questions

How much faster is L1 cache than RAM?
Roughly 50-80x faster for random access. An L1 cache access takes about 1 ns; DDR5 RAM is about 50-80 ns end-to-end, including controller overhead. That's why keeping hot data in cache dominates real-world CPU performance.
Is NVMe SSD faster than RAM?
No. NVMe is fast for storage, but its random-read latency is in the tens of microseconds versus RAM's 50-80 ns, a gap of roughly 1,000x. NVMe beats RAM only on raw capacity and persistence, never on latency.
Why is HDD so much slower than SSD?
A spinning HDD has to physically move a read head to the right track and wait for the platter to rotate into position, typically 5-15 ms per random access. An SSD has no moving parts and returns data in under 100 µs, roughly 100x faster for random reads.
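A rough decomposition of where those milliseconds go, assuming a 7200 RPM drive. The 8.5 ms average seek is a typical spec-sheet figure we've assumed, not a number from the text:

```python
# Average random-access time for a spinning disk = seek + rotational latency.
RPM = 7200
ms_per_revolution = 60_000 / RPM            # ~8.33 ms per full turn at 7200 RPM
avg_rotational_ms = ms_per_revolution / 2   # on average, wait half a turn
avg_seek_ms = 8.5                           # assumed typical desktop-drive seek

avg_access_ms = avg_seek_ms + avg_rotational_ms
print(f"rotational latency: {avg_rotational_ms:.2f} ms")
print(f"avg random access:  {avg_access_ms:.2f} ms")  # lands in the 5-15 ms range
```

Note that neither term shrinks with data size: every random read pays the full mechanical cost, which is why HDD random IOPS are so low.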
What's the point of L3 cache?
L1 and L2 are tiny (KB to low MB) and per-core. L3 is much larger (tens of MB) and shared across cores, acting as a buffer before requests go to main RAM. It catches data evicted from L1/L2 and data shared between cores.
How many nanoseconds is one CPU cycle?
At 4 GHz, one cycle is 0.25 ns. At 5 GHz, 0.2 ns. Cache hits are measured in single-digit cycles; main memory access costs hundreds of cycles, which is why optimising for cache locality matters enormously in performance-critical code.
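The conversion both ways is simple multiplication; a small sketch (function names are ours):

```python
def cycle_time_ns(ghz: float) -> float:
    """Duration of one CPU cycle in nanoseconds at a clock of ghz GHz."""
    return 1.0 / ghz

def cycles_elapsed(latency_ns: float, ghz: float) -> float:
    """How many cycles pass while waiting latency_ns at ghz GHz."""
    return latency_ns * ghz

print(cycle_time_ns(4.0))        # 0.25 ns per cycle at 4 GHz
print(cycle_time_ns(5.0))        # 0.2 ns per cycle at 5 GHz
print(cycles_elapsed(70, 4.0))   # a ~70 ns DDR5 access costs ~280 cycles
```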
Does DDR5 have lower latency than DDR4?
Not usually, comparing kits of similar tier. DDR5 improved bandwidth and capacity significantly, but absolute latency (in ns) for mainstream kits is similar to mature DDR4. DDR5's gains come from bandwidth and larger capacities, not lower memory latency.