
Ethereum State Access is Getting a Makeover



Henry
via Substack

TL;DR
Ethereum’s current state storage rides on general-purpose KV databases (LevelDB/RocksDB/MDBX). Great tools—wrong shape for a Merkle Patricia Trie (MPT). A trie-aware engine (“TrieDB”) cuts state access from ~O(log² N) down to O(log N) by laying data out the way the trie actually works. That unlocks faster reads/writes, cleaner parallelism, and smoother stateless execution.


Over the last decade, the state layer has quietly been one of Ethereum’s biggest bottlenecks. Every swap, mint, and rollup proof boils down to reading and writing the MPT. Most clients persist that trie on top of generic KV stores. They’re phenomenal for unstructured data, but they fight Ethereum’s path-heavy, proof-driven access patterns.

What you get:

  • Write amplification — touching a leaf forces updates up the path.

  • Poor cache locality — related nodes live far apart on disk.

  • Latency ceilings — EVM speed gets gated by storage, not compute.
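To make the write-amplification point concrete, here is a toy model (illustrative only, not real client code): trie nodes live in a generic key-value map keyed by node hash, so changing one leaf re-hashes and re-writes every node on the root→leaf path.

```rust
use std::collections::HashMap;

// Toy model: MPT nodes stored in a generic KV map keyed by node hash,
// as LevelDB/RocksDB-backed clients do. Updating one leaf changes the
// hash of every ancestor, so one logical write becomes O(depth)
// physical KV writes — the write amplification described above.
struct KvBackedTrie {
    kv: HashMap<String, String>, // node-hash → node-body (stand-in)
    kv_writes: usize,            // counts physical writes
}

impl KvBackedTrie {
    fn new() -> Self {
        Self { kv: HashMap::new(), kv_writes: 0 }
    }

    // `path` lists the node keys from root down to the touched leaf.
    fn update_leaf(&mut self, path: &[&str], new_value: &str) {
        // The leaf changes, so every ancestor's hash changes too:
        // each one is a separate write into the generic KV store.
        for node_key in path {
            self.kv.insert(node_key.to_string(), new_value.to_string());
            self.kv_writes += 1;
        }
    }
}

fn main() {
    let mut trie = KvBackedTrie::new();
    // A 4-level path root→branch→branch→leaf: one logical update...
    trie.update_leaf(&["root", "b1", "b2", "leaf"], "0x01");
    // ...costs four physical KV writes.
    assert_eq!(trie.kv_writes, 4);
    println!("1 leaf update -> {} KV writes", trie.kv_writes);
}
```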

Enter TrieDB: storage that speaks “trie”

Base has been pushing a trie-aware storage engine (“TrieDB”) that organizes state by the trie itself, not by arbitrary keys. The practical upshot: instead of doing a KV lookup for every node along a path (an O(log N)-cost index walk for each of the ~log N steps), you follow a handful of page pointers. In practice, this can drop disk IO per read from “dozens” to “single digits.”
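The log² vs. log arithmetic can be sketched with back-of-envelope numbers (the specific depths and fanouts below are assumptions for illustration, not measurements):

```rust
/// KV-backed read: one keyed lookup per trie node on the path, and each
/// lookup itself walks an O(log N)-deep store index — so roughly
/// path_len × index_depth IOs, i.e. O(log² N).
fn kv_ios(path_len: u32, index_depth: u32) -> u32 {
    path_len * index_depth
}

/// Page-based read: direct page pointers, with `levels_per_page` trie
/// levels packed into each 4 KB subtrie page — roughly one IO per page,
/// i.e. O(log N) with a small constant.
fn page_ios(path_len: u32, levels_per_page: u32) -> u32 {
    (path_len + levels_per_page - 1) / levels_per_page // ceiling division
}

fn main() {
    // e.g. an 8-node trie path over a 5-level KV store index:
    assert_eq!(kv_ios(8, 5), 40);  // "dozens" of IOs per read
    assert_eq!(page_ios(8, 3), 3); // "single digits" per read
    println!("kv: {} IOs, paged: {} IOs", kv_ios(8, 5), page_ios(8, 3));
}
```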

What changes under the hood

  1. Structure-aware paging
    Related nodes are packed into contiguous 4 KB subtrie pages. Traversal tends to stay within a page or two, slashing random IO.

  2. Zero-overlap subtries
    After a branch splits, its pages don’t overlap. That makes branched, concurrent writes possible—key for parallel EVM.

  3. Copy-on-Write MVCC
    The MPT already re-hashes root→leaf on updates. TrieDB leans into that: copy only the affected pages, flip the root pointer, and get many readers + one writer without WAL gymnastics.

  4. Stateless-friendly by design
    Because pages mirror trie paths, you can pull just-in-time proofs/witnesses and still compute a new state root efficiently—perfect for stateless builders and fault-proof systems.
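The copy-on-write MVCC idea in point 3 can be sketched in a few lines (a minimal illustration, not the real TrieDB): a state version is just a root page id; a writer clones only the pages on the updated path and publishes a new root, while readers holding the old root id keep a consistent snapshot with no write-ahead log involved.

```rust
use std::collections::HashMap;

// Minimal copy-on-write sketch: pages are immutable once published.
#[derive(Clone)]
struct Page {
    data: String,
    child: Option<u64>, // next page id on the path, if any
}

struct Store {
    pages: HashMap<u64, Page>,
    next_id: u64,
}

impl Store {
    fn alloc(&mut self, page: Page) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.pages.insert(id, page);
        id
    }

    // Copy-on-write update of the leaf page under `root`: clone the
    // leaf and the root, leave the original pages untouched, and
    // "flip the root pointer" by returning a new root id.
    fn cow_update(&mut self, root: u64, new_data: &str) -> u64 {
        let old_root = self.pages[&root].clone();
        let leaf_id = old_root.child.expect("two-page path");
        let mut new_leaf = self.pages[&leaf_id].clone();
        new_leaf.data = new_data.to_string();
        let new_leaf_id = self.alloc(new_leaf);
        self.alloc(Page { child: Some(new_leaf_id), ..old_root })
    }
}

fn main() {
    let mut store = Store { pages: HashMap::new(), next_id: 0 };
    let leaf = store.alloc(Page { data: "balance=1".into(), child: None });
    let root_v1 = store.alloc(Page { data: "root".into(), child: Some(leaf) });

    // Writer publishes a new version; the old snapshot is untouched.
    let root_v2 = store.cow_update(root_v1, "balance=2");

    let read = |s: &Store, root: u64| {
        let leaf = s.pages[&root].child.unwrap();
        s.pages[&leaf].data.clone()
    };
    assert_eq!(read(&store, root_v1), "balance=1"); // old reader
    assert_eq!(read(&store, root_v2), "balance=2"); // new reader
}
```

This is exactly the many-readers/one-writer shape the MPT's root→leaf re-hashing already implies: since the path must be rewritten anyway, copying it buys snapshot isolation for free.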

Why this matters now

Over the last year I’ve dug into where modern clients are going:

  • Alloy-EVM – a clean abstraction over revm with a universal BlockExecutor.

  • Bera-Reth – extends Reth via NodeBuilder (not a fork), activates Prague at genesis (1 gwei min base fee).

  • Kona – OP Stack’s stateless engine that streams state via proofs.

  • Reth-BSC – chain-specific primitives + blob sidecars.

  • Reth-Gnosis – custom EVM handlers, fee logic, and POSDAO nuances.

Across all of them, state access is the choke point. A trie-aware engine improves:

  • Throughput — fewer IOs per access.

  • Memory pressure — better locality → better caching.

  • Parallelism — zero-overlap writes and CoW make multi-reader/single-writer sane.

  • Statelessness — fetch proofs on demand without tanking performance.

With TrieDB, designs like Kona can stream state on-demand while still hitting practical throughput—and parallel EVM moves from slideware to roadmap.

Is this the same “TrieDB” as Kona’s?

No—complementary, not identical.

  • Base’s TrieDB is a disk storage engine for full nodes (page-based, CoW).

  • Kona’s TrieDB is an in-memory, stateless shim that implements revm::Database, fetching preimages/witnesses as needed.

You can run the persistent engine underneath and expose the witnesses that Kona consumes on top.
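The stateless-shim pattern looks roughly like this (a hedged sketch: the `StateReader` trait and `WitnessDb` type below are simplified stand-ins I made up for illustration, not the actual revm::Database signature or Kona's types):

```rust
use std::collections::HashMap;

// Simplified stand-in for a revm-style state interface: the executor
// reads trie nodes through this trait rather than from a full on-disk
// state database.
trait StateReader {
    fn node(&mut self, hash: &str) -> Option<String>;
}

// Stateless shim: resolves nodes from a pre-supplied witness
// (hash → preimage) shipped alongside the block/proof.
struct WitnessDb {
    witness: HashMap<String, String>,
    fetches: usize, // how many witness lookups execution needed
}

impl StateReader for WitnessDb {
    fn node(&mut self, hash: &str) -> Option<String> {
        self.fetches += 1;
        self.witness.get(hash).cloned()
    }
}

fn main() {
    let mut db = WitnessDb {
        witness: HashMap::from([
            ("0xroot".to_string(), "branch:0xleaf".to_string()),
            ("0xleaf".to_string(), "account:nonce=7".to_string()),
        ]),
        fetches: 0,
    };
    // Walk root → leaf purely from witness data: no disk, no full state.
    let root = db.node("0xroot").unwrap();
    let leaf_hash = root.strip_prefix("branch:").unwrap().to_string();
    let leaf = db.node(&leaf_hash).unwrap();
    assert_eq!(leaf, "account:nonce=7");
    assert_eq!(db.fetches, 2);
}
```

A page-based persistent engine slots in underneath as the thing that produces those witnesses cheaply, which is why the two "TrieDBs" compose rather than compete.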

My take

Ethereum talks a lot about throughput (4844, danksharding, rollups). Storage is the silent limiter. Moving from log² to log access, plus page-aware caching and branched CoW, is a foundational shift that benefits L1 validators, L2 sequencers, and app-chains alike.

TrieDB isn’t just “faster reads.” It’s a step toward EVM-native storage that’s built for parallelism, statelessness, and proof-centric execution. Expect client architectures to evolve around it—and early adopters to reset the performance baseline.

Where we’re taking it (Monmouth)

At Monmouth, an L2 secured by EigenLayer, we’re building on this research:

  • Trie-aware storage for high-throughput state access.

  • Pre-execution, stateless pipelines that stream proofs instead of hoarding state.

  • Parallel EVM strategies enabled by page-level zero-overlap and CoW.

  • Verification & ZK: active collaborations with Othentic (AI inference verification) and Lagrange (ZK coprocessor) to close the loop from execution → proofs.

Our goal: set a new standard for execution performance and developer experience for AI-native, DeFi-heavy, and real-time apps.


If you’re building an execution client, L2, or app-chain and want to jam on TrieDB integration (or stateless/parallel pipelines), my DMs are open.

Subscribe for deep dives as we ship more of this stack.
