How Netflix ingests 10 million events per second with ClickHouse
Mark Needham
Netflix processes a staggering five petabytes of observability data daily—that's 10.6 million events per second flowing into ClickHouse. At a ClickHouse meetup in March 2025, Netflix's Daniel Muino shared three hard-won lessons from building and scaling their observability platform. These aren't theoretical best practices; they're real solutions to real problems encountered at massive scale.
- Go low-level when you need to - Netflix ditched JDBC batch inserts in favor of custom native-protocol encoding, because sometimes the standard approach just isn't fast enough (see the first sketch after this list)
- Keep the hot path lean - They swapped out regex patterns for compiled lexers during ingestion, because doing expensive work on 10 million events per second is a recipe for trouble (second sketch below)
- Design for how ClickHouse actually works - Their map storage optimization, using LowCardinality types and smart sharding, shows why understanding ClickHouse internals matters for query performance (third sketch below)
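The talk recap doesn't include the encoder itself, so here is a minimal sketch of the low-level idea, assuming a hypothetical `logs(timestamp DateTime, message String)` table. Netflix's actual implementation encodes ClickHouse's native TCP protocol, which is considerably more involved; as an approximation, this sketch hand-encodes a batch in ClickHouse's RowBinary wire format (varint-length-prefixed strings, little-endian fixed-width integers) and ships the raw bytes in a single HTTP `INSERT`. The payoff is the same shape: serialization cost is paid once, with no per-row driver overhead.

```java
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class RowBinaryInsert {
    // Unsigned LEB128 varint, as RowBinary uses for string length prefixes.
    static void writeVarint(ByteArrayOutputStream out, long value) {
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
    }

    // RowBinary String: varint byte-length prefix, then raw UTF-8 bytes.
    static void writeString(ByteArrayOutputStream out, String s) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        writeVarint(out, bytes.length);
        out.write(bytes, 0, bytes.length);
    }

    // RowBinary DateTime: UInt32 little-endian seconds since the Unix epoch.
    static void writeDateTime(ByteArrayOutputStream out, long epochSeconds) {
        for (int i = 0; i < 4; i++) {
            out.write((int) (epochSeconds >>> (8 * i)));
        }
    }

    public static void main(String[] args) throws Exception {
        // Encode a small batch of (timestamp, message) rows into one buffer.
        ByteArrayOutputStream batch = new ByteArrayOutputStream();
        long now = System.currentTimeMillis() / 1000;
        for (int i = 0; i < 3; i++) {
            writeDateTime(batch, now);
            writeString(batch, "log line " + i);
        }

        // Ship the pre-encoded bytes in one request; no per-row JDBC overhead.
        String query = URLEncoder
                .encode("INSERT INTO logs FORMAT RowBinary", StandardCharsets.UTF_8)
                .replace("+", "%20");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8123/?query=" + query))
                .POST(HttpRequest.BodyPublishers.ofByteArray(batch.toByteArray()))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```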
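The recap doesn't show their lexer either, so here is a minimal sketch of the hot-path idea, assuming a hypothetical space-separated `key=value` log format: a single forward scan with index arithmetic replaces a regex that would allocate a `Matcher` and potentially backtrack on every one of those 10 million events.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KeyValueLexer {
    // Single pass over "k1=v1 k2=v2 ...": no backtracking and no per-event
    // regex machinery, just character comparisons and substring slices.
    static Map<String, String> lex(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        int i = 0, n = line.length();
        while (i < n) {
            while (i < n && line.charAt(i) == ' ') i++;   // skip separators
            int keyStart = i;
            while (i < n && line.charAt(i) != '=') i++;   // scan the key
            if (i >= n) break;
            String key = line.substring(keyStart, i++);
            int valStart = i;
            while (i < n && line.charAt(i) != ' ') i++;   // scan the value
            fields.put(key, line.substring(valStart, i));
        }
        return fields;
    }

    public static void main(String[] args) {
        // The regex equivalent would run Pattern.compile("(\\w+)=(\\S+)")
        // in a loop; the scanner above does the same work in one pass.
        System.out.println(lex("level=INFO service=playback latency_ms=42"));
    }
}
```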
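Finally, a sketch of what the map storage idea can look like in a schema; the table and column names here are hypothetical, and the real design is in the blog post below. `LowCardinality(String)` dictionary-encodes the small, repetitive sets of tag keys and service names, and the `ORDER BY` key keeps each service's rows together so queries touch fewer parts; on a cluster, the same thinking extends to choosing the sharding key of a `Distributed` table.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateLogsTable {
    // Hypothetical schema: dictionary-encode low-cardinality columns and map
    // keys, and order by the fields that queries filter on most often.
    static final String DDL = """
            CREATE TABLE logs_local (
                timestamp DateTime,
                service   LowCardinality(String),
                message   String,
                tags      Map(LowCardinality(String), String)
            )
            ENGINE = MergeTree
            ORDER BY (service, timestamp)
            """;

    public static void main(String[] args) throws Exception {
        // ClickHouse's HTTP interface accepts statements in the POST body.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:8123/"))
                        .POST(HttpRequest.BodyPublishers.ofString(DDL))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```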
Blog post: https://clickhouse.com/blog/netflix-petabyte-scale-logging
