Software Engineering Daily

Cilium, eBPF, and Modern Kubernetes Networking with Bill Mulligan

57 min episode · 2 min read

AI-Generated Summary

Key Takeaways

  • Kube-proxy replacement performance: Replacing kube-proxy with Cilium's eBPF-based implementation switches service routing from linear O(n) iptables traversal to O(1) hash map lookups. At scale with 10,000+ services, this difference is substantial — Turkish e-commerce company Trendyol reported a 40% increase in cluster throughput after making this single change.
  • Identity-based networking over IP-based rules: Cilium assigns workload identities using Kubernetes labels rather than IP addresses, so when containers restart and receive new IPs, network policies remain valid automatically. A policy allowing "frontend" pods to reach "backend" pods keeps working as pods come and go, eliminating constant rule updates and reducing policy churn in dynamic environments.
  • Layer 7 network policy for data isolation: Cilium extends standard Kubernetes layer 3/4 network policies to layer 7, enabling domain-level rules like blocking specific URLs or restricting egress entirely. Bloomberg used this capability to build a multi-tenant financial data sandbox, preventing cross-tenant traffic and stopping data exfiltration without rebuilding their underlying Kubernetes infrastructure.
  • Live CNI migration via node-by-node rollover: Teams running existing CNIs like Flannel or AWS VPC CNI can migrate to Cilium without cluster downtime using CNI chaining and the CiliumNodeConfig flag. Cilium first layers on top of the existing CNI for observability or policy, then traffic routing shifts node by node as each node is drained and brought back up on Cilium, as demonstrated by DB Schenker's live production migration.
  • Hubble observability eliminates networking blind spots: Because Cilium's eBPF datapath can short-circuit traffic directly between sockets, tools like tcpdump that capture at the network device can miss it entirely. Hubble surfaces this data as network flow logs and a visual service map, enabling engineers to identify dropped packets and policy violations — ESnet reported reducing multi-day debugging tasks to under thirty seconds.
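The kube-proxy takeaway above can be sketched in a few lines. This is a hypothetical illustration of the data structures involved, not Cilium's actual code: kube-proxy's iptables mode adds roughly one rule per service, so matching a packet means scanning a chain in order, while Cilium's eBPF datapath keeps services in a BPF hash map, making the lookup cost independent of service count.

```python
# Illustrative sketch only: contrasts the lookup strategies, not real datapath code.

# kube-proxy iptables mode: one rule per service, matched by linear scan — O(n).
def iptables_style_lookup(rules, dst_ip):
    for rule_ip, backend in rules:  # walk the chain rule by rule
        if rule_ip == dst_ip:
            return backend
    return None

# Cilium eBPF mode: services live in a hash map, so lookup is O(1).
def ebpf_style_lookup(service_map, dst_ip):
    return service_map.get(dst_ip)  # single hash probe, regardless of scale

# 10,000 synthetic services, keyed by a fake cluster IP.
services = [(f"10.0.{i // 256}.{i % 256}", f"backend-{i}") for i in range(10_000)]
rules = services               # ordered rule list, as iptables would hold it
service_map = dict(services)   # hash map, as a BPF map would hold it

target = services[-1][0]       # worst case for the linear scan: last rule
assert iptables_style_lookup(rules, target) == ebpf_style_lookup(service_map, target)
```

The linear scan touches all 10,000 entries in the worst case; the hash lookup touches one, which is the asymptotic gap behind the reported throughput gains.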
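The identity-based networking takeaway can be sketched similarly. This is a toy model, not Cilium's API: the point is only that a policy keyed on a label-derived identity survives IP churn, whereas an IP-based rule would go stale every time a pod restarts.

```python
# Toy model of label-derived identities; function names are illustrative.

def identity_of(pod):
    # Cilium derives a security identity from a pod's label set;
    # here the sorted label pairs stand in for that identity.
    return tuple(sorted(pod["labels"].items()))

# Policy: pods labeled app=frontend may reach pods labeled app=backend.
ALLOWED = {((("app", "frontend"),), (("app", "backend"),))}

def allowed(src_pod, dst_pod):
    return (identity_of(src_pod), identity_of(dst_pod)) in ALLOWED

frontend = {"labels": {"app": "frontend"}, "ip": "10.0.0.5"}
backend = {"labels": {"app": "backend"}, "ip": "10.0.0.9"}
assert allowed(frontend, backend)

# The backend container restarts and gets a new IP. An IP-based rule is now
# stale, but the label-derived identity — and the verdict — is unchanged.
backend["ip"] = "10.0.1.42"
assert allowed(frontend, backend)
```

Note that no rule was rewritten after the restart; that is the churn reduction the takeaway describes.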
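The layer 7 policy takeaway can also be sketched. This is a hypothetical illustration of why layer 7 matters for the sandbox use case, with made-up rule data: a layer 3/4 policy sees only IPs and ports, so it would admit any HTTPS request, while a layer 7 rule can additionally match the HTTP method and URL path.

```python
# Illustrative policy data, not Cilium policy syntax.
L4_ALLOWED_PORTS = {443}                 # layer 4: HTTPS only
L7_RULES = [("GET", "/public/")]         # layer 7: only GETs under /public/

def l4_verdict(dst_port):
    return dst_port in L4_ALLOWED_PORTS

def l7_verdict(method, path):
    return any(method == m and path.startswith(prefix) for m, prefix in L7_RULES)

def egress_allowed(dst_port, method, path):
    # Layer 4 alone would admit any request to port 443; the layer 7
    # rule narrows egress to specific methods and URL prefixes.
    return l4_verdict(dst_port) and l7_verdict(method, path)

assert egress_allowed(443, "GET", "/public/prices")
assert not egress_allowed(443, "POST", "/public/prices")  # method blocked at L7
assert not egress_allowed(443, "GET", "/tenant-b/data")   # path blocked at L7
```

The last two verdicts are invisible to a pure layer 3/4 policy, which is the gap that enables URL-level blocking and egress restriction in a multi-tenant setup.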

What It Covers

Bill Mulligan, a maintainer in the Cilium ecosystem at Isovalent, explains how eBPF reshapes Linux kernel networking for Kubernetes. He walks through Cilium's architecture over the project's ten-year history, covering network policy enforcement, kube-proxy replacement, service mesh integration, and cluster observability through Hubble.


Notable Moment

Mulligan argues that "service mesh" should be retired as a category because Cilium already delivers roughly 80% of what service meshes provide through its CNI layer. The remaining 20% — layer 7 routing — gets added incrementally, making a standalone service mesh architecturally redundant and potentially blind to eBPF-optimized traffic paths.
