AI Summary
→ WHAT IT COVERS

Bill Mulligan, a maintainer in the Cilium ecosystem at Isovalent, explains how eBPF rewrites Linux kernel networking for Kubernetes environments, covering Cilium's architecture across network policy enforcement, kube-proxy replacement, service mesh integration, and cluster observability through Hubble over its ten-year development history.

→ KEY INSIGHTS

- **Kube-proxy replacement performance:** Replacing kube-proxy with Cilium's eBPF-based implementation switches service routing from linear O(n) iptables traversal to O(1) hash-map lookups. At scale with 10,000+ services the difference is substantial: Turkish e-commerce company Trendyol reported a 40% increase in cluster throughput after making this single change.
- **Identity-based networking over IP-based rules:** Cilium assigns workload identities from Kubernetes labels rather than IP addresses, so when containers restart and receive new IPs, network policies remain valid automatically. Labeling a pod "frontend" grants it connectivity to all "backend" pods instantly, eliminating constant rule updates and reducing policy churn in dynamic environments.
- **Layer 7 network policy for data isolation:** Cilium extends standard Kubernetes layer 3/4 network policies to layer 7, enabling rules such as blocking specific URLs or restricting egress entirely. Bloomberg used this capability to build a multi-tenant financial data sandbox, preventing cross-tenant traffic and stopping data exfiltration without rebuilding its underlying Kubernetes infrastructure.
- **Live CNI migration via node-by-node rollover:** Teams running existing CNIs such as Flannel or the AWS VPC CNI can migrate to Cilium without cluster downtime using CNI chaining and the CiliumNodeConfig resource. Cilium first layers on top for observability or policy, then traffic routing shifts node by node as new nodes come online, as demonstrated by DB Schenker's live production migration.
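The identity-based and layer 7 behaviors described above can be combined in a single CiliumNetworkPolicy. A minimal sketch — the `app: frontend`/`app: backend` labels, port 8080, and the `/api/.*` path are illustrative assumptions, not details from the episode:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend
spec:
  # Identity-based selection: applies to any pod labeled app=backend,
  # regardless of the IPs those pods receive across restarts.
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      # Layer 7 rules: only GETs under /api/ are allowed; other
      # HTTP requests from frontend pods are denied.
      rules:
        http:
        - method: "GET"
          path: "/api/.*"
```

Because selection is by label rather than IP, the policy survives pod churn untouched; the `http` rules block is what lifts enforcement from layer 3/4 up to layer 7.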
- **Hubble observability eliminates networking blind spots:** Because Cilium's eBPF programs route packets directly between sockets, traditional tools like tcpdump can miss the traffic entirely. Hubble surfaces this data as network flow logs and a visual service map, letting engineers identify dropped packets and policy violations; ESnet reported reducing multi-day debugging tasks to under thirty seconds.

→ NOTABLE MOMENT

Mulligan argues that "service mesh" should be retired as a category because Cilium already delivers roughly 80% of what service meshes provide through its CNI layer. The remaining 20%, layer 7 routing, gets added incrementally, making a standalone service mesh architecturally redundant and potentially blind to eBPF-optimized traffic paths.

💼 SPONSORS

- GuardSquare: https://www.guardsquare.com
- Retool: https://retool.com/sedaily

🏷️ eBPF, Kubernetes Networking, Cilium, Container Security, Cloud Native Infrastructure
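The node-by-node rollover described in the key insights is driven by a per-node configuration object. A hedged sketch following the pattern in Cilium's migration guide — the API version, label key, and CNI config path vary by Cilium release and cluster setup:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumNodeConfig
metadata:
  namespace: kube-system
  name: cilium-default
spec:
  # Only nodes carrying this label are switched to Cilium-managed routing.
  nodeSelector:
    matchLabels:
      io.cilium.migration/cilium-default: "true"
  defaults:
    # Cilium writes its CNI config only once the agent is ready,
    # so the legacy CNI keeps serving until the handover.
    write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
    custom-cni-conf: "false"
    cni-chaining-mode: "none"
    cni-exclusive: "true"
```

In this pattern, labeling a node with the selector key flips it to Cilium after it is drained and restarted, while unlabeled nodes continue serving traffic through the existing CNI, which is what makes the migration a live, incremental rollover rather than a cluster-wide cutover.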
