
Kunle Olukotun

AI Summary

→ WHAT IT COVERS

Kunle Olukotun explains how SambaNova's reconfigurable dataflow architecture achieves 5-10x better performance per watt for AI inference by eliminating instruction fetching, maximizing memory bandwidth utilization, and enabling microsecond model switching across trillion-parameter systems.

→ KEY INSIGHTS

- **Dataflow vs. Instructions:** Reconfigurable dataflow architectures configure the hardware to match the PyTorch computation graph rather than fetching instructions every cycle, and synchronize with tokens instead of locks and barriers. Asynchronous parallel execution yields 2-3x higher HBM bandwidth utilization than GPUs.
- **Decoder Fusion Strategy:** Mapping an entire Llama decoder spatially across 16 RDU chips eliminates intermediate data movement across HBM boundaries, creating a fused kernel that extends flash-attention-style benefits to the whole decoder rather than just the attention mechanism, dramatically reducing memory bandwidth requirements.
- **Multi-Model Serving:** The SN40L chip pairs 1.5TB of DDR memory with 64GB of HBM, keeping up to 5 trillion total parameters resident simultaneously with millisecond model-switching latency. This sustains high utilization while serving many custom fine-tuned models without dedicating a separate accelerator to each.
- **Dynamic Architecture Evolution:** Current research focuses on dynamic reconfigurable dataflow using streaming tensor programs to handle mixture-of-experts models, variable context lengths, and sparse computation, enabling runtime graph reconfiguration at sub-microsecond latency instead of the static, microsecond-scale mapping used in current-generation systems.

→ NOTABLE MOMENT

Olukotun reveals that SambaNova maintains 5x lower latency than GPUs even at high batch sizes: tensor parallelism with overlapped communication remains efficient on dataflow architectures, while GPUs cannot effectively hide communication latency, fundamentally changing the throughput-latency tradeoff curve.
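The token-driven execution model behind the "Dataflow vs. Instructions" insight can be sketched in miniature: each operator fires as soon as tokens arrive on all of its inputs, with no program counter and no global barrier. This is an illustrative sketch only; the `Node` and `run_dataflow` names are invented for this example and are not SambaNova APIs.

```python
# Illustrative sketch (not SambaNova's implementation): token-based dataflow
# execution. An operator fires when tokens have arrived on all of its inputs;
# token arrival alone drives execution order -- no instruction fetch, no barrier.
from collections import deque

class Node:
    def __init__(self, name, fn, inputs):
        self.name, self.fn, self.inputs = name, fn, inputs
        self.tokens = {}                      # input name -> arrived token value

    def ready(self):
        return set(self.tokens) == set(self.inputs)

def run_dataflow(nodes, sources):
    """Execute a dataflow graph driven purely by token arrival."""
    consumers = {}                            # producer name -> consumer nodes
    for n in nodes:
        for i in n.inputs:
            consumers.setdefault(i, []).append(n)
    results = dict(sources)
    work = deque()
    # Deliver the source tokens; a node enqueues the moment its last token lands.
    for name, value in sources.items():
        for c in consumers.get(name, []):
            c.tokens[name] = value
            if c.ready():
                work.append(c)
    while work:
        n = work.popleft()
        out = n.fn(*(n.tokens[i] for i in n.inputs))
        results[n.name] = out
        for c in consumers.get(n.name, []):   # forward the output token
            c.tokens[n.name] = out
            if c.ready():
                work.append(c)
    return results

# Tiny graph computing y = (a + b) * (a - b)
nodes = [
    Node("sum",  lambda a, b: a + b, ["a", "b"]),
    Node("diff", lambda a, b: a - b, ["a", "b"]),
    Node("prod", lambda s, d: s * d, ["sum", "diff"]),
]
print(run_dataflow(nodes, {"a": 5, "b": 3})["prod"])  # -> 16
```

Because readiness alone determines firing order, independent operators (here `sum` and `diff`) impose no ordering on each other; on spatial dataflow hardware they would simply run concurrently, which is the behavior the locks-and-barriers-free design is meant to enable.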
💼 SPONSORS: Capital One · NotebookLM (notebooklm.google.com)
🏷️ TAGS: Dataflow Computing, AI Inference Optimization, Hardware Architecture, Agentic AI Systems
