
Terry Sinovsky

1 episode
1 podcast

We have 1 summarized appearance for Terry Sinovsky so far. Browse all podcasts to discover more episodes.

Featured On 1 Podcast

All Appearances

1 episode
Radiolab

The Alien in the Room

61 min · Professor at the Salk Institute for Biological Studies

AI Summary

→ WHAT IT COVERS

Radiolab explores how artificial intelligence actually works under the hood, tracing the evolution of neural networks from simple pattern recognition to large language models like ChatGPT, driven by mathematical learning processes rather than programmed rules.

→ KEY INSIGHTS

- **Neural Network Architecture:** AI learns through layers of connected nodes (like neurons) that adjust connection strengths via calculus-driven feedback. A simple circle-recognition network uses 1,000 parameters; GPT-3 uses 175 billion, tweaked through repeated training cycles to minimize prediction errors.
- **Learning Through Prediction:** Modern AI systems don't categorize inputs; they predict what comes next, whether the next word in a sentence, pixel in an image, or note in a melody. Averaging connection strengths across thousands of training examples lets them generalize to patterns they've never seen before.
- **The Transformer Breakthrough:** Google's 2017 attention mechanism solved AI's context problem by processing entire sentences in parallel rather than word by word. This lets the system weigh which words matter most (like distinguishing "dog" from "door" in "what sound does my dog make").
- **GPU Parallel Processing:** Graphics processing units, originally designed for video games, let AI multiply and add numbers simultaneously across billions of parameters. This hardware shift made it feasible to train on the entire internet rather than limited datasets, unlocking emergent capabilities at massive scale.
- **Temperature Controls Creativity:** AI systems include a temperature setting that determines prediction precision. Lower temperatures select the most statistically likely next word; higher temperatures give the second or third most likely options a real chance, introducing controlled randomness that mimics creative spontaneity through intentionally less accurate mathematical answers.
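The "calculus-driven feedback" in the first bullet is gradient descent: compare a prediction to the right answer, then nudge each connection strength in the direction that shrinks the error. A minimal sketch with a single weight, reducing the episode's billions of parameters to one (the `train` function and toy data are illustrative, not from the episode):

```python
def train(xs, ys, lr=0.1, steps=200):
    """Fit y ≈ w * x by gradient descent on mean squared error."""
    w = 0.0  # start with a random-ish connection strength
    for _ in range(steps):
        # Derivative of mean((w*x - y)^2) with respect to w:
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # nudge w in the error-shrinking direction
    return w

# Toy data where the true relationship is y = 2x.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w = train(xs, ys)
```

After training, `w` converges toward 2.0; a real network repeats exactly this update across billions of weights at once.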
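The temperature knob in the last bullet can be sketched with a softmax: raw model scores (logits) are divided by the temperature before being turned into probabilities, so low temperatures concentrate probability on the top word while high temperatures spread it to the runners-up. The candidate words and scores below are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize to probabilities.
    Low temperature sharpens the distribution (near-greedy picks);
    high temperature flattens it, letting less likely words through."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word candidates after "what sound does my dog...":
words = ["bark", "barks", "meow", "sings"]
logits = [3.0, 2.5, 0.5, -1.0]

cold = softmax_with_temperature(logits, 0.2)  # top word dominates
hot = softmax_with_temperature(logits, 2.0)   # runners-up gain probability
```

Sampling from `cold` almost always yields "bark"; sampling from `hot` sometimes yields "barks" or even "sings", which is the controlled randomness the episode describes.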
→ NOTABLE MOMENT

European Go champion Fan Hui lost all five games to AlphaGo after confidently predicting a zero percent chance of defeat. He describes the experience as seeing himself clearly for the first time: realizing that humans constantly make mistakes while AI executes flawless mathematics without emotional interference.

💼 SPONSORS

- AT&T
- National Forest Foundation (https://nationalforests.org)
- omges.com (https://omges.com)
- Kleenex (https://kleenex.com)
- State Farm

🏷️ Artificial Intelligence, Neural Networks, Machine Learning, Large Language Models, AI Architecture
