#340 Steffen Cruz: Training AI Without Data Centres
Episode: 46 min · Read time: 2 min
Topics: Artificial Intelligence, Science & Discovery
AI-Generated Summary
Key Takeaways
- Distributed Pretraining Economics: Training large language models through geographically distributed nodes enables cost arbitrage unavailable to centralized data centers. When a facility builds out thousands of GPUs, training costs are fixed at construction. Distributed systems can target surplus energy pockets — such as Icelandic renewable energy available only 12 hours daily — reducing training costs to roughly 10–20% of conventional rates.
- Model Parallelism Architecture: Macrocosmos's IOTA system (Incentivized Orchestrated Training Architecture) splits models into small slivers across nodes rather than running full model copies on each machine. This approach allows training of frontier-scale models — targeting 70 billion parameters by mid-2025 and 100-billion-plus by 2026 — using consumer-grade hardware like Mac minis and CUDA-enabled GPUs.
- Supply-Side GPU Utilization Strategy: Cloud providers and neo-clouds with idle GPU inventory can plug surplus capacity into IOTA's network during rental gaps. Since training commands higher margins than inference token sales, providers earn better returns on underutilized hardware than selling compute at discounted spot rates, creating a direct bottom-line improvement without additional capital expenditure.
- Consumer Passive Income via Train-at-Home: Individuals with idle Mac minis, MacBooks, or consumer GPUs can download a one-click app, set availability windows — for example, 10 PM to 6 AM — and earn passive income contributing to model training runs. Macrocosmos reports 2,500 app downloads within the first two weeks, with the payout system rewarding participation proportionally to hours of compute contributed daily.
- Blockchain as Coordination Layer, Not Compute: The blockchain in BitTensor functions as an identity registry, synchronization clock, and transparent payout trigger — not as a compute or storage layer. Off-chain tracking records each node's contribution, then pushes verified totals on-chain to trigger token payouts. This architecture allowed Macrocosmos to scale beyond BitTensor's native 256-node limit to support thousands of simultaneous participants.
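The cost-arbitrage claim in the first takeaway reduces to simple rate arithmetic. A back-of-envelope sketch, using illustrative dollar figures (the episode gives only the 10–20% ratio, not actual rates):

```python
# Back-of-envelope cost arbitrage between centralized and distributed training.
# The rates below are illustrative assumptions, not figures from the episode;
# only the resulting ratio (roughly 10-20%) comes from the source.

conventional_rate = 2.00   # $/GPU-hour at a centralized data center (assumed)
surplus_rate = 0.30        # $/GPU-hour in a surplus-energy window, e.g. the
                           # ~12 h/day of cheap Icelandic renewables (assumed)
gpu_hours = 1_000_000      # size of a hypothetical training run

conventional_cost = gpu_hours * conventional_rate
distributed_cost = gpu_hours * surplus_rate

ratio = distributed_cost / conventional_cost
print(f"distributed cost is {ratio:.0%} of the conventional rate")  # 15%
```

The catch, of course, is that the distributed run must tolerate nodes that appear and disappear with the energy windows — which is what the orchestration layer exists to handle.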
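The "small slivers" idea in the second takeaway is model parallelism: each node holds only a contiguous slice of the model's layers and passes activations to the next node, so no single machine needs to fit the whole model. A minimal sketch of the layer-assignment step (this helper is hypothetical, not IOTA's actual code):

```python
def split_into_slivers(num_layers: int, num_nodes: int) -> list[range]:
    """Assign contiguous layer ranges to nodes as evenly as possible,
    so each node only ever loads its own sliver of the model."""
    base, extra = divmod(num_layers, num_nodes)
    slivers, start = [], 0
    for node in range(num_nodes):
        size = base + (1 if node < extra else 0)  # spread the remainder
        slivers.append(range(start, start + size))
        start += size
    return slivers

# e.g. an 80-layer model across 6 consumer machines:
for node, layers in enumerate(split_into_slivers(80, 6)):
    print(f"node {node}: layers {layers.start}-{layers.stop - 1}")
```

This is why consumer hardware suffices: a Mac mini that could never hold a 70-billion-parameter model can still hold, run, and train a 13-layer sliver of one.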
What It Covers
Steffen Cruz, CTO of Macrocosmos, explains how his company uses BitTensor's blockchain infrastructure to train large language models on distributed compute nodes worldwide. The approach eliminates the need for centralized data centers and enables cost arbitrage through surplus energy and idle consumer hardware such as Mac minis and spare GPUs.
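The coordination pattern described in the takeaways — track each node's contribution off-chain, then push verified totals on-chain to trigger proportional token payouts — can be sketched as follows. The ledger interface here is a hypothetical stand-in, not BitTensor's actual API:

```python
from collections import defaultdict


class OffChainTracker:
    """Accumulate per-node compute hours off-chain; settle on-chain in one
    batch. (Hypothetical sketch of the pattern, not Macrocosmos's code.)"""

    def __init__(self) -> None:
        self.hours: defaultdict[str, float] = defaultdict(float)

    def record(self, node_id: str, hours: float) -> None:
        self.hours[node_id] += hours  # cheap off-chain bookkeeping

    def settle(self, epoch_reward: float) -> dict[str, float]:
        # Proportional split: each node's share of the epoch's tokens
        # equals its share of the total compute hours contributed.
        total = sum(self.hours.values())
        payouts = {n: epoch_reward * h / total for n, h in self.hours.items()}
        self.hours.clear()  # on-chain settlement resets the off-chain ledger
        return payouts


tracker = OffChainTracker()
tracker.record("mac-mini-1", 8.0)  # e.g. a 10 PM-6 AM availability window
tracker.record("gpu-rig-7", 2.0)
print(tracker.settle(100.0))  # {'mac-mini-1': 80.0, 'gpu-rig-7': 20.0}
```

Keeping the per-contribution bookkeeping off-chain is what lets the system scale past BitTensor's native 256-node limit: the chain only ever sees one batched settlement per epoch, not thousands of individual updates.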
Notable Moment
Cruz describes a near-future scenario where a personal AI agent, after completing its assigned tasks by mid-morning, autonomously decides to contribute the machine's idle compute to a training network and earns money before the owner returns home — reframing personal computers as proactive economic participants rather than passive tools.