Eye on AI

#340 Steffen Cruz: Training AI Without Data Centres

46 min episode · 2 min read

Topics

Artificial Intelligence, Science & Discovery

AI-Generated Summary

Key Takeaways

  • Distributed Pretraining Economics: Training large language models through geographically distributed nodes enables cost arbitrage unavailable to centralized data centers. When a facility builds out thousands of GPUs, training costs are fixed at construction. Distributed systems can target surplus energy pockets — such as Icelandic renewable energy available only 12 hours daily — reducing training costs to roughly 10–20% of conventional rates.
  • Model Parallelism Architecture: Macrocosmos's IOTA system (Incentivized Orchestrated Training Architecture) splits models into small slivers across nodes rather than running full model copies on each machine. This approach allows training of frontier-scale models — targeting 70 billion parameters by mid-2025 and 100 billion-plus by 2026 — using consumer-grade hardware like Mac minis and CUDA-enabled GPUs. A rough code sketch of this layer-splitting idea appears after this list.
  • Supply-Side GPU Utilization Strategy: Cloud providers and neo-clouds with idle GPU inventory can plug surplus capacity into IOTA's network during rental gaps. Since training commands higher margins than inference token sales, providers earn better returns on underutilized hardware than selling compute at discounted spot rates, creating a direct bottom-line improvement without additional capital expenditure.
  • Consumer Passive Income via Train-at-Home: Individuals with idle Mac minis, MacBooks, or consumer GPUs can download a one-click app, set availability windows — for example, 10PM to 6AM — and earn passive income contributing to model training runs. Macrocosmos reports 2,500 app downloads within the first two weeks, with the payout system rewarding participation proportionally to hours of compute contributed daily.
  • Blockchain as Coordination Layer, Not Compute: The blockchain in BitTensor functions as an identity registry, synchronization clock, and transparent payout trigger — not as a compute or storage layer. Off-chain tracking records each node's contribution, then pushes verified totals on-chain to trigger token payouts. This architecture allowed Macrocosmos to scale beyond BitTensor's native 256-node limit to support thousands of simultaneous participants. A second sketch after this list illustrates this off-chain tallying and on-chain payout pattern.
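
To make the layer-splitting idea in the Model Parallelism takeaway concrete, here is a minimal Python sketch of pipeline-style model parallelism: each node holds only a contiguous sliver of layers and hands activations to the next stage, rather than running a full model copy. The Stage class, the stage sizes, and the single-process loop are illustrative assumptions, not Macrocosmos's actual IOTA protocol.

```python
# Minimal sketch of pipeline-style model parallelism, using a toy model and a
# single process standing in for many machines. Stage, build_stages, and the
# layer counts below are illustrative, not Macrocosmos's actual IOTA protocol.
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One node's sliver of the model: a few blocks, never the full network."""
    def __init__(self, dim: int, n_layers: int):
        super().__init__()
        self.blocks = nn.Sequential(
            *[nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(n_layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(x)

def build_stages(dim: int, total_layers: int, n_nodes: int) -> list[Stage]:
    """Split the full layer stack into roughly equal slivers, one per node."""
    per_node = total_layers // n_nodes
    return [Stage(dim, per_node) for _ in range(n_nodes)]

if __name__ == "__main__":
    stages = build_stages(dim=64, total_layers=12, n_nodes=4)
    x = torch.randn(8, 64)            # a micro-batch entering the first stage
    for i, stage in enumerate(stages):
        x = stage(x)                  # each "node" computes only its own sliver
        # In a distributed run, x would be serialized and sent to node i + 1 here.
    print("final activations:", tuple(x.shape))
```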
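
And to illustrate the coordination pattern in the last takeaway, here is a minimal sketch of off-chain contribution tracking feeding an on-chain payout step. The OffChainLedger class and submit_payouts stub are hypothetical names rather than Bittensor's or Macrocosmos's real API, and the proportional-split rule is only an assumption consistent with the summary above.

```python
# Minimal sketch of off-chain contribution tracking with an on-chain payout
# trigger. OffChainLedger and submit_payouts are hypothetical stand-ins, not
# Bittensor's or Macrocosmos's real API.
from collections import defaultdict

class OffChainLedger:
    """Tallies each node's contribution off-chain during an epoch."""
    def __init__(self):
        self.contributions = defaultdict(float)   # node_id -> compute-hours

    def record(self, node_id: str, compute_hours: float) -> None:
        self.contributions[node_id] += compute_hours

    def close_epoch(self) -> dict[str, float]:
        """Return verified totals and reset the ledger for the next epoch."""
        totals = dict(self.contributions)
        self.contributions.clear()
        return totals

def submit_payouts(totals: dict[str, float], pool_tokens: float) -> dict[str, float]:
    """Stand-in for the on-chain step: split a fixed token pool
    proportionally to each node's share of the epoch's compute."""
    total_hours = sum(totals.values()) or 1.0
    return {node: pool_tokens * hours / total_hours for node, hours in totals.items()}

if __name__ == "__main__":
    ledger = OffChainLedger()
    ledger.record("node-a", 6.0)      # e.g. a Mac mini left on overnight
    ledger.record("node-b", 2.5)
    print(submit_payouts(ledger.close_epoch(), pool_tokens=100.0))
```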

What It Covers

Steffen Cruz, CTO of Macrocosmos, explains how his company uses BitTensor's blockchain infrastructure to train large language models across distributed compute nodes worldwide, eliminating the need for centralized data centers and enabling cost arbitrage via surplus energy and idle consumer hardware such as Mac minis and spare GPUs.

Notable Moment

Cruz describes a near-future scenario where a personal AI agent, after completing its assigned tasks by mid-morning, autonomously decides to contribute the machine's idle compute to a training network and earns money before the owner returns home — reframing personal computers as proactive economic participants rather than passive tools.
