Equity

Who's really running AI? Inside the billion-dollar battle over regulation, with Alex Bores

22 min episode · 2 min read

Topics

Fundraising & VC, Artificial Intelligence, Economics & Policy

AI-Generated Summary

Key Takeaways

  • Regulatory landscape split: Public opinion on AI breaks into three distinct camps: roughly 10% want AI eliminated entirely, 10% (represented by Leading the Future) want zero regulation, and 80% want measured oversight ensuring broad societal benefit. Politicians targeting that 80% majority face disproportionate financial opposition from the minority "let it rip" faction funding super PACs.
  • RAISE Act scope: The law applies only to AI companies with over $500 million in revenue that have built sufficiently large frontier models — currently just Google, Meta, OpenAI, xAI, and Anthropic. Requirements include publishing and adhering to a safety plan and reporting catastrophic safety incidents that cause or could imminently cause injury or death.
  • Money asymmetry in AI politics: Leading the Future has raised $125 million and pledged at least $10 million specifically against Bores, while the pro-regulation Public First Action PAC (backed by $20 million from Anthropic) has spent under $500,000 supporting him — a 20-to-1 spending ratio that signals deliberate intimidation of state legislators.
  • Deepfake policy solution: Bores argues deepfakes are a solvable problem through mandatory adoption of C2PA, a free, open-source metadata standard already created by industry. New York legislation requiring content provenance via C2PA is near passage, with the governor including a version in her budget — a replicable model other states can adopt immediately.
  • AI training data transparency: Bores is advancing a bill requiring AI models to disclose basic training data information, including whether copyrighted material or personally identifiable information was used. A similar California bill passed the state assembly unanimously before stalling in the senate, suggesting broad bipartisan support exists for this specific, narrow disclosure requirement.

What It Covers

New York Assembly Member Alex Bores discusses his sponsorship of the RAISE Act, an AI safety law, and the resulting $10 million campaign against him by Leading the Future, a Silicon Valley super PAC backed by a16z, OpenAI's Greg Brockman, and Joe Lonsdale. The conversation reveals the financial scale of industry opposition to state-level AI regulation.

Notable Moment

Bores revealed that when Leading the Future announced its campaign against him, he proactively sought meetings with all founding members through mutual contacts. Only one agreed to a fifteen-minute conversation. The group then shifted its ad strategy from opposing AI regulation to attacking his prior employment at Palantir over its ICE contracts — despite Palantir co-founder Joe Lonsdale funding the PAC.
