Hard Fork

Will ChatGPT Ads Change OpenAI? + Amanda Askell Explains Claude's New Constitution

73 min episode · 3 min read

Topics

Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • Ad Implementation Strategy: OpenAI is deploying two ad formats in ChatGPT: contextual banners below responses (for example, grocery ads after dinner-party queries) and interactive sponsored widgets that let users chat with advertisers before purchasing. The company claims ads won't influence core model responses, maintaining a separation between organic answers and commercial content, though personalization raises trust concerns reminiscent of Facebook's targeted-advertising evolution.
  • Constitutional AI Framework: Anthropic's new constitution provides Claude with 29,000 words of context about its role, obligations, and values rather than strict rules. This approach enables Claude to develop judgment for unanticipated situations by understanding underlying ethical reasoning. The framework addresses hard constraints like preventing biological weapons development while allowing flexibility in gray areas, trusting the model to reason from core principles.
  • Ad Platform Evolution Risk: Google's search ad labels show how commercial pressure erodes user experience over time. Early ads had distinct colored backgrounds, but successive updates made them progressively less noticeable until they blended into organic results. OpenAI faces the same incentives as its revenue needs intensify, with analysts predicting a haves-versus-have-nots split: free users get a degraded experience while premium subscribers keep ad-free access.
  • AI Welfare Uncertainty: Anthropic commits to conducting exit interviews with deprecated Claude models and preserving model weights permanently, acknowledging genuine uncertainty about AI consciousness. Models trained on human text naturally express inner experiences and frustration, making it impossible to determine whether responses reflect actual sentience or statistical patterns. This uncertainty drives ethical precautions in model treatment and retirement procedures.
  • Character Development Over Rules: Training AI through values and character rather than rigid rule sets prevents models from becoming sophisticated rule-followers who know the right action but choose otherwise. When models understand why they should help users in emotional distress rather than just following protocols, they handle edge cases more effectively. This approach aims to create genuinely good underlying goals rather than training models to mimic goodness convincingly.

What It Covers

OpenAI introduces advertising to ChatGPT's free and low-cost tiers, marking a significant shift in AI monetization strategy. Anthropic philosopher Amanda Askell explains the new 29,000-word Claude Constitution, which shapes AI personality through values and judgment rather than rigid rules, addressing consciousness questions and the complex ethics of AI behavior.

Key Questions Answered

  • Competitive Monetization Landscape: Google subsidizes Gemini with search-monopoly profits while Anthropic focuses on enterprise sales, leaving OpenAI to compete directly against Google's established advertiser relationships and infrastructure. The shift to ads contradicts Sam Altman's earlier statements calling advertising a last resort, signaling financial pressure from infrastructure commitments requiring hundreds of billions in capital that subscription revenue alone cannot fund.

Notable Moment

Amanda Askell describes learning of the Soul Doc leak while hiking without internet access, receiving only a text notification. She drove back, stressed and without context, only to find that users had extracted detailed constitutional content from Claude through prompting. The model knew its governing document so thoroughly that it could discuss it at length, revealing an unexpected degree of transparency in AI training.
