
AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI
The Diary of a CEO
AI Summary
→ WHAT IT COVERS

Journalist Karen Hao, author of *Empire of AI*, draws on 300+ interviews, including 90+ with OpenAI employees and executives, to argue that major AI companies, including OpenAI, Anthropic, and xAI, operate as modern empires: extracting data, exploiting labor, monopolizing research, and manufacturing existential narratives to consolidate power while avoiding democratic oversight.

→ KEY INSIGHTS

- **AGI Definition Manipulation:** OpenAI has redefined "artificial general intelligence" at least four times depending on the audience: a cancer cure for Congress, a revenue engine for Microsoft, a digital assistant for consumers, and an autonomous economic replacement for the public. Recognizing this pattern allows observers to evaluate AI claims critically: when a company redefines its core goal based on who is funding or regulating it, treat stated missions as fundraising rhetoric rather than scientific commitments.

- **The Imperial Power Framework:** AI companies replicate four historical empire behaviors: claiming resources not their own (training data, land for data centers), exploiting contracted labor across hundreds of thousands of workers globally, monopolizing knowledge production by bankrolling most AI researchers, and deploying existential-threat narratives to justify anti-democratic control. Understanding this four-part framework helps policymakers and citizens identify which AI company behaviors require structural regulation rather than voluntary ethical commitments.

- **Research Censorship via Funding Capture:** Because AI companies employ or fund the majority of AI researchers worldwide, they effectively set the research agenda by directing money toward favorable findings. Google's 2020-2021 firings of ethical AI co-leads Timnit Gebru and Margaret Mitchell after they co-authored a critical paper on large language model harms are a documented example. Independently funded AI safety research should therefore be weighted more heavily than industry-sponsored studies when evaluating model risk.

- **The Myth-Belief Blur Among Founders:** AI executives simultaneously manufacture existential-risk narratives as a strategic tool to block regulation and attract capital, while genuinely internalizing those narratives over time, a pattern consistent with the psychology of cognitive dissonance. Dario Amodei of Anthropic publicly states a 10-25% probability of civilizational catastrophe while continuing to scale models. Treat probability claims from executives with direct financial stakes in continued scaling as strategically motivated rather than purely scientific assessments.

- **Job Displacement Mechanics Are More Complex Than Automation:** Klarna reduced headcount from roughly 6,000 to under 3,000 over two to three years while doubling revenue, relying on 15% annual natural attrition rather than direct layoffs. Meanwhile, displaced workers, including award-winning Hollywood directors, are increasingly taking data annotation jobs that train the very models that eliminated their original roles. This breaks the career ladder: mid-tier rungs disappear while only high-expertise and low-wage positions remain at the extremes.

- **Sam Altman's Removal and Reinstatement Mechanics:** OpenAI co-founder Ilya Sutskever and CTO Mira Murati independently compiled Slack messages and emails documenting Altman's behavior, including a startup fund structured in Altman's name rather than OpenAI's, and presented them to three independent board members. The board fired Altman without notifying Microsoft, OpenAI's primary investor, until moments before executing the decision. That procedural failure, not the underlying concerns, triggered the stakeholder backlash that reinstated Altman within days; both Sutskever and Murati departed soon after.
- **AI Capability Scaling Is Targeted, Not General:** AI companies do not advance all model capabilities simultaneously. Internal development priorities are set by which industries (finance, law, medicine, commerce) will pay the most for specific capabilities, so the "race to AGI" framing obscures that companies are building narrow commercial tools, not general intelligence. Business leaders evaluating AI adoption should assess whether a model has been specifically trained on their industry's data and tasks, rather than assuming general capability claims apply to their use case.

→ NOTABLE MOMENT

During a live call patched into the episode, Klarna CEO Sebastian Siemiatkowski confirmed the company had halved its workforce from roughly 6,000 to under 3,000 people over two to three years while doubling revenue, but clarified that this was achieved through natural attrition rather than mass layoffs, and that the company is now doubling down on premium human customer service as a differentiator.

💼 SPONSORS

- Progressive Insurance: https://www.progressive.com
- Whisperflow: https://wisprflow.ai/steven
- LinkedIn Ads: https://www.linkedin.com/diary
- Salee eSIM: https://www.sale.ly

🏷️ AI Regulation, OpenAI, AGI Definition, Labor Displacement, AI Research Ethics, Sam Altman, Tech Industry Power
