
AI Summary
→ WHAT IT COVERS

Terence Tao uses Kepler's 83-year journey from Platonic solid theories to elliptical orbit laws as a framework for analyzing where AI currently fits in mathematical discovery — covering hypothesis generation, verification bottlenecks, the Erdős problem dataset, AI success rates of 1-2% per problem, and what "artificial cleverness" versus genuine intelligence means for the future of math research.

→ KEY INSIGHTS

- **AI Verification Bottleneck:** AI has reduced hypothesis generation costs to near zero, but verification hasn't scaled to match. Journals report being flooded with AI-generated submissions that overwhelm peer review systems. The critical constraint in science is now evaluating which of thousands of generated theories represent real progress — a structural problem that existing scientific institutions were not designed to handle at this volume or speed.
- **AI Math Success Rate:** Large-scale systematic sweeps of Erdős problems reveal AI tools solve roughly 1-2% of problems attempted. The 50 problems solved out of ~1,100 look impressive in aggregate, but only because scale allows cherry-picking wins. Nearly all AI-solved problems had minimal prior literature — they required combining one obscure technique with an existing result, which represents the current median capability ceiling for autonomous AI math.
- **Breadth vs. Depth Complementarity:** AI systems excel at breadth — applying known techniques across thousands of problems simultaneously — while human experts excel at depth. Tao recommends redesigning mathematical workflows to exploit this: use AI to map new fields, clear low-difficulty problems, and identify "islands of difficulty," then direct human expertise specifically at those resistant clusters rather than distributing human attention broadly across all open problems.
- **Cumulative Progress Gap:** Current AI lacks the ability to build on partial progress within a problem. Models run a session, fail, and restart with no retained understanding — they cannot identify a partial handhold, consolidate it, and attempt the next step from that position. This trial-and-error-without-accumulation pattern is the core distinction Tao draws between "artificial cleverness" and genuine mathematical intelligence, which requires adaptive, iterative strategy refinement.
- **Formal Strategy Language:** Lean and similar proof assistants have automated deductive verification, but no equivalent formal language exists for mathematical strategy or plausibility assessment. Tao argues that formalizing how mathematicians evaluate whether a conjecture is worth pursuing — the semi-structured reasoning between raw data and full proof — could unlock the next wave of AI-assisted discovery, similar to how axiomatizing logic enabled automated theorem proving.
- **Productivity Shift in Practice:** Tao reports AI has changed the character of his papers more than their speed. Tasks like literature searches, generating numerical plots, and reformatting LaTeX now take minutes instead of hours, enabling richer papers with more code and visuals. However, the core work — solving the hardest 20% of a problem where existing methods fail — remains unchanged and still requires pen and paper without meaningful AI assistance.

→ NOTABLE MOMENT

Tao notes that Copernicus's heliocentric model was actually less accurate than Ptolemy's geocentric system when first proposed — Kepler made it more precise decades later. A simpler but initially worse theory can still represent genuine progress, which raises the unresolved question of how any automated system would recognize that distinction in real time.
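The distinction drawn in the "Formal Strategy Language" insight can be made concrete with a minimal Lean 4 sketch. The theorem below is an assumed illustrative example, not from the episode: Lean's kernel mechanically certifies every deductive step, yet nothing in the language represents the strategic judgment of why this proof route was worth trying.

```lean
-- Deductive verification: the Lean kernel checks that the proof
-- term really establishes the stated proposition from prior
-- lemmas (here, the core library's Nat.add_comm).
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- By contrast, the strategic layer ("the goal looks symmetric,
-- so try a commutativity lemma") has no formal counterpart;
-- that is the gap Tao suggests formalizing next.
```

The verified artifact is the proof itself; plausibility assessment, the semi-structured reasoning the episode describes, remains outside the system.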
💼 SPONSORS

- Labelbox: https://labelbox.com/dorkesh
- Mercury: https://mercury.com
- Jane Street: https://janestreet.com/dorkesh

🏷️ AI for Mathematics, Mathematical Discovery, Theorem Proving, Scientific Verification, Erdős Problems, History of Science
