
How 3 CEOs Use AI to Run $10B in Companies | This Week in AI
This Week in Startups | AI Summary
→ WHAT IT COVERS
Three CEOs — Jeremy Frankel (Fundamental, $255M Series A unicorn), Victor Riparbelli (Synthesia, $4B valuation, $100M+ ARR), and Nick Harris (Lightmatter) — discuss large tabular models, the evolution of AI video, and photonic interconnects reshaping how enterprises process data and run AI infrastructure at scale.

→ KEY INSIGHTS
- **Large Tabular Models vs. LLMs:** Enterprises running fraud detection, demand forecasting, and ETA prediction still rely on pre-LLM machine learning because LLMs handle structured data poorly. Fundamental's Nexus model uses a non-autoregressive architecture without positional encoding, so column order doesn't affect output — critical for deterministic predictions across billions of structured database rows.
- **Photonic Interconnects — 3x Training Speed:** Lightmatter's optical fiber chips pack 16 wavelengths of light per fiber, delivering 1.6 terabits per second — equivalent to the simultaneous internet bandwidth of 1,600 homes. Replacing copper with photonics lets GPU clusters separated by up to one kilometer operate as a unified brain, cutting AI model training time by a factor of three.
- **AI Video Cost Economics:** Generating one eight-second AI video clip currently costs $1–$2, putting a personalized one-hour film at roughly $700 — economically unsustainable at $15/month subscription pricing. Synthesia's thesis is that infrastructure cost reductions over the next few years will bring real-time, interactive, personalized video within standard consumer subscription price points.
- **Focus Beats Breadth in AI Product Strategy:** Anthropic's rapid revenue growth — widely discussed at a Lightspeed founder retreat — is attributed to dropping voice and video features entirely and concentrating on code generation and B2B enterprise sales with no freemium tier. OpenAI's discontinuation of Sora reflects the same lesson, applied belatedly under competitive pressure.
- **Hyperscaler Custom Silicon Strategy:** Google ($180B capex), Amazon ($200B+), and Meta are all building proprietary AI chips — TPUs at Google, Trainium and Inferentia at Amazon, and MTIA at Meta — because at that spending scale, custom silicon development is a rounding error. Founders building on NVIDIA's CUDA should explore abstraction layers that enable AMD and custom-chip compatibility, avoiding single-vendor hardware dependency.

→ NOTABLE MOMENT
Victor Riparbelli described Synthesia's upcoming private-beta product: a real-time interactive video system in which salespeople role-play with AI customers, receive live objection coaching, and watch diagrams of client tech stacks drawn dynamically — a fundamental shift from broadcast video toward interactive, game-like media formats.

💼 SPONSORS
PayPal Open (https://www.paypalopen.com)

🏷️ Large Tabular Models, Photonic Computing, AI Video Generation, Enterprise AI Infrastructure, Custom Silicon
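The bandwidth and cost figures quoted above can be sanity-checked with a few lines of arithmetic. The per-wavelength line rate and per-home broadband speed are assumptions chosen to make the quoted numbers consistent, not figures from the episode:

```python
# Sanity-check two figures from the summary.

# Photonics: 16 wavelengths per fiber at an ASSUMED 100 Gbps per wavelength.
GBPS_PER_WAVELENGTH = 100
WAVELENGTHS_PER_FIBER = 16
fiber_tbps = WAVELENGTHS_PER_FIBER * GBPS_PER_WAVELENGTH / 1000
# Homes equivalent at an ASSUMED 1 Gbps per home connection.
homes_equiv = fiber_tbps * 1000 / 1

# AI video: one-hour film from eight-second clips at the $1-$2 midpoint.
CLIP_SECONDS = 8
COST_PER_CLIP = (1.0 + 2.0) / 2
clips_per_hour = 3600 / CLIP_SECONDS
film_cost = clips_per_hour * COST_PER_CLIP

print(fiber_tbps)   # 1.6 (Tbps)
print(homes_equiv)  # 1600.0 (homes)
print(film_cost)    # 675.0 (dollars, i.e. roughly $700)
```

At the $1.50 midpoint the hour comes out at $675, which matches the "approximately $700" claim; the low and high ends of the range are $450 and $900.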
