    Why Cohere’s ex-AI research lead is betting against the scaling race

By admin | October 22, 2025


    AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in “scaling” — the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.
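
    As background (this framing comes from the broader scaling-laws literature, for example Hoffmann et al., 2022, not from this article): pretraining loss is typically modeled as a power law in parameter count N and training tokens D,

        L(N, D) = E + A/N^α + B/D^β

    where E is an irreducible error term and A, B, α, β are fitted constants. Each further drop in loss demands multiplicatively more compute, which is why scaling believers keep building bigger data centers, and why flattening returns would be so costly.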

    But a growing chorus of AI researchers says the scaling of large language models may be reaching its limits, and that other breakthroughs may be needed to improve AI performance.

    That’s the bet Sara Hooker, Cohere’s former VP of AI Research and a Google Brain alumna, is taking with her new startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it’s built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup this month to start recruiting more broadly.

    I’m starting a new project.

    Working on what I consider to be the most important problem: building thinking machines that adapt and continuously learn.

    We have an incredibly talent-dense founding team + are hiring for engineering, ops, design.

    Join us: https://t.co/eKlfWAfuRy

    — Sara Hooker (@sarahookr) October 7, 2025

    In an interview with TechCrunch, Hooker says Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so extremely efficiently. She declined to share details about the methods behind this approach or whether the company relies on LLMs or another architecture.

    “There is a turning point now where it’s very clear that the formula of just scaling these models — scaling-pilled approaches, which are attractive but extremely boring — hasn’t produced intelligence that is able to navigate or interact with the world,” said Hooker.

    Adapting is the “heart of learning,” according to Hooker. For example, stub your toe when you walk past your dining room table, and you’ll learn to step more carefully around it next time. AI labs have tried to capture this idea through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled settings. However, today’s RL methods don’t help AI models in production — meaning systems already being used by customers — to learn from their mistakes in real time. They just keep stubbing their toe.
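
    To make that distinction concrete, here is a minimal, purely illustrative Python sketch of a frozen production model versus one that updates from live feedback. All names and the scalar "update" are hypothetical stand-ins; Adaption Labs has not disclosed its methods.

    ```python
    # Purely illustrative: contrasts today's frozen production models with the
    # kind of continual, in-production learning Hooker describes. All names and
    # the scalar "update" are hypothetical; this is not Adaption Labs' method.

    class FrozenModel:
        """A typical deployed LLM: weights are fixed once it ships."""
        def __init__(self, weights: float):
            self.weights = weights

        def respond(self, prompt: str) -> str:
            return f"answer(w={self.weights:.2f}, prompt={prompt!r})"


    class AdaptiveModel(FrozenModel):
        """Hypothetical system that keeps learning from live feedback."""
        def __init__(self, weights: float, lr: float = 0.1):
            super().__init__(weights)
            self.lr = lr

        def feedback(self, reward: float) -> None:
            # The "stubbed toe": a real system would backpropagate through the
            # model; a scalar nudge stands in for that update here.
            self.weights += self.lr * reward


    model = AdaptiveModel(weights=0.0)
    print(model.respond("route this support ticket"))
    model.feedback(reward=-1.0)   # a user flags a bad answer
    print(model.respond("route this support ticket"))  # behavior can now differ
    ```

    The frozen model returns the same answer forever; the adaptive one changes after every piece of feedback, without an offline fine-tuning cycle.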

    Some AI labs offer consulting services to help enterprises fine-tune AI models to their custom needs, but it comes at a price: OpenAI reportedly offers its fine-tuning consulting only to customers who spend upwards of $10 million with the company.


    “We have a handful of frontier labs that determine this set of AI models that are served the same way to everyone, and they’re very expensive to adapt,” said Hooker. “And actually, I think that doesn’t need to be true anymore, and AI systems can very efficiently learn from an environment. Proving that will completely change the dynamics of who gets to control and shape AI, and really, who these models serve at the end of the day.”

    Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is wavering. A recent paper from MIT researchers found that the world’s largest AI models may soon show diminishing returns. The vibes in San Francisco seem to be shifting, too. The AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with famous AI researchers.

    Richard Sutton, a Turing Award winner regarded as “the father of RL,” told Patel in September that LLMs can’t truly scale because they don’t learn from real-world experience. This month, early OpenAI employee Andrej Karpathy told Patel he had reservations about the long-term potential of RL to improve AI models.

    These types of fears aren’t unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pretraining — in which models learn patterns from massive datasets — was hitting diminishing returns. Until then, pretraining had been the secret sauce behind improvements to OpenAI’s and Google’s models.

    Those pretraining scaling concerns are now showing up in the data, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models even further.
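
    One well-known way to spend that extra test-time compute is to sample several reasoning chains and take a majority vote on the final answer (the "self-consistency" idea). The sketch below is a hedged illustration of that general technique, with sample_chain as a hypothetical stand-in for a model call, not any particular lab's pipeline.

    ```python
    # Illustration of test-time scaling: more sampled reasoning chains means
    # more compute per question and, typically, a more reliable majority answer.
    # `sample_chain` is a hypothetical stand-in for a call to a reasoning model.
    import random
    from collections import Counter

    def sample_chain(question: str, seed: int) -> str:
        """Pretend LLM: each sampled chain of thought ends in a noisy answer."""
        rng = random.Random(seed)
        return rng.choice(["42", "42", "42", "41"])  # right ~75% of the time

    def answer(question: str, n_chains: int = 16) -> str:
        # More chains -> more compute -> a more stable majority vote.
        votes = Counter(sample_chain(question, seed=i) for i in range(n_chains))
        return votes.most_common(1)[0][0]

    print(answer("What is 6 * 7?"))  # majority over 16 chains, likely "42"
    ```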

    AI labs seem convinced that scaling up RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they thought it would scale up well. Meta and Periodic Labs researchers recently released a paper exploring how RL could scale performance further — a study that reportedly cost more than $4 million, underscoring how expensive current approaches remain.

    Adaption Labs, by contrast, aims to find the next breakthrough, and prove that learning from experience can be far cheaper. The startup was in talks to raise a $20 million to $40 million seed round earlier this fall, according to three investors who reviewed its pitch decks. They say the round has since closed, though the final amount is unclear. Hooker declined to comment.

    “We’re set up to be very ambitious,” said Hooker, when asked about her investors.

    Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks — a trend Hooker wants to continue pushing on.

    She also built a reputation for broadening access to AI research globally, hiring research talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says she plans to hire worldwide.

    If Hooker and Adaption Labs are right about the limitations of scaling, the implications could be huge. Billions have already been invested in scaling LLMs, on the assumption that bigger models will lead to general intelligence. But it’s possible that true adaptive learning could prove not only more powerful but also far more efficient.

    Marina Temkin contributed reporting.




