honnibal.dev
Matthew Honnibal, the developer behind spaCy who has been publishing NLP research since 2004, makes a focused technical case against the AI plateau narrative. His argument sidesteps the financial-bubble question entirely and zeroes in on two levers he sees as underexploited: extended inference-time compute, and reinforcement learning, which unlike pretraining isn't bottlenecked by the supply of existing text. The piece traces a clean line from GPT-1's pattern matching to current reasoning models like Claude Opus, where systems are trained to generate their own intermediate questions rather than merely predict the next token. Honnibal's framing is measured: he's not predicting AGI timelines, just noting that no publicly known constraint forces a ceiling. Coming from someone adjacent to, but not financially invested in, the LLM scaling race, the argument carries different weight than the same claim from lab insiders.
