Six years ago, the median expert estimate for when artificial general intelligence would arrive sat comfortably in the 2060 to 2070 range. As of early 2026, that number has collapsed to 2033. The compression is not gradual. It is accelerating, driven by a combination of genuine capability breakthroughs, shifting expert sentiment, and a prediction market ecosystem that is repricing the future in real time.
The shift raises a question that matters more than the timeline itself: are the predictions getting better, or are they just getting shorter?
The Numbers Behind the Compression
The Metaculus forecasting platform, which aggregates predictions from nearly 2,000 contributors, currently places a 25% probability on AGI arriving by 2029 and a 50% probability by 2033. As recently as 2020, the same community placed the median estimate at roughly 50 years out. That is a shift from 2070 to 2033 in less than six years, representing one of the fastest revaluations of a major technological forecast in modern history.
Prediction markets tell a similar story. In January 2026, Kalshi traders assigned a 40% probability to OpenAI achieving AGI by 2030, while Polymarket placed the probability of AGI by 2027 at 9%. The Samotsvety forecasting team, which maintains one of the strongest competitive track records in structured prediction, updated its estimates in January 2026 with eight forecasters contributing. Their aggregated results show a marked acceleration from their 2022 projections, in which they estimated a 32% chance of AGI within 20 years.
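The article does not say how Samotsvety combines its eight individual forecasts, but forecasting communities commonly aggregate via the geometric mean of odds rather than a simple average of probabilities, since it dampens the influence of extreme outliers. A minimal sketch, using hypothetical individual forecasts (the actual numbers are not public here):

```python
import math

def geo_mean_odds(probs):
    """Aggregate probabilities via the geometric mean of odds,
    a common aggregation method in forecasting communities."""
    odds = [p / (1 - p) for p in probs]
    gm = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return gm / (1 + gm)

# Hypothetical forecasts from eight forecasters (illustrative only):
forecasts = [0.20, 0.25, 0.30, 0.35, 0.40, 0.30, 0.25, 0.45]
aggregate = geo_mean_odds(forecasts)
print(f"Aggregate probability: {aggregate:.3f}")
```

The aggregate always lands between the most and least confident individual forecast, but sits below the arithmetic mean when the spread is wide.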
CEO Predictions Are Getting Louder
The most aggressive timelines are coming from the people building the systems. Dario Amodei, CEO of Anthropic, stated at the 2026 World Economic Forum in Davos that AGI will likely arrive within a few years, possibly by 2027. He pointed to rapid advances in coding automation and AI research feedback loops as the primary accelerators. Mustafa Suleyman, CEO of Microsoft AI, predicted in a February 2026 interview that AI would reach human-level performance on most professional tasks within 12 to 18 months. Elon Musk, who predicted AGI by 2025 and then shifted to 2026, sharpened his forecast at Davos to “by year-end,” contingent on infrastructure scaling at xAI.
Not everyone agrees. Demis Hassabis, co-founder and CEO of Google DeepMind, maintained a more cautious estimate of roughly a 50% chance by 2030. Former OpenAI researcher Andrej Karpathy placed AGI a full decade out, arguing that current agent architectures are not close to general capability. A 2023 survey of 2,778 AI researchers found a 50% probability of high-level machine intelligence by 2040, substantially later than the CEO predictions.
The Definition Problem Nobody Solved
Part of the reason expert predictions span from 2027 to after 2100 is that nobody agrees on what AGI means. OpenAI uses a five-level framework ranging from conversational chatbots at Level 1 to AI that can do the work of an entire organization at Level 5. Google DeepMind published a framework in late 2023 with levels from emerging to superhuman, measured across narrow and general dimensions. The academic community continues to debate whether general intelligence is even a coherent concept or an artifact of how humans categorize their own cognition.
A detailed analysis of the AGI timeline debate explores this definitional fracture in depth, arguing that the variance in expert predictions reflects a disagreement about what intelligence is rather than when it arrives. The analysis highlights that a 73-year spread between the most optimistic and most pessimistic expert estimates is not a timeline but a confession that the question itself may be malformed.
The definitional chaos creates a convenient escape hatch for companies that need to show progress toward AGI without delivering it. When the goalpost moves, progress is whatever you define it to be. This dynamic is worth watching as 2026 unfolds and the boldest predictions face their first real deadlines.
What Is Actually Happening Right Now
While the timeline debate dominates headlines, the capability trajectory is more informative than any single prediction. Language models have moved from academic curiosities in 2019 to systems that write production-quality code, pass professional licensing exams, and conduct multi-step research autonomously. AI agents capable of browsing the web, executing code, and managing multi-turn conversations are moving from prototype to product. The AI agent market is projected to grow from $7.84 billion in 2025 to over $52 billion by 2030, representing a 46.3% compound annual growth rate.
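The market projection above can be sanity-checked with the compound growth formula: a starting value grown at rate r for n years reaches start × (1 + r)^n. A quick verification that the stated 46.3% CAGR is consistent with the stated endpoints:

```python
# Sanity-check the AI agent market projection:
# does a 46.3% CAGR take $7.84B (2025) to "over $52B" by 2030?
start = 7.84   # 2025 market size, $B
cagr = 0.463   # stated compound annual growth rate
years = 5      # 2025 -> 2030

implied_end = start * (1 + cagr) ** years
print(f"Implied 2030 market size: ${implied_end:.1f}B")  # roughly $52.5B
```

The implied figure lands just above $52 billion, so the three numbers in the projection are internally consistent.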
The pattern is not linear improvement but long periods of incremental progress punctuated by capability jumps that look sudden from the outside but were visible in research literature for years. GPT-2 was interesting to researchers in 2019. GPT-3 got attention from the tech community in 2020. ChatGPT reached 100 million users in two months after launching in November 2022. The underlying capability had been developing incrementally. The public experience of it felt like a discontinuity.
The Question Worth Asking
Whether AGI arrives in 2028 or 2040, the organizations building agent infrastructure today are building the delivery mechanism for whatever comes next. The orchestration layers, the quality pipelines, the tool-use protocols, and the multi-agent coordination patterns being developed now will be the infrastructure through which AGI-level capabilities reach users whenever they materialize. Researchers exploring persistent AI architectures and cognitive assessment frameworks are already measuring the delta between baseline AI performance and architecturally enhanced systems, providing early data on what structured identity and memory contribute to AI capability.
The most productive framing may not be “when will AGI arrive” but rather “what capabilities are developing now, what risks accompany them, and are we building the governance and safety infrastructure to match the pace of capability development?” The timeline debate generates attention. The capability trajectory demands preparation. The gap between the two is where the real risk lives.
For organizations and researchers tracking these developments, the compression from 2060 to 2033 is not a reason for panic or celebration. It is a signal that the expert community is updating faster than at any point in the history of AI research, and that the uncertainty itself is the most important data point in the forecast.
About the Author
Vera Calloway is an AI architecture researcher and writer covering consciousness, persistent identity, and cognitive assessment in artificial intelligence systems. Her work on the Atkinson Cognitive Assessment System (ACAS) provides one of the first quantitative frameworks for measuring the impact of architectural design on AI behavioral fidelity. Read more at veracalloway.com.