What's Actually Moving in AI Right Now
AI Trend Radar: Q1 2026.
Everyone has AI opinions. Very few people have AI data.
That changed this quarter. An NBER study surveyed 6,000 executives across four countries. METR updated their developer productivity research. The San Francisco Fed and Yale Budget Lab both published macro analyses. And a Lovable-hosted app exposed 18,000 users' data (it wasn't me...), giving us a real-world stress test of AI-generated code.
The picture is more interesting than the headlines suggest. It's not "AI works" or "AI doesn't work." It's a more nuanced story about depth, discipline, and where the real gaps are forming.
This is the first edition of the AI Trend Radar: an interactive quarterly snapshot of what's gaining momentum, what's losing steam, what's emerging, and what's on the horizon. A map of the signals that matter if you're making AI decisions right now.
Here's what the radar is telling us this quarter.
What's gaining momentum
Four signals are picking up speed right now. And if you look closely, they're all telling the same story.
The "are you using AI?" era is over. The "are you using it deeply and responsibly?" era has started.
"Agentic" everything is the one you've probably already felt. Every platform, from Microsoft to Claude Code to Zapier, is repositioning around agents. Gartner predicts 40% of enterprise apps will include task-specific AI agents by year-end, up from under 5% today. The terminology is real. The execution is still catching up to the marketing.
The AI depth gap is, for my money, the most important signal on this radar. The NBER study found most teams use AI about 90 minutes a week and see nothing from it. But separate research from OpenAI tells the other side of the story: the 5% of "frontier workers" who use AI across seven or more task types save five times as much time as everyone else. Five times. The difference? The frontier workers didn't just adopt a tool. They rebuilt how they work around it. The gap isn't access. It's depth.
AI-generated code security went from theoretical risk to front-page breach. A Lovable-hosted app exposed 18,000 users' data thanks to authentication logic that a human reviewer would catch in seconds. Veracode reports 45% of AI-generated code fails security tests. If you're building with vibe coding tools and shipping without a review step, this one should keep you up at night.
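What does that class of flaw actually look like? The full details of the breach haven't been published, so here's a hypothetical TypeScript sketch of the pattern in question: an endpoint that confirms someone is logged in, then authorises against whichever user ID the client sends. Every name in it (the Req shape, db.getProfile) is illustrative, not the actual Lovable code.

```typescript
// Minimal request/response shapes so the sketch stands alone.
interface Req { query: Record<string, string>; session: { userId: string } }
interface Res { json(body: unknown): void }

// Hypothetical data layer, standing in for whatever the generator wired up.
declare const db: { getProfile(userId: string): Promise<unknown> };

// The recurring pattern in generated code: the route confirms *a* user is
// logged in, then trusts whichever ID the client supplies.
export async function getProfileVulnerable(req: Req, res: Res) {
  // Any authenticated user can read any row by editing ?userId=...
  res.json(await db.getProfile(req.query.userId));
}

// The fix a human reviewer makes in seconds: identity comes from the
// verified session, never from client-controlled input.
export async function getProfileFixed(req: Req, res: Res) {
  res.json(await db.getProfile(req.session.userId));
}
```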
Context engineering is the quiet one, but it might matter most long-term. Anthropic and Martin Fowler's team both published definitive guides this quarter. The shift is from crafting individual prompts to managing the entire information environment you feed into a model. If you're still thinking prompt-by-prompt, you have some catching up to do.
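If "managing the entire information environment" sounds abstract, here's a minimal TypeScript sketch of the mindset shift. None of this is Anthropic's API; every function is a hypothetical stand-in for your own retrieval and memory plumbing.

```typescript
// Old habit: one hand-crafted prompt, sent cold.
export const promptByPrompt = "Summarise our Q1 churn numbers and suggest three fixes.";

// Context engineering: deliberately assemble everything the model will see.
interface ContextBlock { role: "system" | "user"; content: string }

declare function retrieveDocs(query: string, k: number): string[]; // e.g. a RAG store
declare function compactHistory(threadId: string): string;         // summarised past turns

export function buildContext(task: string, threadId: string): ContextBlock[] {
  return [
    { role: "system", content: "You are the analytics assistant. Cite sources." },
    { role: "system", content: `Prior conversation, compacted:\n${compactHistory(threadId)}` },
    { role: "system", content: `Relevant documents:\n${retrieveDocs(task, 3).join("\n---\n")}` },
    { role: "user", content: task },
  ];
}
```

The skill isn't in any one line. It's in deciding what earns a place in that array, in what order, and within what token budget.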
That's what's gaining traction. Now let's talk about what's losing it.
What's losing steam
Three narratives are fading. And frankly, good riddance.
The 'AI replaces developers' panic has finally grown up. A year ago, every other headline was predicting the end of software engineering. Now most informed sources talk about restructuring developer roles, not entirely eliminating them. The conversation got boring, which means it’s finally grounded in reality.
Single-tool comparisons are everywhere and increasingly pointless. "ChatGPT vs Claude vs Gemini" gets clicks, but the real story is about stacks and workflows, not horse races between models. If you're still picking tools based on benchmark tables, you're asking the wrong question (and also please stop interrogating the benchmarks).
Model release hype has normalised, and that's healthy. GPT-5.4, Claude Opus 4.6, and Gemini 3.1 all arrived without the frenzy of earlier generations. The conversation has shifted from the models themselves to what you build with them. That's a better place to be.
That's what's fading. But here's what's quietly forming underneath.
What's emerging
These four aren't mainstream yet. But I'd bet big money they will be by year-end.
Cognitive Surrender is a new academic framework for something you've probably already noticed: the slow drift from "AI helps me think" to "AI thinks for me." The Consumer Psychology Review maps the specific pathways from tool to crutch. It's not widely known yet, but give it six months.
The Skill Tax is the one that worries me most. A study in Frontiers in Medicine found gastroenterologists' unassisted detection rates dropped from 27% to 22% after just three months of AI reliance. Three months. Every hour of AI-boosted productivity carries an invisible cost: erosion of the skill underneath. The term isn't mainstream yet, but the pattern is showing up across medicine, law, and engineering.
Change Fitness comes from HBS, and it reframes something I've been saying for a while. The era of one-off "transformation programmes" is over. What matters now is continuous adaptability: building the organisational muscle to keep changing as AI capabilities shift every quarter. Saying 'it's all moving too fast' isn't a strategy any more. You need to rethink what change management looks like in this era.
The Review Gap is the sleeper. Code generation is now effortless. But developers are merging AI-generated code they don't fully understand, and nobody's talking about what that means for software quality. They will be before the year is out; the recent Amazon incident is an early sign of the challenge.
That's what's forming. Here's what's coming at us. Fast.
What's on the horizon
Four things I'm watching over the next few weeks.
AI ROI earnings season is about to get uncomfortable. As companies report Q1 results, every earnings call that mentions AI investment will get the same follow-up: "Show me the returns." And thanks to the NBER data, analysts now have the numbers to press the question.
AI code regulation could get its first serious moment. The Lovable breach, combined with the broader security data, is exactly the kind of story that triggers policy conversations. The fact that minors' data was exposed makes it harder to ignore.
Multi-agent architectures are about to get practical. Claude Code Agent Teams and Cursor's autonomous agents are maturing fast, and I expect the first credible "here's how we actually set up our AI agent team" playbooks to arrive imminently.
The junior developer pipeline question is the one nobody's asking loudly enough. Entry-level hiring is declining. AI handles the tasks that used to train juniors. But if juniors never get those reps, where do senior developers come from in five years? This one has a long fuse, but it's lit.
What to do tomorrow morning
Three things from this radar you should act on this week:
Audit your team's AI usage depth, not breadth. Stop counting "who has access." That's a 2023 metric. Start counting hours per week, number of task types, and whether anyone has actually redesigned a workflow around AI rather than just sprinkling it on top. The 5x gap between frontier workers and everyone else is your biggest opportunity right now (there's a counting sketch after this list).
Add one quality gate to any AI-generated code. If you're building with vibe coding tools, have a second AI agent (Claude Code, Codex) review every change before it reaches production (a sketch of such a gate also follows this list). It takes minutes. It catches the exact flaws that caused the Lovable breach.
Start a "context" conversation with your team. Move past "write better prompts." The real skill is managing the entire information environment your AI sees.
The trend radar will return next quarter. If there's a signal you think I'm missing, reply and tell me. See you next week! Faisal
P.S. Know someone else who’d benefit from this? Share this issue with them.
Received this from a friend? Subscribe below.
The Atomic Builder is written by Faisal Shariff and powered by Atomic Theory Consulting Ltd — helping organisations put AI transformation into practice.

