Agentic Engineering

Why Your Default AI Choice Is Costing You the Productivity Upgrade

Anthropic, Perplexity, and NousResearch all launched hardware-independent agents this week. The market reacted immediately: Fastly -18%, Akamai -13%, Cloudflare -11% in a single session. Microsoft and Google are spending billions but producing a patchwork of SKUs. The cost of the default is the lost lead user feedback loop.

Dr. Florian Steiner

Claude AI Consultant & Trainer

6 min read

From Flo's AI Lab

This morning I shipped a skill that codifies the full pipeline for this newsletter. Input gathering from my vault, a draft written to a strict template, two parallel review subagents, cross-post generation for four channels, Buffer scheduling. All of it in one session, under two hours. The agency equivalent would be a three-week engagement plus a recurring subscription bill. The issue you are reading now is the first one produced through that skill. I am not telling you this because the skill is clever. I am telling you this because it is representative. Every week in my lab, agentic engineering moves another job from "I pay someone for this" to "I codify it once in Claude Code and it runs forever." That shift is not a curiosity. It is the quiet rewriting of how a one-person business competes with a team of ten.
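The shape of such a skill is easier to see as an orchestration sketch. Everything below is hypothetical and simplified — the function names, the channel list, and the review focuses are stand-ins I invented for illustration, not the actual skill — but the structure (sequential gather-and-draft, two parallel review subagents, fan-out to channels) mirrors the pipeline described above.

```python
from concurrent.futures import ThreadPoolExecutor

CHANNELS = ["newsletter", "linkedin", "twitter", "blog"]  # hypothetical channel list

def gather_inputs():                    # stand-in for pulling notes from the vault
    return "notes from the vault"

def draft(inputs):                      # stand-in for the templated drafting step
    return f"draft based on: {inputs}"

def review(text, focus):                # stand-in for one review subagent
    return f"[{focus} review] ok"

def cross_post(text, channel):          # stand-in for per-channel adaptation
    return f"{channel}: {text}"

def run_pipeline():
    inputs = gather_inputs()
    text = draft(inputs)
    # the two review subagents run in parallel, as in the skill described above
    with ThreadPoolExecutor(max_workers=2) as pool:
        reviews = list(pool.map(lambda f: review(text, f), ["style", "facts"]))
    posts = [cross_post(text, c) for c in CHANNELS]
    return text, reviews, posts         # a real skill would then hand off to scheduling

if __name__ == "__main__":
    _, reviews, posts = run_pipeline()
    print(len(reviews), len(posts))     # prints: 2 4
```

The point of the sketch is the leverage: once the steps are codified, re-running the whole pipeline costs one invocation, not one engagement.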

Most CEOs I speak with assume this kind of leverage is a privilege reserved for solo operators or well-funded startups. It is not. The tools that made this week's skill possible share one property, and every mid-market company has to decide whether they want it.

The Default AI Choice Is Quietly Expensive

Three announcements landed this week, and none of them got the boardroom coverage they deserved.

On Wednesday, Anthropic launched Claude Managed Agents in public beta, a fully managed infrastructure layer where Claude sessions run autonomously for hours and survive disconnections (SiliconANGLE). The same day, NousResearch shipped Hermes Agent, an open-source agent that lives on your own server with persistent memory and auto-generated skills (release notes). Two weeks earlier, Perplexity pushed its multi-model Computer agent into the enterprise, positioning itself directly against Microsoft and Salesforce (VentureBeat).

Two days after the Anthropic launch, the market noticed. On Friday, Fastly dropped 18%, Akamai fell 13%, and Cloudflare fell 11% in a single session (Yahoo Finance). Investors read Managed Agents as a direct competitive threat to the CDN and edge-compute stack: if the smart layer (the agent) and the execution layer (sandbox, state, tools) are bundled by the model provider, the middle disappears.

Three companies, three business models, one identical message: the container for serious AI work is no longer the employee's laptop. It is cloud infrastructure, or a managed server, or a sandboxed session you operate the way you operate a database. Agents run 24/7, they can be distributed across a team, and they can be managed centrally the way you already manage cloud workloads.

Now hold that against what most Mittelstand companies are actually running. Microsoft Copilot in the Office suite. Google Gemini in Workspace. Safely procured, already on the default slide of the IT strategy deck, but rarely as cheap or as integrated as the procurement memo suggests. Microsoft 365 Copilot is a $30 per user per month add-on on top of your base Microsoft 365 licence, and GitHub Copilot and Copilot Studio are separate subscriptions again (Microsoft Learn). Google bundled Gemini into Workspace in January 2025, then raised Workspace prices by 17 to 22 percent to pay for it, and from March 2026 any serious AI usage inside Workspace requires a separate AI Expanded Access add-on on top (Google Workspace Updates). The default AI choice in 2026 is the incumbent office suite plus a growing licence stack on top, not a single seat.
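A back-of-envelope calculation makes the "growing licence stack" concrete. The $30 per user per month Copilot add-on figure is from the Microsoft pricing cited above; the 200-seat headcount is an assumption I picked for illustration.

```python
# Annual cost of just the Copilot add-on for a hypothetical 200-seat company.
# $30/user/month is the published add-on price; it sits ON TOP of the base
# Microsoft 365 licence, before GitHub Copilot or Copilot Studio seats.
seats = 200                       # assumed headcount for illustration
copilot_addon_monthly = 30        # USD per user per month (cited figure)

annual_copilot_addon = seats * copilot_addon_monthly * 12
print(annual_copilot_addon)       # prints: 72000
```

Seventy-two thousand dollars a year for one line item of the stack, before anyone asks whether the surface it buys is actually learning from the market's sharpest users.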

The problem with that default is that it is quietly costing you the productivity upgrade your competitors are already getting. Eric von Hippel called this dynamic lead user innovation in his 2005 book Democratizing Innovation (MIT Press). The users at the sharpest edge of a trend build breakthrough patterns before the manufacturer understands them. In my own 2005 book on business webs (Springer), I argued the related point: the platforms that win network markets are the ones that watch which modular components gain traction in their community and pull them into the core faster than rivals respond. I wrote that about i-mode and eBay. Anthropic is running the same playbook at a speed neither ecosystem ever had.

Take the Ralph Wiggum loop. Geoffrey Huntley wrote a five-line bash script in late 2025 that fed an agent its own output back into its next prompt, and named it after the relentlessly persistent Simpsons character. Y Combinator startups picked it up, the technique spread through Twitter group chats, and by January Anthropic had shipped a native Ralph Wiggum plugin inside Claude Code (The Register). That is not theft. That is a platform operator watching its sharpest users and closing the distance. The vibe coding era did not end because the practice died. It ended because the platform watched it and absorbed the best of its habits.
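The mechanism fits in a few lines of bash. This is a hypothetical reconstruction, not Huntley's actual script: `run_agent` is a mock stand-in for a real agent CLI call so the sketch runs anywhere, and the fixed three iterations replace the real script's open-ended loop.

```shell
#!/usr/bin/env bash
# Ralph Wiggum loop, reconstructed: the agent's last output becomes the
# next prompt, so the agent relentlessly iterates on its own work.

run_agent() {
  echo "$1 +step"                  # mock; a real loop would invoke the agent CLI here
}

prompt="build the feature"
for _ in 1 2 3; do                 # the real script loops until the agent declares done
  prompt="$(run_agent "$prompt")"  # the whole trick: output fed back as input
done
echo "$prompt"                     # prints: build the feature +step +step +step
```

Five lines of feedback loop, discovered by a lead user, absorbed into the platform within months. That is the playbook in miniature.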

None of this is Anthropic-exclusive. The same mechanism is why Perplexity Computer is dangerous in the enterprise, why Hermes Agent has been pulling traction without a marketing budget, and why the vibe coding community keeps producing the next wave of patterns before any vendor catches up. It is also the single test Microsoft and Google are currently failing.

Microsoft's AI portfolio runs five separate licence lines: Microsoft 365 Copilot as an add-on, GitHub Copilot, Copilot Studio, plus Copilot surfaces in Word, Excel, Teams and Windows, each owned by a different product group. Google runs Gemini inside Docs, Slides, Sheets and Gmail, a separate Gemini Enterprise platform aimed at the same buyer, an AI Expanded Access add-on that becomes mandatory for higher usage from March 2026, an AI Ultra Access tier above that, and NotebookLM stapled to Slides with the Nano Banana image model on the side. Both companies are spending enough to fund a unified strategy. Neither has produced a single surface a CEO can point at and say, "this is where our AI strategy lives." It is a patchwork of SKUs wearing a Copilot or Gemini label, not a platform, and patchworks do not compound.

The consequence for a mid-market CEO: when IT tells you Copilot is the safe choice because it is integrated, they are really saying that no surface in the Microsoft stack is learning from the sharpest users in the wider market. You are buying the default, and the default has lost the lead user feedback loop. That loop is where the productivity upgrade is produced. The actual job is checking every quarter whether your vendor is still absorbing the best patterns from its community, rather than assuming it.


If you are running a business on the default AI stack, the question for this week is not "which tool should we switch to?" It is "which of our teams is already experimenting with something our vendor has not shipped yet, and are we listening to them?" That is where your productivity upgrade actually comes from. Forward this to the one colleague in your leadership team who still thinks the procurement conversation is about licences, not about platforms.

If your team is ready to stop being the default and start being the lead user, I run a Claude Code Enablement Sprint that takes a mid-market team from curious to productive with agentic engineering in two weeks, for a fixed fee. Details at drfloriansteiner.com.

Dr. Florian Steiner

Claude AI Consultant, Trainer and Speaker. Anthropic Community Ambassador Munich. I help product teams adopt Claude Code productively.

Book a call →