The Future of Software Development

Anthropic didn't get lucky with MCP and Claude Code. They played a very deliberate game of chess, and it's reshaping who gets to build software and what 'developer' even means.

I've been running a software company for over a year where most of the code was written by AI. My business depends on understanding where this space is going. And something clicked recently about what Anthropic has been doing that I think is worth writing down.

Developers used to have a moat

If you could write code, you had something genuinely valuable. Most people couldn't, so being a developer meant you had leverage.

The sharp developers started experimenting with agents early on. In June 2023, OpenAI released function calling - a way for AI models to interact with other software rather than just generating text (OpenAI, 2023). Suddenly you could build tools that actually did things. Connect to databases, send emails, update spreadsheets. That was the first hint of where things were heading.
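
The pattern function calling introduced is simple: describe a tool to the model, let the model respond with a structured call instead of free text, then dispatch that call in your own code. Here's a minimal sketch of the loop - the `send_email` tool and the model's response are hypothetical stand-ins, and the wire format is simplified from the real API:

```python
import json

# A tool schema in the style of function calling: the model sees this
# description and can choose to emit a structured call to it.
TOOLS = {
    "send_email": {
        "description": "Send an email to a recipient",
        "parameters": {"to": "string", "subject": "string", "body": "string"},
    }
}

def send_email(to, subject, body):
    # Stand-in for a real integration (SMTP, an email API, etc.)
    return f"sent '{subject}' to {to}"

DISPATCH = {"send_email": send_email}

# When the model decides a tool is needed, it returns something
# shaped like this (simplified from the actual response format).
model_response = json.dumps({
    "name": "send_email",
    "arguments": {"to": "a@b.com", "subject": "Hi", "body": "Hello"},
})

# Your code parses the call, runs the real function, and feeds the
# result back to the model as context for its next turn.
call = json.loads(model_response)
result = DISPATCH[call["name"]](**call["arguments"])
print(result)
```

That dispatch step is the part that "actually did things" - the model only ever produces structured text; your code is what touches the database or the inbox.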

But it was still a developer's game. You needed to know what you were doing to wire it all together.

What Anthropic actually did

Anthropic's sequence of moves over the past eighteen months wasn't a series of lucky product launches. It was a chess game.

They were almost certainly building Claude Code behind the scenes before any of us knew about it. Claude Code can write code, read files, run commands - basically act as a developer inside your terminal. But they knew it needed to do more than just write code. It needed to connect to project management tools, databases, GitHub, Slack. Everything.

So what do you need for that? A universal adapter. One standard plug that lets AI connect to anything.

That's what MCP is. Model Context Protocol. Anthropic themselves call it "a USB-C port for AI applications" (Anthropic, 2024). Instead of building a custom integration for every tool, you build one MCP connector and any AI can use it.
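
Under the hood, MCP messages are JSON-RPC 2.0: a client asks a server what tools it offers, then invokes them by name. A simplified sketch of that exchange - the `query_database` tool is a made-up example, and the real spec adds initialization, capabilities, and richer content types:

```python
import json

# Step 1: the client discovers what the server can do.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server might answer with its tool catalogue:
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "query_database",   # hypothetical example tool
        "description": "Run a read-only SQL query",
        "inputSchema": {"type": "object",
                        "properties": {"sql": {"type": "string"}}},
    }]},
}

# Step 2: the client calls a tool by name with structured arguments.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "query_database",
               "arguments": {"sql": "SELECT count(*) FROM users"}},
}

print(json.dumps(call_request, indent=2))
```

Because every server speaks this same shape, any MCP-aware client - Claude Code, ChatGPT, whatever - can discover and use a tool it has never seen before. That's the universal adapter.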

Here's the strategic bit. They didn't release Claude Code first and then figure out how it would connect to things. They open-sourced MCP in November 2024, shipping it with connectors for GitHub, Slack, Google Drive, and Postgres (Anthropic, 2024). They put the universal adapter out first and let everyone adopt it.

And adopt it they did. Within thirteen months: 10,000+ active public servers, 97 million monthly SDK downloads (Anthropic, 2025). OpenAI added MCP support to ChatGPT in March 2025 (TechCrunch, 2025). Google followed in April (TechCrunch, 2025). Microsoft and GitHub joined in May (TechCrunch, 2025). AWS built 60+ official MCP servers. By December 2025, Anthropic donated MCP to a neutral foundation with OpenAI and Block as co-founders (Anthropic, 2025).

Anthropic got their biggest competitors to adopt their protocol, then open-sourced it. All in just over a year.

Then Claude Code drops. Research preview in February 2025, generally available by May. Six months later it hit a billion dollars in annual run-rate revenue (Anthropic, 2025). By early 2026 that had grown past $2.5 billion with roughly 54% market share in AI coding (Anthropic, 2026). And it could already talk to everything, because everything already spoke MCP. The ecosystem was ready and waiting, because Anthropic built the ecosystem first.

If they'd done it the other way round - launch Claude Code first, then release MCP, then wait for adoption - some other company might have built the integration layer. Anthropic would have been dependent on someone else's standard. Instead, they own the standard, they got universal adoption, and then they launched the product that uses it. That's not luck. That's strategy.

The market is stretching in both directions

The developer market isn't just changing. It's stretching at both ends.

At the top, experienced developers are moving into agent development - figuring out how to orchestrate AI systems that do the actual work. Gartner predicts 40% of enterprise applications will include AI agents by end of 2026, up from under 5% in 2025 (Gartner, 2025). Someone has to build those agents, and it's a genuinely new skill set.

At the bottom, people who could never code are building real things. Andrej Karpathy coined the term "vibe coding" in February 2025 - it became Collins Dictionary's Word of the Year (Collins, 2025). A quarter of Y Combinator's Winter 2025 batch had codebases that were 95% AI-generated (TechCrunch, 2025).

I know this is real because it's how CodeSpring started. I've never written a line of code myself. The entire product was built by describing what I wanted to AI tools. We went from nothing to six figures a month on a codebase that was entirely AI-generated.

84% of developers are now using or planning to use AI tools, up from 76% the year before (Stack Overflow, 2025). GitHub Copilot writes about 46% of the average user's code (GitHub, 2025). Nearly half. That was unthinkable two years ago.

So you've got more people entering from below, and existing developers either moving up into agent orchestration or getting squeezed. The ones who keep doing things the way they've always done them will find their work automated by the very agents their peers are building.

Where this is all going

Here's how I see the progression:

 THE SHIFT FROM TOOLS TO AGENTS
 ──────────────────────────────────────────────────────────────────

 SaaS (now)                You pay for a tool. You do the work.
   │                       "Here's a dashboard. Go analyse your data."
   │
   ▼
 AAAS                      You pay for an agent. It does the work.
   │                       "Here's your weekly report. It's already done."
   │
   ▼
 Skilled agents            Agent comes pre-loaded with skills.
   │                       No prompting. No chat. It pulls your data,
   │                       runs the analysis, and shows you the output.
   │                       You just approve or redirect.
   │
   ▼
 Self-improving agents     Agent gets better every time you use it.
                           Thumbs up? It updates its own skills.
                           Thumbs down? It adjusts and tries again.
                           Thousands of users doing this simultaneously
                           means the whole system improves recursively.

 ──────────────────────────────────────────────────────────────────
  YOU ARE HERE ──▶ Somewhere between SaaS and AAAS.
                   Most people haven't noticed the ground shifting yet.

My predictions

I've spent a lot of time researching where this is going. Here's what I'd bet on.

Agents replace SaaS tools, not just assist them. The business model for software is shifting from selling subscriptions to selling work outcomes. Sierra pioneered outcomes-based pricing for customer service. Salesforce now tracks "Agentic Work Units." Analysts predict agents will get "job titles, budgets, limits" - and by 2026, one-third of B2B payment transactions will involve autonomous agents handling invoicing or reconciliation (Forrester, 2026). When an AI agent can do the work that a $50/month SaaS tool helps a human do, why pay for the tool? Analysts have already coined the term "SaaSpocalypse" for what's coming.

The real money isn't in IT budgets, it's in labour budgets. Companies spend about 1% of GDP on IT software. They spend 13% on business labour. Vertical AI agents - ones built for specific industries like legal, finance, or healthcare - tap into that 13%, not the 1%. Bessemer Venture Partners argues this makes vertical AI a 10x larger opportunity than vertical SaaS ever was. Harvey AI, a legal agent, hit $190M in annual revenue with over 1,000 customers. Goldman Sachs is deploying autonomous agents for trade accounting and client vetting. Corporate legal AI adoption doubled from 23% to 52% in a single year (Wolters Kluwer, 2025). This is where the serious money flows.

AI agent capability doubles every seven months. METR's tracking data shows AI's ability to complete long-horizon tasks is doubling roughly every seven months. Right now, agents can reliably handle complex tasks for about 30 minutes. By end of 2026, that's predicted to extend to 8+ hours (Theory Ventures). Dario Amodei, Anthropic's CEO, predicted at Davos that AI may enable a single individual to operate a billion-dollar company by 2026. Whether you think that's hype or not, the direction of travel is clear.

Most of what gets built with AI will be rubbish, and that creates its own industry. 41% of all global code is now AI-generated, and studies show AI-assisted code has 1.7x more major issues and 2.74x higher security vulnerabilities (CodeRabbit, 2025). A new profession has already emerged - "vibe coding cleanup specialist" - with companies charging $200-400/hour to turn AI-built prototypes into production-ready software. The average large enterprise now runs 2,191 applications, with 61% outside formal IT oversight (Torii, 2026). Governance, security scanning, and cleanup tools for AI-generated code are becoming a market in themselves.

The companies that win won't build better AI models. Neither Anthropic nor OpenAI is building everything. They're explicitly leaving domain-specific agents, agent orchestration, security infrastructure, and data quality to the ecosystem. The winners will build what sits in between - the connective tissue that makes agents actually work inside real organisations with messy data, compliance requirements, and legacy systems. Think of it like plumbing. The water company provides the water. Someone still needs to build the pipes that bring it into every building.

SaaS is dying. Agents as a Service is what comes next. This is the prediction I feel strongest about. Right now, you pay $50/month for a SaaS tool that helps you do a job. Soon you'll pay for an agent that does the job. The subscription model made sense when software was a tool that a human operated. When the agent IS the operator, why would a VC fund another SaaS dashboard? They won't. The money moves to AAAS - Agents as a Service - where you're buying outcomes, not access. We're not in the IT budget anymore. We're in the labour market. That's a fundamentally different game.

Skill packs become the new product. This one's already happening, just not in the mainstream yet. Right now, when you use an AI agent, the quality of what you get depends almost entirely on how well you can ask questions. If you don't know how to prompt it, you get rubbish output. That's where vibe coding came from - write a better prompt, get a better result.

But that's already changing. People are building "skills" for AI agents - instruction files that teach an agent how to do a specific job. Think of them like apps for AI. Instead of interrogating an agent and hoping you ask the right questions, you load it with a skill pack and it already knows what to do. It already knows what questions to ask, what format to output, what to check for. On GitHub right now, developers are building and sharing these skill files. None of this has diffused beyond the software engineering crowd yet, but it will. When agents come pre-loaded with skills for accounting, legal review, marketing analysis, customer support - the whole interaction model flips. You stop teaching the AI. You start reviewing its work.
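
Mechanically, a skill is just structured instructions the agent loads before it starts, so the output no longer depends on the user's prompting ability. A minimal sketch - the file format and fields here are illustrative, not any particular standard:

```python
# A hypothetical skill file for a recurring business task.
SKILL_FILE = """\
name: weekly-sales-report
goal: Summarise last week's sales and flag anomalies
steps:
  - Pull orders for the last 7 days
  - Compare revenue to the prior 4-week average
  - Flag any product down more than 20%
output: A one-page summary with a table of flagged products
"""

def build_system_prompt(skill_text: str) -> str:
    # The skill becomes standing instructions: the agent already knows
    # what to do, what to check, and what format to produce.
    return (
        "You are an agent. Follow this skill exactly:\n\n"
        + skill_text
        + "\nAsk only for inputs the skill requires, then produce the output."
    )

prompt = build_system_prompt(SKILL_FILE)
print(prompt.splitlines()[0])
```

The interaction model flips exactly as described: the user never writes a prompt. They load the skill, supply the data, and review the result.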

Chat goes away as the main interface. Right now everyone interacts with AI through chat. You type, it responds, you type again. The entire experience depends on the person using it knowing what to ask. That's a massive bottleneck, and it won't last. The next step is agents that take data in automatically, apply their skills, generate their own interface to show you the output, and then improve based on whether you say the output was good or bad. No chat needed. You're not interrogating it anymore - you're supervising it. The agent does the work, you approve or redirect. That's a fundamentally different relationship with software, and it changes who can use these tools and what they can accomplish with them.

Agents that improve themselves through use. This is the bit that gets really interesting. Right now, improving an AI agent is a manual process. You go back and forth, refine the prompts, test the outputs, and eventually save what works. But agents are starting to build their own skills as they go. You give feedback - this output was good, this one wasn't - and the agent updates its own instructions. It gets better at its job the more it does it, the same way a new employee does. The difference is an agent can do this across thousands of interactions simultaneously. The recursive improvement loop - where the agent uses feedback to update its own skills, which makes the next output better, which generates more feedback - is going to be one of the most powerful dynamics in software over the next few years.
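
A toy version of that loop makes the mechanism concrete: thumbs-down feedback appends a correction to the agent's own instructions, so the next run - for this user or any other - starts from the improved skill. Everything here is illustrative:

```python
class SelfUpdatingAgent:
    def __init__(self, skill: list[str]):
        self.skill = skill  # the agent's own editable instructions

    def run(self, task: str) -> str:
        # Stand-in for a real model call that uses self.skill as context.
        return f"[{len(self.skill)} rules] output for: {task}"

    def feedback(self, good: bool, lesson: str = "") -> None:
        if not good and lesson:
            # The agent folds the correction into its skill, so every
            # subsequent run benefits - across all users at once.
            self.skill.append(lesson)

agent = SelfUpdatingAgent(["Summarise in plain English"])
print(agent.run("Q3 report"))
agent.feedback(good=False, lesson="Always include a revenue table")
print(agent.run("Q3 report"))  # now runs with the updated skill
```

Scale that `feedback` call across thousands of simultaneous users and you get the recursive improvement loop: feedback updates the skill, the skill improves the next output, and better outputs generate sharper feedback.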

What this means if you're building something

If you're a developer, learn agent orchestration. Learn MCP. Learn how skills work. Internally at Anthropic, their engineers saw a 67% increase in merged pull requests per developer per day after adopting Claude Code (Anthropic, 2025). That kind of productivity gain is coming for the entire industry. The developers who understand how to build agents, design skill packs, and orchestrate multi-agent systems will have enormous leverage.

If you're a non-technical founder like me, the tools available right now are extraordinary. You can build real products without writing code. But the window where "I built this with AI" is impressive is closing fast. Soon everyone will be building with AI. The differentiator won't be that you used AI to build it. It'll be whether your product can do the work itself rather than just helping someone else do it.

And if you're Anthropic, well played. Genuinely well played.


Sources