Software is Frozen Search. AI is Live Search.

Every piece of software we use today represents a problem space that someone explored once, encoded into instructions, and now replays cheaply billions of times. When you open Salesforce, you’re running the result of thousands of decisions about how CRM should work – decisions made years ago, frozen in code, distributed to millions of users who all get the same thing.

This is the entire economic logic of software: explore the problem space once, define a solution, and then amortize the cost across everyone. The better your exploration, the more users you capture. The more users you capture, the higher the switching costs. Print money, repeat.

I think that era is ending.

The shift isn’t from SaaS to “AI as a service” (although I’m sure there’s a market for that). It’s more fundamental – we’re moving the search from design-time to runtime. Instead of encoding one path through the solution space, we’re deploying systems that can search it live, with full context, every time.

The Great Inversion

I think we’re seeing a flip:

Traditional tech value:

  • Commodity: infrastructure and compute (AWS, Azure)
  • Differentiation: software features (Salesforce, SAP)

AI era:

  • Commodity: foundational model intelligence
  • Differentiation: context, constraints, and adaptation speed

In a SaaS company, you sell frozen solutions amortized across users. Your improvement velocity is limited by how fast you can ship features. Next-generation systems sell adaptive capability within boundaries – their improvement velocity is limited by model capability growth plus how fast you learn to exploit it.

And on top of that, they still ship feature improvements. This is a significant phase change.

What History Teaches

The Electricity Trap (1880s-1930s)

When electric motors arrived, most factories did the obvious thing: they ripped out their steam engines and put in electric motors. But they kept the same layout—massive line shafts running the length of the building, with machines arranged around them. They treated electricity as “better steam.”

These companies mostly failed.

Henry Ford won by asking a different question: If I didn’t have steam’s constraints, how would I design a factory? His answer: unit drive. One motor per machine. This enabled flexible layouts, which enabled assembly lines, which enabled just-in-time manufacturing, which enabled modern industry.

But as with every new technology, it took 30+ years from the first electric motors to Ford’s transformation. Why? Because organizational structure was the bottleneck. Factories were built – physically and conceptually – around steam’s constraints. The technology changed in 1882. The organizations changed in 1913.

Technology doesn’t create innovation in and of itself, but the application of it absolutely does.

Personal Computers (1970s-1990s)

In the late 1970s, Steve Jobs described computers as “bicycles for the mind” – tools that amplify human capability with minimal energy input. Not faster calculators. Not better typewriters. Something categorically new.

DEC, Wang, and the minicomputer giants treated PCs as exactly what Jobs said they weren’t: faster calculators for existing workflows. They optimized for known use cases. And they were extremely successful initially, but now…

They’re gone.

Microsoft won by building a platform for others to innovate on. Apple won by reimagining the interface itself. Lotus, Adobe, and hundreds of others won by discovering entirely new categories of work that couldn’t exist before – spreadsheets that let non-programmers model complex systems, desktop publishing that democratized design. As the cost of computation and other resources fell, brand-new solutions opened up (in the same way that once the last ice age ended, brand-new solutions to life became possible and new species proliferated).

Winners don’t optimise existing workflows. They create new possibility spaces.

The Internet’s False Starts (1990s-2000s)

Remember “brochureware”? Companies in the late 90s put their catalogs online and called it innovation. Pets.com shipped dog food through the Internet. Encyclopedias shipped on CD-ROMs.

All dead.

Amazon won by exploiting the Internet’s unique economics: infinite shelf space, personalization at scale, network effects in reviews. Google won by organising Internet-scale information—a task literally impossible in the physical world. eBay created markets that couldn’t exist without real-time global coordination.

The pattern is consistent: Winners exploit the unique property of the new medium. Losers replicate the old model on new infrastructure.

What’s Unique About AI?

If electricity’s unique property was flexible power distribution, and the Internet’s was zero marginal cost coordination, what’s AI’s?

Live adaptation to context.

Not “automate the workflow you have.” Not “faster execution of the process you designed.” But: “Understand the problem space, constraints, and available tools well enough to solve novel instances without predetermined paths.”

This is why chat interfaces are primitive – they’re the command-line era of AI. We’re still typing explicit instructions, negotiating who adapts to whom. The breakthrough comes when AI disappears into the substrate of work. When you don’t “use the AI,” you just think, and the environment responds intelligently.

I am super excited to see what this looks like as it plays out. I still use the terminal extensively, and the chat UI for foundational models is delightful, but this feels like just the starter of a tasting menu for AI.

The SaaS Extinction Event

Here’s a question: what percentage of SaaS survives on switching costs and multi-year contracts? How much of it exists because it’s good enough and the options are limited? If adaptation speed matters more than stability, those moats become liabilities.

Why commit to Salesforce for three years when an AI-native competitor adapts to your evolving process weekly? Why pay for features you don’t use when a system generates only what you need?

The 30-day contract hypothesis: Everything becomes rolling monthly agreements. The value of long-term enterprise contracts evaporates. Every SaaS company built on switching costs faces an existential threat. The market forces better solutions. (Much as pre-penicillin bacteria enjoyed stable host environments and therefore evolved to be best at nutrient metabolisation, once antibiotics proliferated, speed of adaptation became critical.)

But it’s subtler than “AI companies replace SaaS companies.” The transformation is in what gets centralised versus decentralised:

  • Centralised: Capability boundaries (what’s possible/safe), fundamental primitives (the TCP/IPs and databases), trained foundation models
  • Decentralised: Actual implementation, workflows, features, optimization targets, even data schema

Salesforce today: “Here’s our CRM. Everyone uses these objects, fields, and workflows.”

Tomorrow: “Here’s a system that understands CRM problems. It generates your CRM, personalized to your sales process, and evolves as you do.”

The company’s velocity no longer limits your improvement. Model capability growth plus your learning velocity does.

The New Moat

If intelligence becomes a commodity (everyone licenses GPT/Claude), what prevents commoditization?

1. Proprietary context: Your institutional knowledge, workflows, accumulated data. The model is generic; your application of it is unique. This is true of business in general – in a competitive market, you must have an edge.

2. Trusted constraint specifications: Healthcare AI that provably follows HIPAA isn’t just smart, it’s certified safe. Legal AI that respects privilege isn’t just capable, it’s trustworthy. The constraints become the product.

3. Learning velocity: The speed of your human+AI adaptation loop. How fast can you discover new possibilities, attempt them, generate training signal, and improve?

4. Continuous re-specialisation machinery: Not the specialised system itself, but the tooling to stay specialized as general intelligence improves and eats the bottom of your stack.

I think this last point is crucial – as foundation models improve, today’s “requires expertise” becomes tomorrow’s “general model handles it.” Specialisation is a moving target. The moat isn’t being specialised – it’s the capacity to re-specialise continuously.

The Biological Analogy

Life started as single-celled organisms and evolved into hyper-differentiated species solving the problem space in creative ways. The SaaS industry likewise evolved from homogenised solutions into differentiated point solutions.

Species specialise through divergence, but they remain part of cohesive ecosystems defined by their environment and resource constraints. They’re incompatible yet interdependent.

AI applications will likely follow this pattern:

General intelligence wins: High-volume, low-stakes, high-variance problems where “good enough” suffices – customer service, content generation, basic analysis. The tasks you’d give an intern or a new grad.

Specialization wins: High-stakes, narrow domains where expertise compounds – medical diagnosis, legal strategy, chip design, financial modeling. Where experience matters.

But the boundary moves. Continuously. What requires specialisation today becomes commodity tomorrow.

The winners will be organizations that can specialise fastest – that have the tools to rapidly encode domain knowledge, specify constraints, build evaluation harnesses, and create feedback loops.

The Tools for Rapid Specialisation

What’s needed:

1. Domain knowledge crystallisation: Not just data, but judgment, edge cases, failure modes. How do you capture “what good radiology looks like” versus “good legal discovery”? How good are specialised AI systems at capturing this themselves?

2. Constraint specification languages: Human-legible, machine-verifiable ways to say “these are the invariants.” Something between “natural language is too ambiguous” and “formal verification is too brittle.” A large chunk of the legal system exists purely because of the ambiguity of natural language. (A minimal sketch of what this could look like follows this list.)

3. Evaluation harnesses: General benchmarks mean nothing. You need continuous measurement of “is this medical advice safe” or “is this code performant” in your specific context. Second- and third-order effects are beyond the computational limits of even the most sophisticated systems – as much as we’d like to think we can distill everything into inputs and outputs, the truth is that hyper-dimensionality rules.

4. Context orchestration: Feeding the right information at the right time. Does the system need last quarter’s data or last decade’s? Your customer’s history or industry trends? When Big Data was taking off, many called it the new oil; for proprietary information, perhaps the value per barrel skyrockets.

5. Feedback loops: Outcomes feeding back into specialization. The faster this cycle, the faster you improve.
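
To make items 2 and 3 concrete, here’s a minimal sketch of a human-legible, machine-verifiable constraint spec feeding a tiny evaluation harness. Everything in it – Constraint, evaluate, the example rules – is a hypothetical illustration, not an existing library:

```python
# A hypothetical sketch: declarative constraints that both a domain expert
# and a harness can read. None of these names refer to a real library.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    name: str                     # human-legible label for the invariant
    check: Callable[[str], bool]  # machine-verifiable predicate over an output
    severity: str = "hard"        # "hard" failures veto; "soft" ones only score

# Domain experts encode the invariants once; every output is re-checked.
CONSTRAINTS = [
    Constraint("no_patient_identifiers", lambda out: "SSN:" not in out),
    Constraint("cites_a_source", lambda out: "[source:" in out, severity="soft"),
]

def evaluate(output: str, constraints: list[Constraint]) -> dict:
    """Run one model output through every constraint: any hard failure
    rejects it outright, soft failures merely lower the score."""
    failed = [c for c in constraints if not c.check(output)]
    return {
        "accepted": not any(c.severity == "hard" for c in failed),
        "failed": [c.name for c in failed],
        "score": 1 - len(failed) / max(len(constraints), 1),
    }

print(evaluate("Revenue summary [source: crm-q3], no identifiers.", CONSTRAINTS))
# -> {'accepted': True, 'failed': [], 'score': 1.0}
```

The point isn’t this particular shape; it’s that the invariants live in one declarative place both the expert and the harness can read – somewhere between ambiguous prose and brittle formal proof.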

The companies building these tools – the infrastructure for rapid AI specialisation – are building the platforms of the next era. And there are likely several multi-billion-dollar companies living in and between each of these domains.

The Human+AI Co-Evolution

I think it’s easy, when exploring these questions, to get trapped in a myopic view of how things will play out – thinking purely in terms of the machine. But learning isn’t just a machine learning problem – it’s not even primarily a machine learning problem.

The real loop is:

  • Humans discover new possibility spaces (“wait, we can do that now?”)
  • Which changes what they attempt (experimenting with the impossible)
  • Which generates novel training signal (defining “good” in unexplored territory)
  • Which expands AI capability
  • Which opens new possibilities…

This is combinatorial, not linear. Like jazz improvisation – each move reveals options that weren’t visible before.

Traditional organisations are built for stability: annual planning cycles, role specialization, documented processes, change management as a special event.

But if the game is adaptation speed, everything inverts:

  • Annual planning → continuous constraint-setting
  • Role specialization → fluid expertise
  • Documented processes → experiments with feedback
  • Change management → change as the steady state

The ultimate bottleneck isn’t compute, isn’t model capability, isn’t even human creativity. It’s organisational metabolism. Can human organisations change fast enough to exploit AI’s adaptation speed?

The Primitives Remain

None of this means software disappears entirely. The solved problems stay solved.

TCP/IP, database ACID properties, double-entry bookkeeping – these are frozen optimisations where the search is done (until the problem space changes, they will not shift). They become tools that AI orchestrates, the way a human uses a calculator. You don’t need AI to reinvent how packets route or how transactions commit; you need AI to compose these primitives with sophisticated context-awareness.
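
As a toy illustration of that composition, here’s a minimal sketch – every name in it, including plan_steps, is a hypothetical stand-in, with plan_steps hard-coded where a model call would go:

```python
# Frozen primitives, live orchestration: the primitives stay deterministic;
# only the choice and sequencing of calls is left to the model.
from typing import Callable

PRIMITIVES: dict[str, Callable[..., str]] = {
    # Solved problems stay solved: these bodies never change at runtime.
    "post_ledger_entry": lambda debit, credit, amount: f"posted {amount} {debit}->{credit}",
    "commit_transaction": lambda txn_id: f"committed {txn_id}",
}

def plan_steps(goal: str) -> list[tuple[str, dict]]:
    """Stand-in for a model call that maps a goal, plus live context,
    onto a sequence of primitive invocations. Hard-coded here."""
    return [
        ("post_ledger_entry", {"debit": "cash", "credit": "revenue", "amount": 100}),
        ("commit_transaction", {"txn_id": "txn-001"}),
    ]

def run(goal: str) -> list[str]:
    # The only adaptive layer: which frozen primitives to compose, and how.
    return [PRIMITIVES[name](**args) for name, args in plan_steps(goal)]

print(run("record a $100 cash sale"))
# -> ['posted 100 cash->revenue', 'committed txn-001']
```

The frozen layer keeps its guarantees; the adaptive layer keeps its freedom.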

In all cases, the middle layer dissolves. The feature factories, the one-size-fits-all workflows, the “everyone gets the same CRM fields” – that’s what AI replaces with generative, adaptive systems. The proven strategies – the best practices that actually win – still remain, and they inform the novel workflows.

The Unanswered Questions

Some fundamental questions remain open after exploring these ideas:

1. The legibility-ambience tradeoff: How do we build AI systems responsive enough to be useful but legible enough to trust? Too ambient, no auditability. Too explicit, we’re back to programming.

2. The interface problem: Chat is primitive. What comes after? How do we convey our intent without specifying execution? What’s the equivalent of the GUI breakthrough?

3. Composability versus specialization: Do specialized AI systems remain composable because they share an intelligence substrate? Or do they diverge into silos like traditional SaaS? Can legal AI and medical AI coordinate on a malpractice case?

4. The moving target problem: How do you build businesses on shifting ground when general intelligence continuously eats the bottom of your specialization stack?

5. Who writes the constraints? Engineers? End users? Some hybrid role we haven’t named? What’s the new craft? Does software engineering transform into “constraint archaeology”—surfacing implicit rules?

The Bet

Organisations that build machinery for learning velocity rather than optimised solutions will win the AI era.

Just as Ford won not by building better cars, but by building a system for building better cars faster.

Just as Microsoft won not by building better applications, but by building a platform for others to build applications.

Just as Amazon won not by selling books cheaper, but by building infrastructure for continuous experimentation and adaptation.

The coming decade belongs to organizations that can metabolize change – that treat adaptation as the core competency, that build systems for staying specialised, that measure themselves by learning velocity rather than feature counts.

Software isn’t disappearing. It’s thawing.

The question is: can we change as fast as it does?


Minor postscript: I think there’s a parallel with the Lamarckian-versus-Darwinian distinction here: the systems we’re developing are adaptive, not necessarily better – yet almost all discussion is predicated on the notion that they are.

