Where Application-Layer AI Still Wins When Infrastructure Owns the Cycle

The first phase of the AI cycle rewarded abstraction. The second phase is rewarding control, causing sector-specific repricing.
Capital has moved decisively down the stack – toward power, silicon, data centers, grids, cooling, and sovereign compute. That shift is not cyclical. It is structural. Intelligence is no longer scarce; the conditions required to deploy it at scale are.
This is the turn.
Software stocks have now fallen sharply while the broader market has held. Every prior software selloff of comparable magnitude tracked the Nasdaq in lockstep. This one has not. That divergence is the signal: this is not a macro correction. It is a structural repricing of software itself.
For AI application companies, the implication is uncomfortable but clarifying: the old SaaS playbook is finished. Growth stories untethered from physical constraint, labor substitution, or balance-sheet impact are being repriced – often brutally. Per-seat pricing models are under pressure as buyers hold headcount flat or reduce it. Budgets are shifting to AI-native initiatives. And the labs themselves appear to be generating more new ARR in weeks than the entire public software universe generates in years.
At the same time, a smaller class of application-layer companies is quietly becoming more valuable, not less.
The dividing line is no longer model quality or UX. It is whether the application captures economic rents hyperscalers cannot, or rationally will not, pursue. Understanding that boundary is now the central question in AI application-layer investing.
The Inversion: From Intelligence to Throughput
*AI, Infrastructure, and the Control of Scarcity* argued that AI is less a technological revolution than an economic reallocation of power. As the cost of intelligence collapses, the value of the systems that support intelligence – energy, compute, infrastructure, capital – rises.
This inversion is now visible in capital markets. Hyperscalers dominate the horizontal layer of the AI stack because they control the scarcest inputs: compute infrastructure, data-center real estate, energy access, global distribution, developer ecosystems, and capital intensity. They can subsidize AI services almost indefinitely because AI is not merely a product for them; it is strategic infrastructure.
Application companies that merely sit on top of this stack are structurally vulnerable. They are downstream of costs they do not control and platform decisions they cannot influence.
But that does not mean hyperscalers will capture all the economic value. History suggests the opposite. When infrastructure consolidates, economic rents migrate to places where scale is undesirable, liability is real, or specificity is unavoidable. The cloud era offers the relevant precedent: AWS, Azure, and GCP controlled infrastructure that theoretically positioned them to encroach up the stack. That didn’t happen – hundreds of billions in enterprise value was created by companies building best-in-class solutions on the hyperscalers’ infrastructure. But those companies earned their position by owning workflows, not merely providing access to them. The same principle applies now, under stricter conditions.
The investor question has shifted. It is no longer who builds the best AI. It is who controls what AI cannot bypass — and which application companies sit in the territory hyperscalers will not enter.
The Quiet Architectural Shift: From Models to Agents
For most of the past two years, the AI conversation focused on foundation models. GPT-4, Claude, Gemini, Llama, and their successors dramatically reduced the cost of generating intelligence. They made reasoning, coding, analysis, and content creation accessible at marginal cost.
But these systems largely remain stateless reasoning engines. They answer questions. They generate text. They assist humans. They do not own processes.
That boundary is beginning to dissolve.
Recent developments in agent frameworks signal the emergence of a new layer in the AI stack: systems that do not merely produce outputs but plan, execute, and manage multi-step workflows across diverse software environments. Instead of generating a response to a prompt, these systems break objectives into tasks, call external tools and APIs, evaluate outcomes, adapt their strategy, and continue execution until the objective is complete.
In other words, AI is moving from conversation to operation. This architectural shift is critical because it allows AI to move beyond assistance into process ownership. And process ownership is where durable economic value lives.
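The loop described above – decompose an objective into tasks, call tools, evaluate, and continue until done – can be sketched in a few lines. This is an illustrative toy, not any real agent framework; the `Tool` and `Agent` names and the semicolon-based "planner" are stand-ins for what a model-driven system would do.

```python
# Minimal sketch of an agent execution loop: plan, execute via tools,
# record outcomes, repeat until the objective is exhausted.
# All names here are illustrative, not a real framework's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]

class Agent:
    def __init__(self, tools, max_steps=10):
        self.tools = {t.name: t for t in tools}
        self.max_steps = max_steps

    def plan(self, objective):
        # A real agent would ask a model to decompose the objective;
        # splitting on ';' is a stand-in for that planning step.
        return [task.strip() for task in objective.split(";")]

    def execute(self, objective):
        results = []
        for task in self.plan(objective)[: self.max_steps]:
            tool_name, _, arg = task.partition(":")
            tool = self.tools.get(tool_name.strip())
            if tool is None:
                # Evaluation/adaptation would happen here in a real system.
                results.append(f"no tool for {task!r}")
                continue
            results.append(tool.run(arg.strip()))
        return results

# Toy tools standing in for external APIs the agent can call.
fetch = Tool("fetch", lambda q: f"fetched:{q}")
summarize = Tool("summarize", lambda q: f"summary:{q}")

agent = Agent([fetch, summarize])
print(agent.execute("fetch: quarterly filings; summarize: quarterly filings"))
```

The point of the sketch is the control flow, not the tools: once the loop owns task decomposition and tool invocation, the system manages a process rather than answering a prompt.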
Why Agentic AI Raises the Bar, Not Just the Stakes
Agent frameworks do not eliminate application-layer value. They raise the threshold for earning it.
In the early days of the AI boom, many companies created products that simply served as model interfaces. These tools delivered novelty and productivity but rarely displaced real costs. The market rewarded speed. User interfaces proliferated. Features multiplied. But most of these products were structurally thin – and the bear case for software is precisely that they will stay thin. If creating code is incredibly cheap, anyone can replicate existing software products. The mere specter of in-house development is enough to limit vendors’ ability to maintain prices. If autonomous coding agents are the future, who is better positioned to win than the labs developing the coding models themselves?
We do not dismiss this argument. But we believe it is being over-extrapolated. The hard parts of building software – identifying the right problem, designing the right solution, building a go-to-market, earning the trust required to deploy inside a complex enterprise – still exist in a world of abundantly cheap code. AI raises the bar for what software can accomplish; it does not eliminate the judgment required to deploy it usefully.
Once agent frameworks allow AI systems to autonomously execute workflows, the market’s evaluation criteria change in a specific and observable way. The question is no longer: does this AI produce useful answers? It becomes: does this AI replace something expensive? If the system does not control a workflow, an agent framework eventually will. Which means the competitive frontier has shifted from model capability to process control.
The companies at risk are not those that face AI; they are those that face it without workflow ownership, without contextual depth, and without a credible claim to consequence. Companies in the middle – moats depreciating, execution not yet accelerated, AI products bolted on rather than core – face terminal risk, not cyclical pressure.
Where Hyperscalers Rationally Stop
Hyperscalers are exceptional at scale. But they are not effective at intimacy.
Their economics favor horizontal capability, not domain accountability. As a result, they systematically avoid markets where:
- operational liability attaches to outcomes
- regulatory or fiduciary exposure is significant
- domain specificity overwhelms scale economics
- labor displacement creates customer-side political risk
- value capture requires owning a workflow rather than providing a tool
These are not technical limitations. They are business model constraints. Agent frameworks will make horizontal AI capabilities more powerful, but they will not change these structural boundaries. And those boundaries are precisely where application-layer companies still have room to win.
The surviving companies are not those who compete with hyperscalers on their own terms. They are those who have moved into territory hyperscalers have structurally vacated.
The New Test: Who Owns the Consequence?
Capital now applies a single, clarifying filter to AI applications: who owns the downside if the system fails?
If the answer is the customer, the product is optional. If the answer is the vendor, the system begins to resemble infrastructure.
Winning AI applications increasingly take responsibility for tangible business outcomes, sit contractually between the enterprise and operational risk, replace vendors, headcount, or external services, and accept liability hyperscalers prefer to avoid.
This is why agentic AI matters – but only in applied form. Autonomy is not the product. Process ownership is.
The Four Economic Rents
Rent 1: Regulated Workflows
Hyperscalers sell capability. They do not sell accountability.
Application-layer AI can capture durable rents by embedding itself inside workflows where decisions must be auditable, errors carry legal or financial consequences, human fallback is expensive, and regulators care who made the decision. Compliance monitoring, financial reconciliation, insurance claims adjudication, trade surveillance, industrial safety systems, and quality control operations all share this property: AI cannot be just a tool. It must become a system of record.
Regulatory and compliance moats are widening in the current environment. Accumulated certifications, audit histories, and jurisdiction-specific expertise impose lengthy qualification cycles on new entrants. These moats compound with deployment scale in a way that generic data access no longer does. Hyperscalers avoid these markets because liability caps scalability. Application companies can capture them precisely because they accept the accountability.
Rent 2: Quiet Labor Replacement
Despite the public narrative, capital is underwriting labor replacement aggressively – just quietly. The most successful AI systems today do not market themselves as replacing workers. Instead, they eliminate outsourced BPO contracts, compress junior labor layers, remove categories of external spend, and autonomously execute operational processes.
This is not productivity software. It is organizational restructuring disguised as software.
Hyperscalers will not sell this directly. The optics are wrong, the sales cycles are complex, and the liability is real. Application companies willing to absorb that friction can capture the rent. The companies best positioned are those that can demonstrate displacement rather than assistance, own workflow risk, and articulate a credible reason why a hyperscaler would not replicate the product.
Rent 3: Workflow Memory
Agent frameworks introduce another underappreciated moat: context accumulation. Systems that learn company-specific operating rules, encode exception handling, accumulate contextual memory, and improve through use inside a specific environment become expensive to remove even when cheaper alternatives exist.
Hyperscalers optimize for stateless scale. Application-layer winners optimize for stateful entrenchment. The deeper the workflow memory, the higher the switching cost and the larger the moat. Deployment friction is no longer necessarily a red flag – it is often evidence of a moat forming.
Building this kind of memory requires more than AI capability. It requires investment in data architecture: the transition from data access to contextual reasoning, from historical records to knowledge representation and enrichment, is a multi-year effort. Most companies looking to accelerate AI delivery run into roadblocks at the first step. The companies that complete this transition earn compounding advantages that are structurally difficult to replicate.
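A minimal sketch of what "workflow memory" means in practice: a store that accumulates company-specific rules and resolved exceptions through use, so each deployment compounds state a replacement system would start without. The class and method names are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative sketch of stateful workflow memory: learned rules plus
# an exception-handling history that accumulates through use.
# Names are hypothetical, not any vendor's API.

class WorkflowMemory:
    def __init__(self):
        self.rules = {}        # company-specific operating rules
        self.exceptions = []   # (context, resolution) pairs learned in use

    def learn_rule(self, context, rule):
        self.rules[context] = rule

    def record_exception(self, context, resolution):
        # Each resolved exception becomes reusable context.
        self.exceptions.append((context, resolution))

    def resolve(self, context):
        # Prefer an explicit learned rule, then fall back to the most
        # recent matching exception; otherwise escalate to a human.
        if context in self.rules:
            return self.rules[context]
        for past_context, resolution in reversed(self.exceptions):
            if past_context == context:
                return resolution
        return None

    def switching_cost(self):
        # Crude proxy: a replacement system starts with none of this state.
        return len(self.rules) + len(self.exceptions)

memory = WorkflowMemory()
memory.learn_rule("invoice>10k", "require dual approval")
memory.record_exception("vendor name mismatch", "match on tax ID")
print(memory.resolve("invoice>10k"))   # require dual approval
print(memory.switching_cost())         # 2
```

The `switching_cost` proxy captures the argument in the text: the moat is not the model, it is the accumulated state, and every resolved exception deepens it.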
Rent 4: Domain Depth
Hyperscalers build platforms, not professions. They do not want to encode industry-specific heuristics, manage regulatory nuance across jurisdictions, maintain vertical-specific edge cases, or carry reputational risk for specialized failures.
Application companies that internalize domain logic can still earn infrastructure-like returns. The tradeoff is explicit: less total addressable market, more inevitability. Companies that own domain depth trade TAM for durability – and in the current environment, durability is what capital is pricing.
Proprietary data remains an advantage, but raw data ownership is a weaker moat than it was. The competition has shifted from data control to actionable context – from having data to understanding how it translates to business value through knowledge representation, enrichment, contextual reasoning, and learning systems. Companies that own data outside the reach of the models, and build context graphs on top of it, earn widening advantages. Those that merely sit on historical records do not.
The New Business Model: Synthetic Operators
Across the companies succeeding in this environment, a pattern is emerging. The shift is not primarily technological. It is commercial and organizational.
| Old Model | New Model |
| --- | --- |
| Per-seat SaaS pricing | Outcome-linked economics |
| Product adoption | Process replacement |
| Feature velocity | Reliability and accountability |
| Stateless inference | Persistent memory |
| Data access | Contextual depth and learning |
| Vendor relationship | Operational ownership |
| GTM: land and expand | GTM: embed and entrench |
These companies resemble synthetic service providers: AI systems that replace consulting firms, outsourced operations, internal teams, or entire workflows. Capital understands this model because it resembles infrastructure economics – slower growth curves, but stronger entrenchment.
The business model transition is real but incomplete. Monthly seat-based subscriptions will no longer be the dominant pricing model as AI can do more economically valuable work than it did when those contracts were signed. Hybrid structures that combine a base platform commitment with consumption-based and outcome-based components have become the norm for AI-native offerings. There will be a long period of experimentation across the input-throughput-output continuum. The companies that lock in outcome-linked economics before the market standardizes will accumulate structural advantages that compound. First movers in commercial architecture matter as much as first movers in product capability.
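The hybrid structure described above – a base platform commitment plus consumption-based and outcome-linked components – reduces to simple arithmetic. The function below is a hedged illustration; the rates, field names, and the 5% displacement share in the example are hypothetical, not a market standard.

```python
# Illustrative hybrid pricing sketch: base commitment + consumption
# component + outcome-linked component. All rates are hypothetical.

def hybrid_invoice(base_fee, units_consumed, unit_rate,
                   outcomes_delivered, outcome_share):
    """Return a monthly charge broken into its three components."""
    consumption = round(units_consumed * unit_rate, 2)
    outcome_fee = round(outcomes_delivered * outcome_share, 2)
    total = round(base_fee + consumption + outcome_fee, 2)
    return {"base": base_fee, "consumption": consumption,
            "outcome": outcome_fee, "total": total}

# Example: $10k platform base, 1M agent actions at $0.002 each,
# plus 5% of $200k in documented cost displacement.
invoice = hybrid_invoice(10_000, 1_000_000, 0.002, 200_000, 0.05)
print(invoice)
```

The commercial experimentation the text describes is largely a question of how much weight sits in each of these three terms, and outcome-linked weight is what moves a vendor from tool to operator.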
The GTM Architecture Matters as Much as the Product
Building the right product is necessary but not sufficient. The commercial infrastructure required to serve buyers on outcome terms – forward-deployed engineering, ecosystem depth, sales capability, partner channels – is a distinct and demanding build that AI-native startups frequently underinvest in. This is especially true in the enterprise space.
Incumbents with strong existing moats and enterprise relationships have some real advantages here. But those advantages are depreciating assets if the product and operating models do not evolve in parallel. AI-native companies have execution velocity; many have yet to prove they can sustain it through the commercial complexity of deployment at scale.
The companies that close both product and GTM architecture gaps simultaneously are the ones that will compound. Neither position is fully secure without the other.
What Capital Now Underwrites
For application-layer AI to attract growth capital after the turn, it must demonstrate:
- displacement rather than assistance
- outcome-linked economics, not seat-based pricing
- ownership of workflow risk
- accumulating contextual memory
- domain depth that requires accountability, not just access
- a credible reason hyperscalers will not replicate the product
- commercial infrastructure capable of serving buyers at scale
Failure to answer any one of these questions is usually fatal – not because the technology is weak, but because the product is optional.
The signals capital is watching most closely: measurable AI-native revenue contribution as a percentage of total revenue; margin expansion as companies optimize AI infrastructure delivery; net dollar retention that reflects whether AI products expand or cannibalize existing spend; headcount-to-revenue ratios that indicate whether efficiency gains are real; and the pace at which companies ship new capabilities relative to frontier model releases. Companies showing acceleration on these dimensions will separate from the pack.
Final Thought: Applications Are Not Dead – They’re Getting Heavy
The future of AI is not better chat interfaces. It is autonomous systems running real processes.
That shift does not eliminate application-layer opportunity. It transforms it – and raises the price of entry significantly. Infrastructure has raised the bar. AI applications must now behave like operators rather than tools, replace costs rather than augment tasks, embed inside workflows that cannot pause, accumulate context that cannot be ported, and accept liability hyperscalers avoid.
The application-layer companies that emerge will not look like SaaS companies. They will resemble quiet operators, embedded systems of record, AI-driven service replacements, and infrastructure-adjacent platforms. They will grow more slowly but unwind with difficulty. They will trade narrative for inevitability.
We believe the global software market will continue to grow, as it has across every previous platform shift. Buyers will still pay for technology that enables them to better deliver on their own core competencies. The companies that clear the bar – whether AI-native startup or incumbent that has made the structural transition – will survive this period of creative destruction and emerge to take more share of an expanded market.
The future of application-layer AI is not thinner. It is heavier, slower, harder to unwind, and far more valuable. That is where the rents still are.
About Moneta
Moneta is an investment banking firm that specializes in advising growth-stage companies through transformational changes, including major transactions such as mergers and acquisitions, private placements, public offerings, debt financing, structure optimization, and other capital markets and divestiture / liquidity events. Additionally, and on a selective basis, we support pre-cash-flow companies in fulfilling their project finance needs.
We are proud to be a female-founded and led Canadian firm. Our head office is located in Vancouver, and we have presence in Calgary, Edmonton, and Toronto, as well as representation in Europe and the Middle East. Our partners bring decades of experience across a wide variety of sectors which enables us to deliver exceptional results for our clients in realizing their capital markets and strategic goals. Our partners are supported by a team of some of Canada’s most qualified associates, analysts, and admin personnel.
