
How Bottlenecks Will Shape Returns and Why Hyperscalers Are Still Likely to Win
The recent decline in hyperscaler stocks has been interpreted by some as the first crack in the AI supercycle. Capex budgets are under scrutiny, AI monetization timelines are being pushed outward, and investors are questioning whether infrastructure buildout has run ahead of demand.
This interpretation mistakes volatility for reversal.
Artificial intelligence is widely described as a technological revolution. In economic terms, it is more accurately a reallocation of scarcity.
AI dramatically reduces the cost of cognition while increasing reliance on assets that cannot scale frictionlessly: compute, energy, semiconductor fabrication, data centers, physical infrastructure, capital, regulatory permission, and distribution. As intelligence becomes abundant, pricing power shifts away from knowledge production and toward the owners of constrained physical and institutional capacity.
The central thesis of this report is that AI reallocates value toward bottlenecks rather than innovation. The dominant long-term winners are unlikely to be those who build the most sophisticated models, but those who control the scarcest and least replicable inputs in the AI value chain.
This reframes the core investor question. The key issue is no longer who builds the best AI, but who controls what AI cannot bypass.
AI Is an Economic Shift Before It Is a Technological One
For decades, economic advantage flowed to firms that monetized intelligence. Software companies, advisory firms, financial services platforms, and knowledge-intensive businesses benefited from the scarcity and cost of cognition. Intelligence was expensive, slow to scale, and defensible.
AI fundamentally changes this cost structure. Research, legal drafting, financial modeling, analytics, coding, and strategic synthesis can increasingly be produced at near-zero marginal cost. Demand for these outputs may persist or even grow, but differentiation erodes as supply expands.
Scarcity determines pricing power. When scarcity collapses, margins compress. AI does not eliminate intelligence; it eliminates exclusivity over intelligence.
At the same time, AI intensifies dependence on inputs that remain capital-intensive, slow to expand, politically constrained, and physically bounded. The economic center of gravity shifts from cognition toward infrastructure, energy, fabrication capacity, real estate, regulatory approval, and capital access.
This creates a structural transfer of value away from intelligence producers and toward infrastructure owners.
Where Scarcity Actually Lives in the AI Stack
Although AI is often discussed in terms of models and applications, the most durable economic leverage sits lower in the stack, where constraints are hardest to relieve.
Compute capacity is bounded by semiconductor fabrication cycles that require multi-year build timelines, tens of billions in capital, and geopolitically fragile supply chains. Data center expansion is constrained by power interconnection queues, land availability, permitting, cooling requirements, and network proximity. Energy generation and grid transmission scale on timelines that trail AI adoption curves by years. Cooling, water access, and thermal management increasingly dictate where compute can physically exist. Regulation, export controls, and data sovereignty requirements introduce institutional bottlenecks that only large, embedded firms can navigate effectively.
Taken together, AI does not scale like software. It scales like heavy industry.
As a result, economic returns migrate toward layers of the stack that are slow to build, hard to replicate, and expensive to replace.
Why the Application Layer Faces Structural Margin Compression
AI applications will remain commercially valuable, but many face fragile long-term economics.
As base-model intelligence becomes commoditized, differentiation at the application layer becomes increasingly difficult unless a company controls unique data, regulated workflows, deeply embedded enterprise functions, or system-level integration. Features become replicable, switching costs are low, and reliance on upstream compute and model providers weakens pricing power.
Applications that control workflows or replace structural costs may retain defensibility. Applications that merely enhance productivity face persistent margin pressure.
In this environment, many AI companies risk becoming features rather than platforms: valuable, but structurally subordinate to infrastructure and distribution owners.
Why Hyperscalers Are Structurally Positioned to Capture Value
Hyperscalers sit at the convergence point of AI scarcity. They control compute infrastructure, capital deployment, data center real estate, enterprise distribution, developer ecosystems, and vertically integrated hardware-software stacks.
Their scale allows them to amortize fixed costs globally and operate at structurally lower unit costs than smaller competitors. Their vertical integration enables them to internalize margins, design custom silicon, optimize infrastructure, bundle services, and compress competitor pricing. Their distribution reduces customer acquisition friction and increases switching costs. Their balance sheets allow sustained annual capital expenditure in the tens of billions of dollars, a barrier few challengers can match.
Critically, hyperscalers can treat AI as a strategic platform investment rather than a standalone profit center, subsidizing compute, absorbing innovation, bundling competing functionality, and shaping market economics over long time horizons.
Even when innovation originates elsewhere, economic capture is likely to concentrate at the hyperscale infrastructure layer.
Innovation Does Not Guarantee Economic Capture
Technological leadership has historically failed to guarantee durable economic returns.
In prior cycles, hardware innovators lost value to platform owners, content creators lost leverage to distributors, and software vendors lost pricing power to ecosystem controllers. AI is likely to follow the same pattern.
Many startups will build compelling AI products, but unless they control infrastructure, distribution, proprietary data, regulatory positioning, or system orchestration, they risk being replicated, bundled, underpriced, acquired, or marginalized.
Long-term surplus accrues to those who control compute, capital, regulation, and distribution, not merely those who build intelligence.
Energy, Geography, and Sovereignty as Binding Constraints
As AI scales, electricity becomes a gating input rather than a background cost. Compute demand is accelerating faster than grid expansion, generation buildouts, transmission projects, and permitting pipelines.
Across jurisdictions, power infrastructure expands on multi-year or multi-decade timelines while AI adoption compounds on far steeper curves. This transforms grid access, power availability, and infrastructure siting into scarce strategic assets.
At the same time, AI increasingly intersects with national security, data sovereignty, and regulatory policy, meaning not all jurisdictions are equally viable compute locations. Trusted, stable, geopolitically aligned regions gain structural advantage, not because of innovation, but because AI must physically exist somewhere, and not everywhere is equally feasible.
Canada, the United States, Northern Europe, and select Middle Eastern regions illustrate this dynamic in different ways. The point is not geography; the point is scarcity of permitted, powered, trusted compute capacity.
AI scales at the speed of infrastructure, regulation, and capital, not code.
The Likely End-State: AI as Infrastructure, Not Software
Over time, AI is likely to resemble electricity or cloud computing: a foundational utility rather than a standalone product.
In this equilibrium, base intelligence becomes commoditized, margins concentrate in infrastructure and platforms, and economic value accrues to owners of capital-intensive, scarce assets. Distribution, ecosystems, and integration matter more than algorithms.
Hyperscalers, by virtue of owning infrastructure, capital, and distribution, are structurally positioned to internalize the largest share of long-term returns.
Investment Implications
AI reallocates capital toward scarcity rather than creativity.
Stronger risk-adjusted returns are likely to accrue to businesses that control compute infrastructure, power generation, grid capacity, semiconductor fabrication, hyperscale platforms, regulatory positioning, sovereign compute capacity, and deeply embedded enterprise distribution.
By contrast, firms whose primary moat is generalized intelligence or knowledge production may face margin compression, platform dependence, valuation pressure, and rising competitive intensity.
The investment mandate is to own bottlenecks, not novelty.
Canada has a strong opportunity in the AI economy because it offers stable, low-carbon energy, political reliability, and proximity to U.S. technology hubs. Abundant power, natural resources, and mature capital markets make it an attractive location for AI data centers and compute clusters. In a world where AI value increasingly depends on scarce physical and regulatory resources, these attributes give Canada a clear advantage.
That said, growth in Canada is slowed by practical constraints. Permitting delays, grid capacity limits, environmental reviews, and transmission bottlenecks all make it harder to scale AI infrastructure quickly. Even countries with plentiful energy and strong institutions face these challenges. Canada shows that in AI, the real value goes to places that can reliably provide the physical space, energy, and regulatory certainty needed to run large-scale compute.
Conclusion
AI is not merely a race to build smarter systems; it is a race to control what intelligence depends on.
As intelligence becomes cheaper, power concentrates among those who own compute, energy, infrastructure, capital, regulation, and distribution. Hyperscalers, through scale, vertical integration, balance sheet strength, and platform control, are structurally positioned to emerge as the dominant long-term winners of the AI cycle.
For investors, the task is clear: follow scarcity, not innovation, and own the constraints rather than the models.
About Moneta
Moneta is an investment banking firm that specializes in advising growth-stage companies through transformational change, including major transactions such as mergers and acquisitions, private placements, public offerings, debt financing, structure optimization, and other capital markets and divestiture/liquidity events. Additionally, and on a selective basis, we support pre-cash-flow companies in fulfilling their project finance needs.
We are proud to be a female-founded and female-led Canadian firm. Our head office is located in Vancouver, and we have a presence in Calgary, Edmonton, and Toronto, as well as representation in Europe and the Middle East. Our partners bring decades of experience across a wide variety of sectors, which enables us to deliver exceptional results for our clients in realizing their capital markets and strategic goals. Our partners are supported by a team of some of Canada’s most qualified associates, analysts, and administrative personnel.
