Big Tech earnings this week put a number on something investors have been guessing about for the last six months.

Combined 2026 capital expenditure plans from Alphabet, Amazon, Microsoft, and Meta now total roughly $725 billion, according to Financial Times reporting on the latest guidance updates. That figure is up 77% from last year's record of $410 billion. Microsoft alone, in its Wednesday earnings call, guided to $190 billion in 2026 capex, a 61% jump and well ahead of the $155 billion analysts had been modeling. Amazon committed to $200 billion. Alphabet raised to $185 billion. Meta raised to a range of $115 billion to $135 billion.

The reaction has been predictable. Everyone is asking the same question: how much of this $725 billion will flow to Nvidia?

That is the wrong question.

Nvidia will be fine. The more interesting question is what happens inside the data centers Nvidia chips get shipped to, because that is where the actual bottleneck has moved. And the company describing the new bottleneck most clearly is one most readers have probably never heard of.

The Phrase That Should Reset The Frame

On Astera Labs' Q4 2025 earnings call in February, CEO Jitendra Mohan said something that the AI infrastructure narrative has not fully caught up to yet:

"Increasingly, the bottleneck is shifting from compute to connectivity, and connectivity is where we play."

That sentence is the thesis of the next phase of AI infrastructure spending. For the last three years, the binding constraint on training and serving large AI models has been compute. Specifically, GPUs. Whoever could get more H100s, and later B200s and the rest of the Blackwell lineup, won. That shaped Nvidia's stock chart, the hyperscaler capex announcements, and most of the financial press coverage of AI.

That phase is ending. Not because compute is no longer scarce, but because compute scarcity is being replaced by a different scarcity. The rack-scale architectures hyperscalers are now deploying require GPUs to talk to each other at speeds and latencies that conventional data center connectivity cannot deliver. A single training run for a frontier model now requires synchronizing data across thousands of GPUs in real time. The compute exists, in many cases. The connective tissue that lets compute work in concert at that scale is what has run short.
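To see why connectivity becomes the limiter at that scale, a back-of-envelope sketch helps. The cost model below is the standard ring all-reduce formula; the payload size and link speeds are illustrative assumptions, not figures from any vendor or from the companies discussed here.

```python
# Back-of-envelope sketch (not a benchmark): time to synchronize gradients
# across a GPU cluster, using the standard ring all-reduce cost model.
# All numbers are illustrative assumptions.

def allreduce_seconds(payload_gb: float, n_gpus: int, link_gbps: float) -> float:
    """Ring all-reduce moves ~2*(n-1)/n times the payload over each link."""
    traffic_gb = 2 * (n_gpus - 1) / n_gpus * payload_gb
    return traffic_gb * 8 / link_gbps  # GB -> Gb, divided by link speed in Gb/s

payload = 140.0  # ~70B parameters in fp16 ≈ 140 GB of gradients (assumption)
for link in (100.0, 400.0, 800.0):  # per-GPU interconnect speed, Gb/s
    t = allreduce_seconds(payload, n_gpus=4096, link_gbps=link)
    print(f"{link:>5.0f} Gb/s link -> {t:6.1f} s per full gradient sync")
```

Under these assumptions, a 100 Gb/s link costs over twenty seconds per full sync no matter how fast the GPUs compute; octupling the link speed cuts that to under three. That is the bottleneck shift in arithmetic form.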

Here is the simplest way to explain it. Imagine a factory with a thousand workers. For years the constraint was hiring enough workers. Now you have the workers, but the hallways between their stations are too narrow, the elevators are too slow, and the messages they need to send each other to coordinate get stuck in queues. The number of workers is no longer the limit. The infrastructure between them is.

That infrastructure, in an AI rack, is silicon. Switches, retimers, signal conditioners, smart fabric controllers, and high-bandwidth memory interfaces. It is not glamorous. It is not what gets covered when a hyperscaler announces a new model. But it is now the determining factor in how fast AI compute can actually be deployed, and the companies that build it are the ones the $725 billion is about to flow through next.

The $6.5 Billion Detail Almost Nobody Reported

Astera Labs reported Q4 2025 results on February 10. The headlines focused on the revenue beat: $270.6 million, up 92% year-over-year, with full-year 2025 revenue at $852.5 million, up 115%. Solid numbers, beat consensus, raised guidance. The kind of quarter every AI infrastructure company has been delivering.

What got buried in the same release was something more strategically significant. Amazon issued Astera Labs a warrant tied to up to $6.5 billion in cumulative product purchases, expanding a previous $466 million warrant by more than 13 times. That was not a small expansion. That was Amazon writing into a public filing that it expects to spend more on Astera's chips over time than Astera's entire trailing-twelve-month revenue line. Given that AWS is one of the four hyperscalers whose 2026 capex just collectively crossed $725 billion, the warrant tells you something specific about where AWS thinks the technical constraints of the buildout are.
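The scale of that warrant is easy to verify with the figures already in this piece; nothing below is new data:

```python
# Scale of the Amazon warrant, using only figures quoted in the article.

prior_warrant, new_warrant = 466.0, 6500.0   # $M
ttm_revenue = 852.5                          # $M, full-year 2025 revenue

print(f"warrant expansion: {new_warrant / prior_warrant:.1f}x")  # ~13.9x
print(f"vs trailing revenue: {new_warrant / ttm_revenue:.1f}x")  # ~7.6x
```

The warrant's ceiling is nearly eight times what the company sold to all customers combined over the trailing year.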

Despite the beat, the stock fell as much as 10% in extended trading, driven mostly by the simultaneous announcement that the CFO was stepping down. The market reaction is informative the same way the Intellia reaction last week was. The science (or in this case, the technology) was excellent. The corporate story had a wrinkle, and traders priced the wrinkle over the underlying business signal. By April, shares had recovered most of the move, but the gap between what the company reported and what the stock did reveals the same pattern that keeps showing up in this AI cycle: most investors are still pricing the wrong layer.

What Astera Actually Builds

Strip away the company's marketing language and the picture is concrete.

A modern AI rack contains 36 to 72 GPUs working in parallel on a single training or inference workload. Those GPUs need to share data continuously, at terabit-per-second speeds, with sub-microsecond latency. The wires alone cannot do this. Data signals degrade over even short distances on copper. Different protocols speak different languages. Memory in one part of the rack needs to be visible to processors in another part of the rack. The components that handle these problems are what Astera ships:

  • Aries retimers: chips that regenerate degraded signals across PCIe connections inside racks, allowing GPUs in different positions to communicate at full speed
  • Taurus active electrical cables: signal-conditioning modules embedded in the cables that link rack components, supporting 400G and 800G Ethernet
  • Scorpio P-Series: PCIe Gen 6 fabric switches for scale-out networking between racks
  • Scorpio X-Series: a new product line that started shipping in Q1 2026, targeting the $10 billion scale-up switching market connecting GPUs within a single rack
  • Leo CXL controllers: chips that enable memory pooling across racks, the underlying technology Microsoft, Intel, and SAP just announced as the first public CXL-attached memory deployment in Azure
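For a sense of the speeds these parts operate at, the headline PCIe 6.0 numbers are enough for a rough calculation. The protocol-efficiency figure below is an approximation, not a spec from Astera or any vendor:

```python
# Rough arithmetic on a PCIe Gen 6 x16 link, the kind of connection
# retimers and fabric switches sit on. Headline spec numbers only.

LANE_RATE_GTPS = 64      # PCIe Gen 6: 64 GT/s per lane (PAM4 signaling)
LANES = 16               # a typical GPU attach is x16
FLIT_EFFICIENCY = 0.95   # approximate protocol overhead (assumption)

raw_gbps = LANE_RATE_GTPS * LANES            # 1024 Gb/s per direction
usable_gbs = raw_gbps / 8 * FLIT_EFFICIENCY  # convert to GB/s, apply overhead
print(f"raw: {raw_gbps} Gb/s, usable: ~{usable_gbs:.0f} GB/s per direction")
```

At 64 GT/s over PAM4 signaling, electrical signals degrade within inches of copper trace, which is precisely the problem a retimer exists to solve.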

The combined product portfolio sits at every choke point inside an AI rack where conventional networking equipment cannot keep up. Nvidia's GPUs do the compute. Astera's chips are part of the reason those GPUs can actually work together as a single supercomputer rather than as 72 individual processors.

That positioning is why Astera went from $396 million in 2024 revenue to $852 million in 2025 (115% growth), with management guiding to a served addressable market of $25 billion by 2030, a 10x expansion. The company is not riding the same Nvidia wave that everyone else in AI semiconductors is riding. It is selling to the layer that has just become the hyperscalers' most pressing problem.
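Those growth figures check out against the numbers quoted above. Note that the implied current served market of roughly $2.5 billion is an inference from the stated 10x expansion, not a company disclosure:

```python
# Sanity-checking the growth and TAM figures quoted in this section.

rev_2024, rev_2025 = 396.0, 852.5   # $M, per the reported results
growth = rev_2025 / rev_2024 - 1
print(f"YoY growth: {growth:.0%}")  # ~115%

sam_2030 = 25.0                     # $B, management's 2030 target
sam_now = sam_2030 / 10             # $B, implied by the stated 10x (inference)
print(f"implied current SAM: ~${sam_now:.1f}B, expansion: {sam_2030 / sam_now:.0f}x")
```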

Why This Matters For The $725 Billion

When Microsoft's CFO Amy Hood described the company's path from $120 billion to $190 billion in capex on the April 29 call, she pointed to one specific challenge: a $25 billion impact from higher component prices, including memory and networking silicon. That is not a casual line item. That is the company telling investors that the cost of building rack-scale AI is being dictated, in significant part, by what happens to the prices of components that sit between the GPUs.
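Putting that $25 billion in proportion, using only the figures from the call as reported here:

```python
# Component-price impact as a share of Microsoft's 2026 capex guide.

capex_guide = 190.0   # $B, 2026 capex guidance per the call
component_hit = 25.0  # $B, cited impact from memory/networking prices
share = component_hit / capex_guide
print(f"Component price inflation ≈ {share:.0%} of the 2026 capex guide")
```

Roughly one dollar in eight of the guide is price inflation on the parts between the GPUs, not additional capacity.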

Hyperscalers cannot fix this with money alone. The components have to actually exist, ship, and work in production. The supply of high-end connectivity silicon is concentrated among a small number of companies. Astera is one. Marvell is another. Broadcom plays heavily in the same space at the higher end. Companies like Credo and Lumentum sit in adjacent niches.

For investors trying to understand where the $725 billion is going to flow, the question to ask is not which company makes the GPUs. That story is already priced. The question is which companies sell the parts that determine whether those GPUs can function at the scale required. That list is much shorter, the companies are mostly still in the growth-validation phase rather than the index-weight phase, and the demand signal is now visible in the form of multi-billion-dollar customer warrants and 10x TAM expansion forecasts coming directly from management on earnings calls.

The structural read is straightforward: as long as hyperscaler capex continues to expand and rack-scale architectures continue to dominate, the bottleneck-to-connectivity story will keep paying out across the suppliers that sit at the choke points. Astera has named itself as one of those suppliers. Mohan's "bottleneck is shifting" line on the earnings call was not marketing copy. It was a description of where the company's revenue line is pointing.

What The Coverage Is Missing

Most AI investment coverage right now is still organized around two questions. First: how much will hyperscalers spend? Second: how much of that goes to Nvidia? Both questions are reasonable and both have been answered conclusively in the most recent earnings cycle. Spending is going up. A meaningful share goes to Nvidia.

Neither question is the most useful one to be asking now, because both are largely priced. The Nvidia thesis has been working since 2022 and is well understood. The hyperscaler capex thesis has been the dominant narrative for two years.

The more interesting question, and the one that has not been fully priced, is which non-Nvidia components become structurally necessary as racks scale from 8 GPUs to 72 GPUs to 576-GPU clusters running across multiple racks. The answer to that question is what determines whether the next $725 billion translates into linear or accelerating returns for the rest of the supply chain. Connectivity silicon is one of the most credible candidates for that role. Companies positioned in that space, including Astera, Marvell at the high end, Broadcom across the stack, and a handful of others, are the names that benefit from the same capex wave Nvidia has been riding, but at an earlier point in their public revenue trajectory.

The bottleneck moved. The companies sitting at the new bottleneck are quieter than the chip giants but no less essential to making the buildout work. The investors who recognize that shift early are the ones who tend to do well in the second half of secular trends, after the first-order beneficiary is no longer the only obvious answer.

For now, watch what hyperscalers buy next, not what they announce next. Component orders are where the actual money is being directed, and the companies on the other end of those orders are where the next chapter of the AI infrastructure story will be written.