In my last article, I explained why power has become the #1 bottleneck holding back the artificial intelligence (AI) boom.
Despite hundreds of billions being spent on chips and data centers, companies like Microsoft (MSFT) and OpenAI are now running into a new problem: They can’t find enough electricity to keep their data centers running.
To deal with this issue, AI giants like xAI are going off-grid—setting up their own gas turbines to power massive GPU clusters from scratch. If you missed it, you can read that article here.
But power is only one piece of the equation.
As AI scales, the rest of the “plumbing” infrastructure is coming under pressure. I’m talking about cooling systems… memory chips… and fiber-optic cables that move data at the speed of light.
These industries aren’t as flashy as a chip company like Nvidia (NVDA). But they’re quickly becoming just as essential—and just as profitable.
I’ll walk you through each one, and why they’re important, below.
- First up: liquid cooling.
On a recent trip to California, I toured Cerebras’ data center near San Francisco.
The first thing you notice is that it sits right next to a power plant. Every new data center will need a direct line into this kind of dedicated power going forward.
The second thing you notice is the sound. It’s deafening—like walking into a room filled with a thousand vacuums on full blast. They hand you earplugs at the door.
And then you see the cooling systems pumping 100 liters of water per second into each server.
There are literal water pipes running throughout the facility to cool the chips. Blowing cold air down the aisles doesn’t cut it anymore. The chips run too hot.
A few years ago, a standard Intel Corp. (INTC) CPU generated about 125 watts of heat. Nvidia’s next-gen chips generate 14X more heat—roughly 1,750 watts each.
That’s why liquid cooling is becoming a fast-growing slice of AI infrastructure spending.
Essentially, every new AI data center being built today will rely on liquid cooling. It’s the only way to stop the chips from melting down.
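To put rough numbers on that, here’s a quick back-of-envelope sketch. The wattage figures come from above; the chips-per-rack count is my own illustrative assumption, not a quoted spec.

```python
# Back-of-envelope heat math behind the shift to liquid cooling.
# Chip wattages come from the article; the chips-per-rack figure
# is an illustrative assumption, not a quoted spec.

cpu_watts = 125             # heat from a standard CPU a few years ago
gpu_watts = cpu_watts * 14  # next-gen AI chips run 14X hotter

chips_per_rack = 72         # assumption: a dense AI server rack

rack_kw = gpu_watts * chips_per_rack / 1000
print(f"One AI chip: ~{gpu_watts} W of heat")
print(f"One rack of {chips_per_rack} chips: ~{rack_kw:.0f} kW")
```

That’s well over 100 kilowatts of heat in a single rack—space-heater territory for every few chips. No amount of cold air blown down an aisle can carry that away, which is why the pipes go straight to the silicon.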
- AI makes boring ol’ memory chips critical again.
For decades, memory chips were considered the “dog” of the chip world. They were cheap, commoditized, and cyclical—going through wild booms and busts every few years.
But AI transforms memory chips from “dog” to “top dog.”
A typical AI server now consumes 8X more memory than a classic computer. On Nvidia’s latest Blackwell chips, memory accounts for 60% of the total manufacturing cost.
And yet, more than 90% of the time it takes an AI model to respond is spent shuttling data between compute and memory chips. The industry calls this the “memory wall.”
Compute keeps getting faster… but memory hasn’t kept up. So, $50,000 chips sit idle waiting for data.
The solution is High-Bandwidth Memory (HBM)—a new way of stacking memory chips like skyscrapers and bolting them directly next to the GPU. That shortens the distance data has to travel from inches to millimeters.
Companies like Micron Technology (MU) and SK Hynix now earn most of their revenues from selling HBM to AI firms like Nvidia. Micron’s latest chip is sold out through next year, and customers are signing long-term contracts to lock in supply.
This once-fragmented market has also consolidated into just a few top players. SK Hynix, Micron, and Samsung now control 97% of global memory production—giving them pricing power they haven’t had in decades.
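Here’s a quick sketch of that “memory wall.” Both the compute and bandwidth figures below are round, illustrative assumptions for a modern AI accelerator, not numbers quoted above.

```python
# Illustrative "memory wall" math. The compute and bandwidth numbers
# are rough assumptions for a modern AI accelerator, not quoted specs.

compute_flops = 1.0e15     # assumption: ~1,000 TFLOPs of compute
hbm_bytes_per_s = 3.35e12  # assumption: ~3.35 TB/s of HBM bandwidth

# How many FLOPs the chip must do per byte fetched just to stay busy:
breakeven = compute_flops / hbm_bytes_per_s

# Generating one token of output reads the model's weights but does
# only about 1 FLOP per byte read, so the compute units mostly idle.
workload_intensity = 1.0   # FLOPs per byte, typical of AI inference
utilization = workload_intensity / breakeven

print(f"Break-even intensity: ~{breakeven:.0f} FLOPs per byte")
print(f"Compute utilization at 1 FLOP/byte: ~{utilization:.1%}")
```

Under these assumptions, the chip’s compute units are busy less than 1% of the time. The rest is spent waiting on memory—which is exactly why stacking HBM right next to the GPU is worth so much money.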
- Fiber-optic companies will also shine in the next stage of the AI buildout.
AI models have grown so large that we can no longer train them on a single chip—or even a single rack.
We now have to string together 100,000 GPUs to act as one “super brain.” That means moving data between chips must happen at lightning speed.
Standard copper wires just don’t cut it anymore.
That’s why fiber optics—yes, like the ones overbuilt during the dot-com era—are suddenly becoming mission-critical.
These specialized cables turn data into laser light and shoot it down glass threads at near-light speed. They allow multiple AI data centers to work together as if they were in the same room—even if they’re miles apart.
On Nvidia’s new GPU racks, the optics bill alone can top $500,000. The cabling has become just as complex—and valuable—as the servers themselves.
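A quick physics sketch shows why the cabling matters so much. Light in glass fiber travels at roughly two-thirds of its speed in a vacuum (silica’s refractive index is about 1.5); the distances below are illustrative examples, not figures from any specific data center.

```python
# How long a signal takes to make a round trip between GPUs over
# fiber. Light in glass moves at ~2/3 of c; distances are examples.

c = 3.0e8          # speed of light in vacuum, meters per second
v_fiber = c / 1.5  # ~200,000 km/s inside glass fiber

for meters in (10, 1_000, 10_000):  # rack, building, campus scale
    round_trip_us = 2 * meters / v_fiber * 1e6
    print(f"{meters:>6} m apart -> ~{round_trip_us:.1f} microsecond round trip")
```

Even at light speed, GPUs a few miles apart wait tens of microseconds for every exchange. When 100,000 chips must stay in lockstep, shaving that delay—and pushing more data through each glass thread—is what the optics bill buys.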
- Think of AI as Moore’s Law on steroids…
For the past 50 years, every disruption has surfed on the wave of Moore’s Law. This law states that roughly every two years, chipmakers can squeeze twice as many transistors onto a chip.
This meant computer chips got faster and cheaper year after year, which gave us PCs… laptops… the internet… cloud computing… smartphones… and even enabled dirt-cheap solar panels and DNA sequencing.
My research tells me AI is the new Moore’s Law. Roughly every six months, AI models double in size. This makes them better, faster, cheaper, and more capable.
AI is Moore’s Law on steroids. Moore’s Law had one engine: transistor density. AI has three scaling laws driving it forward:
- Training scaling: Feed an AI more data, and it gets better.
- Reasoning: New reasoning models like Gemini 3 can break problems into steps before giving you an answer, rather than saying the first thing that comes to mind.
- Test-time compute: Let an AI “think” longer, and the quality of the answer it gives you improves.
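To see how much faster that is, compare the two doubling rates over a few years. This simply compounds the figures above: one doubling every two years for Moore’s Law versus one every six months for AI.

```python
# Compounding the two doubling rates from the article:
# Moore's Law doubles every 2 years; AI models every 6 months.

years = 4
moore_doublings = years / 2    # one doubling every 2 years
ai_doublings = years / 0.5     # one doubling every 6 months

print(f"In {years} years, Moore's Law delivers a {2**moore_doublings:.0f}X gain")
print(f"In {years} years, AI scaling delivers a {2**ai_doublings:.0f}X gain")
```

Over four years, that’s a 4X gain from Moore’s Law versus 256X from AI scaling—the difference between a steady wave and a tidal one.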
Moore’s Law put a supercomputer in every pocket. AI will put a genius in every pocket.
The way I see it, the easy-money AI trade is over. It’s not enough to just buy Nvidia anymore. Instead, I believe the next wave of AI spending will come from underrated infrastructure needed to run this new tech.
That means the companies supplying energy, liquid-cooling systems, memory chips, and optical cables should do very well in this next phase of the AI buildout.
At RiskHedge, these are the sectors we’re watching closest… and what we’re positioning our disruption letters around.
Stephen McBride
Chief Analyst, RiskHedge
PS: Join me every week in The Jolt—my free letter covering the biggest disruption trends and moneymaking opportunities I think every investor should know about. You can sign up for free here.

