What’s the biggest risk to the artificial intelligence (AI)-led bull market?
It’s not a recession… big tech companies overspending… or China.
The biggest risk is not being invested in the right AI stocks.
The AI megatrend keeps evolving, finding new bottlenecks and creating new winners.
First it was Nvidia (NVDA)… then utilities selling power to AI data centers surged.
Disruption X members doubled their money earlier this year when we hit a networking bottleneck.
Credo Technology Group Holding Ltd. (CRDO) is one of the only companies solving high-speed data flow inside data centers. We booked a 118% gain as the market suddenly realized AI couldn’t scale without better connectivity.
- Now the puck is moving from training to inference.
The first wave of AI was dominated by training—teaching giant models like ChatGPT to understand the world. That demanded enormous data centers stuffed with tens of thousands of GPUs running for weeks.
Nvidia ruled this wave because it was the only company with GPUs that could meet AI’s massive compute demands.
We rode that wave beautifully, with Disruption Investor members pocketing +500% NVDA gains.
But now, we’re in the inference era.
Inference is when AI actually does stuff.
When you ask it to answer a question, write an email, summarize a document, plan a vacation, or solve a math problem, that’s all inference.
Training happens once. Inference happens trillions of times every day. Google (GOOGL) alone now processes 1.3 quadrillion AI “tokens” each month!
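To get a feel for that scale, here’s a quick back-of-the-envelope calculation. Only the monthly total comes from the figure above; the rest is simple arithmetic over an assumed 30-day month:

```python
# Back-of-the-envelope: what 1.3 quadrillion tokens a month looks like per second.
# Only the monthly total is from the article; the 30-day month is an assumption.

TOKENS_PER_MONTH = 1.3e15           # Google's reported monthly AI token volume
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59 million seconds in a 30-day month

tokens_per_second = TOKENS_PER_MONTH / SECONDS_PER_MONTH
print(f"~{tokens_per_second:,.0f} tokens every second")  # roughly 500 million per second
```

That’s on the order of half a billion tokens every single second, from one company.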
So far, almost all inference still happens inside vast data centers the size of small cities.
More AI use leads to more GPUs, which is why AI giants like Meta Platforms (META) and OpenAI are building data centers the size of Manhattan.
- Say hi to “edge AI”...
Now we’re shifting from a world where all AI models live and run in giant data centers…
To one where you can run top-tier models on your laptop.
Research from top AI analysis firm Epoch AI shows that with a single off-the-shelf Nvidia chip, you can run AI models on your laptop that would have been the world's most advanced just six to 12 months ago.
Algorithmic improvements are making AI models so efficient and cheap that soon you’ll be able to run “GPT-5” on your phone!
I know that’s a little in the weeds. But it’s one of the biggest risks, and opportunities, in AI today.
Instead of every interaction with your favorite AI assistant being sent to some giant data center, your devices will answer most questions.
Your phone will soon have a built-in personal AI. That’s edge AI.
And only when the local AI gets stuck will it “call Dad”—the massive data center model—for help.
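Here’s a minimal sketch of that edge-first pattern: handle most requests on the device, and only escalate to the big data-center model when the local answer looks shaky. Both model calls and the confidence cutoff below are stand-in placeholders, not real APIs:

```python
# Illustrative "edge-first" inference pattern. The two model functions are
# hypothetical stubs, not real APIs; the confidence threshold is an assumption.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for a "good enough" local answer

def run_local_model(prompt: str) -> tuple[str, float]:
    # Stand-in for a small model running on your phone or laptop.
    return f"(local draft answer to: {prompt})", 0.6

def call_cloud_model(prompt: str) -> str:
    # Stand-in for the giant model living in a data center.
    return f"(data-center answer to: {prompt})"

def answer(prompt: str) -> str:
    local_answer, confidence = run_local_model(prompt)
    if confidence >= CONFIDENCE_THRESHOLD:
        return local_answer          # fast, private, and free: stays on the device
    return call_cloud_model(prompt)  # the local AI "calls Dad" for help

print(answer("Plan a 3-day trip to Lisbon"))
```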
This shift from big, centralized inference to billions of tiny inference engines is the biggest change in the AI landscape since ChatGPT launched.
Not because people will stop using AI, but because usage will stop mapping 1:1 to data center spending. That completely reshapes who makes money in the next wave.
Instead of the simple “more GPUs, bigger data center” mantra…
- Memory is the big winner from edge AI.
Most investors focus on the GPUs powering AI “brains.” But under the hood, AI depends on two very different kinds of chips:
Logic chips, like Nvidia’s GPUs, that do the thinking.
And memory chips that store and feed the data those GPUs need.
Every AI task—asking ChatGPT a question, generating an image, summarizing a document—boils down to two things happening at extreme speed. A logic chip performing massive amounts of math. And that chip constantly pulling data from memory, then sending results back.
A useful way to think about this is a kitchen. The GPU is the chef. Memory is the pantry.
If the pantry is across the building, the chef wastes most of his time running back and forth for ingredients. But if the pantry sits right next to the stove, cooking speeds up dramatically.
That’s exactly the problem AI faces today.
Compute power has exploded. Memory access hasn’t.
According to SK Hynix, over 90% of the time it takes an AI model to respond is spent moving data between logic and memory, not doing computation. That’s the so-called “memory wall.”
GPUs keep getting faster. But memory bandwidth and proximity haven’t kept up. The result is that $50,000 AI chips sit idle, waiting for data.
As models grow larger and try to “remember” longer conversations, images, and context, the amount of memory they’ll need close to the compute will explode.
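To see why the memory wall bites, here’s an illustrative back-of-the-envelope sketch of generating a single, unbatched response. Every number below (chip speed, bandwidth, model size) is an assumption chosen to be roughly representative, not a vendor spec:

```python
# Rough "memory wall" illustration for generating one token from a large model.
# All hardware and model figures are assumptions, not quotes from any vendor.

PEAK_COMPUTE = 1000e12   # assumed peak math throughput: 1,000 TFLOPS
MEM_BANDWIDTH = 3.35e12  # assumed memory bandwidth: ~3.35 TB/s

# Producing one token means streaming roughly every model weight through the
# chip while doing only a couple of math operations per weight.
model_bytes = 140e9           # assumed ~70B parameters at 2 bytes each
flops_per_token = 2 * 70e9    # ~2 operations per parameter per token

time_moving_data = model_bytes / MEM_BANDWIDTH   # seconds spent fetching weights
time_computing = flops_per_token / PEAK_COMPUTE  # seconds spent doing math

share_waiting = time_moving_data / (time_moving_data + time_computing)
print(f"Share of time spent waiting on memory: {share_waiting:.1%}")  # ~99.7%
```

Under these assumptions, nearly all of the response time is spent moving data rather than computing, which is consistent with the “over 90%” figure above.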
And that’s why memory is the next trillion-dollar shift in AI spending.
- In October, we added our first memory stock to our Disruption Investor portfolio.
We’re already up 33% on it. For comparison, Nvidia is down 1% over the same stretch, and the S&P 500 has been essentially flat.
It shows you just how important it is to invest in the right part of the AI market.
The biggest gains come from getting positioned ahead of the next bottleneck—not after it makes headlines.
Stephen McBride
Chief Analyst, RiskHedge


