AI Infrastructure Stocks 2026: The Picks-and-Shovels Playbook

May 12, 2026

Updated: May 2026 | By Jenna Lofton, StockHitter.com


The Short Version: The easy AI hype trade is over. What’s replaced it is a capital-intensive infrastructure cycle – a $725 billion CAPEX wave (based on combined hyperscaler fiscal 2026 guidance) being poured into physical networking, semiconductors, data centers, and the software layer that keeps all of it running. I hold positions in this space. The picks-and-shovels layer is where the durable money is, and this guide breaks down exactly where that is in 2026.

Key Takeaways

  • The AI investment narrative has shifted from large language models to the physical infrastructure layer – networking, chips, power, and cooling.
  • Hyperscalers are guiding toward $725 billion in combined AI-related CAPEX for fiscal 2026. That money has to go somewhere.
  • Arista Networks, Palantir, Nvidia, and Datadog represent four distinct layers of the AI infrastructure stack – each with a different risk/reward profile.
  • The picks-and-shovels companies – the ones supplying the tools, not mining the gold – tend to be more durable investments than the AI application layer above them.
  • Observability and operational AI are the two fastest-growing segments of the stack in 2026. Most retail investors are still sleeping on both.

Everyone Is Chasing the Wrong Part of the AI Trade

When I talk to people about AI investing, they almost always want to know about the next ChatGPT. Which company is building the smartest model. Which startup is going to disrupt Google. Which stock is going to 10x because it has “AI” in the pitch deck.

That’s the wrong question. And after 15+ years of watching market cycles play out, I can tell you it’s almost always the wrong question at moments like this one.

The right question is: who’s selling the shovels?

In the 1850s Gold Rush, only a handful of miners struck it rich. The people who got reliably rich were the ones selling picks, shovels, and denim pants to every miner who came through California. The infrastructure suppliers captured the durable value. The speculators, mostly, did not.

The 2026 AI market is structurally identical. The first wave of large language model leaders is clear – Anthropic, OpenAI, Google DeepMind, Meta AI. Getting into that tier at a reasonable valuation, as a retail investor, is largely a closed door. But the physical and software infrastructure that every one of those models depends on? That market is very much open, very much growing, and very much misunderstood by most people who are trying to invest in it.

I hold positions in this space. Nvidia, Palantir, and a few others I’ll cover here. This guide is my attempt to lay out the full AI infrastructure stack – from the hardware layer at the bottom to the operational software layer at the top – so you can make informed decisions about where to invest your own money.

Experience Transparency

I added to my Nvidia position in Q3 2025 and opened a starter position in Palantir shortly after their “Rule of 40” score first broke 100. I don’t recommend anything on StockHitter that I wouldn’t put my own money behind. These positions inform my perspective on this article – that’s the point of disclosing them.

What the $725 Billion Number Actually Means

Alphabet, Microsoft, Amazon, and Meta have collectively guided toward approximately $725 billion in AI-related capital expenditure for fiscal 2026. That number gets cited a lot. Most people don’t fully appreciate what it means in practical terms.

CAPEX at that scale means data centers. Lots of them, built fast. It means networking hardware to connect every server in those data centers at speeds that were considered impossible five years ago. It means custom semiconductors designed specifically to train and run AI models more efficiently than general-purpose chips can. It means cooling infrastructure, power contracts, and the software layer required to keep all of it operational and observable.

Every dollar of that $725 billion flows through a supplier. The hyperscalers don’t manufacture their own networking switches. They don’t fab their own chips (mostly). They don’t write their own observability software from scratch. They buy from Arista, Nvidia, Broadcom, Datadog, and a handful of others who have positioned themselves at critical chokepoints in the supply chain.

That’s the picks-and-shovels opportunity. Not one company – a stack of them, each capturing value at a different layer.

The AI Infrastructure Stack: A Layer-by-Layer Breakdown

Think of AI infrastructure as a five-layer stack. Each layer is necessary. Each layer has publicly investable companies. Each layer has a different risk profile.

Layer 1: Semiconductor Fabrication (The Foundation)

Everything starts with chips. You cannot run an AI model without a chip capable of the parallel processing that neural networks require. Graphics Processing Units – GPUs – became the default hardware for AI training because of their architecture, which was originally designed to render video game graphics but turned out to be ideally suited for matrix multiplication at scale.

Nvidia dominates this layer. There’s no other way to say it. The H100 and the successor Blackwell architecture chips represent the gold standard for AI training workloads, and every major hyperscaler and AI lab is competing to get enough of them. Nvidia’s data center revenue has grown at triple-digit year-over-year rates for multiple consecutive quarters. That’s not a fluke – it’s structural demand from an industry that cannot function without the product.


The risk at this layer is concentration. Nvidia’s market position is strong but not unassailable. AMD’s MI300X is gaining traction. Custom ASICs – application-specific integrated circuits designed by Google (TPUs), Amazon (Trainium), and others – are starting to absorb workloads that would otherwise go to Nvidia. The semiconductor layer is not a simple “buy Nvidia and walk away” story in 2026. It requires ongoing monitoring.

Advanced Micro Devices represents the most credible competitive threat to Nvidia in the merchant silicon market. AMD’s MI300X has secured meaningful deployments at several large cloud providers. At a lower valuation multiple than Nvidia, AMD offers a way to get exposure to the same demand wave with a different risk profile.


Layer 2: High-Speed Networking (The Nervous System)

A data center full of Nvidia GPUs is useless if the GPUs can’t communicate with each other fast enough to coordinate on training runs. AI workloads are uniquely demanding from a networking perspective. A training cluster might involve tens of thousands of GPUs that need to exchange gradient updates constantly, with latency measured in microseconds.

This is where Arista Networks has built one of the most defensible positions in the entire AI infrastructure stack.

Arista holds the number one market share in high-speed data center switching. Their 7800 series – the “Universal AI Spine” as they position it – is the networking backbone for several of the largest AI training clusters in the world. When Arista reported Q1 2026 earnings, revenue grew 35% year-over-year. That’s not a company riding a wave. That’s a company that is the wave.


The technical moat here is real. Arista’s EOS operating system – which runs across their entire switching portfolio – provides a consistent management interface that enterprise customers do not want to replace. Switching costs in enterprise networking are high. Once Arista is in a hyperscaler’s data center, the probability of displacement by a competitor is low.

The specific technology driving AI networking demand is a transition from traditional Ethernet to high-speed alternatives designed for AI cluster communication. InfiniBand – a networking protocol originally developed for high-performance computing – has been the dominant technology for AI training fabrics, primarily through Nvidia’s Mellanox acquisition. But Ultra Ethernet, a new standard backed by a consortium that includes Arista, AMD, and several hyperscalers, is positioning itself as an open alternative. Arista is well-positioned regardless of which standard wins.

Wall Street Reality Check

The networking layer gets less attention than semiconductors because it’s less glamorous. Nobody writes magazine covers about Ethernet switches. That’s exactly why the valuation is more reasonable and the competitive dynamics are more durable. The companies building the nervous system of AI infrastructure are often better investments than the ones building the brain.

Layer 3: Data Center Infrastructure (The Body)

Data centers are the physical housing for every layer of the AI stack. Building them at the scale the hyperscalers are targeting requires real estate, power contracts, cooling systems, and construction timelines that are measured in years, not months.

The constraint at this layer is not capital. The hyperscalers have the capital. The constraints are power and physical space. AI training clusters consume enormous amounts of electricity. A single large GPU cluster might draw as much power as a small city. The availability of grid power – and the ability to contract for it reliably – has become the single biggest gating factor on data center construction timelines.

This creates investment opportunities in power infrastructure. Utilities with exposure to data center load growth – particularly in Virginia, Texas, and the Pacific Northwest where most hyperscaler data centers are concentrated – have seen meaningful revaluation. The copper and cooling equipment suppliers who service these facilities represent another picks-and-shovels angle within the infrastructure layer.

Vertiv Holdings is a company most retail investors have never heard of. They make the thermal management and power distribution equipment that keeps data centers running. In an environment where AI cluster density is increasing and heat dissipation is a genuine engineering challenge, Vertiv’s products are in high demand. This is the kind of unsexy infrastructure company that tends to compound quietly while everyone is looking at the flashier names above it.


Layer 4: The Software Infrastructure Layer (The Brain’s Operating System)

Above the hardware layer sits the software that orchestrates it. This includes cloud platforms, AI development frameworks, and – increasingly – the operational AI software that enterprises are deploying to run actual business workflows.

Palantir Technologies represents the most interesting story in this layer in 2026.

Palantir has spent twenty years building software for the most complex data environments on earth – the U.S. intelligence community, NATO allies, large defense contractors. That work gave them a deep understanding of how to build AI systems that operate under strict governance requirements, with full auditability of every decision the system makes.

In 2026, that expertise is directly applicable to the commercial AI market, where enterprises are learning that deploying AI at scale requires governance, not just capability. Palantir’s AIP platform – Artificial Intelligence Platform – is the product that bridges their government heritage with commercial AI deployment. The adoption numbers are significant. Palantir reported Q1 2026 results that included an 85% year-over-year revenue growth figure and a “Rule of 40” score of 145.

The Rule of 40 is a metric used to evaluate SaaS companies. It adds revenue growth rate and free cash flow margin. A score above 40 is considered healthy. Palantir’s score of 145 is not just good – it’s historically unusual. CEO Alex Karp noted that Palantir’s quarterly profit generation is now approaching what their total annual revenue was just twelve months prior. That kind of financial momentum is what structural market leadership looks like in real-time.
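The arithmetic behind the Rule of 40 is simple enough to sketch. The helper below is illustrative, not anything Palantir reports; the ~60% free cash flow margin is inferred from the two figures above (85% growth, score of 145), not a disclosed number:

```python
def rule_of_40(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """Rule of 40 score: year-over-year revenue growth plus free cash flow
    margin, both expressed as percentages. A score above 40 is the
    conventional threshold for a healthy SaaS business."""
    return revenue_growth_pct + fcf_margin_pct

# Illustrative: 85% growth plus an implied ~60% FCF margin -> a score of 145.
palantir_score = rule_of_40(revenue_growth_pct=85.0, fcf_margin_pct=60.0)

# A more typical "healthy" SaaS profile, for comparison.
typical_saas = rule_of_40(revenue_growth_pct=30.0, fcf_margin_pct=15.0)

print(palantir_score, typical_saas)  # 145.0 45.0
```

The point of the comparison: a typical healthy SaaS business clears the bar at 40-50; a score of 145 sits in a different regime entirely.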


The pushback on Palantir is always the valuation. It trades at a significant premium to most software companies. That valuation reflects the market’s assessment that Palantir is not a software vendor – it’s becoming a critical operating system for AI-driven enterprises. Whether that premium is justified depends on how quickly AIP adoption accelerates in the commercial market. My view is that it does accelerate, which is why I hold the position.

Layer 5: Observability and Operational Intelligence (The Immune System)

The fifth layer is the one most retail investors have never thought about, and it may be the most interesting from a pure investment standpoint.

When you deploy AI at scale in an enterprise environment, something goes wrong eventually. Models behave unexpectedly. Latency spikes. Security vulnerabilities appear in places nobody anticipated. The software that monitors all of this – catches problems before they become outages, identifies security incidents before they become breaches, measures the performance of every component in a complex distributed system – is called observability software.

Datadog is the market leader in observability, and the AI transition has supercharged their business in a way that wasn’t obvious twelve months ago. AI applications are more complex than traditional software. They have more failure modes. They generate more logs, more metrics, more traces. The demand for observability tooling scales with AI complexity, and AI complexity is increasing every quarter.

Datadog’s own data shows that approximately 20% of their customers account for 80% of their annual recurring revenue. That concentration might look like a vulnerability. I’d argue it’s the opposite – it means Datadog is deeply embedded in the workflows of the largest enterprises, which are also the enterprises most aggressively deploying AI. As those enterprises scale their AI deployments, their Datadog spend scales with them.


The March 2026 launch of Datadog’s Bits AI Security Analyst and their MCP Server for AI coding agents signals a deliberate move into the agentic AI space. This matters because agentic AI – systems that take autonomous actions, not just generate text – creates entirely new categories of observability need. You need to know not just that an AI model produced an output, but what actions it took, what data it accessed, and whether those actions were within approved parameters. Datadog is positioning to be the governance and visibility layer for that world.

The Transition That Changed Everything: From Experimentation to Operations

There’s a specific inflection point that defined the 2025-2026 AI investment environment. In 2023 and 2024, most enterprise AI was experimental. Pilot programs. Proof of concepts. Teams evaluating vendors. The CAPEX from hyperscalers was real, but the enterprise software spending was still tentative.

In 2025, that changed. Enterprises started deploying AI into production workflows. Not experiments – actual business processes that depend on AI to function. Customer service automation. Financial analysis. Supply chain optimization. Legal document review. The use cases are broad and the adoption is accelerating.

When AI moves from experiment to production dependency, the requirements change dramatically. Reliability becomes non-negotiable. Governance becomes a compliance issue. Observability becomes an operational necessity rather than a nice-to-have. Security becomes a board-level concern.

This transition is why the infrastructure layer is a better investment than the application layer in 2026. Application layer companies – the ones building AI-powered apps for specific use cases – will win some markets and lose others. The competitive dynamics there are brutal and the outcomes are unpredictable. The infrastructure layer serves everyone. It doesn’t matter which AI application wins a given category. The networking, the semiconductors, the observability software, and the operational AI platforms all get used regardless.

The Neocloud Opportunity: Beyond the Hyperscalers

The major cloud platforms – AWS, Azure, Google Cloud, Oracle Cloud – dominate the cloud infrastructure market. But a new category has emerged that most retail investors have not yet found: neoclouds.

Neoclouds are specialized cloud providers that focus exclusively on AI workloads. They offer raw GPU compute at scale, without the overhead and complexity of a full-stack cloud platform. For AI labs and research teams that need massive GPU clusters for training runs, neoclouds are often faster to provision and more cost-effective than the traditional hyperscalers.

Nebius Group is the most publicly investable example of this emerging category. Nebius – spun out of the Russian internet company Yandex after its international assets were separated – has reported revenue growth of approximately 500% year-over-year. That growth rate reflects starting from a small base, but the trajectory indicates real demand for specialized AI compute infrastructure at a price point below the hyperscalers.


The neocloud category is higher risk than established infrastructure names like Arista or Datadog. The competitive dynamics are less clear and the moats are thinner. But for investors with a higher risk tolerance, the neocloud exposure represents a way to capture AI infrastructure demand at an earlier stage of the value chain.

Where the Supply Chain Bottlenecks Are (And Why They Matter)

Understanding where the physical constraints in the AI supply chain are located helps identify which companies hold pricing power and which are commoditized.

Semiconductor fabrication is the most acute bottleneck. TSMC in Taiwan and Samsung in Korea are the only companies in the world capable of fabricating the most advanced AI chips at volume. That geographic concentration is a geopolitical risk that gets discussed constantly but hasn’t yet materially disrupted supply. Intel’s attempt to rebuild domestic advanced semiconductor manufacturing is real but years away from competing at the leading edge.

High-bandwidth memory – the specialized memory chips that sit adjacent to AI accelerators and feed them data fast enough to keep them busy – is a second bottleneck. SK hynix and Micron produce HBM. The demand for it has outpaced supply for multiple consecutive quarters. Both companies have significant pricing power as a result.

Optical interconnects – the fiber-based connections that link servers within a data center and data centers to each other – are a third constraint. As GPU clusters scale, the bandwidth requirements for inter-cluster communication have grown faster than traditional copper-based networking can support. The transition to optical is accelerating, and the companies manufacturing optical transceivers and switches are in a strong demand environment.

Coherent Corp (formed when II-VI Incorporated acquired Coherent in 2022 and took its name), Lumentum, and Fabrinet are less well-known than Nvidia or Arista, but they sit at genuine chokepoints in the optical transceiver supply chain. That’s the kind of picks-and-shovels positioning that tends to produce durable returns – and the kind of detail that separates serious infrastructure analysis from surface-level stock tips.

The Valuation Question: How to Think About AI Infrastructure Multiples

Every conversation about AI infrastructure investing eventually runs into the valuation question. These companies trade at significant premiums to historical software and semiconductor multiples. Is that justified?

My answer is: it depends on the layer and the company, and the historical comparison is less useful than people think.

Nvidia trading at 30-35x forward earnings looks expensive by traditional semiconductor standards. But traditional semiconductor companies didn’t have the pricing power, recurring revenue characteristics, or demand visibility that Nvidia has right now. The comparison isn’t apples to apples.

Palantir trading at a triple-digit revenue multiple looks expensive by SaaS standards. But Palantir’s Rule of 40 score of 145 isn’t a SaaS company metric – it’s a metric from a company that is structurally growing faster and generating more cash than the SaaS comp set warrants. The traditional valuation framework doesn’t have a clean category for what Palantir is becoming.

Arista at roughly 35-40x forward earnings is the most defensible valuation in the group, given the combination of networking moat, enterprise switching costs, and direct CAPEX tailwind from hyperscaler spending.

Datadog is the most valuation-sensitive of the four. Observability is a competitive market. Splunk (now owned by Cisco), New Relic, and Dynatrace all compete for enterprise observability spending. Datadog’s AI-specific positioning is real, but so is the competitive pressure. At current multiples, the stock requires continued strong execution.

The practical approach I use: position size to the conviction level and the valuation cushion. Arista and Nvidia get larger positions because the business quality and valuation are both more comfortable. Palantir is a smaller position because the conviction is high but the valuation leaves less room for error. Datadog is the smallest because the competitive dynamics require the most monitoring.
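The sizing logic above can be reduced to a toy weighting scheme. The conviction and valuation-cushion scores below are illustrative placeholders of my own invention, not recommendations or actual portfolio weights:

```python
# Toy sketch of conviction-and-cushion position sizing. Each name gets a
# conviction score and a valuation-cushion score (both 1-5, higher = better);
# the product determines its share of the allocation. Scores are made up.
positions = {
    "NVDA": (5, 4),  # high conviction, comfortable valuation
    "ANET": (5, 4),  # high conviction, comfortable valuation
    "PLTR": (4, 2),  # high conviction, thin valuation cushion
    "DDOG": (3, 2),  # competitive market, thin cushion
}

raw = {t: conviction * cushion for t, (conviction, cushion) in positions.items()}
total = sum(raw.values())
weights = {t: round(100 * score / total, 1) for t, score in raw.items()}

print(weights)  # largest weights to the highest conviction-times-cushion names
```

The mechanics matter less than the discipline: a thin valuation cushion cuts the position size even when conviction is high, which is exactly the Palantir situation described above.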

What I Keep Seeing Retail Investors Get Wrong

I’ve watched enough market cycles to notice patterns. The AI trade is producing a few specific mistakes that I see repeatedly, and they’re worth naming directly.

The most common one is chasing the narrative instead of the cash flow. Every quarter produces a new hot story – a chip architecture announcement, a model benchmark, an enterprise deployment press release. The investors who buy on narrative and sell on narrative get chopped up. The investors who anchor to whether a business is generating cash, and growing that cash generation, tend to survive long enough to actually make money.

There’s also a real blind spot around second and third-derivative beneficiaries. Everyone has heard of Nvidia. Most people know Arista by now. Very few people are tracking Vertiv, or the optical transceiver manufacturers, or the high-bandwidth memory suppliers who are equally capacity-constrained. The headline names carry higher valuations precisely because everyone knows them. The further down the supply chain you’re willing to do the research, the better the entry points tend to be.

The mistake that bothers me most, though, is treating AI infrastructure as a trade rather than a structural shift. The $725 billion in CAPEX being deployed in 2026 is not the peak of a cycle – it’s an acceleration of a multi-year build-out. The physical infrastructure going in the ground this year will need to be expanded in 2027, 2028, and beyond. Internet infrastructure took two decades to fully build out. AI infrastructure won’t be different. The companies supplying that build-out are not trading on one year of elevated demand. They’re being repriced as the permanent suppliers of a permanent need.

The Risk Factors That Actually Matter

No investment thesis is complete without an honest accounting of what can go wrong.

The biggest risk to the AI infrastructure trade is a CAPEX pullback by the hyperscalers. If revenue from AI-powered products disappoints relative to the investment being made, the hyperscalers will reduce their CAPEX guidance. That would hit every layer of the infrastructure stack simultaneously. The magnitude of that hit would depend on how severe and sustained the pullback was.

The second risk is technical disruption. The current infrastructure architecture – GPU clusters connected by high-speed networking, monitored by observability software – is the dominant paradigm today. If a fundamentally different architecture emerges that requires different hardware or software, some of today’s infrastructure leaders could be disrupted faster than their valuations currently contemplate. This is a longer-term risk, measured in years rather than quarters, but it’s real.

The third risk is geopolitical. Semiconductor fabrication concentrated in Taiwan, rare earth minerals controlled by a handful of countries, data center energy contracts subject to local utility regulation – the AI infrastructure supply chain has significant geopolitical exposure that market prices don’t fully reflect. A Taiwan Strait escalation would be a severe disruption to AI chip supply with no short-term mitigation available.

Sizing positions with these risks in mind is how you stay in the trade long enough to capture the upside.

The 2026 AI Infrastructure Watchlist

Based on the framework laid out above, here are the companies I’m watching across each layer of the stack. This is not a buy list – it’s a monitoring list organized by where each company sits in the infrastructure hierarchy.

Semiconductor Layer: Nvidia (NVDA), Advanced Micro Devices (AMD), Broadcom (AVGO), Micron Technology (MU), SK hynix

Networking Layer: Arista Networks (ANET), Cisco Systems (CSCO), Marvell Technology (MRVL), Coherent Corp (COHR)

Data Center Infrastructure: Vertiv Holdings (VRT), Eaton Corporation (ETN), Digital Realty (DLR), Equinix (EQIX)

Operational AI Software: Palantir Technologies (PLTR), Snowflake (SNOW), Databricks (private)

Observability and Security: Datadog (DDOG), CrowdStrike (CRWD), Palo Alto Networks (PANW)

Neoclouds: Nebius Group (NBIS), CoreWeave (CRWV)

How to Build a Position in This Space

The practical question for most readers is how to actually build exposure to AI infrastructure without over-concentrating in any single name or layer.

A reasonable framework: anchor the portfolio exposure in the two most defensible positions – Nvidia for the semiconductor layer and Arista for the networking layer. Both have strong moats, clear demand tailwinds, and relatively clean competitive dynamics. Together they give you foundational exposure to the physical infrastructure layer.

Add a software infrastructure position – either Palantir for the operational AI angle or Datadog for the observability angle, depending on your risk tolerance and time horizon. Palantir is higher conviction but higher valuation. Datadog is more competitively exposed but cheaper on a relative basis.

Consider a small allocation to second-derivative beneficiaries – HBM memory suppliers, optical transceiver manufacturers, data center power infrastructure – for investors who want to go beyond the headline names. Keep these positions sized appropriately given the higher research burden and lower liquidity.

Dollar-cost average into positions rather than trying to time entries. The AI infrastructure build-out is a multi-year theme. The specific entry point matters less than the consistency of the exposure.
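A minimal sketch of what dollar-cost averaging does mechanically, using made-up monthly prices (nothing below reflects any real ticker):

```python
# Dollar-cost averaging: invest a fixed dollar amount each month regardless
# of price. A fixed budget buys more shares when the price dips.
monthly_budget = 500.0
prices = [120.0, 135.0, 110.0, 150.0, 140.0, 125.0]  # hypothetical closes

shares = sum(monthly_budget / p for p in prices)  # shares bought each month
invested = monthly_budget * len(prices)
avg_cost = invested / shares  # average cost per share actually paid

print(f"shares: {shares:.2f}, invested: {invested:.0f}, avg cost: {avg_cost:.2f}")
```

Because more shares are bought at lower prices, the average cost per share works out to the harmonic mean of the prices, which is always at or below their simple average (130 in this example). That mechanical tilt toward cheaper entries is the whole appeal of the approach for a multi-year theme.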

Final Thought from My Portfolio

The AI infrastructure trade is not a momentum play. It’s a structural shift in how computing resources are built and deployed, happening at a speed and scale that the market has not fully priced in across every layer. I hold Nvidia and Palantir because I believe the infrastructure build-out is in the early middle innings, not the late ones. That view can be wrong. But it’s informed by 15+ years of watching technology capital expenditure cycles, and this one has characteristics I haven’t seen combined before: genuine product-market fit, enterprise adoption at scale, and hyperscaler CAPEX commitment that makes the demand visible multiple quarters in advance.

One thing I’ve found genuinely useful for staying current on the CAPEX themes covered in this article: a few financial research newsletters that track this space with real analytical depth. Louis Navellier’s Growth Investor has been one of the more focused services on the AI infrastructure thesis – Navellier’s quantitative framework was built for identifying exactly the kind of earnings acceleration Palantir and Arista have been producing. The Skousen Report approaches the same CAPEX cycle from a macro economist’s perspective, which gives you a useful counterweight to the bottom-up stock analysis. I’ve reviewed both in detail if you want the full breakdown before subscribing.

Disclosure: The author holds long positions in Nvidia (NVDA) and Palantir (PLTR) at the time of publication. This article is for informational and educational purposes only and does not constitute investment advice. Always conduct your own research before making investment decisions.

About the author 

Jenna Lofton, MBA is a stock trading and investment expert with over a decade of experience in the financial industry. She began her career as a financial advisor on Wall Street and now helps everyday investors make smarter financial decisions through StockHitter.com.


Her insights simplify complex financial topics into actionable strategies for beginners and seasoned traders alike.
