The Compute Wall

The recent discussion about the near future of AI isn’t just about building the compute; it’s about how to power it. Right now, we are still in that predictable phase of AI development that follows the trend of “increased parameter count means increased ability.” We all know this concept well; it echoes the history of computing, where more transistors meant more capability. It is the very correlation that birthed Moore’s Law.
However, that law is hitting a wall. Historically, we could expect compute to double every two years, but recent years have seen that slow to a meager 15 to 20% gain every 18 months. To compensate, NVIDIA introduced a brute-force approach: massive parallelism. Instead of just bigger and faster chips, they decided to use more chips. Massive amounts of them. NVIDIA has ridden this brute-force approach to become one of the world’s most valuable companies.
There is clearly plenty of financial interest and open wallets to push this technology to its limits. But those limits are arriving as fast as the AI advancements themselves. The current “AI Factories” are hitting a ceiling defined not by silicon, but by physics and infrastructure: power, cooling, water, and public goodwill.

It is no longer just a question of “Can we build the chips?” It is now, “How do we cool them?” and “Where do we find the gigawatts to power them?”
Local governments are becoming wary of hosting these resource-hungry facilities. How do you tell your constituents their local power and water are being siphoned off to fuel the very thing that might take their jobs? This is a hard sell. We are about three years away from a hard stop where public goodwill evaporates and the “Not In My Backyard” (NIMBY) effect grinds progress to a halt.

The Space Solution and Its Thermal Problem

Elon Musk has acknowledged these constraints and proposed a bold solution: Data Centers in Space, or, as I like to call it, “Space AI.”

On paper, this makes sense. In orbit, you have unlimited space (pun intended), unlimited solar power, and no NIMBY regulations. However, there is a technical catch. While space is cold, it is also a vacuum. A vacuum acts like a thermos; it keeps heat in. On Earth, we use air (convection) to cool chips. In space, you can only use radiation, which is inefficient and requires massive surface area.
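
How massive? The Stefan-Boltzmann law sets the floor. Here is a back-of-envelope sketch; the 1 GW load, ~300 K panel temperature, and 0.9 emissivity are illustrative assumptions on my part, not figures from any actual proposal:

```python
# Radiator area needed to reject waste heat in vacuum (radiation only).
# Stefan-Boltzmann: P = emissivity * sigma * A * T^4  =>  A = P / (e * sigma * T^4)
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_watts: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Ideal one-sided radiating area, ignoring solar and Earth heat input."""
    return heat_watts / (emissivity * SIGMA * temp_k**4)

heat = 1e9    # assumed waste heat of a 1 GW compute cluster
temp = 300.0  # assumed panel temperature, kept chip-friendly (~27 C)
print(f"{radiator_area_m2(heat, temp) / 1e6:.1f} km^2 of radiator per GW")  # ~2.4 km^2
```

That is square kilometers of radiator panels per gigawatt, and that is before accounting for the sun heating the panels right back up.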

Then there is the issue of the location itself: Space. Space is far. Space is slow. Space is expensive. Space is just hard.

To be fair, “hard” is the native language of Tesla and SpaceX. It wasn’t long ago that their corporate missions were mocked as fever dreams. Yet they didn’t just accomplish these incredible engineering feats; they made them profitable. If anyone could pull off orbital data centers, it’s the team that lands rockets on boats and catches them mid-air with giant mechanical arms.

But while they have dominated the engineering, the calendar is a different story. Timelines are not exactly Elon’s strong suit. Doing the impossible is one thing; predicting when you’ll do it is another. Remember 2015? Elon estimated we were two years away from full self-driving. That prediction was off by a factor of five, turning a “soon” into a decade of “almost there.”

There is a lot to be done to get data centers in space working. Elon says they can do it in three years; in real life, it is likely closer to a ten-year-plus problem. That leaves a massive gap between the AI compute wall we are hitting now and a solution that might arrive seven years too late.

This raises the question: What do we do in the meantime? What is the 10-year bridge that makes sense not just as a backup, but as a superior alternative?

The Arctic Pivot

Please allow me some backstory. In some ways, my mind is susceptible to the same thing a computer is susceptible to: malware.

The malware just installed itself (I swear I didn’t click that link…) and suddenly it is consuming your CPU cycles, doing someone else’s bidding. Random bits of information and unsolved problems hijack my brain the same way. Occasionally, one such problem drops onto another, and then another, creating a mental snowball of potential solutions that goes careening through my mind. These become “problutions,” that is, problems plus potential solutions. The only way to debug the system is to put pen to paper.
So, what convergence of “problutions” has been tumbling down my mental slope? As a mechanical engineer who loves computers, AI, rockets, and EVs, the challenge of cooling CPUs has always occupied its own “mental cubicle.” Since my previous professional work as a building engineer involved massive industrial cooling systems, not microchips, there didn’t seem to be much opportunity for cross-application.
Then came the career pivot. Years ago, I strayed from pure engineering into project management. This path landed me in Alaska, working with giants like Chevron, ConocoPhillips, and BP on North Slope pipelines, oil rigs, and onshore processing. Managing teams of engineers expanded my knowledge of upstream and downstream oil production. And while doing this work, I learned something that immediately demanded its own room in my brain.


The Gas Problem – or Opportunity?

In the process of extracting oil, operators bring up massive amounts of natural gas – around 60 million tons a year. So, what happens to it? They process it, clean it, and inject 90% of it right back into the ground. We are talking about three trillion cubic feet (Tcf) of natural gas caught in a loop annually. In 2026 dollars, that is $10 to $30 billion worth of energy buried every year. [^3] [^4]

If you were to burn that gas in turbines at just 50% efficiency, you would generate 500 Terawatt-hours (TWh) of power. That is enough to power one-third of all U.S. homes.[^5][^6] Of course, burning it all at once would deplete the supply in a decade. But a sustainable approach, using 1.3 Tcf per year, would last nearly 30 years and could produce 200 TWh annually. This is the equivalent output of 30 nuclear reactors.
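
Here is that arithmetic in a quick sketch, using standard conversion factors (~1.04 quadrillion Btu per Tcf of gas, ~0.293 Wh per Btu) and the same 50% turbine efficiency assumed above:

```python
# Sanity-check the stranded-gas energy numbers.
BTU_PER_TCF = 1.037e15  # heat content of 1 Tcf of natural gas, Btu
WH_PER_BTU = 0.2931     # watt-hours per Btu

def twh_electric(tcf_per_year: float, efficiency: float = 0.50) -> float:
    """Electric output (TWh/yr) from burning gas at the given turbine efficiency."""
    return tcf_per_year * BTU_PER_TCF * WH_PER_BTU * efficiency / 1e12

print(f"All 3 Tcf/yr:        {twh_electric(3.0):.0f} TWh/yr")  # ~456 TWh, ~1/3 of US homes
print(f"Sustainable 1.3 Tcf: {twh_electric(1.3):.0f} TWh/yr")  # ~198 TWh

# Compare against 1-GW nuclear reactors running at a 90% capacity factor:
reactor_twh = 1.0 * 8760 * 0.90 / 1000  # ~7.9 TWh/yr each
print(f"Equivalent reactors: {twh_electric(1.3) / reactor_twh:.0f}")  # ~25
```

Depending on the capacity factor you assume for the reactors, that lands at 25 to 30 reactors’ worth of output.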

Those are massive numbers. So why isn’t it being sold? Two words: transport logistics. Building the Alyeska oil pipeline was a major engineering achievement, but it was built in an era of supportive regulations and public opinion. Trying to replicate that today for gas is nearly impossible. Even if you could build a pipeline to the coast, liquefying and shipping it is expensive and complicated. The reality is simple: mucho regulation, nada pipeline, nada market access, nada sales. That is why those vast reserves have been sitting there for decades, untapped. Now, what does this have to do with our topic at hand? I’ll get there in a second.

The Cold Reality

The other thought that wiggled its way into its own mental room is just how cold the North Slope (Prudhoe Bay) really is. One day at work, someone from Legal called me about a portable office that had been installed up north. It turned out the mechanical engineer who designed it had never done Arctic work before and specified standard steel. That is a rookie mistake. It gets so cold up there that standard steel undergoes brittle fracture; if you aren’t careful, you can practically poke a finger through a frozen piece of sheet metal. That may be a slight exaggeration, but you can break it with a punch. Everything on the North Slope bows to this cold reality, from the insulated boots you wear, to the exotic alloys you use, to keeping the oil warm enough to pump 24/7, non-stop.
Then there are the geniuses who make the cold work for them. Take the heat pipes along the Alyeska pipeline: they don’t heat the ground; they actually pull heat out of it to keep the permafrost frozen year-round so the pipeline doesn’t sink. So, my mind started turning: “Hey, that’s massive, free cooling. I wonder what else that could be used for?” Well, one slow day at work, those two thoughts finally collided: unlimited stranded gas and unlimited free cooling.

The Convergence

Instead of shipping the gas, why not build a power plant and simply transmit electricity? Well, it turns out that has regulatory issues too. A single High Voltage Power Line (HVPL) to carry just 1 GW of power costs somewhere between $15 and $20 billion. Scaling that up to the 6 GW required for a major AI cluster would be astronomically expensive, roughly 7x the cost of a gas pipeline per unit of energy transferred. Basically, you’d spend around $15 billion per GW for electrical lines versus only $2 billion per GW for a gas line. [^7] And that gas line, if it were ever built, could deliver the energy equivalent of 22 nuclear reactors. But reality is harsh: in 2008, they hoped to start that gas line “right away.” Nearly 20 years later, it’s still just lines on a drawing board.
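
At the napkin level, the gap looks like this; the per-GW figures are the ones quoted above, and the 6 GW load anticipates the AI-cluster scale discussed later:

```python
# Capex to deliver 6 GW of energy: high-voltage lines vs. a gas pipeline.
HVPL_PER_GW = 15e9     # ~$15B per GW of electric transmission (low end of the range)
PIPELINE_PER_GW = 2e9  # ~$2B per GW-equivalent of gas pipeline capacity
CLUSTER_GW = 6.0       # major AI cluster

print(f"HVPL:     ${CLUSTER_GW * HVPL_PER_GW / 1e9:.0f}B")      # $90B
print(f"Pipeline: ${CLUSTER_GW * PIPELINE_PER_GW / 1e9:.0f}B")  # $12B
print(f"Ratio:    {HVPL_PER_GW / PIPELINE_PER_GW:.1f}x")        # ~7.5x
```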

So, back into their respective mental rooms, those ideas went. But after years of slumber, the recent explosion of AI – and its insatiable hunger for cooling and power – has awakened those thoughts anew.
Then came the third and fourth collisions. Why not bring the demand to the supply? Why not put the data centers in the Arctic North, right next to the oil fields?
It provides numerous benefits:

  • Free cooling: Arctic air slashes the cooling overhead from roughly 40% of the compute load to about 5% (PUE 1.05 vs. 1.4).
  • Plenty of space: Unlimited available land with absolutely zero “Not In My Backyard” issues.
  • Unlimited energy: A nearly free and unlimited natural gas supply.
  • Infrastructure: Eager energy partners with deep pockets needed to fund infrastructure builds.
  • CO2 Sequestering: Instead of venting emissions into the air, we can capture them and inject them right back into the ground.


The Tech Twist: The Allam Cycle

Yet environmental challenges remain. No one wants to see gas generators spewing pollution into the pristine Arctic air. Plus, there is an operational problem: If you burn the gas, what do you use to pressurize the reservoir to keep the oil flowing?
This is where we find an interesting twist and a tie-in to SpaceX. There is a novel, near-production technology for converting natural gas into electricity called the Allam Cycle. Instead of burning natural gas with air (which is mostly nitrogen), this process burns it with pure oxygen. Why does that matter? Because the output isn’t a dirty mix of exhaust gases; it is nearly pure, high-pressure CO2.
In a standard gas plant, CO2 is a waste product you have to remove with a messy and expensive scrubbing process. In the Allam Cycle, the CO2 is the working fluid. It spins the turbine to generate electricity, and it comes out the other end already pressurized and ready to be piped. But where? By now, maybe you are thinking: what are you going to do with all that CO2? Have you reached that “ah-ha” moment yet? If you extract the natural gas to power the AI, you need something to take its place in the reservoir to keep the pressure up. Now you have it: the CO2. It eliminates the pollution typical of gas turbines and solves the reservoir pressure problem in one loop.
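
How much CO2 are we talking about? The combustion chemistry (CH4 + 2 O2 → CO2 + 2 H2O, one molecule of CO2 per molecule of methane) makes it easy to estimate. A rough sketch; treating the gas stream as pure methane and reusing the sustainable 1.3 Tcf/yr draw from earlier are my simplifying assumptions:

```python
# Estimate the CO2 stream available for reinjection.
SCF_PER_LBMOL = 379.5             # standard cubic feet per lb-mole of ideal gas
LBMOL_TO_TONNES = 44.01 / 2204.6  # lb of CO2 per lb-mole, converted to tonnes

def co2_megatonnes(tcf_methane: float) -> float:
    """Mt of CO2 from burning `tcf_methane` Tcf of methane (1 CO2 per CH4)."""
    lbmol_ch4 = tcf_methane * 1e12 / SCF_PER_LBMOL
    return lbmol_ch4 * LBMOL_TO_TONNES / 1e6

print(f"{co2_megatonnes(1.3):.0f} Mt CO2/yr at 1.3 Tcf/yr")  # ~68 Mt/yr, pre-pressurized
```

Tens of megatonnes a year of pipeline-ready CO2, headed right back to the reservoir it came from.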

Sounds perfect, right? Let’s just go buy a dozen Allam Cycle turbines…Wait, there is another issue, isn’t there? Yes, but I like to think of it as another opportunity. The first commercial-scale Allam Cycle turbine is still undergoing testing at a pilot facility in La Porte, Texas. It isn’t quite ready for prime time. You see, designing this system is incredibly complex. It deals with critical components at massive pressures (300 bar), extreme heat, and supersonic velocities. It requires special metals and materials that can survive devilish conditions.

Enter SpaceX

There is a company that is already mastering adjacent technologies; maybe you have heard of the methane-breathing Raptor 3 engine? Now, I am not saying a rocket engine is a power plant, but there is significant overlap in the physics. Consider this: the Allam Cycle turbine needs to run at roughly 300 bar. The Raptor 3 runs at 350 bar by using SpaceX’s proprietary SX500 superalloy. The metallurgy SpaceX developed to stop their rockets from melting is likely the solution to the Allam Cycle’s material science problem.

So, what is the lighter lift? Solving the logistical and thermal nightmare of launching data centers into orbit? Or figuring out how to build 20 turbines using exotic metals you already invented?

Where has this snowball of a crazy idea landed in my mind?

Alaska Gas Reserves + Allam Gas Turbines + SpaceX Metallurgy + Arctic Free Cooling = The World’s Most Efficient AI Factory.

[Image: Depiction of an AI data center in the Arctic]
So the question is: who do you think would be the winner, “Space AI,” “Arctic AI,” or both?
[Chart: Comparing a Memphis AI deployment to an Arctic one]

To keep this AI rocket ship of progress going, many believe we will need to “10x” the current compute capacity in just three years. If that is the goal, we have to respect the physics of scaling. This isn’t just about adding more server racks; we are talking about a monumental jump in energy demand. Even with massive improvements in chip efficiency, a “10x” target brings us to a 6 Gigawatt (GW) deployment. To put that in perspective, that is roughly six nuclear reactors.
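
Where does 6 GW come from? Roughly like this; the ~1.2 GW baseline for today’s largest clusters and the 2x chip-efficiency gain are illustrative assumptions on my part:

```python
# "10x the compute" does not mean "10x the power" -- efficiency helps, but only so much.
baseline_gw = 1.2         # assumed power draw of a current frontier cluster
compute_multiplier = 10   # the "10x in three years" target
perf_per_watt_gain = 2.0  # assumed chip-efficiency improvement over the same period

required_gw = baseline_gw * compute_multiplier / perf_per_watt_gain
print(f"Required power: {required_gw:.0f} GW")  # 6 GW -- roughly six nuclear reactors
```
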
So, what does it look like to deploy this massive infrastructure in a current hub like Memphis versus a greenfield site in Alaska? It isn’t just about the cost of land; it’s about the environment the chips have to survive in.

1. The Cooling Penalty vs. The Arctic Advantage
In Memphis, where summer temperatures regularly exceed 95°F with high humidity, you are fighting a losing battle against thermodynamics. We measure this battle using PUE (Power Usage Effectiveness), the ratio of total facility power to the power that actually runs the “brains.” A PUE of 1.40 means for every watt the computer uses, you burn another 0.40 watts just to keep your servers from becoming puddles on the sidewalk along with your melted ice cream.
Memphis (PUE ~1.40): For 6 GW of compute power, you actually need to produce 8.4 GW of total power. Man, that’s just not cool. But you know what is? Alaska, where the PUE is just 1.05. Thanks to free air cooling, you only need 6.3 GW. The difference: Alaska saves 2.1 GW of power. That is two nuclear reactors’ worth of energy saved just by opening the window (and you save massive amounts of water too). Over a 10-year span, Memphis would spend around $40 billion just on electricity. Alaska? This is where the math gets interesting.
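
Here is that math in a sketch; the electricity price is my assumption, chosen as a plausible industrial rate that roughly reproduces the $40 billion figure:

```python
# Grid power required for a 6 GW IT load at each site's PUE.
IT_LOAD_GW = 6.0

memphis_gw = IT_LOAD_GW * 1.40  # 8.4 GW
alaska_gw = IT_LOAD_GW * 1.05   # 6.3 GW
print(f"Memphis {memphis_gw:.1f} GW vs. Alaska {alaska_gw:.1f} GW "
      f"-> {memphis_gw - alaska_gw:.1f} GW saved")

# 10-year electricity bill if Memphis buys that power from the grid.
PRICE_PER_KWH = 0.054  # assumed industrial rate, $/kWh
kwh_10yr = memphis_gw * 1e6 * 8760 * 10  # GW -> kW, times hours in 10 years
print(f"Memphis 10-yr power bill: ~${kwh_10yr * PRICE_PER_KWH / 1e9:.0f}B")  # ~$40B
```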

2. Why pay for power when you can get paid to make it?
In Memphis, you are a customer buying power from the grid. In Alaska, you are solving a waste problem. Much of the natural gas on the North Slope is an unused byproduct of drilling for oil. In fact, it costs oil companies money to clean it and inject it back into the ground. It is to their benefit to hand that gas to a neighbor.
Ironically, if we use the Allam Cycle, converting that gas into electricity generates a pure byproduct: CO2. This is exactly what oil companies need to pressurize their reservoirs, and exactly what the government will pay you to sequester. So, our Alaska data center doesn’t pay for gas; it gets paid to convert it to CO2 and pump it back into the ground. How much? We are talking about $27 billion in tax credits over 10 years. In effect, running this data center in Alaska could cost $71 billion less than running it in Memphis. That’s enough to make Elvis come back to life and sing the Memphis blues.
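
As a sanity check on that claim, we can back out what it implies. A minimal sketch assuming the federal 45Q credit for geologic storage at $85 per tonne of CO2; the rate is real, but whether an Arctic plant qualifies at it, and at what tonnage, are open assumptions:

```python
# What does $27B in credits over 10 years imply in sequestered CO2?
CREDIT_PER_TONNE = 85.0       # $/t CO2, 45Q geologic-storage rate
credits_per_year = 27e9 / 10  # $2.7B/yr

tonnes_per_year = credits_per_year / CREDIT_PER_TONNE
print(f"Implied sequestration: {tonnes_per_year / 1e6:.0f} Mt CO2/yr")  # ~32 Mt/yr
```

Roughly 32 Mt per year; how close the plant’s actual CO2 stream comes to that depends on how much gas it ends up burning.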

[Image: Synergies of SpaceX and Tesla with an Arctic AI data center]

You might be thinking, “Sure, the math works, but building in the Arctic is a logistical nightmare.”

And you would be right. I was a project manager for big pipeline capital projects up there. I saw the speed, or lack thereof, and the astronomical costs. If we used traditional methods, there would be zero cost savings over Memphis. So why the positive projections?

If only there were a team that thinks “impossible” is a breakfast cereal. SpaceX has the mindset, the capital, and the convergence of technologies to make the impossible mundane. This isn’t just about Elon Musk; it’s about a vertically integrated industrial ecosystem that has the “crazy bit” required to pull this off.

Consider the synergies:

  1. Optimus, the Arctic Fox

If you didn’t know, the North Slope is brutally cold and hard to get to. That’s why staff generally work 12-hour days, 7 days a week, for 2 to 3 weeks at a time. Remember that “dangerous, repetitive work” Elon said Optimus was built for? This is it. Imagine server halls filled with nitrogen instead of air to prevent fires and oxidation. Humans can’t survive there, but robots can. That’s not a bug; that’s a feature.

  2. Batteries Included

Whether in Memphis or Alaska, 6 GW loads need stabilization. Tesla’s Megapack technology is the glue. In an isolated Arctic grid, these batteries provide the millisecond-level bridging required to keep AI training clusters from crashing during turbine spin-up or load transients.

  3. Raptor, Meet the Allam Cycle

This is the hidden ace up the sleeve. The Allam Cycle turbines need to withstand massive pressures (300 bar) and extreme temperatures. Who else is building high-pressure, methane-fueled combustion chambers? SpaceX. The Raptor 3 engine runs at 350 bar because SpaceX developed a proprietary superalloy, SX500, specifically to handle hot, oxygen-rich gas at these pressures. That metallurgy might just be the key to unlocking the Allam Cycle turbine and heat exchanger problem at scale.

  4. Starship: Gateway to the Arctic

The Achilles heel of the North Slope is the “sealift window”: you can only barge in heavy equipment for about 6 weeks a year when the sea ice melts. If a critical turbine part breaks in January, you are usually out of luck for six months. Enter the SpaceX rocket cargo program, designed to deliver 100 tons of cargo anywhere on Earth in under an hour. Need a replacement turbine rotor in the dead of winter? Starship can drop it on a landing pad at Prudhoe Bay, bypassing the ice entirely.