John Michael Greer, an avid scholar of the history of ideas (among many other things), once observed how people keep coming up with stories about how humanity will destroy itself one day — only to use such yarns as a distraction from real life. Apocalyptic stories thus tell a lot more about our culture than about our fate. Ostensibly invoked to instill a sense of dread and warn us about some grave danger (an oddly popular way of entertaining ourselves), such terrifying visions, he argues, are actually nothing but an ode to our grandiosity. ‘Look! We have become so powerful that we could destroy ourselves (and the rest of life on this planet) with a thought! Look at the power we have!’ And while these stories certainly ring true in a sense — hence their immense power — they disregard an intolerably high number of the usual nitty-gritty details of everyday life.
It’s no different with AI. Ever since the Terminator movies became box office hits in the 1980s and early ’90s, the fear of an artificial intelligence (like Skynet) taking over and destroying humanity has never left mainstream thought. Truth be told, the plot of these blockbusters still sounds current — even forty years later.
Skynet is positioned in the first film, The Terminator (1984), as a U.S. strategic “Global Digital Defense Network” computer system by Cyberdyne Systems which becomes self-aware. Shortly after activation, Skynet perceives all humans as a threat to its existence and formulates a plan to systematically wipe out humanity itself. The system initiates a nuclear first strike against Russia, thereby ensuring a devastating second strike and a nuclear holocaust which wipes out much of humanity in the resulting nuclear war. In the post-apocalyptic aftermath, Skynet later builds up its own autonomous machine-based military capability which includes the Terminators used against individual human targets and thereafter proceeds to wage a persistent total war against the surviving elements of humanity, some of whom have militarily organized themselves into a Resistance.
Heck, with a little internet searching you can find some pretty recent articles explaining how such events could unfold in real life, serving us with yet another iteration of the story above.
Large language models (LLMs) acting as diplomatic agents in simulated scenarios showed hard-to-predict escalations which often ended in nuclear attacks.
Such narratives of an AI-induced Armageddon, however, are nothing but convenient distractions. There is an even more out-of-control, but nonetheless very much real, artificial intelligence running rampant as we speak. Large, mindless corporations are busy creating countless copies of themselves, pursuing their agenda of profit and “wealth” creation, while happily devouring the planet at the same time… But no one really cares to stop them. So we come up with modern myths instead, keeping ourselves scared (and conveniently looking in the wrong direction), while letting our egos be flattered by stories of our intellectual grandiosity. In reality, however, we are nowhere close to building a general AI, and the much hyped LLMs remain mere language simulators obeying the age-old principle of crap in, crap out.
There are also a number of false assumptions when it comes to the technical feasibility of AI. The following quote from oilprice.com sums up the problem pretty well: while the narcissists at the helm keep thinking they are playing four-dimensional chess by dumping money into some super-duper bleeding-edge technology (and thereby “winning” some kind of new “arms race”), they remain completely unequipped to grasp the material limits to the human endeavor.
Broad and rapid action is needed on several fronts in order to slow the runaway train of AI’s energy consumption, but the United States also needs to keep up with other nations’ AI spending and development for its own national security concerns. The genie is out of the bottle, and it’s not going back in.
“Certain strategic areas of the US government’s artificial intelligence capabilities currently lag industry while foreign adversaries are investing in AI at scale,” a recent Department of Energy (DoE) bulletin read. “If U.S. government leadership is not rapidly established in this sector, the nation risks falling behind in the development of safe and trustworthy AI for national security, energy, and scientific discovery, and thereby compromising our ability to address pressing national and global challenges.”
So the question now is not how to walk back the global AI takeover, but how to secure new energy sources in a hurry, how to place strategic limits on the intensity of the sector’s growth and consumption rates, and how to ensure that AI is employed responsibly and for the benefit of the energy sector, the nation, the public, and the world as a whole.
Let’s make this clear, because it’s not obvious to most people: neither human nor artificial intelligence can lessen our energy woes, let alone ‘address pressing national and global challenges’. We are facing a mounting net energy dilemma, where less energy means fewer resources and, well, a much smaller economy. Mining is still powered by fossil fuels, and vice versa: the fewer minerals we mine, the less energy we can produce. We need low-cost energy to mine minerals, and low-cost minerals to obtain energy. Take either away, and there goes your economy…
Problem is, as rich deposits deplete, it takes more and more energy to tap the next reserve and deliver the next batch of oil, uranium, silicon or copper. Thanks to four decades of rampant globalization, mineral (and especially oil) production faces the same net energy dilemma everywhere. Simply put, we have already used up the best of our resources — which took millions of years to form in Earth’s crust — and are now stuck with the scrapings left behind by the industrial-scale exploitation of this planet. (Make no mistake: there is plenty of stuff left out there, but who wants to extract it at such low returns?) Resource depletion is a one-way street, and cannot be reversed on human timescales.
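To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The EROI (energy return on investment) figures in it are illustrative assumptions of my own, not measured values for any particular resource; the point is simply how fast the extraction overhead grows as returns fall:

```python
# A minimal sketch of the net-energy squeeze described above.
# The EROI values are illustrative assumptions, not measured figures.
# Net energy = gross - gross / EROI, so delivering a fixed amount of
# net energy requires gross extraction of net * EROI / (EROI - 1).

net_needed = 100.0  # net energy society wants, in arbitrary units

for eroi in (30, 10, 5, 2):
    gross = net_needed * eroi / (eroi - 1)
    overhead = gross - net_needed
    print(f"EROI {eroi:>2}:1 -> gross extraction {gross:6.1f}, "
          f"of which {overhead:5.1f} ({1 / eroi:.0%}) feeds extraction itself")
```

Notice that the overhead fraction is simply 1/EROI: almost nothing at 30:1, a fifth of everything produced at 5:1, fully half at 2:1. The decline barely registers for decades, then suddenly devours the surplus that everything else, AI included, runs on.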
All this at the very moment when we would need more energy than ever to fuel “the runaway train of AI’s energy consumption”. (And we haven’t even mentioned a range of other lunatic ideas in need of vast amounts of energy, like building machines to capture and store carbon, or kicking off geoengineering by spreading sulfur aerosols into the upper atmosphere.) If the term ‘not gonna happen’ seems like an understatement to you, Dear Reader, you are not alone.
Energy and resource depletion, resulting in skyrocketing energy demand just to keep raw material output flat, is thus a predicament with an outcome — not a problem in search of a solution. We have burned through a colossal amount of resources in the less than eighty years since the end of WWII, and now we not only face climate deterioration as a result, but risk outright ecological collapse. Yes, this doesn’t sound nearly as flattering as becoming too dangerous even for ourselves, but who said the truth must always be sexy and appealing?
Understand this, and you understand why we come up with myths like an AI takeover, or turning Earth into Venus through the unfettered use of our technology. Sure, such stories are mighty alluring, but in the end they are disappointingly out of touch with reality. Most likely, and well before any of these apocalyptic events could happen, the rapidly expanding pink chewing-gum bubble called AI will suck up all the excess electricity we can produce, bringing us even closer to an uncontrolled energy and resource depletion scenario. A recent CNBC story (hat tip to Naked Capitalism) confirms how ugly the coming crunch will be:
This strategy of reducing power use by improving compute efficiency, often referred to as “more work per watt,” is one answer to the AI energy crisis. But it’s not nearly enough.
One ChatGPT query uses nearly 10 times as much energy as a typical Google search, according to a report by Goldman Sachs. Generating an AI image can use as much power as charging your smartphone.
This problem isn’t new. Estimates in 2019 found training one large language model produced as much CO2 as the entire lifetime of five gas-powered cars.
The hyperscalers building data centers to accommodate this massive power draw are also seeing emissions soar. Google’s latest environmental report showed greenhouse gas emissions rose nearly 50% from 2019 to 2023 in part because of data center energy consumption, although it also said its data centers are 1.8 times as energy efficient as a typical data center. Microsoft’s emissions rose nearly 30% from 2020 to 2024, also due in part to data centers.
And in Kansas City, where Meta is building an AI-focused data center, power needs are so high that plans to close a coal-fired power plant are being put on hold.
There are more than 8,000 data centers globally, with the highest concentration in the U.S. And, thanks to AI, there will be far more by the end of the decade. Boston Consulting Group estimates demand for data centers will rise 15%-20% every year through 2030, when they’re expected to comprise 16% of total U.S. power consumption. That’s up from just 2.5% before OpenAI’s ChatGPT was released in 2022, and it’s equivalent to the power used by about two-thirds of the total homes in the U.S.
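It is worth pausing on those growth numbers. Here is a rough compounding calculation, assuming (for simplicity) that the quoted 15%–20% annual growth applies to the 2.5% baseline share and that total U.S. consumption stays flat:

```python
# Rough illustration of the data-center growth rates quoted above.
# Assumptions: 2.5% share of U.S. power in 2022 (per the CNBC piece),
# 15-20% annual demand growth (per Boston Consulting Group), and
# total U.S. consumption held flat for simplicity.

baseline_share = 0.025  # data centers' share of U.S. power, 2022
years = 8               # 2022 through 2030

for annual_growth in (0.15, 0.20):
    multiplier = (1 + annual_growth) ** years
    share_2030 = baseline_share * multiplier
    print(f"{annual_growth:.0%}/yr -> demand x{multiplier:.1f} "
          f"by 2030, ~{share_2030:.1%} of a flat U.S. grid")
```

Even this simple compounding yields a three- to four-fold jump in eight years; that the headline figure lands at 16% suggests the forecast bakes in assumptions beyond the quoted annual rate. Either way, the direction of travel is clear.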
Back in the real world of mining, energy production and manufacturing, the economy slowly moves towards consolidation, as the depletion of rich deposits takes its toll and weaker players throw in the towel one by one. While the rest of us are kept busy focusing on large language models BS-ing us into nuclear war or taking over humanity, we are getting ever closer to being led by ever larger corporations controlling an ever larger share of resource extraction and refinement globally.
Adding AI on top — while certainly beneficial in some cases — is a last-ditch attempt at fighting energy and resource depletion and the slow-motion collapse of society. Adding such immensely complex machines to the mix is not without its drawbacks, though. Indrajit Samarajiva argues that the rise of AI is itself a sign of collapse, as we keep investing money and energy ostensibly to “solve” “problems” while in fact creating much larger ones. Simply put, we are pouring money into something which would need far more energy and raw materials to finish than we could ever provide. (Not unlike electrification itself.) Is it any wonder, then, that we are witnessing the enshittification of everything: from software to services, from products to civilization itself? As Andrew Nikiforuk explains in his brilliant essay (thanks, Dave, for the link):
What’s happened to appliances is a pretty good metaphor for how complexity undermines society. The Utah anthropologist Joseph Tainter has argued that civilizations tend to collapse when they can no longer afford the social and energy costs of maintaining their complexity or, for that matter, their appliances. In other words, societies die when they can’t fix things in an affordable way.
“After a certain point, increased investments in complexity fail to yield proportionately increasing returns,” explains Tainter. “Marginal returns decline and marginal costs rise. Complexity as a strategy becomes increasingly costly, and yields decreasing marginal benefits.” Ergo, enshittification.
So, even if we somehow managed to build a general AI before running out of resources and energy, it is very unlikely that it would be more than a flash in the pan, let alone become a despot ruling over humanity. (Thanks, we’ve got enough of those corporate overlords already.) Claims of an AI takeover terribly underestimate the amount of human labor which goes into the mining, transportation and manufacturing of just about anything. Should AI become self-aware and decide to build an army of robots, it would first have to convince a billion humans not only to obey its will, but to increase resource extraction to physically impossible levels…
And no, artificial intelligence will not eat us alive for our atoms either, as it cannot hope to build itself up from carbon, oxygen, nitrogen, hydrogen and calcium — the materials our bodies consist of. Instead, it (and the many supporting technologies it depends on, like a functioning electric grid) needs silicon and a range of exotic and rare metals like lanthanum (La), cerium (Ce), neodymium (Nd), samarium (Sm), europium (Eu), terbium (Tb), and dysprosium (Dy)… All of which, by the way, are still being mined using good old diesel excavators and trucks, and refined by burning copious amounts of coal and natural gas.
Peak oil will thus not only mean peak energy, but peak resources, and yes, peak AI as well.
As the single biggest limiting factor of our civilization — petroleum production — embarks on its long and undulating journey back to irrelevance (starting around 2030), it will become physically impossible to grow global energy production further. Remember, oil is needed for ALL power plants to be built and remain operational (including “renewables”, nuclear and hydro, too), not to mention feeding the electric grid’s insatiable hunger for copper, aluminum and electrical steel. The end of the oil bonanza will thus mean the end of growth for AI (and the rest of the economy) as well — with repercussions to follow.
That, however, is another story for another week.
Until next time,
B
Thank you for reading The Honest Sorcerer. If you would like to support my work, please subscribe for free and consider leaving a tip. Every donation helps, no matter how small. Thank you in advance!