For years now, humans have binge-watched Netflix, doomscrolled Instagram and uploaded cat videos without considering much about where the internet actually lives. In Southern Arizona, that invisible infrastructure has suddenly become very real, in the form of massive proposed data centers and the controversies they have incited.

Only recently have hyperscale data centers, and the developers who build them, begun making multimillion-dollar pitches to local communities about the urgent need for more “compute” — as insiders call the computational power derived from the stacks of servers that populate data centers. Why the sudden surge?

Wasn’t the internet zipping along in its own fast and weird way a couple of years ago without all that extra compute?

A large part of why so much extra power is needed is artificial intelligence — the adaptable algorithmic technology that provides homework help, quick translations and questionably moral therapeutic advice, and that may threaten the existence of humanity.

But what is AI, exactly? Why is it increasingly infecting so many of our internet activities? In Southern Arizona, those questions have become urgent. Proposed projects like Project Blue and the large-scale project in Marana have forced residents and officials to confront what the internet actually requires in land, electricity and water.

To understand what AI is, and what potential and threats it may bear, here are three books that may help demystify the future of the internet.

• The first book, “Empire of AI” (2025) by Karen Hao, is a journalistic account of OpenAI, “data laborers” in Kenya, water activists in Chile, and many other insiders and critics of the industry.

An investigative journalist, Hao has been covering AI, and specifically OpenAI, since 2019. While Sam Altman, OpenAI’s CEO, presented the company as a check on the power and greed of other industry players, Hao began to realize that such rosy hope was a figment, and that the burgeoning industry threatened to be disruptive not only to tech culture and the markets, but to our climate.

The massive amounts of water and energy needed for the data centers powering AI may be more than our grids, and our aquifers, can handle.

• The next book is “AI Snake Oil” (2024) by Arvind Narayanan and Sayash Kapoor.

The authors — a Princeton computer scientist and a Princeton PhD student (and former Facebook engineer), respectively — also write a popular newsletter on the topic.

One key distinction Narayanan and Kapoor make in the book is between Generative AI and Predictive AI. The first produces text via chatbots such as ChatGPT, or images through tools such as DALL-E. Both versions of AI come with social costs and risks in the short term, but Narayanan and Kapoor are “cautiously optimistic about the potential of [generative] AI to make people’s lives better in the long run.”

Predictive AI, meanwhile, relies on statistical analysis to find trends and patterns and to forecast the future. This is what the authors refer to as “snake oil.” Problems arise when policymakers or law enforcement use AI to predict human behavior, and then make life-changing decisions based on those predictions — which, the authors contend, are often false.

• The last book takes a more philosophical approach to AI. “The Alignment Problem” (2020) by Brian Christian may sound dated — given it was published five years ago, and the breakneck pace of innovation since then — but one of Christian’s central points is that we need to, if not slow down, at least proceed with more caution.

The titular “problem” Christian identifies is a potential misalignment between human values and the objectives AI systems actually pursue. More specifically, the alignment problem goes back to the 1960s, when cybernetics pioneer Norbert Wiener noted the potential gap between the “purpose put into the machine” and the “purpose which we really desire.”

Christian and other AI theorists and practitioners recognize that there is a bit of a “black box” problem with AI: what happens between inputs and outputs (what AI actually does) is incomprehensible to most of us. That complex opacity is itself a potential danger.

Nobody knows what AI will look like in the next one, five or ten years, or more. Whether our AI-infected future is utopian or human-less — or somewhere in between — understanding the recent trends and present dangers of AI may at least help us make responsible decisions, and align them with our human values.

John Washington covers Tucson, Pima County, criminal justice and the environment for Arizona Luminaria. His investigative reporting series on deaths at the Pima County jail won an INN award in 2023.