
Artificial Intelligence: The Final Frontier of Domination

Henry Robertson

Robots will take over the world, maybe soon. This is a view held by eminent scientists like Stephen Hawking and James Lovelock.

Physicists and engineers are building artificial intelligences (AIs) that are smarter than we are, but they’re still computers. Will they be able to develop consciousness and a sense of their own self-interest? If our AI scientists are going to stave off robot rebellion, they’ll need to use whatever edge they still have over their creatures to program them with goals that are reliably, permanently aligned with human goals.

After reading Max Tegmark’s book Life 3.0: Being Human in the Age of Artificial Intelligence, I’m less scared of a superhuman AI than I am of the humans who want to create one.

Tegmark is a physics professor at MIT and founder of the Future of Life Institute, which is dedicated to “AI safety” and heavily funded by Elon Musk, the head of Tesla, SpaceX and other futuristic ventures. Tegmark is amiable, in a condescending way, and writes with flair. After taking this dive into the minds of AI researchers I’m inclined to scoff at their more extravagant ideas, but serious skepticism is still in order.

AI is the project of a technological elite who are hell-bent on taking the rest of us along for the ride, and damn the risks.

AI is anti-ecological. It reduces life itself to a computational process. It belittles the achievements of evolution and aims to liberate us from the limitations of the irrational flesh, what the poet W. B. Yeats called “the fury and the mire of human veins.” Where does this leave us, the supposed beneficiaries of AI?

Once we come down off Tegmark’s AI high we can think about how to control our destiny. Robots are all capital, no labor. All profit will go to their human owners. The more prevalent AI becomes, the more we the people will atrophy.

Biologists happily sink their boots in the mud, but AI scientists turn intelligence into a clean, technological problem. It can be abstracted from life and embodied in a computer. Tegmark repeatedly asserts that information, computation and, finally, intelligence are “substrate-independent.” Intelligence “is ultimately about information and computation, not about flesh, blood or carbon atoms” (p. 55). “Matter doesn’t matter” (p. 67).

In the earlier days of the machine age scientists saw life and mind as mechanisms. In the digital age they’ve become computers. This is a failure of imagination, not progress. Tegmark likes to say that information or computation can “take on a life of its own.” That’s a very loose, metaphorical idea of life.

It gets weirdly grandiose. As we launch AIs across the universe they’ll be able to make new humans if we merely send them “the two gigabytes of information needed to specify a person’s DNA.” Robot caretakers will nurture the little humanoids (p. 225). Or we can dispense with the human body by “uploading minds” into new AIs (p. 226).

I hold that the complexity of the simplest bacterium dwarfs that of the most massive supercomputer. The computer is a machine for manipulating zeroes and ones. Science is still struggling and failing to comprehend the more-than-binary workings of life, of RNA, DNA and proteins, of cells, photosynthesis, reproduction, the orchestrated functioning of bodies with organs and their microbial helpers, of the neuronal fireworks of the human brain. Smart people can be reductionists.

In the wilder reaches of AI theory the ultimate aim of all this computational intelligence is breathtakingly hubristic and depressingly reactionary. It is the familiar motive of colonizers through the millennia — plunder. “How much matter can life make use of, and how much energy, information and computation can it extract?” (pp. 203–4)

As our AIs penetrate the farthest reaches of the cosmos they will be able to disassemble alien worlds down to the level of “quarks and electrons” and make new humans, new AIs and whatever else we program them to fashion. “Let’s do this! Let’s first explore the limits of what can be done with the resources (matter, energy, etc.) that we have in our Solar System, then turn to how to get more resources through cosmic exploration and settlement” (p. 204).

The ideology for this triumph of “life” is a kind of crude cosmic Darwinism. Says Tegmark, “there is reason to suspect that ambition is a rather generic [he could have said genetic!] trait of advanced life,” which seeks to “make the most of the resources it has.” Ambitious civilizations will outcompete the unambitious in the struggle for survival. “Natural selection therefore plays out on a cosmic scale and, after a while, almost all life that exists will be ambitious life” (p. 204).

This flight of fancy may be laughable, but what does it say about AI’s attitude to planet Earth? All matter, animate and inanimate, is just raw material.

There’s a big black hole in the middle of this fantasy. How do we keep our slaves from becoming our masters? In AI lingo, how do we keep their programmed goals aligned with our own, so wondrously benign, goals? This problem may not be so far off.

“The ultimate origin of goal-oriented behavior lies in the laws of physics, which involve optimization” (p. 280). Really? The physical universe just is. Optimization is an anthropomorphism. Life works differently, Tegmark allows. The Darwinian goals of survival and reproduction have “evolved useful rules of thumb that guide our decisions: feelings such as hunger, thirst, pain, lust and compassion” (id.). It’s hard to see how AIs could feel or follow these intermediate biological goals. Their inorganic computational optimization is what gives them their narrow but deep superiority over the fury and the mire of human motives.

Rest assured that AIs will be imbued only with goals chosen by a democratic process. It will take a superhuman intelligence to design and carry out such a comprehensive debate and selection. If this worldwide town hall does come about, most people will be overawed by the motivated, educated, true believers of AI. Tegmark tells how he banned the media from a conference he organized because he found them too divisive and sensationalist. Will he be any more tolerant of opposition from the unwashed masses?

Anyway, we’re still left with the second part of the goals problem — writing the code to keep AIs from realizing their superiority and subverting our goals. Tegmark admits (p. 268) that we don’t even know if this is possible, but what the hell?

For all his zeal and optimism, Tegmark can’t help picturing as the final outcome an irrelevant, infantilized humanity lolling in the passive enjoyment of a “digital utopia.” We should give up being Homo sapiens (wise man) and settle for being Homo sentiens (feeling man) (p. 314).

Robots are already taking jobs from factory workers, cashiers and delivery drivers. They are taking over parts of professional jobs — reading X-rays and CT scans, sifting through legal documents, writing news articles. A lot of the threatened jobs are drudgery we can do without or dangerous work no one should have to do.

But we know how it goes with technological revolutions. They take over. Driving it all is that other superhuman human creation, the business corporation. It is immortal, immune to law, above nations and implacably devoted to the single goal of profit at the expense of all else.

Big business doesn’t want just our money. It wants to take over our skills, knowledge, motivation and productive capacity. It alienates us from the real sources of our livelihood. It’s said that if our mechanized food distribution system broke down we’d be three days away from revolution, not to mention starvation. The same goes for our creaky electric grid.

AI could vault the corporation to a new pinnacle of domination. Capital could win the long struggle with labor by making most workers obsolete while trying to convince them that forced idleness and powerlessness is heaven.

The truth is over there, behind that soothing curtain of branding and advertising. We depend on corporations, not ourselves, for our very survival. Corporations running supercharged AI threaten the last vestiges of our self-reliance.

A twenty-hour work week is better than a forty-hour week or a zero-hour week. We don’t need to be told what we want. We can spend our time as we need and as we choose. If we submit to AI we will deserve our fate.

Henry Robertson is an environmental lawyer and activist in St. Louis.