Max Tegmark’s Life 3.0

This book is billed as a look at how artificial intelligence will affect “crime, war, justice, jobs, society and our very sense of being human”. The author, Max Tegmark, is a physicist and M.I.T. professor, so you can be sure he’s got the brains and experience to give a good survey of current AI research and ground his predictions on a firm, scientific footing.

Funny, then, that there is almost no science anywhere in this book. In fact, it would be more accurate to call this a work of speculative fiction that uses a gloss of science-babble to give a veneer of rationality to wild bouts of fantasy. In other words, it’s like a Star Trek episode where people start talking about “positronic-phase-shifting” to explain why an alien can walk through walls.

Take, for example, Tegmark’s discussion about how we might ensure that a future, super-intelligent AI will share our goals (and not just destroy humanity on a whim). Tegmark states, “to figure out what people really want, you can’t merely go by what they say. You also need a detailed model of the world… Once we have such a world model, we can often figure out what people want even if they don’t tell us, simply by observing their goal-oriented behavior.” (p.261)

Sounds straightforward enough, right? But stop and think a moment. Wow. I mean, WOW! Never mind the fact that I had a conversation that sounded exactly like this sometime in the early 90s in the back of a bus in L.A. with a guy who was stoned out of his gourd. The very idea that we could just casually come up with “a detailed model of the world” is the kind of sci-fi story device that went out of style in 2008, when Wall Street bankers crashed the world economy by assuming they had mathematical models that described pretty much everything. The result, as we all know, was that they nearly broke the concept of money itself.

So, coming up with a detailed model of the world sufficient for a complete understanding of human goals? Science fiction.
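To be fair, the research program Tegmark is gesturing at does exist: it usually goes by the name inverse reinforcement learning, where an algorithm tries to recover a reward function (the “goals”) from observed behavior. Just to show the gap between what that machinery actually looks like and “a detailed model of the world”, here is a deliberately tiny sketch of my own, not anything from the book, assuming a made-up four-option world and a Boltzmann-rational chooser:

```python
# A toy version of "figuring out what people want by observing their
# goal-oriented behavior": inverse-reinforcement-learning-style goal
# inference, shrunk to four options and a softmax choice model.
# Everything here (the options, the rewards, the Boltzmann-rationality
# assumption) is my own illustrative invention, not code from the book.

import numpy as np

rng = np.random.default_rng(0)

options = ["coffee", "tea", "water", "nothing"]
true_reward = np.array([2.0, 1.0, 0.0, -1.0])  # the hidden "goals" we pretend not to know

def choice_probs(reward):
    """Boltzmann-rational choice: pick option o with probability proportional to exp(reward[o])."""
    z = np.exp(reward - reward.max())
    return z / z.sum()

# Simulate 500 observed choices driven by the hidden reward function.
observed = rng.choice(len(options), size=500, p=choice_probs(true_reward))
counts = np.bincount(observed, minlength=len(options))

# Recover the reward by gradient ascent on the log-likelihood of the observations.
est = np.zeros(len(options))
for _ in range(2000):
    grad = counts - len(observed) * choice_probs(est)  # d(log-likelihood)/d(reward)
    est += 0.01 * grad / len(observed)

est -= est[options.index("water")]  # rewards are only identified up to an additive constant
for name, r in zip(options, est):
    print(f"{name:8s} inferred reward ~ {r:+.2f}")
```

Inferring a handful of preference weights from a few hundred observed choices is the kind of thing this machinery handles comfortably; scaling it up to everything humans value, in the actual world, is the part Tegmark waves through in a sentence.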

And take heart, sci-fi fans, there are plenty of more traditional sci-fi elements to Life 3.0. Chapters 5 and 6 are the most obvious in this respect, the clue being that Chapter 5 deals with the next 10,000 years of human/AI development, while Chapter 6 covers the next billion years and beyond (give Tegmark credit for ambition). Here, Tegmark pushes into a post-human world in which consciousness has been uploaded to machines and seeks to expand into the universe. He speculates about different types of relationships between humans and AI, going through a list that includes headings such as Benevolent Dictator, Zookeeper and half a dozen other episode titles from the original series of Star Trek. Tegmark also discusses the likelihood of encountering alien intelligences and delves into the ability of advanced AI to harvest massive amounts of energy from entire stars, black holes, sphalerizers (google it–I’m not going to explain them here) and even the entire energy content of the universe.

The problem with the book is, of course, that it isn’t intended as science fiction. Tegmark is trying to call attention to serious issues that humanity needs to grapple with before we develop artificial intelligences that are smarter than us. He brushes up against several important topics, such as a good definition of intelligence or, separately, of consciousness, and he gives an interesting, near-future scenario of how a super-intelligent AI might first appear in our world. But Tegmark’s aspirations are constantly undermined by lax and shifting definitions, poor examples and, to a degree, plain overambition.

For example, early in the book, Tegmark defines intelligence as the “ability to accomplish complex goals”. Later in the book, however, he describes a super-intelligent AI as “being better at accomplishing its goals than we humans…” (emphasis added). The difference between accomplishing a goal and setting a goal is enormous. It is the difference between following instructions and deciding what the instructions should be. Yet Tegmark constantly mixes up these definitions. I get the feeling that Tegmark himself is aware of the distinction and its importance, but his incessant shifting from one definition to the other makes for sloppy writing and even sloppier arguments.

In addition, many of his examples are simplistic (e.g. the “detailed world model” and the various forms of AI-led societies mentioned above) or else patently absurd. Consider, for example, his statement that “Our planet is 99.999999% dead in the sense that this fraction of its matter isn’t part of our biosphere and is doing almost nothing useful for life other than providing gravitational pull and a magnetic field. This raises the potential of one day using a hundred million times more matter in active support of life.”

Yeah, you tell ’em, Tegmark! NASA’s already built a toilet that can flush in space, so who needs gravity? And a magnetic field? Only wimps and French people whine about frying under a heavy dose of cosmic rays!

I could go on. And on. And on. The whole of Chapter 7 is a stinking pile of cat feces lying just under a thin layer of dirt in your garden, waiting for you to stick your hand in it when you try to pull a weed. (If I were being generous, I’d say this chapter relies on the notion that our human sense of free will and the pursuit of individual goals are all reducible to physical phenomena which can be described by physics. But Tegmark never explicitly says that and, after all the other crap examples and sloppy arguments I had to read in this book, I’m not in a mood to be generous.)

In sum, don’t waste your time with this book. Don’t be deceived by the catchy opening. And despite a few good pages here and there (Chapter 8, on consciousness, had some interesting and thought-provoking touches), the majority of Life 3.0 was both frustrating to read and remarkably uninformative.
