
525. Life 3.0: Being Human in the Age of Artificial Intelligence

Rating:  ☆☆☆☆

Recommended by:

Author:    Max Tegmark

Genre:   Non Fiction, Science, Public Policy

364 pages, published August 29, 2017

Reading Format:   e-Book on Overdrive

Summary

In Life 3.0, MIT professor Max Tegmark makes a strong case that we are on the precipice of tremendous technological changes that will impact every aspect of life on our planet.  Tegmark explores our post-human future and discusses how artificial intelligence (“AI”) will affect crime, war, justice, jobs, society and our very existence as humans. He looks at possible outcomes after the rise of AI and proposes strategies for keeping those outcomes beneficial.

Quotes 

““Life 1.0”: life where both the hardware and software are evolved rather than designed. You and I, on the other hand, are examples of “Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.”

 

“Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download.”

 

 “If consciousness is the way that information feels when it’s processed in certain ways, then it must be substrate-independent; it’s only the structure of the information processing that matters, not the structure of the matter doing the information processing. In other words, consciousness is substrate-independent twice over!”

 

 “If we don’t know what we want we’re less likely to get it.”

 

“… when people ask about the meaning of life as if it were the job of our cosmos to give meaning to our existence, they’re getting it backward: It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.”

“The more automated society gets and the more powerful the attacking AI becomes, the more devastating cyberwarfare can be. If you can hack and crash your enemy’s self-driving cars, auto-piloted planes, nuclear reactors, industrial robots, communication systems, financial systems and power grids, then you can effectively crash his economy and cripple his defenses. If you can hack some of his weapons systems as well, even better.”

 

“We invented fire, repeatedly messed up, and then invented the fire extinguisher, fire exit, fire alarm and fire department.”

 

 “This ability of Life 2.0 to design its software enables it to be much smarter than Life 1.0”

 

“In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.”

 

“In The Matrix, Agent Smith (an AI) articulates this sentiment: “Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You are a plague and we are the cure.””

 

“I think of this as the techno-skeptic position, eloquently articulated by Andrew Ng: “Fearing a rise of killer robots is like worrying about overpopulation on Mars.””

 

“The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we’re stronger, but because we’re smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.”

 

 “Elon Musk argued that what we need right now from governments isn’t oversight but insight: specifically, technically capable people in government positions who can monitor AI’s progress and steer it if warranted down the road.”

 

“Will life in our Universe fulfill its potential or squander it? This depends to a great extent on what we humans alive today do during our lifetime, and I’m optimistic that we can make the future of life truly awesome if we make the right choices.”

 

“the real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. As I mentioned in chapter 1, people don’t think twice about flooding anthills to build hydroelectric dams, so let’s not place humanity in the position of those ants.”

 

“The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species we’ve encountered so far, let’s instead define life very broadly, simply as a process that can retain its complexity and replicate.”

 

“I’m encouraging mine to go into professions that machines are currently bad at, and therefore seem unlikely to get automated in the near future. Recent forecasts for when various jobs will get taken over by machines identify several useful questions to ask about a career before deciding to educate oneself for it. For example:
• Does it require interacting with people and using social intelligence?
• Does it involve creativity and coming up with clever solutions?
• Does it require working in an unpredictable environment?”

 

“The DQN AI system of Google DeepMind can accomplish a slightly broader range of goals: it can play dozens of different vintage Atari computer games at human level or better. In contrast, human intelligence is thus far uniquely broad, able to master a dazzling panoply of skills.

A healthy child given enough training time can get fairly good not only at any game, but also at any language, sport or vocation. Comparing the intelligence of humans and machines today, we humans win hands-down on breadth, while machines outperform us in a small but growing number of narrow domains, as illustrated in figure 2.1. The holy grail of AI research is to build “general AI” (better known as artificial general intelligence, AGI) that is maximally broad: able to accomplish virtually any goal, including learning.”

 

“Evolution optimizes strongly for energy efficiency because of limited food supply, not for ease of construction or understanding by human engineers. My wife, Meia, likes to point out that the aviation industry didn’t start with mechanical birds. Indeed, when we finally figured out how to build mechanical birds in 2011, more than a century after the Wright brothers’ first flight, the aviation industry showed no interest in switching to wing-flapping mechanical-bird travel, even though it’s more energy efficient—because our simpler earlier solution is better suited to our travel needs. In the same way, I suspect that there are simpler ways to build human-level thinking machines than the solution evolution came up with, and even if we one day manage to replicate or upload brains, we’ll end up discovering one of those simpler solutions first. It will probably draw more than the twelve watts of power that your brain uses, but its engineers won’t be as obsessed about energy efficiency as evolution was—and soon enough, they’ll be able to use their intelligent machines to design more energy-efficient ones.”

 

“Yet all these scenarios have two features in common:
A fast takeoff: the transition from subhuman to vastly superhuman intelligence occurs in a matter of days, not decades.
A unipolar outcome: the result is a single entity controlling Earth.”

 

“It’s natural for us to rate the difficulty of tasks relative to how hard it is for us humans to perform them, as in figure 2.1. But this can give a misleading picture of how hard they are for computers. It feels much harder to multiply 314,159 by 271,828 than to recognize a friend in a photo, yet computers creamed us at arithmetic long before I was born, while human-level image recognition has only recently become possible. This fact that low-level sensorimotor tasks seem easy despite requiring enormous computational resources is known as Moravec’s paradox, and is explained by the fact that our brain makes such tasks feel easy by dedicating massive amounts of customized hardware to them—more than a quarter of our brains, in fact.”
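(A trivial check of Moravec’s point: the multiplication Tegmark picks as hard for humans is a single, effectively instantaneous operation for a computer, while recognizing a face takes a large learned model and massive computation. The numbers below are the ones from the quote.)

```python
# Arithmetic that feels hard to humans is trivial for machines: one
# instruction, exact answer. Face recognition, by contrast, only recently
# reached human level despite decades of effort.
print(314_159 * 271_828)
```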

 

“After all, why should our simplest path to a new technology be the one that evolution came up with, constrained by requirements that it be self-assembling, self-repairing and self-reproducing? Evolution optimizes strongly for energy efficiency because of limited food supply, not for ease of construction or understanding by human engineers.”

 

“a hallmark of a living system is that it maintains or reduces its entropy by increasing the entropy around it. In other words, the second law of thermodynamics has a life loophole: although the total entropy must increase, it’s allowed to decrease in some places as long as it increases even more elsewhere. So life maintains or increases its complexity by making its environment messier.”

 

 “it’s not very interesting to try to draw an artificial line between intelligence and non-intelligence, and it’s more useful to simply quantify the degree of ability for accomplishing different goals.”

 

“DeepMind soon published their method and shared their code, explaining that it used a very simple yet powerful idea called deep reinforcement learning.  Basic reinforcement learning is a classic machine learning technique inspired by behaviorist psychology, where getting a positive reward increases your tendency to do something again and vice versa. Just like a dog learns to do tricks when this increases the likelihood of its getting encouragement or a snack from its owner soon, DeepMind’s AI learned to move the paddle to catch the ball because this increased the likelihood of its getting more points soon. DeepMind combined this idea with deep learning: they trained a deep neural net, as in the previous chapter, to predict how many points would on average be gained by pressing each of the allowed keys on the keyboard, and then the AI selected whatever key the neural net rated as most promising given the current state of the game.”
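(The reward-driven learning loop Tegmark describes can be sketched in a few lines. This is not DeepMind’s code: DQN used a deep neural net to estimate action values from raw Atari pixels, whereas this toy uses a plain lookup table, i.e. tabular Q-learning, on a made-up 5-column “catch the ball” task. The task, rewards and hyperparameters are illustrative assumptions.)

```python
import random

ACTIONS = (-1, 0, 1)  # move paddle left, stay, move right

def step(paddle, ball, action):
    """Apply an action and return (new paddle position, reward)."""
    paddle = max(0, min(4, paddle + action))
    return paddle, (1.0 if paddle == ball else 0.0)

def train(episodes=3000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Learn a Q-table mapping (paddle, ball, action) -> expected points."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        paddle, ball = rng.randrange(5), rng.randrange(5)
        for _ in range(10):
            # Mostly pick the action currently rated most promising,
            # but sometimes explore at random.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((paddle, ball, a), 0.0))
            new_paddle, reward = step(paddle, ball, action)
            best_next = max(q.get((new_paddle, ball, a), 0.0) for a in ACTIONS)
            key = (paddle, ball, action)
            # Nudge the estimate toward reward + discounted future value:
            # rewarded moves become more likely, exactly as in the quote.
            q[key] = q.get(key, 0.0) + alpha * (reward + gamma * best_next - q.get(key, 0.0))
            paddle = new_paddle
    return q

def greedy(q, paddle, ball):
    """The trained policy: take the action the table rates highest."""
    return max(ACTIONS, key=lambda a: q.get((paddle, ball, a), 0.0))
```

After training, `greedy(q, 0, 4)` moves the paddle right toward the ball and `greedy(q, 4, 0)` moves it left, without anyone ever programming those rules in — the points alone taught it.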

 

“After DeepMind’s breakthrough, there’s no reason why a robot can’t ultimately use some variant of deep reinforcement learning to teach itself to walk without help from human programmers: all that’s needed is a system that gives it points whenever it makes progress. Robots in the real world similarly have the potential to learn to swim, fly, play ping-pong, fight and perform a nearly endless list of other motor tasks without help from human programmers. To speed things up and reduce the risk of getting stuck or damaging themselves during the learning process, they would probably do the first stages of their learning in virtual reality.”

 

“The main trend on the job market isn’t that we’re moving into entirely new professions. Rather, we’re crowding into those pieces of terrain in figure 2.2 that haven’t yet been submerged by the rising tide of technology! Figure 3.6 shows that this forms not a single island but a complex archipelago, with islets and atolls corresponding to all the valuable things that machines still can’t do as cheaply as humans can. This includes not only high-tech professions such as software development, but also a panoply of low-tech jobs leveraging our superior dexterity and social skills, ranging from massage therapy to acting. Might AI eclipse us at intellectual tasks so rapidly that the last remaining jobs will be in that low-tech category? A friend of mine recently joked with me that perhaps the very last profession will be the very first profession: prostitution. But then he mentioned this to a Japanese roboticist, who protested: “No, robots are very good at those things!””

 

“I’m sure there’ll be new jobs for horses that we haven’t yet imagined. That’s what’s always happened before, like with the invention of the wheel and the plow.” Alas, those not-yet-imagined new jobs for horses never arrived. No-longer-needed horses were slaughtered and not replaced, causing the U.S. equine population to collapse from about 26 million in 1915 to about 3 million in 1960.  As mechanical muscles made horses redundant, will mechanical minds do the same to humans?”

 

“So who’s right: those who say automated jobs will be replaced by better ones or those who say most humans will end up unemployable? If AI progress continues unabated, then both sides might be right: one in the short term and the other in the long term. But although people often discuss the disappearance of jobs with doom-and-gloom connotations, it doesn’t have to be a bad thing! Luddites obsessed about particular jobs, neglecting the possibility that other jobs might provide the same social value. Analogously, perhaps those who obsess about jobs today are being too narrow-minded: we want jobs because they can provide us with income and purpose, but given the opulence of resources produced by machines, it should be possible to find alternative ways of providing both the income and the purpose without jobs. Something similar ended up happening in the equine story, which didn’t end with all horses going extinct. Instead, the number of horses has more than tripled since 1960, as they were protected by an equine social-welfare system of sorts: even though they couldn’t pay their own bills, people decided to take care of horses, keeping them around for fun, sport and companionship. Can we similarly take care of our fellow humans in need?”

 

“Even if AI can be made robust enough for us to trust that a robojudge is using the legislated algorithm, will everybody feel that they understand its logical reasoning enough to respect its judgment? This challenge is exacerbated by the recent success of neural networks, which often outperform traditional easy-to-understand AI algorithms at the price of inscrutability. If defendants wish to know why they were convicted, shouldn’t they have the right to a better answer than “we trained the system on lots of data, and this is what it decided”? Moreover, recent studies have shown that if you train a deep neural learning system with massive amounts of prisoner data, it can predict who’s likely to return to crime (and should therefore be denied parole) better than human judges. But what if this system finds that recidivism is statistically linked to a prisoner’s sex or race—would this count as a sexist, racist robojudge that needs reprogramming? Indeed, a 2016 study argued that recidivism-prediction software used across the United States was biased against African Americans and had contributed to unfair sentencing.  These are important questions that we all need to ponder and discuss to ensure that AI remains beneficial.”

 

“Perhaps life will spread throughout our cosmos and flourish for billions or trillions of years—and perhaps this will be because of decisions that we make here on our little planet during our lifetime.”

 

My Take

Life 3.0 is a fascinating look at the tremendous technological change on our doorstep and what it will mean for the future of human beings, planet Earth and our universe.  Tegmark thoroughly discusses a diverse array of ideas about our past, present and future in language the lay reader can easily understand.  A real “thinker” book and highly recommended.