Let’s talk about evil artificial intelligence. From Terminator to The Matrix, humans have feared computers taking over the world. I think there are great reasons why that’s unlikely:
1. Intelligence is hard to model.
2. Self-improving intelligence may be even harder.
3. Intelligence requires cooperation.
4. Intelligence grows. It is not a switch that is flipped.
5. There are different intelligence architectures (tortoise vs. hare). Humans learn from each.
6. Humans work together with smart machines. We use technology when it works well.
7. Friendly AIs will help us defend against evil AIs. Humans do this against human criminals.
There are two broad types of AI: strong and weak. Strong AI means intelligence at or above the human level; weak AI means anything less. We only have weak AI right now (2013).
Weak AI can beat humans at chess or Jeopardy, but it is only starting to be able to identify cats and dogs or walk around a room. For now, five-year-olds are geniuses compared to artificial intelligence.
# 1: It’s super-hard to model intelligence.
One of the most likely ways to create an artificial intelligence is to model human brains. Here’s the problem: how small do you need to go?
“Do you need to model the system at the level of individual neurons? Individual synapses? Individual receptors and ion channels? Individual neurotransmitter molecules? The exact level at which you need to model things doesn’t change the theoretical feasibility, but it may change the timing by decades or more.”
— Ramez Naam
This difficulty means a smarter-than-human, brain-based artificial intelligence probably isn’t going to jump out at us.
# 2: Self-improving computers are even harder to make.
Two humans are smarter than one. Still, even with thousands of incredibly smart people working at IBM or Intel or Google, we haven’t become intellectual gods.
It probably takes a long time to figure out how to advance from intelligence level X+1 to intelligence level X+100.
For example, if you had sub-neuronal editing powers for your brain, would you know what to do to become a genius? I wouldn’t.
Some problems actually become harder at a rate that’s faster than exponential, like predicting the behavior of atoms or how proteins fold.
Becoming smarter may be that kind of problem. If so, intelligence might not instantly explode.
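To make the “faster than exponential” idea concrete, here is a toy sketch (my own illustration, not from the original essay) comparing plain exponential growth with factorial growth, a common stand-in for brute-force search problems like exhaustive protein-conformation enumeration:

```python
import math

def exponential(n: int) -> int:
    """2**n: the work doubles for each extra unit of problem size."""
    return 2 ** n

def factorial(n: int) -> int:
    """n!: grows faster than any fixed-base exponential."""
    return math.factorial(n)

if __name__ == "__main__":
    # The ratio n! / 2**n keeps growing without bound:
    # even a "merely exponential" budget falls further and
    # further behind a faster-than-exponential problem.
    for n in (5, 10, 20, 30):
        print(n, exponential(n), factorial(n))
```

If becoming smarter scales anything like the factorial column, each added increment of intelligence costs vastly more than the last, which is the essay’s point: no instant explosion.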
# 3: Intelligence requires cooperation.
All intelligence is cooperation. The whole is greater than the sum of its parts.
RNA combines to make cells, which combine into multicellular organisms, then sexually reproducing ones, then animals, then “super-organisms”.
Super-organisms are groups of animals that cooperate to become more effective: like humans or ants.
Each increase in cooperation allows for smarter behavior: you can farm while I make farming tools. We both benefit.
Simple AIs are judged by their ability to help us. If they aren’t helping, we don’t keep them.
# 4: Intelligence is a continuum.
Intelligence grows from simpler intelligence.
As we grow it, we will select for traits that benefit us.
It’s doubtful this cooperation will suddenly switch off.
As you become an adult, do your values suddenly disappear?
# 5: There are many kinds of intelligence.
Consider a tortoise. Consider a hare.
The tortoise works diligently and precisely. The hare works quickly and efficiently.
Both work.
There are many kinds of intelligences, and it takes time to figure each one out.
Dogs, dolphins, and octopuses are all intelligent. Each does well in its own niche.
Artificial intelligence will probably grow from simpler intelligence. We will be able to choose the kinds that help us.
# 6: Humans will accept smart machines into their lives.
Since the first club was used, technology has become a part of our lives.
From eyeglasses to automobiles to phones to the Internet, we integrate tools into our work, educational, and social lives.
If you were offered a safe and effective way to keep a phone on you at all times, perhaps via ear surgery, would you do it? I probably would.
What about the Internet installed on your body, so you can talk to anyone, anywhere, in any way?
With smartphones, we are already smarter, faster, and better looking. Life improves when we use technology.
Instead of an us-versus-them attitude toward artificial intelligence, it’s really an us-and-them future.
# 7: Friendly AIs will defend against evil AIs.
Humans protect against evil humans.
Friendly computers will probably see the benefit of cooperating with us, because then we both learn more. We would both work to defend against evil intelligences.
Consider that there are many types of artificial minds, like the tortoise and the hare. They probably will cooperate with each other and with us so we all can learn more about nature. Knowing more allows us all to exist for longer.
Any evil mind trying to harm others will likely face many strong intelligences, both human and machine, working together.
Most humans are friendly. Most humans want a society that’s friendly. As we grow the intelligence of computers, we’ll see this friendly trait as very valuable.
Given all these reasons, it seems wise to be excited about the future of humanity and machines.
I don’t believe evil artificial intelligence is likely.
I hope you find my reasons compelling.
Sincerely,
Jesse
Addendum:
An AI researcher looked through my reasons. He says they aren’t guaranteed to stop evil AI. I agree. They do help me sleep better, though.