A friend asked me yesterday when artificial intelligence is going to take over the world and kill all humans. It’s a rather casual question for a summer barbecue party, and one I predict will come up more and more frequently in the coming years.
AI was uninteresting for a long time. Yes, it was hot for a while in the 90s, but then it failed to work for so long that it acquired a bad reputation. Even when it did work, it did only computer stuff like playing silly games or flying airplanes: none of that feels really human, or seems all that hard in the first place.
Now, however, artificial intelligence is capable of mind-blowing things. It understands what you say to it. It responds. It translates what you see or hear, on the fly.
Have you ever seen kids say “Siri, do you love me?” It’s not uncommon at all. (And, apparently, you don’t know what sexy means until you’ve heard a guy with a slight Indian accent slowly enunciate “I want to have sex with you” to his texting app.)
Next-generation AI is all around us. It drives our cars, watches over our homes and families, and translates text on our phones.
The problem with humans and our inventions is that we have a tendency to mess things up on the first try. The more power we give to semi-perfect artificial intelligence, the more damage it can cause: Tesla’s autopilot made the decision to drive under a trailer last week, killing the driver in the accident. And Google’s Nest thermostat seems to have an appetite for freezing people’s houses every now and then.
In the big picture, AI does make our lives safer. The real problem is that when it goes wrong, we don’t even understand what has just happened. In normal accidents we tend to know what was going on: the thermostat broke. The engine stalled. The driver fell asleep.
When it comes to AI, most of the time we have no idea what was going on in the computer’s “brain”.
This isn’t a human move
Whenever your phone’s camera uses face recognition to identify the areas to set the focus on, it uses simple algorithms that search for a face’s core features, like skin colours or the position of the eyes.
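To make “simple algorithms” concrete, here is a toy sketch of one such heuristic: a classic skin-colour rule of the kind early face detectors used as a first filter. The thresholds and the helper names (`is_skin_pixel`, `skin_fraction`) are illustrative assumptions, not any real camera’s implementation:

```python
# Toy skin-colour heuristic, loosely based on classic RGB rules.
# All threshold values below are illustrative, not from a real product.

def is_skin_pixel(r, g, b):
    """Return True if an (R, G, B) pixel roughly looks like skin."""
    return (r > 95 and g > 40 and b > 20   # bright enough
            and r > g and r > b            # red channel dominates
            and abs(r - g) > 15)           # not a grey tone

def skin_fraction(pixels):
    """Fraction of pixels in a region that pass the skin test.
    A camera might prefer to focus on regions where this is high."""
    hits = sum(is_skin_pixel(r, g, b) for r, g, b in pixels)
    return hits / len(pixels)

# A warm, reddish patch scores high; a blue-sky patch scores zero.
face_like = [(210, 150, 120)] * 8 + [(60, 60, 60)] * 2
sky_like = [(80, 120, 200)] * 10
print(skin_fraction(face_like))  # 0.8
print(skin_fraction(sky_like))   # 0.0
```

Real detectors layer many such cheap tests (eye positions, edge patterns) on top of each other, but each individual step is this transparent: you can point at the exact comparison that fired.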
Modern AI can go many steps further: it can also recognise things like whether you have a hat on, whether you’re smiling, or whether it really isn’t you but a dog. Or an orange. Modern AI can actually tell what it sees in a picture.
Except when it can’t. In an experiment at the University of Wyoming, researchers were able to fool cutting-edge deep neural networks using simple, randomly generated images. For example, artificial intelligence looked at this first picture and said, with over 99 percent certainty: it’s a centipede.
What’s interesting here is not that researchers can take a state-of-the-art image recognition algorithm and trick it into being wrong. What’s interesting is that most of the time no one can tell where exactly it went off track.
When you show a picture to a kid and they say something funny, we can understand how their brain worked: it’s not a cat, it’s a lion. When it comes to deep neural networks, even when they are right, we can’t tell exactly why they are right. We don’t share the context with them.
[End of the outtake from my article in Yakuzuzu Issue 7. To continue, read “AI might actually kill us all” there.]