AI isn't really all that intelligent — yet

What’s the first thing you think of when someone says artificial intelligence? Maybe a super-intelligent computer trying to destroy the world? A red-eyed robot stomping across a blasted wasteland? For most of us, our understanding of AI has been shaped by pop culture. In movies from The Matrix to The Terminator, AI is often depicted as a world-ending threat in the same vein as aliens or giant asteroids. But what is it, really?

The term AI is thrown around all the time in the media. Remember all the press a few years ago about an AI that talked about putting humans in cages? But this isn’t artificial intelligence in the traditional sense. For all its pomp and circumstance, the term has lost much of its original meaning. As the world stands now, in 2020, true artificial intelligence doesn’t exist.

Now, you might be thinking, but what about all of these companies that are constantly touting their AI? Well, essentially, what they are going on about isn’t AI as we’ve been taught to think about it by books and movies. These aren’t sentient machines, but they are self-learning.

The bulk of commercial and private AI available right now is more accurately described as machine learning. Granted, machine learning doesn’t carry the same heft as artificial intelligence, but it more accurately describes where we are right now with our current technology.

Machine learning, according to the good folks at MIT, powers many of the services we use every day including “recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social-media feeds like Facebook and Twitter; voice assistants like Siri and Alexa.”

Normally, a computer program has to be meticulously programmed by a team of humans to function properly. Machine learning does away with much of that hand-coding by allowing computers to use complicated algorithms to examine large sets of data and work out the rules for themselves. Data in this case can be a number of things. The most popular example, as with all things on the internet, is cats.
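
To make the contrast concrete, here’s a rough sketch of the old, hand-coded approach. The rules below are hypothetical, invented purely for illustration:

```python
# The hand-coded approach that machine learning replaces: a human writes
# explicit rules for what counts as a cat. These rules are made up for
# illustration, and their brittleness is the point: any cat the programmer
# didn't anticipate slips through.
def looks_like_cat(pointy_ears: bool, whiskers: bool, barks: bool) -> bool:
    if barks:
        return False
    return pointy_ears and whiskers

print(looks_like_cat(pointy_ears=True, whiskers=True, barks=False))   # True
print(looks_like_cat(pointy_ears=False, whiskers=True, barks=False))  # False, even though Scottish Folds are cats
```

Every edge case means another rule, written by another human.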

Say someone wanted to teach a program to recognize pictures of cats. Coding this by hand would take a programmer hundreds of hours of meticulous planning. But a program utilizing machine learning can examine thousands of pictures of cats and begin to recognize patterns until, eventually, it is able to pick out cats with little to no human intervention.
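
For a flavor of how the learning approach works, here’s a minimal sketch using Python’s scikit-learn library. The “pictures” are stand-in synthetic feature vectors rather than real photos (a real system would extract features from actual images), but the workflow is the essence of machine learning: show the model labeled examples and let it find the pattern itself.

```python
# A minimal sketch of learning "cat vs. not cat" from labeled examples.
# No human writes a single rule; the model fits weights from the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each picture is summarized by 64 numeric features, with cats (1)
# and other animals (0) drawn from slightly different distributions.
n, d = 2000, 64
cats = rng.normal(loc=0.5, scale=1.0, size=(n // 2, d))
others = rng.normal(loc=-0.5, scale=1.0, size=(n // 2, d))
X = np.vstack([cats, others])
y = np.array([1] * (n // 2) + [0] * (n // 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "learning": the classifier works out which features matter on its own.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on pictures it has never seen: {model.score(X_test, y_test):.2f}")
```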

Where machine learning fails to live up to the expectations we’ve created for artificial intelligence is when we ask the program to identify a dog. A program that has taught itself to recognize cats doesn’t understand what a dog is, only that a dog is not a cat. To identify dogs, the program would have to undergo a whole new learning process, which often leads to “catastrophic forgetting,” according to researchers at MIT. Essentially, it begins to learn what a dog is while forgetting what a cat is.
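
Here’s a toy demonstration of the effect, again in Python with synthetic data (a contrived sketch, not the MIT researchers’ actual experiment). The model learns one task, then trains on a second task that contradicts the first, and its skill on the original task collapses:

```python
# Catastrophic forgetting in miniature: a model trained incrementally on
# task A, then on task B, loses what it learned about task A. The two
# tasks here use opposite labelings, so B actively overwrites A.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
d = 32

def make_task(sign, n=1000):
    # Points labeled by which side of a hyperplane they fall on;
    # `sign` flips the labeling, so task B contradicts task A.
    X = rng.normal(size=(n, d))
    y = (sign * X[:, 0] > 0).astype(int)
    return X, y

X_a, y_a = make_task(+1)   # think of this as "recognize cats"
X_b, y_b = make_task(-1)   # and this as "now recognize dogs"

model = SGDClassifier(random_state=0)
model.partial_fit(X_a, y_a, classes=[0, 1])           # learn task A
print("task A accuracy after learning A:", model.score(X_a, y_a))

for _ in range(20):
    model.partial_fit(X_b, y_b)                       # retrain on task B
print("task A accuracy after learning B:", model.score(X_a, y_a))  # collapses
```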

It all comes back to causation, and a machine’s inability to understand cause and effect. A program can learn that pointy ears, whiskers and a small nose are associated with the word cat, but it cannot understand that those same features are why cats are not dogs.
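
A small, contrived sketch makes the gap between association and understanding visible. Suppose the training photos happen to contain a spurious cue that co-occurs with cats, say, an indoor background. The model latches onto the correlation, and when that coincidence goes away it falls apart, because it never knew why a cat is a cat:

```python
# Association without understanding: the model keys on a spurious cue
# (column 1, the "indoor background") that merely correlates with cats
# during training. When the correlation flips at test time, accuracy tanks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_data(n, cue_matches_cats):
    label = rng.integers(0, 2, size=n)                  # 1 = cat, 0 = dog
    real = label + rng.normal(scale=2.0, size=n)        # weak, noisy true signal
    cue = label if cue_matches_cats else 1 - label      # strong background cue
    X = np.column_stack([real, cue + rng.normal(scale=0.1, size=n)])
    return X, label

X_train, y_train = make_data(2000, cue_matches_cats=True)
X_test, y_test = make_data(2000, cue_matches_cats=False)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))  # near perfect
print("test accuracy:", model.score(X_test, y_test))        # worse than guessing
```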

It all seems simple enough to us; cause and effect are concepts we grasp from an early age, and that understanding is part of what gives us our sense of intelligence. But for a computer, the capability simply isn’t there yet. A computer using machine learning can look at a picture of a cat and tell you it’s a cat, but not why that means it isn’t a dog.

Will a computer ever be able to grasp cause and effect? Maybe. But for now, the world will have to wait on true artificial intelligence. That’s okay, though; perhaps it’s for the best. After all, if science fiction has taught us anything, it’s that a computer capable of judging cause and effect might quickly determine that humanity is the cause of many of the planet’s negative effects.
