Artificial intelligence (AI) has made spectacular progress in the last two decades. Computers can now diagnose medical images, predict customer behaviour, manage financial portfolios, compose poetry, and even generate art. They can do some of these things better than humans. As AI marches towards ever smarter systems, an old philosophical question has returned to haunt us: Is human intelligence qualitatively different from artificial intelligence, or are their differences only quantitative?
The revolution in AI is primarily powered by a class of algorithms called artificial neural networks. These algorithms process large quantities of data and extract statistical patterns from it. When called upon to perform a task, they simply match the input data to the most relevant patterns to compute the result.
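For readers who want a concrete picture of what "matching inputs to patterns" can mean, here is a deliberately simplified Python sketch. It is not a real neural network: it merely stores one average pattern per class and labels a new input by the closest stored pattern. All data and names in it are made up for illustration.

```python
import numpy as np

# A toy "nearest pattern" classifier, not a real neural network.
# Learning = extracting one average pattern per class from examples;
# inference = matching a new input to the closest stored pattern.

def learn_patterns(examples, labels):
    """Extract one statistical pattern (the mean vector) per class."""
    return {label: examples[labels == label].mean(axis=0)
            for label in np.unique(labels)}

def classify(x, patterns):
    """Match the input to the most relevant (closest) learned pattern."""
    return min(patterns, key=lambda label: np.linalg.norm(x - patterns[label]))

# Hypothetical 2-D data: two clouds of points, one per class.
rng = np.random.default_rng(0)
examples = np.vstack([rng.normal([0, 0], 0.5, size=(100, 2)),
                      rng.normal([3, 3], 0.5, size=(100, 2))])
labels = np.array(["zebra"] * 100 + ["horse"] * 100)

patterns = learn_patterns(examples, labels)
print(classify(np.array([2.8, 3.1]), patterns))  # prints "horse"
```

Real neural networks learn far richer patterns than these simple averages, but the principle is the same: statistics extracted from data, then matched against new inputs.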
Such computational pattern matching is surprisingly powerful and can simulate many functions of human intelligence. Consider the game of chess, which requires a synthesis of multiple abilities such as tactical thought, strategic patience, risk analysis, imagination, and foresight. Garry Kasparov wrote in 2007, “Chess is a unique cognitive nexus, a place where art and science come together in the human mind and are then refined and improved by experience.” Computers today easily defeat the best human grandmasters. If they can emulate these human-like abilities by reducing them to patterns, can all of human intelligence be reduced to mere pattern matching? Or do our brains have a secret sauce that cannot be recreated computationally?
Despite their astonishing successes, there is no denying that present AI systems suffer from some obvious limitations. They are brittle and can be fooled by small modifications to the input data. They are incapable of solving problems that deviate even slightly from what they were designed for. And they are data-hungry, needing unreasonably large volumes of data to learn from. Critics cite variations of these limitations to conclude that there is a fundamental difference between human intelligence and artificial intelligence.
This, however, may be a premature conclusion. If we look closer, it turns out that humans also suffer from these same limitations!
Consider the brittleness of AI. Researchers at Google demonstrated that a computer vision model could be fooled into thinking a banana was a toaster just by placing a small printed patch next to it. Images intentionally constructed to mislead AI models are called adversarial images. A 2018 paper showed that adversarial images that fool many AI models also fool humans when they are forced to make snap decisions under time pressure. Nature abounds with cases where one species adopts adversarial techniques to hack the behaviour of another. Cuckoos not only lay their eggs in the nests of other birds; cuckoo chicks also fool their foster parents into feeding them more generously than the parents feed their own offspring.
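The following toy Python sketch illustrates the principle behind such attacks. It uses a made-up linear "model", not the actual patch attack described above or a real vision network: each input feature is nudged by a tiny, barely perceptible amount, yet the model's decision swings wildly because thousands of small nudges all push in the same direction.

```python
import numpy as np

# Toy adversarial perturbation against a hypothetical linear "model".
# Every feature changes by at most 0.25, but the score collapses,
# because the tiny changes are all aligned against the decision.

rng = np.random.default_rng(1)
w = rng.normal(size=1000)   # weights of a hypothetical linear classifier
x = rng.normal(size=1000)   # an input; say score > 0 means "banana"

epsilon = 0.25                       # barely perceptible per-feature change
x_adv = x - epsilon * np.sign(w)     # nudge every feature against the score

print(w @ x)      # original score, typically a few tens in magnitude
print(w @ x_adv)  # drops by epsilon * sum(|w|), roughly 200 here
```

Deep networks are attacked with the same logic, except that the direction of the nudge is computed from the network's gradients rather than read directly off the weights.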
Marketing professionals are no strangers to the irrationality of the human mind as amply illustrated by Dan Ariely in his book 'Predictably Irrational'. The decisions we make are not always a product of conscious thought, but often outputs of subconscious processes that occur below our horizon of awareness. Simple tricks can significantly influence our decisions, just like adversarial examples can fool AI. Human intelligence may not be as brittle as machine intelligence, but brittle it certainly is!
Another shortcoming of AI is that models do not generalise to unseen data and fail to perform well in situations that deviate from what they were originally designed for. Yet again, we are not very different. Consider the travelling salesman problem: given a set of points, determine the shortest path that connects them all. Humans can solve this problem fairly quickly, since we have regularly faced such situations throughout our evolutionary history. But tweak it a little, asking for the longest path instead of the shortest, and our performance degrades dramatically. We have excellent 2D navigation skills but are remarkably weak at 3D navigation. We can easily handle two-digit arithmetic but struggle with three digits. Just like AI, our cognitive abilities fail to generalise to evolutionarily unfamiliar situations.
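A brute-force Python sketch of the task, on hypothetical random points, makes the asymmetry vivid: for a machine, the "shortest" version humans find intuitive and the "longest" version we find hard are the same search with the objective flipped.

```python
import numpy as np
from itertools import permutations

# Brute-force travelling-salesman sketch on made-up points. The same
# exhaustive search answers both the "shortest" and "longest" versions;
# only the objective flips.

def path_length(points, order):
    """Total length of the open path visiting the points in this order."""
    return sum(np.linalg.norm(points[a] - points[b])
               for a, b in zip(order, order[1:]))

def best_path(points, longest=False):
    """Try every visiting order; feasible only for a handful of points."""
    pick = max if longest else min
    return pick(permutations(range(len(points))),
                key=lambda order: path_length(points, order))

points = np.random.default_rng(2).uniform(size=(7, 2))  # 7 random 2-D points
print(best_path(points))                # shortest connecting path
print(best_path(points, longest=True))  # longest: same effort for a machine
```

Exhaustive search is, of course, hopeless beyond a handful of points; the contrast here is only that the machine finds neither variant harder than the other, while we do.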
Another oft-repeated argument is that AI needs more data than humans to learn how to perform a task. For example, AI might require hundreds of images to learn to differentiate between a zebra and a horse. A ten-year-old child can do it with just a few pictures or maybe even a two-line description. While this observation is entirely accurate, it still does not constitute a fundamental difference between artificial and human intelligence. The argument casually discounts the enormous volume of experiences the child has accumulated throughout her life. Compared to that, the AI has access to only a paltry few hundred images.
OpenAI, a non-profit company formerly backed by Elon Musk and Reid Hoffman, released its latest natural language processing system, GPT-3, last month. GPT-3 exhibits remarkable, almost human-like versatility. It is extraordinarily good at a wide range of language tasks, from writing fantasy stories to composing emails. It can translate between languages and write technical documents. It can answer common-sense and reasoning questions. In many cases, the output generated by GPT-3 is indistinguishable from human-written content.
Although GPT-3 performs far better than its predecessor GPT-2, the two models are qualitatively very similar; their differences are only quantitative. GPT-3 has 175 billion parameters, while GPT-2 has only 1.5 billion, a more than hundredfold difference. The success of GPT-3 suggests that intelligence scales with computational complexity. The human brain is estimated to have on the order of 100 trillion neural synapses. When AI systems become comparable in size to human brains, they may very well become as intelligent as us. It is worth recounting what Geoffrey Hinton, a leading AI researcher, said in 2013: “When you get to a trillion (parameters), you are getting to something that has got a chance of really understanding some stuff.”
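The arithmetic behind this scale argument is easy to check. The snippet below uses the parameter counts cited above together with a commonly quoted, order-of-magnitude estimate of 100 trillion synapses; the synapse figure is an assumption, not a precise measurement.

```python
# Back-of-the-envelope scale comparison. Parameter counts are as cited
# above; the synapse count is an order-of-magnitude estimate.
gpt2_params = 1.5e9
gpt3_params = 175e9
brain_synapses = 1e14   # ~100 trillion synapses (assumed estimate)

print(gpt3_params / gpt2_params)     # ~117: the GPT-2 -> GPT-3 jump
print(brain_synapses / gpt3_params)  # ~571: the brain is still far larger
```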
The human brain is made of atoms and molecules which obey the laws of physics. The process of thinking is carried out by the neurochemical circuits inside our brain. Therefore, human thinking is also, at some level, mechanical in nature. It cannot transcend the laws of physics. An open question is whether the neurochemical circuitry of the brain can be emulated by electronic computer circuits made of silicon and transistors. Science so far has not found a secret ingredient in our brains that no physical process can reproduce.
Viraj Kulkarni holds a master’s degree in Computer Science from the University of California, Berkeley, and is currently pursuing a PhD in Quantum Artificial Intelligence.
The opinions expressed in this article are those of the author and do not purport to reflect the opinions or views of THE WEEK.