A great deal of fear, excitement, and hype has lately grown around Artificial Intelligence (AI). This is partly because advances in machine learning keep surprising, and even overtaking, us in a growing number of domains, such as disease diagnosis, driving, language translation, and complex forecasting. To add fuel to the fire, AI enthusiasts keep making dramatic claims about the imminence of the Singularity, human-level AI, superintelligence, and the threat of machines taking over the world and even enslaving us! How warranted are these claims? We owe it to ourselves to better understand the current state, the potential, and the limitations of AI, to separate hype from reality, and to reflect on the problem of AI philosophically, so we can focus on the actual challenges we’re likely to face as AI becomes more common.
AI can certainly improve human lives on many fronts, but this promise coexists with the fear that AI will cause havoc in labor markets by appropriating not just more blue-collar work, as industrial automation has been doing for decades, but also a lot of skilled white-collar work. This disruption, which will further concentrate wealth, create jobless hordes, and spark new social upheavals in nation-states, will likely occur and needs to be taken seriously. What makes AI-led disruption different from earlier waves of technological disruption is that earlier the loss of manufacturing jobs was offset by the rise of service-sector jobs; this time the latter too are at risk, with no evident replacement in sight. This is a recipe for jobless growth, with GDP and unemployment rising together, a grave problem that may well require disruptive solutions.
As for the more dramatic claims about AI, my view, which I articulated in The Dearth of Artificial Intelligence (2009), remains that even if we develop ‘intelligent’ machines (much depends here on what we deem ‘intelligent’), odds are near-zero that machines will come to rival human-level general intelligence if their creation bypasses the particular embodied experience of humans forged by eons of evolution. By human-level intelligence (or strong AI, versus weak or domain-specific AI), I mean intelligence that is analogous to ours: rivaling our social and emotional intelligence; mirroring our instincts, intuitions, insights, tastes, aversions, and adaptability; matching how we make sense of brand-new contexts and use our creativity, imagination, and judgment to breathe meaning and significance into novel ideas and concepts; approaching being and time as we do, informed by our fear, desire, delight, and sense of aging and death; and so on. Incorporating all of this in a machine will not happen by combining computing power with algorithmic wizardry. Unless machines can experience and relate to the world as we do, and no one has a clue how to achieve that, machines cannot make decisions as we do. Unless machines can suffer like us, they will not think like us. (Another way to say this is that reductionism has limits, especially for highly complex systems like the biosphere and human mind/culture, where the laws of nature run out of descriptive and predictive steam, not because our science is inadequate but due to irreducible and unpredictable emergent properties inherent in complex systems.)
After all, what would it take for us to consider AI to have surpassed us in intelligence? I suggest it would have to persuasively display at least the following traits: a genuine sense of wonder, empathy, and the ability to ponder its existential purpose and the good life; real self-awareness and the ability to reflect and theorize about its place in the world; the ability to willfully deviate from the life path that others have chosen for it and to sensibly explain why it did so; a vivid imagination and a creative and subjective moral life; a sense of social identity and nuanced emotional bonds (can a ‘computed reason’ detached from emotion be deemed intelligent?); errors of learning and judgment that are akin to human errors; and so on. So the challenge of strong AI, let alone of superintelligence, requires that we first make AI comparably ‘smart like us’, because the best among us are the most intelligent creatures we know. Otherwise our AI robots won’t evoke our respect, nor be seen as anything more than captivating, useful, or potentially dangerous toys.
Despite all the advances in machine learning, we are nowhere near attaining human-level intelligence, and may never be, for good reason. Machines will keep beating humans at various domain-specific tasks using little more than layered neural nets and training, but if machines cannot attain human-level intelligence, it follows that they cannot surpass human intelligence either, whatever the hype merchants of AI may claim. But enough of my commentary. Here are seven recent essays that I believe are sensible and helpful in making sense of AI today.
1. Rise of the machines by an anonymous author [free registration required]
"Part of the problem ... is a confusion around the word “intelligence”. Computers can now do some narrowly defined tasks which only human brains could manage in the past. An image classifier may be spookily accurate, but it has no goals, no motivations, and is no more conscious of its own existence than is a spreadsheet or a climate model.… AI uses a lot of brute force to get intelligent-seeming responses from systems that, though bigger and more powerful now than before, are no more like minds than they ever were. It does not seek to build systems that resemble biological minds. As Edsger Dijkstra, another pioneer of AI, once remarked, asking whether a computer can think is a bit like asking “whether submarines can swim”."
2. Artificial Stupidity by Ali Minai [Highly recommended]
"Intelligent machines will not be more rational; they will probably be more profoundly irrational (or boundedly rational) than humans in unpredictable and inscrutable ways ... If and when they come to pass, truly intelligent machines will have their own irrationalities, their own instincts, intuitions, and heuristics. They will make choices based on values that have emerged within their embodiment as a result of their development and learning. And their heuristics and their choices will often not be consistent with the common values that most humans share because of a shared biological origin."
3. Should we be afraid of AI? by Luciano Floridi
"True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions."
4. The body is the missing link for truly intelligent machines by Ben Medlock
"In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text ... Machine learning has produced many tremendous practical applications in recent years ... But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information."
5. The Great A.I. Awakening by Gideon Lewis-Kraus
How Google transformed Google Translate — and how machine learning is poised to reinvent computing itself.
6. From Technologist to Philosopher by Damon Horowitz
"I realized that, while I had set out in AI to build a better thinker, all I had really done was to create a bunch of clever toys—toys that were certainly not up to the task of being our intellectual surrogates. And it became clear that the limitations of our AI systems would not be eliminated through incremental improvements. We were not, and are not, on the brink of a breakthrough that could produce systems approaching the level of human intelligence. I wanted to better understand what it was about how we were defining intelligence that was leading us astray: What were we failing to understand about the nature of thought in our attempts to build thinking machines? And, slowly, I realized that the questions I was asking were philosophical questions—about the nature of thought, the structure of language, the grounds of meaning. So if I really hoped to make major progress in AI, the best place to do this wouldn't be another AI lab. If I really wanted to build a better thinker, I should go study philosophy."
7. Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' by Olivia Solon
"One of the key problems with artificial intelligence is that it is often invisibly coded with human biases. [A group of Chinese researchers] claimed to have developed a system that could predict criminality based on someone’s facial features. The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed it was free from bias. “We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”
I wish to end with a very insightful observation and some sound advice on AI from the philosopher Daniel C. Dennett (while not necessarily endorsing his larger approach to AI).
"The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence…. We should hope that new cognitive prostheses will continue to be designed to be parasitic, to be tools, not collaborators. Their only “innate” goal, set up by their creators, should be to respond, constructively and transparently, to the demands of the user."