A great deal of fear, excitement, and hype has lately grown around Artificial Intelligence (AI). This is partly because advances in machine learning keep surprising—and even overtaking—us in a growing number of domains, such as disease diagnosis, driving, language translation, and complex forecasting. To add fuel to the fire, AI enthusiasts keep making dramatic claims about the imminence of the Singularity, human-level AI, superintelligence, and the threat of machines taking over the world and even enslaving us! How warranted are these claims? We owe it to ourselves to better understand the current state, potential, and limitations of AI, to separate hype from reality, and to reflect on the problem of AI philosophically—so we can focus on the actual challenges we’re likely to face as AI becomes more common.
AI can certainly improve human lives on many fronts, but this promise coexists with the fear that AI will cause havoc in labor markets by appropriating not just more blue-collar work, as industrial automation has been doing for decades, but also a great deal of skilled white-collar work. This disruption—which will further concentrate wealth, create jobless hordes, and cause new social upheavals in nation-states—will likely occur and needs to be taken seriously. What makes AI-led disruption different from earlier waves of technological disruption is that in earlier waves the loss of manufacturing jobs was offset by the rise of service-sector jobs, whereas this time the latter too are at risk, with no evident replacement in sight. This is a recipe for jobless growth, with GDP and unemployment rising together—a grave problem that may well require disruptive solutions.
As for the more dramatic claims about AI, my view, which I articulated in The Dearth of Artificial Intelligence (2009), remains that even if we develop ‘intelligent’ machines (much depends here on what we deem ‘intelligent’), the odds are near zero that machines will come to rival human-level general intelligence if their creation bypasses the particular embodied experience of humans forged by eons of evolution. By human-level intelligence (or strong AI, versus weak or domain-specific AI), I mean intelligence analogous to ours: rivaling our social and emotional intelligence; mirroring our instincts, intuitions, insights, tastes, aversions, and adaptability; making sense of brand-new contexts as we do, using creativity, imagination, and judgment to breathe meaning and significance into novel ideas and concepts; approaching being and time as we do, informed by our fear, desire, delight, and sense of aging and death; and so on. Incorporating all of this in a machine will not happen by combining computing power with algorithmic wizardry. Unless machines can experience and relate to the world as we do—and no one has a clue how to make that happen—machines can’t make decisions as we do. Unless machines can suffer like us, they will not think like us. (Another way to say this is that reductionism has limits, especially for highly complex systems like the biosphere and the human mind/culture, where the laws of nature run out of descriptive and predictive steam—not because our science is inadequate but because of irreducible and unpredictable emergent properties inherent in complex systems.)