As a graduate student of computer engineering in the early 90s, I recall impassioned late-night debates on whether machines could ever be intelligent—intelligent, as in mimicking the cognition, common sense, and problem-solving skills of ordinary humans. Neural network research was hot, and one of my professors was a star in the field. Scientists and bearded philosophers spoke of ‘humanoid robots.’ A breakthrough seemed inevitable and imminent. Still, I felt certain that Artificial Intelligence (AI) was a doomed enterprise.
I argued out of intuition, from a sense of the immersive nature of our life in the world—how much we subconsciously acquire and summon to get through life, how we arrive at meaning and significance not in isolation but through embodied living, and how contextual, fluid, and intertwined this is with our moods, desires, experiences, selective memory, physical body, and so on. How can we program all this into a machine and have it pass the unrestricted Turing test? How could a machine that did not care about its own existence, as humans do, ever behave as humans do? Can a machine become socially and emotionally intelligent like us without viscerally knowing infatuation, joy, loss, suffering, the fear of death and disease? In hindsight, it seems fitting that I was then also drawn to Dostoevsky, Camus, and Kierkegaard.
My interlocutors countered that while extremely complex, the human brain is clearly an instance of matter, amenable to the laws of physics. Our intelligence, and everything else that informed our being in the world, had to be somehow ‘coded’ in our brain’s circuitry, including the great many symbols, rules, and associations we relied on to get through a typical day. Was there any reason why we couldn’t ‘decode’ and reproduce it in a machine some day? Couldn’t a future supercomputer mimic our entire neural circuitry and be as smart as us? They posited a reductionist and computational approach to the brain that many, including Steven Pinker and Daniel Dennett, continue to champion today. Recently, Dennett declared in his sonorous voice, “We are robots made of robots made of robots made of robots.”
But despite the big advances in computing—for example, today’s supercomputers are ten million times faster than those of the early 90s—AI has fallen woefully short of its ambition and hype. Instead, we have “expert systems” that process predetermined inputs in specific domains, perform pattern matching and database lookups, and learn to adapt their outputs algorithmically. Examples include chess software, search engines, speech recognition, industrial and service robots, and traffic and weather forecasting systems. Machines have done well with a great many tasks that we ourselves can, or already do, pursue algorithmically—including many we don’t yet recognize as such—as in searching for the word “ersatz” in an essay, making cappuccino, restacking books in a library, navigating our car in a city, or landing a plane. But so much else that defines our intelligence remains well beyond machines, such as projecting our creativity and imagination to understand new contexts and their significance, or figuring out how and why new sensory stimuli are relevant or not. Why is AI in such a braindead state? Is there any hope for it? Let’s take a closer look.
Descartes, who held that science and math would one day explain everything in nature, understood the world as a set of meaningless facts to which the mind assigned values (or functions, according to John Searle). Early AI researchers accepted Descartes’ mental representations, embraced Hobbes’ view that reasoning was calculating, Leibniz’s idea that all knowledge could be expressed as a set of primitives, and Kant’s belief that all concepts were rules. At the heart of Western rationalist metaphysics—which shares a remarkable continuity with ancient Greek and Christian metaphysics—lay the Cartesian mind-body dualism that became the dominant inspiration for early AI research.
Early researchers pursued what is now known as ‘symbolic AI.’ They assumed that our brain stored discrete thoughts, ideas, and memories at discrete points—that information was “found” rather than “evoked” by humans. In other words, the brain was a repository of symbols and rules that mapped the external world into neural pulses. And so the problem boiled down to creating a gigantic knowledge base with efficient indexing, i.e., a search engine extraordinaire. They thought that a machine could be made as smart as a human by storing context-free facts along with meta-rules able to reduce the search space effectively. Marvin Minsky of the MIT AI Lab went as far as claiming that our common sense could be reproduced in machines by representing ten million facts about objects and their functions.
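For the programmers among us, the flavor of this paradigm can be conveyed in a toy sketch. The facts and the single rule below are my own invention, not drawn from any actual GOFAI system: context-free facts sit in a store, and rules derive new facts by brute enumeration.

```python
# Toy sketch of a 'symbolic AI' knowledge base: context-free facts
# plus an inference rule, applied by exhaustive search. Illustrative
# only; real expert systems were vastly larger, but the shape is this.

facts = {
    ("hammer", "is-a", "tool"),
    ("hammer", "used-for", "driving-nails"),
    ("nail", "is-a", "fastener"),
}

def infer(facts):
    # Rule (invented): anything used for driving nails is relevant
    # to carpentry. Every rule must be hand-coded in this style.
    derived = set(facts)
    for (subj, rel, obj) in facts:
        if rel == "used-for" and obj == "driving-nails":
            derived.add((subj, "relevant-to", "carpentry"))
    return derived

kb = infer(facts)
print(("hammer", "relevant-to", "carpentry") in kb)  # True
```

Everything such a system “knows” must be spelled out in advance; nothing in the store says when, or whether, a fact matters.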
It is one thing to feed millions of facts and rules into a computer; it is quite another to get it to recognize their significance and relevance. The ‘frame problem,’ as this is called, eventually became insurmountable for the ‘symbolic AI’ research paradigm:
If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated? 
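The difficulty can be made concrete with a hypothetical sketch. The facts and the event here are invented for illustration: after a single event in the world, nothing in the representation itself says which stored facts remain valid, so the program must, in principle, re-examine all of them.

```python
# Hypothetical sketch of the frame problem: after one event, which
# represented facts still hold? Absent knowledge of relevance, the
# program can only re-check every fact against the world.

world_model = {
    "cup-on-table": True,
    "door-closed": True,
    "light-on": True,
}

def update_after_event(model, recheck):
    # 'recheck' stands in for costly re-sensing of the world. Every
    # fact is re-examined, even ones intuitively untouched by the
    # event -- that is the frame problem in miniature.
    return {fact: recheck(fact) for fact in model}

# Event: someone bumps the table. Only the cup is affected, but the
# program has no way of knowing that in advance.
checks_performed = []
def recheck(fact):
    checks_performed.append(fact)
    return fact != "cup-on-table"

new_model = update_after_event(world_model, recheck)
print(len(checks_performed))  # 3 -- all facts re-checked for one event
```

With millions of facts instead of three, this exhaustive re-checking becomes hopeless, and no one found a principled way to restrict it.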
GOFAI — Good Old Fashioned Artificial Intelligence — as symbolic AI came to be called, soon turned into a degenerative research program. It is unsettling to think how many prominent scientists and philosophers held, and continue to hold, such naïve assumptions about how humans operate in the world. A few tried to understand what went wrong and looked for a new paradigm for AI. No longer could they ignore the withering critiques of their work by Professor Hubert Dreyfus, who drew inspiration from the radical ideas of the German philosopher Martin Heidegger (1889-1976). It began dawning on them that humans were far more complex, with their subconscious familiarity and skillful coping with the world, nonlinear decision-making, ability to assess and adapt to new situations, and the role of things like purpose, intention, and creativity that shaped, and were in turn shaped by, their meaningful organization of the world.
In many ways, Heidegger stood opposed to the entire edifice of Western philosophy. A hammer, he pointed out, cannot be represented by just its physical features and function, detached from its relationship to nails and the anvil, the physical experience and skill of hammering, its role in building fine furniture and comfortable houses, etc. Merely associating facts, values or function with objects cannot capture the human idea of a hammer, with its role in the meaningful organization of the world as we experience it.
Or consider music speakers. One way to represent them, in the manner of rationalists, is as objects with physical properties (shape, dimensions, color, material, attached wires, etc.), to which is then assigned a value, use, or function. But this is not how we actually experience them. We experience them as speakers, inseparable from the act of listening to music, the ambience they add to our living room, their impact on our mood, and so on. We do not understand them as context-free, object-value pairs; we understand them through our context-laden use of them. When someone asks us to describe our speakers, we have to pause and think about their physical attributes. According to Heidegger, writes Professor William Blattner:
The philosophical tradition has misunderstood human experience by imposing a subject-object schema upon it. The individual human being has traditionally been understood as a rational animal, that is, an animal with cognitive powers, in particular the power to represent the world around it … the notion that human beings are persons and that persons are centers of subjective experience has been broadly accepted … Where the tradition has gone wrong is that it has interpreted subjectivity in a specific way, by means of concepts of ‘inner’ and ‘outer,’ ‘representation’ and ‘object’ … [which] dominates modern philosophy, from Descartes through Kant through Husserl. 
The Western philosophical tradition, according to Heidegger, “has been focused on self-consciousness and moral accountability, in which we experience ourselves as distinct from the world and others.” Such dualism dominates modern science, but fails to describe how humans relate to the world, which is quite holistic. Heidegger contends that “we are disclosed to ourselves more fundamentally than in cognitive self-awareness or moral accountability. We are disclosed to us in so far as it matters to us who we are. Our being is an issue for us, an issue we are constantly addressing by living forward into a life that matters to us.”
In Being and Time, “Heidegger argues that meaningful human activity, language, and the artifacts and paraphernalia of our world not only make sense in terms of their concrete social and cultural contexts, but also are what they are in terms of that context.” He claimed that the subject-object model of experience, in which we see ourselves as distinct from the world and others, “does not do justice to our experience, that it forces us to describe our experience in awkward ways, and places the emphasis in our philosophical inquiries on abstract concerns and considerations remote from our everyday lives.” Our being in the world is “more basic than thinking and solving problems; it is not representational at all.” When we are absorbed in work, say, using familiar pieces of equipment, “we are drawn in by affordances and respond directly to them, so that the distinction between us and our equipment—between inner and outer—vanishes.”
[Heidegger] argues that our fundamental experience of the world is one of familiarity. We do not normally experience ourselves as subjects standing over against an object, but rather as at home in a world we already understand. We act in a world in which we are immersed. We are not just absorbed in the world, but our sense of identity, of who we are, cannot be disentangled from the world around us. We are what matters to us in our living; we are implicated in the world. 
In other words, it makes no sense to believe that our minds are built on atomic, context-free sets of facts and rules, objects and predicates, storage and processing units. No wonder the methods of natural science, which look for structural primitives such as particles and forces, fail to describe our experience of the world. Contrary to the implicit belief of western philosophy and AI research, a computational theory of the mind may be simply impossible. Isn’t our common sense “a combination of skills, practices, discriminations, etc., which are not intentional states, and so, a fortiori, do not have any representational content to be explicated in terms of elements and rules?”  The older Wittgenstein agreed, adding in 1948: “[N]othing seems more possible to me than that people some day will come to the definite opinion that there is no copy in the ... nervous system which corresponds to a particular thought, or a particular idea, or [a particular] memory.”
A conceptual advance for AI came when some researchers noted that a problem lay in the fact that a computer’s model of the world was not real. The human ‘model’ of the world was the world itself, not a static description of it. What if a robot too used the world as its model, “continually referring to its sensors rather than to an internal world model”?  But this approach worked only in micro-environments with a limited set of features recognized by the robot’s sensors. The robots did nothing more sophisticated than ants. As in the past, no one knew how to make the robots learn, or respond to a change in context or significance. This was the backdrop against which AI researchers began turning away from symbolic AI to simulated neural networks, with their promise of self-learning and establishing relevance. Slowly but surely, the AI community began embracing Heideggerian insights.
Machine neural networks, starting with a blank slate (unlike humans), attempt to simulate biological neurons using a connectionist approach whose connection weights continually adapt based on what the network processes and learns. In symbolic AI, a feature “is either present or not. In the net, however, although certain nodes are more active when a certain feature is present in the domain, the amount of activity varies not just with the presence or absence of this feature, but is affected by the presence or absence of other features as well.”  Learning is guided by one of three paradigms: supervised learning from labeled examples, unsupervised learning that discovers structure in unlabeled data, or reinforcement learning that optimizes a reward signal.
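The contrast with symbolic features can be seen in a minimal sketch of a single connectionist node. The weights below are invented for illustration, not trained: the node’s activity is a graded function of several features at once, not a present-or-absent flag.

```python
import math

# Minimal sketch of one connectionist node (weights invented, not
# trained): its activity varies continuously with *all* inputs,
# unlike a symbolic predicate that is simply present or absent.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A node sensitive to feature A, but modulated by features B and C.
weights = {"A": 2.0, "B": -1.0, "C": 0.5}

def activation(features):
    return sigmoid(sum(weights[f] * v for f, v in features.items()))

# Feature A is 'present' in both cases, yet the node's activity
# differs because another feature shifts it.
a_alone = activation({"A": 1.0, "B": 0.0, "C": 0.0})
a_with_b = activation({"A": 1.0, "B": 1.0, "C": 0.0})
print(round(a_alone, 2), round(a_with_b, 2))  # 0.88 0.73
```

This holism is what made nets attractive after symbolic AI stalled: nothing in the net corresponds to a discrete, context-free fact.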
But the results are not promising. Supervised learning, for instance, remains mired in very basic problems, such as the net’s inability to generalize predictably based on the categories intended by the trainer (except for toy problems that leave little room for ambiguity). For example, a net trained to recognize palm trees in photos taken on a sunny afternoon may generalize on their shadows instead, and fail to detect any trees in photos from an overcast day. The sample size can be enlarged, but the point is that the trainer doesn’t know what the net is training on, and such category errors persist until an exception shows up. Another net trained to recognize speech may keel over when it encounters a metaphor, say, “Sally is a block of ice.”  Outside its training domain, the net is also unable to recognize other contexts, or to know when it is not appropriate to apply what it has learned—problems that humans dynamically solve using their social skills, biological imperatives, imagination, etc.
Reinforcement learning has its own pitfalls. For instance, what is an objective measure of immediate reinforcement? Even if we take a simplistic view that humans act to maximize “satisfaction” and assign a “satisfaction score” to all outcomes in all possible situations, we need some way to model how “satisfaction” may be impacted by our moods, desires, body aches, etc., as well as their correlation with inputs in a diversity of situations (weather, familiar faces, noise, motion, etc.). But does anyone know what, if any, ‘model rules’ humans obey in their daily behavior? Dreyfus sums it up:
“Perhaps a [simulated neural] net … If it is to learn from its own "experiences" to make associations that are human-like rather than be taught to make associations which have been specified by its trainer, it must also share our sense of appropriateness of outputs, and this means it must share our needs, desires, and emotions and have a human-like body with the same physical movements, abilities and possible injuries.” 
In other words, the success of neural nets depends not only on our understanding of how we breathe significance and meaning into our world and finding a way to capture it in the language of machines—these nets also need to come into a social world similar to that of humans and project themselves in time the way humans do with their physical bodies, in order to have a shot at behaving like humans. None of this is even remotely clear to anyone, nor is it clear that it is even amenable to modeling on digital computers. To insist otherwise is not only an article of faith, it also seems to me increasingly obtuse and wild. 
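To make the earlier reinforcement worry concrete, here is a toy tabular Q-learning sketch in which the states, actions, and the “satisfaction” reward are all invented by me. The point is that the reward function must be written down by hand; for human satisfaction, shaped by moods, desires, and body aches, no one knows what to write.

```python
import random

# Toy tabular Q-learning (states, actions, and reward all invented).
# The machinery works only because someone hand-specifies reward().

random.seed(0)
states = ["hungry", "fed"]
actions = ["eat", "wait"]

def reward(state, action):
    # The crux: a designer must author this function. For human
    # 'satisfaction' there is no known formula to put here.
    return 1.0 if (state == "hungry" and action == "eat") else 0.0

def step(state, action):
    return "fed" if action == "eat" else state

q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.5, 0.9

for _ in range(500):
    s = random.choice(states)    # sample a situation
    a = random.choice(actions)   # explore uniformly
    nxt = step(s, a)
    target = reward(s, a) + gamma * max(q[(nxt, b)] for b in actions)
    q[(s, a)] += alpha * (target - q[(s, a)])

print(q[("hungry", "eat")] > q[("hungry", "wait")])  # True: learns to eat
```

Within this two-state toy world the algorithm dutifully learns to eat when hungry; the open question of the preceding paragraphs is whether any reward function, however elaborate, could capture what humans are maximizing, if indeed we are maximizing anything at all.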
Notes & Bibliography:
 Hubert L. Dreyfus, "Why Heideggerian AI Failed and how Fixing it would Require making it more Heideggerian," 2006.
 William Blattner, "Heidegger’s Being and Time," Continuum, 2006, p. 9.
 Ibid., pp. 4-5.
 Ibid., p. 48.
 Ibid., p. 12.
 Hubert L. Dreyfus, "What Computers Still Can’t Do: A Critique of Artificial Reason," MIT Press, 1992.
 Hubert L. Dreyfus and Stuart E. Dreyfus, "Making a Mind vs. Modeling the Brain: AI Back at a Branchpoint," UC Berkeley.
 Think Ray Kurzweil, Nick Bostrom, and Bill Joy, with their fantasies of the technological singularity, mind uploading, etc.
 Jonathan Ree, "Heidegger," Routledge, 1999.
 Ari N. Schulman, "Why Minds Are Not Like Computers," The New Atlantis, Number 23, Winter 2009, pp. 46-68.