In Philosophy Now's special issue on consciousness, I explore "the complexity of consciousness and its implications for artificial intelligence."
As a graduate student of computer engineering in the early '90s, I recall impassioned late-night debates on whether machines could ever be intelligent – meaning, possess the cognition, common sense, and problem-solving skills of ordinary humans. Scientists and bearded philosophers spoke of 'humanoid robots'. Neural network research was hot, and one of my professors was a star in the field. A breakthrough seemed inevitable and imminent. Still, I felt certain that Artificial Intelligence (AI) was a doomed enterprise.
I argued out of intuition, from a sense of the immersive nature of our life: how much we subconsciously acquire and call upon to get through life; how we arrive at meaning and significance not in isolation but through embodied living; and how contextual, fluid, and intertwined this is with our moods, desires, experiences, selective memory, physical body, and so on. How can we program all this into a machine and have it pass the unrestricted Turing test? How could a machine that did not care about its existence, as humans do, ever behave as humans do? In hindsight, it seems fitting that I was then also drawn to Dostoevsky, Camus, and Kierkegaard.
More here (read two discussions on it: one, two). Also see the discussion on a version that appeared earlier on 3QD.