In Philosophy Now's special issue on consciousness, I explore "the complexity of consciousness and its implications for artificial intelligence."
As a graduate student of computer engineering in the early '90s, I recall impassioned late-night debates on whether machines could ever be intelligent – meaning, possessing the cognition, common sense, and problem-solving skills of ordinary humans. Scientists and bearded philosophers spoke of ‘humanoid robots’. Neural network research was hot, and one of my professors was a star in the field. A breakthrough seemed inevitable and imminent. Still, I felt certain that Artificial Intelligence (AI) was a doomed enterprise.
I argued out of intuition, from a sense of the immersive nature of our lives: how much we subconsciously acquire and call upon to get through life; how we arrive at meaning and significance not in isolation but through embodied living; and how contextual, fluid, and intertwined all this is with our moods, desires, experiences, selective memory, physical body, and so on. How could we program all this into a machine and have it pass the unrestricted Turing test? How could a machine that does not care about its existence as humans do ever behave as humans do? In hindsight, it seems fitting that I was then also drawn to Dostoevsky, Camus, and Kierkegaard.
More here (read two discussions on it: one, two). Also see the discussion on a version that appeared earlier on 3QD.
I ran across this start-up, Vicarious Systems, which is "building software that thinks and learns like a human." In a video on its site, its two boyish founders, Dileep George and D. Scott Brown, describe their approach to AI. Folks, if you happen to be looking for investment opportunities, I say give this one a wide berth. Their approach is new, but it is still full of philosophical naiveté about human intelligence, which they imagine can be produced by "developing algorithms that mimic the function of the human brain". It builds on George's Ph.D. dissertation (2008).
One analogy the founders used repeatedly for their model of human intelligence was the Wright brothers' functional model of bird flight. The idea is that just as the first plane achieved flight with a functional model of the wings birds use, so will their functional model of the relevant activity in the neocortex produce intelligence; they see a similar leap of faith in both cases. By studying the neocortex, they hope to gain "important clues about the nature of the assumptions made by the neocortex" that are "relevant from a learning point of view". In other words, they plan to extract some basic rules from watching activity in the neocortex as it learns and digitally encode these rules in their learning algorithms, in the hope that this will lead to machines that mimic human learning and produce human intelligence over time. Their initial goal is to "develop a vision system that understands the contents of images and videos the way humans do."
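To make concrete what "extracting rules and digitally encoding them in a learning algorithm" can mean at its crudest, here is a minimal toy sketch in Python. It is emphatically not Vicarious's method: the 4x4 binary images, the translation-tolerance assumption, and every name in it are invented for illustration. One structural "rule" (local patterns matter, wherever they appear in the image) is hard-coded, and the rest is learned from labeled examples.

```python
# Toy sketch only (not Vicarious's actual algorithm): hard-code one structural
# assumption about vision (objects are local patterns that may appear anywhere
# in the image) and learn everything else from labeled examples.

import numpy as np

def local_features(img, size=2):
    """Slide a size x size window over the image and collect every patch."""
    patches = []
    h, w = img.shape
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            patches.append(img[r:r + size, c:c + size].flatten())
    return np.array(patches)

def pooled_signature(img):
    """Translation-tolerant signature: for each possible binary 2x2 pattern,
    record whether it occurs anywhere in the image (max-pooling over positions).
    This is the hand-coded 'assumption'; nothing here is learned."""
    signature = np.zeros(16)                  # 2^4 possible binary 2x2 patterns
    for p in local_features(img):
        index = int("".join(str(int(v)) for v in p), 2)
        signature[index] = 1.0                # pattern occurs somewhere
    return signature

class ToyVisionLearner:
    """Learned part: average the signatures of labeled examples into one
    prototype per class, then classify new images by nearest prototype."""
    def fit(self, images, labels):
        self.prototypes = {}
        for label in set(labels):
            sigs = [pooled_signature(im)
                    for im, l in zip(images, labels) if l == label]
            self.prototypes[label] = np.mean(sigs, axis=0)
        return self

    def predict(self, img):
        sig = pooled_signature(img)
        return min(self.prototypes,
                   key=lambda label: np.linalg.norm(sig - self.prototypes[label]))

# Tiny demo: a vertical bar vs. a horizontal bar, each shown in two positions.
vert1 = np.array([[0, 1, 0, 0]] * 4)
vert2 = np.roll(vert1, 1, axis=1)             # same bar, shifted right
horiz1 = vert1.T
horiz2 = np.roll(horiz1, 1, axis=0)           # same bar, shifted down

learner = ToyVisionLearner().fit([vert1, horiz1], ["vertical", "horizontal"])
print(learner.predict(vert2))                 # expected: "vertical"
print(learner.predict(horiz2))                # expected: "horizontal"
```

Even in this caricature, the division of labor is visible: the programmer decides in advance which regularities count, and the "learning" only fills in prototypes within that frame. Whether any stack of such encoded assumptions adds up to understanding images "the way humans do" is exactly the question at issue.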
Well, you heard it here first. I'll keep an eye out and post a note when Vicarious collapses for real under the weight of its unsophistication.
Posted by: Namit | November 29, 2011 at 09:04 AM
Just noticed that the New York Times Opinionator mentioned this article late last year in a "gathering of recent philosophy-related links".
The discussion forum hosted by the magazine has also been abuzz, with dozens of comments posted.
Posted by: Namit | January 12, 2012 at 11:49 AM