The subjective nature of consciousness makes it a very slippery thing.
One could imagine a Chinese-Room Machine which reads inputs and produces outputs by a fixed algorithm, but does it so convincingly that most people would believe it was human.
One could also imagine a Pure-Perception Machine, which is perfectly well aware of itself, and perhaps even has a few rudimentary senses, a conception of space and time and consciousness and so forth, but has no capacity to communicate these conceptions to the outside world.
And so I tell a tale of two fictional MIT grad students, toiling in the AI Lab at the Stata Center, both trying to invent an artificial intelligence.
John Turing builds a Chinese-Room Machine, which he calls Creamy. It's completely programmed, and he even delights in showing people the 4 million lines of code that it took to create all of Creamy's responses. Yet when people communicate with Creamy (usually by instant messaging, so as not to be biased by appearance or sound), they have a terribly difficult time differentiating Creamy from John and from human strangers, and get it wrong as often as they get it right.
Jacque Descartes builds a Pure-Perception Machine, which he calls Peppy. He claims that it is designed based upon new state-of-the-art theories of psychology, but when he tries to explain them, nobody else understands what he's talking about. Jacque insists that Peppy is fully self-aware, and is even learning to understand concepts of philosophy and identity, but every time he tries to demonstrate these capabilities, Peppy's performance is marginal at best, often outright disappointing. Jacque always claims that Peppy is exercising its genuine free will in choosing not to cooperate, but audiences are rarely convinced.
Is either of these two machines conscious? If we take both John and Jacque at face value, then Peppy has the subjective features of consciousness and Creamy does not. If, on the other hand, we use the test developed by John's grandfather Alan, we find that Creamy displays behaviors we associate with consciousness, whereas Peppy does not.
So which machine is the AI? Which grad student deserves the MacArthur Fellowship?
I think I have to say that John and his granddaddy are wrong. Creamy isn't conscious. That's not the way to AI, and never will be.
But Jacque hasn't convinced me either; he could be lying, or he could be misunderstanding his own experiment. I don't really believe that Peppy is conscious.
I do believe Jacque is conscious, though, don't I? Why do I believe that? And what would Jacque have to do to prove that Peppy is in fact conscious?
I think what he has to do is justify his theory. He has to explain how consciousness arises in humans in a logical and scientific way, and then show how he replicated that in Peppy.
In the same way, if you could prove to me that Jacque himself is actually a Chinese-Room Machine (peel back his skull and point out the positronic net, download his runtime logs for the last three weeks onto a hard drive), then I wouldn't believe he was conscious either. But as far as I know Jacque is human, and as far as I know humans are conscious.
Philosophical empiricism is simply not true; you don't need to see to believe. You need to understand to believe.
I don't believe in quarks because I've touched, seen, or smelled a quark. I believe in quarks because I've seen experimental results that, when considered logically, only make sense if there are quarks.
In the same way, I know I have a self, because nothing I experience would make sense if there were not an I to do the experiencing.
If Jacque can explain and justify his psychological theories, and then show how he replicated those same effects in the construction of his machine, it is Jacque, not John, who deserves the MacArthur.