Probably once a month I receive a call from my father about a new horrific vision involving AI. There seems to be a new book out almost every month in the bio-psycho-neuro-eschatological horror genre, usually sporting a few vatic quotes from the great seer Elon Musk, who certainly does not have a conflict of interest in these matters. It was refreshing to see a set of articles that did not resort to so many scare tactics to cultivate interest. The article by MIT Tech Review on AlphaGo, for instance, prominently featured Demis Hassabis, who treats neuroscience research more as a source of clues for clever system design than as a bold expedition toward the foundations of experience. I also appreciated that he reportedly gave Elon Musk a much-needed “anti-pep talk” about AI. The article quoted Musk comparing AI to “summoning the demon,” as though there were some key to the seventh seal we just haven’t quite outlined in all our neuroscience data.
Tania Lombrozo offers a clever description of the issues we face with AI: we are still chauvinists about thinking. With computer technology suddenly surpassing humans in chess and Go, we do our best to shore up the limits of human consciousness, suddenly caring about Descartes and other philosophers of mind. Koch and Tononi, writing for the IEEE, somehow condense the Cartesian reduction into a purely formal theory they call IIT (integrated information theory), offered as a definition of what consciousness is. Following this theory, they ultimately conclude that consciousness cannot be achieved on current hardware, but could be accomplished with neuromorphic hardware (a shocker, coming from the institute of electrical engineers, that the solution would be found in hardware). They put forward a materialist argument that to reproduce consciousness you must replicate the physical system in which it emerges. We are always deferring AI to the not-yet, somewhat like Derrida’s messianicity without a Messiah. Whenever computer programs appear to shatter the barriers set up around human consciousness, we propose new regulations that would establish more comprehensive limits. We are chauvinists of consciousness, warding away strange vigils of conscious life. When we talk about consciousness we expect to find some identifiable trait in our brains, like a certain pattern of neuron development. The problem could be that the terms consciousness and thinking do not designate any clearly delimited characteristics. Wittgenstein says in the Blue Book that when we ask whether machines can think, we could just as well be asking whether the number three has a color. The word ‘think’ seems somehow misapplied, as though we have reached outside its everyday use in thinking about new advances in technology. Thinkers such as Deleuze have abandoned any attempt to identify and characterize consciousness or thinking; the thinking subject becomes one of many interlocking machines propelled by some desiring force. As Lombrozo says, we often outsource our thinking to our physical and social environment. Consciousness is not as neatly contained as we would prefer, whether we draw on Descartes or on Husserl’s doctrine of intentionality, but interwoven with the immanent forces of the environment.
Searle’s Chinese room is a fitting refutation of the Turing test because it stages the test’s criterion within its true problem space: language. We may have a machine that perfectly engages with a speaking subject, but has that machine ever engaged with language as we do, or has it simply followed given instructions? The image of the Chinese room reminds me of Beckett’s plays, which some read as occurring within a human head, an empty Cartesian subject endlessly amusing itself. Language always seems detached from its true import, muddled in confusions and solipsisms, much as it is for the mechanistic subject in Searle’s Chinese room.
The problem of gender in AI studies has not been sufficiently addressed. The fascination with reproducing human life in technology goes back at least to Mary Shelley’s Frankenstein. Yes, she wanted to scare us about technology, but the crucial detail is that Frankenstein wants to create life without a woman. It is a denial of the feminine that leads to his monster. I believe Beckett had a similar concern. When a theatre group tried to stage Waiting for Godot with an all-female cast (a play with five parts, all male), Beckett reportedly threatened them with a lawsuit if they carried on with that casting. For him, this emptying of consciousness had a specifically male infertility, something he did not believe could be conveyed without male leads. When Vladimir and Estragon consider hanging themselves, their greatest fear is that they will get an erection and life will continue through a homunculus that sprouts from the ground below them. There are no sufficient measures to ensure this monstrosity called consciousness continues, and so, they wait.