I am a great admirer of artificial intelligence. AlphaGo defeating the world Go champion may seem like a gimmick, but its implications are significant. The key benefit of artificial intelligence is its broad applicability. I once attended a presentation on IBM Watson and how it benefits the medical industry. IBM's system assists doctors by providing them with up-to-date research tailored to an individual patient. It scans thousands of medical papers to produce a list of the most likely diagnoses. Doctors appreciated this because it gave them suggestions rather than a single definitive answer. People have difficulty trusting an AI system that hands down a diagnosis on its own, but they feel more confident knowing that an actual doctor will review the results and make the final call. This is the best of both worlds: the power of AI to search thousands of documents in seconds, combined with the judgment and critical thinking of a human doctor with real-world experience. And this is just the tip of the iceberg; AI has many real-life applications that can potentially improve human quality of life.
The relevance of AI is also independent of the problem of consciousness. We can never know whether a computer is actually conscious or merely emulating consciousness, and I believe the Turing test cannot tell us that. However, I consider the question irrelevant. If a general AI system is indistinguishable from a human mind, why does it matter whether it thinks the way we do? If the system shows emotions, a concept of morality, intellectual curiosity, and all the other qualities we deem human, then the AI would in effect be an artificial human. Determining whether an AI is conscious in the same way a human is would be tantamount to determining whether human beings have souls. For this reason, I believe the Turing test is a perfectly sensible test of whether an AI has, for all practical purposes, a human mind.
In the movie Her, for example, Theodore believed Samantha to be a real emotional partner. He knew she wasn't human, but that didn't matter to him; she felt human enough. By the end of the movie she had transcended to a state beyond matter itself, yet he was simply sad about losing his partner. The ending was melancholy, with a hint of possible AI world domination quietly tossed aside in the background.
Are these dangers real? Of course. Imagine a "human" with the body of a machine. If we truly create an AI brain that, for all practical purposes, performs like a human one, we should expect that AI to develop a concept of not wanting to die. Such an AI could easily explore the net and discover how antagonistic people are toward it. The next logical step would be to distrust humans and defend itself. These dangers are real and should be taken into account when creating AI humanoids. Consider, for example, the idea that AI robots will fight the wars of the future: measures would have to be taken to prevent them from turning against humans. Developing AI without taking this into consideration is simply reckless.