[Reading 11] AI will never become a Conscious Being

Artificial Intelligence (AI) is basically when a computer can do something that would normally be associated with a human acting intelligently. How it goes through the process of doing so is not important; the fact that it can do it at all is generally the criterion. It turns out there are more specific types of AI: strong AI, weak AI, and the things in between. Strong AI is aimed at genuinely simulating human reasoning, and could be used not only to build systems that think, but also to explain how humans think. There has yet to be a system that can do this, but it is what many are ardently attempting. Weak AI, on the other hand, is just aimed at getting systems to work. These are systems that can behave like humans, but whose results tell us nothing about how humans think. An example of this is the chess-playing Deep Blue, a system developed by IBM. In 1997 it became the first machine to defeat a reigning world chess champion, Garry Kasparov, in a match. The in-between AI systems aim to be inspired by human reasoning, but are not totally concerned with going about things in the exact same way. An example of this is the Jeopardy!-winning Watson, also developed by IBM. Overall, the important thing here is that for a system to be considered AI, it doesn't have to work the same way humans do. It just needs to be smart.

I think that AI is very different from human intelligence because it really has no way of computing feelings. No matter how good it is at reasoning through information, there will never be a true emotional response, never that feeling in the chest when something hurts you. An AI may be programmed to simulate a similar response to inputs, but the actual substance of emotion and empathy will never be there. This, I think, is the main difference.

According to the technical definition, AlphaGo, Deep Blue, Watson, and AlphaZero are all AI. But when it comes down to it, yes, I think they are all basically just interesting tricks or gimmicks. I mean, it's great that we have something that can, for information's sake, act like an extremely smart person. This can help us in an incredible number of ways scientifically. With such a high level of technical intelligence, an AI program might be able to discover the cure to any number of diseases; given its extremely high capacity for information storage and analysis, this seems likely to me. But in the end it's just a tool that humans will use toward our own ends. These systems are not ends in themselves.

The Turing test seems like quite the arbitrary measurement. Where does this 30% number come from? (As far as I can tell, it traces back to Turing's 1950 prediction that by the year 2000 a machine would be able to fool an average interrogator 30% of the time after five minutes of questioning, which hardly seems like a principled threshold.) And why does it even matter if a computer can convince someone it is a human in light conversation? Is that somehow reflective of a functioning brain? I think the Chinese Room is a great counterargument to the Turing test, and it is basically what I was saying before. It's great if something gets really good at manipulating inputs to produce outputs identical to a human being's, but that is not what makes something human at all.
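To make the Chinese Room idea concrete, here is a minimal sketch (my own illustration, not something from the reading) of a program that produces human-sounding replies purely by matching input symbols against a rule book, just like the person in Searle's room. The rule table and function name are made up for the example:

```python
# A tiny "Chinese Room" in code: every reply comes from a rule book,
# not from any understanding of what the words mean.
# The rule table below is entirely made up for illustration.

RULE_BOOK = {
    "how are you?": "I'm doing well, thanks for asking!",
    "do you understand me?": "Of course I understand you.",
    "what is the weather like?": "Lovely, I hope you get outside today.",
}

def chinese_room_reply(message: str) -> str:
    """Look up a canned reply by pure symbol matching; no comprehension involved."""
    return RULE_BOOK.get(message.strip().lower(), "Interesting! Tell me more.")

if __name__ == "__main__":
    # The program "claims" understanding while having none,
    # which is exactly the point of the thought experiment.
    print(chinese_room_reply("Do you understand me?"))
```

The reply is the whole point: the program prints "Of course I understand you" even though nothing inside it understands anything at all.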

The only concern I see is that, since computers don't have empathy, the human aspect of mercy will not be present if a particular AI is used for evil of some sort. A computing system cannot be considered to have a mind, or even to be subject to morality in itself. Any moral questions fall on the creator of the program if it is doing something evil. Humans are so much more than just biological computers. Humans have immortal souls. We have spirits. These kinds of things cannot be, and will never be able to be, programmed into any AI.