Reading 11: “Sometimes it is the people no one imagines anything of who do the things that no one can imagine.”

Artificial Intelligence is the idea of programming computers to behave the way we, as humans, do. For some, that means computers should think exactly as we do; this is called “Strong AI”, and the goal of such a system would be to study human cognition. On the other side is “Weak AI”, which aims to make computers behave as we do without telling us anything about the way we think. There is also a third stream of thought, a combination of the two: it uses human reasoning as a guide but does not strive to replicate it perfectly.

Some examples of Weak AI include AlphaGo, Deep Blue, Watson, and AlphaZero. While the main function of these systems is to play (and win) a game, they mean much more than that to the Artificial Intelligence world. At the time each of these was created, the game it was meant to play was considered “too strategic” for a machine to win, let alone against a human champion. But in the end, no human proved to be a match for these systems. This goes to show that we should never assume there is a limit to what a computer can do. I believe that the methodologies behind these kinds of advances will one day be used to solve problems far more consequential than games.

The Turing Test is designed to find out whether a machine can think. A human “interrogator” asks questions of both a human and a machine and, based on their responses, must determine which is which. The idea is that if the machine can fool the interrogator, then the machine can in fact think. I don’t think this test is an accurate way of determining whether a machine can think, and John Searle’s Chinese Room argument demonstrates why. The thought experiment begins with a machine that can pass the Turing Test in Chinese, conversing with a Chinese-speaking interrogator. Searle then imagines replacing the machine with himself, a non-Chinese-speaking person: he processes the Chinese input exactly as a program instructs him to, and hands the responses back to the interrogator. If he were to pass the Turing Test, that would supposedly mean he “understood” the Chinese input. But he does not, in fact, understand Chinese; he is just following a set of instructions to make it appear as if he does. The machine, likewise, does not actually understand the input; it is merely manipulating symbols according to instructions and feeding back responses that were programmed in. This relates back to the idea of “Strong AI” vs. “Weak AI”. Are these machines actually thinking as humans do? Or are they merely accomplishing the task of making it seem like they think like we do?
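To make the distinction concrete, here is a minimal sketch, in Python, of a Searle-style rule follower. The rulebook entries are hypothetical placeholders, not anything from Searle’s paper; the point is only that the program maps input symbols to output symbols exactly as instructed, while nothing in it represents what any symbol means.

```python
# A minimal sketch of Searle's rule-follower: the "room" maps input
# symbols to output symbols using a rulebook it does not understand.
# The entries below are hypothetical placeholders for illustration.

RULEBOOK = {
    "你好吗?": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message: str) -> str:
    """Return whatever response the rulebook dictates.

    The strings are handled purely as symbols; the function has no
    representation of what any of them mean.
    """
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    for question in ["你好吗?", "你会说中文吗?"]:
        print(question, "->", chinese_room(question))
```

To an interrogator who sees only the replies, such a system can look competent; whether that competence amounts to thinking is exactly what Searle disputes.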

Any dangers that could come from Artificial Intelligence really come from the people in charge of it, since they decide what the machines’ limits are. I do think we have a moral obligation to ensure that no AI system does harm to people. There is a lot of potential for these systems to do a great deal of good, and I am hopeful about where this field is headed.