Reading11: Intelligence

Artificial intelligence is probably one of the most exciting and promising fields right now, given the extraordinary success researchers have achieved in the past decade.  It seems that every few weeks news of another breakthrough comes out, and people are increasingly interacting with AI technology in their daily lives.  However, defining what artificial intelligence actually means is not obvious.  As a first pass, I would roughly define artificial intelligence as anything a computer can do that seems to require human intellect and/or intuition (for instance, recognizing that an image is of a dog and not a cat).  However, I do agree with John McCarthy’s caution that “once [something] works, no one calls it AI anymore.”  Though there are many exciting AI applications on the horizon, we must not keep treating AI as something only of the future because, as I have noted, AI is already all around us.

There is one thing, though, that was previously considered AI but that I do not think qualifies.  Early “AI” game algorithms relied on brute-force search and Monte Carlo simulation to pick an optimal move, and such an approach does not seem intelligent to me.  I think that for something to be considered AI, it has to do some sort of learning.  Given this restriction, I am hesitant to label IBM Deep Blue as a true AI system because, although it is impressive, it does not actually learn anything.  In contrast, newer systems like AlphaGo and Watson learn from experience to improve, which qualifies them as intelligent systems.  Many ideas used in these systems have been successfully adopted across the AI industry, making them more than gimmicks: they are exciting steps forward in artificial intelligence.
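To make that contrast concrete, here is a minimal sketch of what I mean by brute-force Monte Carlo move selection.  It uses a toy Nim-style game I made up for illustration, not any real system’s actual algorithm (Deep Blue in fact used hand-tuned tree search rather than Monte Carlo).  Each candidate move is scored by playing thousands of random games to completion, and nothing is learned or remembered between calls:

```python
import random

def legal_moves(stones):
    """In this toy Nim-like game, a player may take 1, 2, or 3 stones."""
    return [m for m in (1, 2, 3) if m <= stones]

def random_playout(stones, player):
    """Play uniformly random moves to the end; taking the last stone wins.
    Returns the winning player (0 or 1)."""
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def monte_carlo_move(stones, playouts=2000):
    """Pick the move for player 0 whose random playouts win most often.
    Pure simulation: no model of the game is learned or improved."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(stones):
        if stones - move == 0:
            return move  # taking the last stone wins immediately
        wins = sum(
            random_playout(stones - move, player=1) == 0
            for _ in range(playouts)
        )
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

if __name__ == "__main__":
    # With 10 stones left, taking 2 leaves a multiple of 4, the
    # game-theoretically losing position for the opponent; the random
    # playouts usually (though not always) favor that move.
    print(monte_carlo_move(10))
```

The point of the sketch is that the program can play reasonably well simply by simulating outcomes exhaustively, yet it plays the millionth game exactly as it played the first.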

Though I think I have a working understanding of what AI is, I am extremely conflicted when debating its potential equivalence to human intelligence.  I have qualms both with the Turing Test as a valid measure of intelligence and with the Chinese Room as a sound counterargument.  I like the idea from the Chinese Room thought experiment that humans have a sort of working understanding of their surroundings that a machine can never have, but I think the Chinese Room is an overly simplified case: it assumes that a fixed set of rules will always be followed and discounts the ability of a mechanized system to learn from experience.  Therefore, my view is that the Turing Test (including its equivalents for other tasks) is a valid measure of a machine’s ability to perform at or above human level on a certain task, but passing it does not necessarily mean that the intelligence on display is of the same sort as human intelligence.

In this vein, I do not think that a machine can ever be thought of as possessing actual morality.  Though it may have to make decisions that seem like moral decisions, those decisions will be made to optimize some result or fulfill some rule, not to follow innately understood morals.  Because I hold this view, I am not that worried about the prospect of AI taking over.  While I agree that we should be careful not to build AI systems with the potential to independently reach an “optimized state” that is harmful to us, I don’t think we are anywhere near a world where machines will consciously and malevolently take over.