Reading 11

Artificial intelligence is a buzzword that many people use, and while we all have a general idea of what it means, we rarely take the time to define it. Even the articles assigned for this week didn’t formally or explicitly define AI. They did describe and characterize the field, however, and the best description I came across framed AI as programming computers to perform tasks that seem intuitive to humans. It is about extending what we can do with computers by having them solve more “complex” problems, ones that appear to require understanding or intelligence to solve. “Rise of the Machines” points out that what makes these problems hard for computers is that we haven’t found (or there doesn’t exist) a formal set of rules to apply to them, so “tasks that are hard for humans are easy for computers, and vice versa.” I had never thought of AI this way before, but it makes a lot of sense. If we are trying to mimic human intuition (because that is all we know) and re-create it in machines, we should consider where the boundary lies, if any, between human and artificial intelligence.

Growing up, I always heard that there were different kinds of smart: some people are book smart while others are street smart, and then there’s this thing called emotional intelligence. The emotional side seems difficult, if not impossible, to code into a computer, but maybe I’ll be proven wrong in the future.

So far, much of the hype in AI surrounds game-playing algorithms. While AlphaGo, Deep Blue, Watson, and the like are all interesting examples of progress, each has been optimized to do one thing only. They may be able to “learn” more than a traditional program, but they can’t generalize that knowledge beyond the scope of the problem they were designed to solve. It’s also hard for me to accept these as proof of the viability of artificial intelligence because they aren’t really making their own decisions. In these game-playing algorithms, some bias has to be introduced in the heuristics used to make decisions or break ties, so I’m not entirely convinced that the algorithms are “thinking” on their own. However, as we head further into neural networks, we understand less about how they make their decisions, and they have been able to detect patterns that humans can’t even see or comprehend. This is closer to what I expect when I think of artificial intelligence, but these systems are still just attempts to mimic human intelligence.
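To make the point about heuristics concrete, here is a minimal sketch of a game-playing move chooser. This is not how AlphaGo or Deep Blue actually work, and the evaluation weights and tie-break rule below are hypothetical, but notice that every “decision” traces back to a number or an ordering a person wrote down.

```python
# Minimal sketch: a move chooser for tic-tac-toe whose "decisions"
# come entirely from human-supplied numbers. The weights and the
# tie-break rule below are hypothetical, not from any real system.

CENTER, CORNERS = 4, {0, 2, 6, 8}

def evaluate(move: int) -> int:
    """Score a board square 0-8. These weights encode the
    programmer's judgment, not the machine's understanding."""
    if move == CENTER:
        return 3          # a human decided the center is worth 3
    if move in CORNERS:
        return 2          # ...and a corner is worth 2
    return 1              # ...and an edge is worth 1

def choose_move(legal_moves: list[int]) -> int:
    # max() breaks ties by keeping the first candidate it sees,
    # so even the tie-break is an artifact of the move ordering
    # that the programmer (implicitly) chose.
    return max(legal_moves, key=evaluate)

print(choose_move([0, 4, 5]))  # -> 4: the center, by human decree
print(choose_move([0, 2]))     # -> 0: a tie, broken by list order
```

The game itself is beside the point; what matters is that the program never decides anything a human didn’t already decide for it.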

The Turing Test and Searle’s Chinese Room are interesting thought experiments that raise philosophical questions. I don’t think the Turing Test is a good enough indicator of intelligence because these chatbots are, once again, programmed to do only one thing. While it’s cool for computers to have seemingly intelligent conversations, that doesn’t capture the breadth of human intelligence. I think there’s something special and unique about humans that goes beyond their “intelligence,” and it’s important to acknowledge this difference. I like the Chinese Room argument because it exposes how AI really works, but another part of me believes that, in the strict sense of “intelligence,” it doesn’t matter that computers don’t actually understand: they can behave as though they understand, and that’s enough to convey understanding.
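The Chinese Room intuition can be shown in a few lines of code. Here is a minimal sketch of a “conversation” program; the rule table is made up for illustration, and nothing in the program understands anything, yet a short exchange can look intelligent.

```python
# Minimal sketch of the Chinese Room intuition: a "conversation"
# program that only matches incoming symbols to canned replies.
# The rules are invented for illustration; there is no
# understanding here, just lookup.

RULES = {
    "hello": "Hello! How are you today?",
    "how are you": "I'm doing well, thanks for asking.",
    "what is ai": "AI is programming computers to do tasks that seem intuitive to humans.",
}

def reply(message: str) -> str:
    # Like Searle in the room, we just look up the incoming
    # symbols in a rule book someone else wrote.
    key = message.lower().strip("?!. ")
    return RULES.get(key, "Interesting. Tell me more.")

print(reply("Hello"))        # -> "Hello! How are you today?"
print(reply("What is AI?"))  # -> the canned definition
print(reply("Why?"))         # -> the catch-all dodge
```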

As I mentioned before, humans have a unique quality to them, and I’m not worried that it will ever be replicated in a machine. In addition, computer programs are often made to optimize something, and I don’t think humans work that way; at the very least, I don’t think we could ever identify and codify the underlying “something” that we are all trying to optimize. For example, people with developmental disabilities may lack “intelligence” in the strict sense of the term and might never be considered a challenge to Watson, but they are just as worthy of love and just as valued in their humanity as Ken Jennings, which is something we would never, and should never, say about computers. My only concern is that we will try to incorporate so much artificial intelligence into our daily lives that we lose a significant amount of human-to-human interaction. As humans, we are made for community, and without it we would be worse off. As far as I know, we are still in control, and we can remain in control of artificial intelligence by limiting how much of it we let into our lives.

I don’t know if a computing system could ever be considered to have a mind; I don’t know exactly how I would define a mind. I’m taking my second philosophy course next semester, on Minds, Brains, and Persons, so I’ll hopefully have a better understanding of it after that. One thing that should be considered, though, is that there is a wide variety of ideas and values among humans, and I don’t see how that would be implemented in artificial intelligence. If AI systems are meant to represent the best of the field, every “instance” would be the same, and there would be no differences between them to promote growth and development. To introduce differences, you could train them on different sets of data, but that would upset people because it’s unfair, so it doesn’t seem like a feasible route. Computing systems also can’t develop their own morality because their goal is whatever they are programmed to have as an end goal. How they make decisions is based, at some level, on heuristics provided by humans, so their decision making is biased. Any values that an AI system possesses would be indirectly passed down from a real person, and I don’t think the world is ready to decide which value or moral system is best to give the machine. There are already arguments about bias and censorship in assistants like Alexa on devices such as the Echo Dot. I don’t know what the answer is, but I think it’s an important consideration, because any system that is going to make decisions must have a motivation, an end goal, or a set of rules to inform those decisions.