Reading 11: More like a calculator than a brain
I believe that what we call “artificial intelligence” (which I would argue is more aptly called “machine learning”), the field currently seeing an enormous boom, is fundamentally different from what we would truly call “intelligence”. Worries that we could unleash a monster smarter than us, one that ultimately leads to our destruction, are unfounded given the work being done today, and could very well be hindering progress.
I like to quip to my friends that machine learning is just “statistics that have gotten out of hand”. And, essentially, that’s what it is: a rigorously defined architecture of nodes connected by weighted edges, with those weights (and the connections between them) fine-tuned and adjusted depending on the type of network and its desired inputs and outputs.
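To make that concrete, here’s a minimal sketch (in Python with NumPy, my own toy example rather than any real system) of what “statistics that have gotten out of hand” boils down to: a single node with weighted input edges, nudged toward the outputs we want by gradient descent. The OR-gate data and the learning rate are made up purely for the illustration.

```python
import numpy as np

# Toy data: learn the logical OR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # one weight per input edge
b = 0.0                  # bias term
lr = 0.5                 # learning rate (an arbitrary choice for the demo)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    pred = sigmoid(X @ w + b)          # forward pass: weighted sum, squashed to (0, 1)
    error = pred - y                   # how far off the "goal function" says we are
    grad = error * pred * (1 - pred)   # chain rule back through the sigmoid
    w -= lr * (X.T @ grad) / len(y)    # nudge each weight downhill
    b -= lr * grad.mean()

print(np.round(sigmoid(X @ w + b), 2))  # creeps toward [0, 1, 1, 1]
```

When the loop finishes, the entire “program” is a couple of numbers sitting in `w` and `b`. That’s the whole trick, just scaled up enormously in real networks.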
That’s all AI is: simply another tool that we are learning to use, one that can be very good at recognizing patterns, grouping things, or extracting meaning from complicated or chaotic data in a way that’s not easy for us to follow; that opacity is the crux of the issue.
With typical computer programs, a programmer writes each instruction. Every action is defined and laid out beforehand; we can trace the execution and figure out exactly how and why things happened. With machine learning, this isn’t exactly the case. Programmers set up the network and provide the data, the goal function, etc., but then a dizzying amount of math happens, and a trained network is the result.
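To put the contrast side by side, here’s a hypothetical sketch (the spam-filter framing and both function names are mine, purely for illustration): in the first function, every decision is a line someone wrote and can point to; in the second, the “decision” is just arithmetic over whatever weights came out of a training run.

```python
# Traditional program: every decision is an explicit rule we can trace.
def is_spam_rule_based(subject: str) -> bool:
    # Any outcome can be blamed on a specific line of code.
    return "free money" in subject.lower() or subject.isupper()

# Trained model: the "logic" lives in learned weights, not written rules.
def is_spam_learned(features: list, w: list, b: float) -> bool:
    score = sum(f * wi for f, wi in zip(features, w)) + b
    return score > 0  # why this trips for a given email is buried in the numbers
```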
This “black box” is what scares people. The fact that there aren’t lines of code to trace to show why decisions were made is worrying. Consider AlphaGo, Deep Blue, Watson, and the like. They decisively beat the best players in the world at things we consider very difficult. That doesn’t mean they’re “intelligent”, or that they understand what they are doing or why. It just means they’re finely tuned systems built to output Go moves, chess moves, or trivia answers.
The article in ND magazine had a truly baffling quote: “Couldn’t you just turn it off? Not necessarily. Shutting down an AI would prevent it from achieving its goals and, with its superior intellect, it could disable that option.” Consider Google’s more recent Go-playing program, AlphaZero. It is programmed to take game states in a certain format, decide on the optimal next game state, and output a move (as well as learn from the results). It is NOT programmed to “BECOME BEST GO PLAYER” or given any goal that would make bucking its handlers and taking over the world (so as to never lose at Go, paperclip-machine style) even a possibility. (That’s setting aside the many other problems with the claim: a computer can’t just “decide” not to turn off, and “superior intellect” can’t defeat an unplugged power supply. “Escaping to the internet” and the other clichés are equally nonsensical, and any further discussion would just become an increasingly silly rant.)
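For what it’s worth, the entire contract of such a program fits in a function signature. Here’s a hypothetical sketch (the names and types are mine; the real AlphaZero interface and its search-plus-network internals are far more involved) of what the system is actually asked to do:

```python
from typing import List, Tuple

Board = List[List[int]]   # e.g. a 19x19 grid: 0 empty, 1 black, -1 white
Move = Tuple[int, int]    # (row, col)

def choose_move(board: Board, legal_moves: List[Move]) -> Move:
    """Given the current position, return the move judged best.

    That is the entire job description: board in, move out. Nothing in this
    interface reaches power supplies, networks, or an off switch.
    """
    # Stand-in policy: a real system would score positions with a learned
    # network plus search; here we just take the first legal move.
    return legal_moves[0]
```

No matter how well tuned the weights inside get, nothing in that signature gives the program a lever to pull outside the game.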
Even though I firmly believe that the machine learning we are doing now is just making tools, completely separate from “intelligence”, I can’t say definitively that there is any reason a truly intelligent computer couldn’t exist. Admittedly, saying that likely comes more from ignorance of the human brain than from any educated opinion. Maybe there’s some aspect of the brain that would preclude it, but simply simulating a human brain seems to me like it could produce at least a facsimile of consciousness. Would that really be consciousness, or just a simulation of one, a la the Chinese Room? Would that even make a difference? I truly don’t know. That’s not to say I think it’s feasible, or could happen even in the next century. It’s just to say that we don’t, and cannot, know what the future holds.
Overall, the fear of artificial intelligence, or of humanity being, as Elon Musk put it, “just the biological boot loader for digital superintelligence”, is unfounded as it relates to machine learning today. Maybe something like that could happen down the road, but as things stand now, these AIs are something completely separate, more akin to a calculator or abacus than a brain.