There’s been a lot of hype in recent years over the buzzwords “AI” and “automation”, but while the two are often used synonymously, they refer to two entirely different concepts. AI, short for artificial intelligence, is the broader of the two categories: it generically refers to any “artificial” being, most commonly a computer program, possessing “intelligence”, i.e. the ability to think like a human. Automation, on the other hand, refers to the replacement of manual/human tasks and jobs with automated systems. Thus, while AI can be applied to automation, it covers a far broader category. When we talk about whether to embrace or fear AI, then, we have to consider its impacts in a variety of fields, not just automation, though AI in automation is an extremely important talking point and where we will begin.
With regards to automation, whether you are for or against it, there is an air of inevitability. In the US, the most expensive component of production is human labor. Since automation in the long run will likely be cheaper than human labor, businesses will (if they are smart and want to remain competitive) invest in automation. Thus, at some point certain jobs, particularly low-skill jobs, will be replaced by automation whether we like it or not. I would argue, then, that we should embrace it instead of futilely fighting against it, which would only serve to prolong and complicate the transition. We need to embrace automation and accept its impacts so that we can properly compensate those who will lose their jobs (such as through Andrew Yang’s Freedom Dividend idea). AI in automation, then, is at least something we should embrace.
Now to consider AI in other aspects. First, we need to decide what counts. Consider AlphaGo (the Go-playing super AI) or Watson. Is either of them “AI”, or are they just software that can do a specific subset of things very well? Watson, for example, is essentially a search engine with enhanced language interpretation abilities. Thus, even though Watson may be able to “understand” a sentence, it doesn’t necessarily have the “knowledge” to answer it. Just because Watson can search up solutions to questions it can interpret doesn’t necessarily make it capable of “thinking”, which would logically seem to be a basic requirement for “AI.” It is somewhat clear, then, that proper AI needs to think at a higher level, but what exactly does that constitute, and is a non-living being capable of achieving that level of thought?

Regardless, let us consider a theoretical situation where we create a software “AI” that can think like a human does. Since this AI is software, we could theoretically create near-infinite copies, thus creating infinitely many “humans”. Who is responsible for these humans? What can we “do” with them? Is it ethical to force them into slave labor to accomplish certain tasks? AI at that level raises so many ethical dilemmas, all depending on the specifics of the implementation, that it becomes difficult to understand how best to manage and regulate an AI future. Of course, this is all hypothetical, as it may be impossible to create software capable of actual “thinking”, but I think we need to treat AI with a healthy combination of excitement and fear. AI has a lot of potential, but a lot of danger as well, and in some ways it is inevitable. We should accept that AI is coming, and prepare appropriately for that future.