In thinking about how the law can “stifle” innovation, the obvious example is the ongoing feud between rideshare services like Uber and local municipalities that want to protect entrenched livery services. This topic has been covered ad nauseam, so I thought it would be interesting to look further into the future.
One topic gaining recognition as a place where the tech community and regulators may part ways is artificial intelligence (AI). This article addresses recent efforts by the tech community to persuade Congress of the benefits of AI. The public worries about a dystopian, Terminator-like future, and since Congress is largely reactive, it is no wonder politicians are already gearing up to regulate technology that scientists say is decades away. The topic is likely even more polarizing when one considers that the tech community itself is largely divided on AI. Prominent tech and scientific figures like Elon Musk and Stephen Hawking have banded together to warn about the perils of AI. It will be interesting to see how Congress reacts as this technology grows and its place in our society becomes more ubiquitous.
Personally, I’m a skeptic, and I often think of the famous Jurassic Park quote when discussing AI: “the scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
I agree that the ride-share industry is a great example of the tension between entrepreneurial innovation and legal pushback. One interesting anecdote regarding AI and legal restraints has yet to come up. Uber has been experimenting with Volvo XC-90s for its self-driving car fleet. While these seemed like great platforms to Uber, they may have shot themselves in the foot with respect to the Americans with Disabilities Act. Under the ADA, all “new vans” purchased by car services must be handicap accessible. For more than 30 years, cab companies got around this by buying only used vans to update their fleets. Either Uber was unaware of a regulation that has not been relevant for over three decades, or they are confident they can win in court by claiming that their Volvo SUVs are not “vans” under the ADA. Either way, this is a great example of how even dormant parts of the law can become relevant again and potentially stifle new innovation.
Pat, this article is dope, and that Jurassic Park quote is on point.
Buck’s framing of AI as a form of statistics is wrong. In the last couple of years, advances in neural networks, machine learning, quantum computing, and deep learning have driven dramatic progress in AI. With these developments, programs can be self-sustaining and goal-driven. This will allow programs to be developed that are superior to humans at completing nearly any task.
Computers are already better than humans at things like math, video games, and chess. Programmers set different learning goals for AI every day. Further, nothing about the laws of physics prevents AI from learning any task, including the ability to develop its own software to achieve a goal without human intervention. Max Tegmark refers to this as Artificial General Intelligence in his new book Life 3.0. Once Artificial General Intelligence is achieved, humans essentially become obsolete, because you’ll have an AI with access to all of the world’s knowledge via the internet. This allows the AI to become superintelligent, which is what Ray Kurzweil has termed the singularity. A superintelligent AI would be capable of solving any problem better than any human.

Kurzweil suggests that, thanks to Moore’s Law, he will be able to reverse-engineer the human brain through neural networks at Google and reach this landmark by 2029. However, that date is often criticized as being too early, and most AI researchers anticipate it will be achieved around 2055. Tegmark gives a really cool example of how an Artificial General Intelligence programmed by a company like Google could take over the world without anyone knowing. He claims Peter Thiel has more power than anyone on the planet.
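To make concrete what “goal-driven” means in the machine-learning sense, here is a minimal sketch (my own illustration, not anything from Tegmark or Kurzweil): the programmer states a goal as a loss function, and gradient descent adjusts the program’s parameters until the goal is met. Real neural networks apply the same idea at a vastly larger scale; the hidden rule, data, and parameter names below are made up for the example.

```python
# A toy example of goal-driven learning (illustration only): the "goal" is a
# loss function, and gradient descent adjusts the program's parameters until
# the goal is met. Here the program learns the hidden rule y = 3x + 2.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training data drawn from the hidden rule, plus a little noise.
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)

# The "program" is just two parameters, w and b, starting with no knowledge.
w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(1000):
    pred = w * x + b            # the program's current guesses
    error = pred - y
    loss = np.mean(error ** 2)  # the goal: drive this toward zero

    # Gradients point toward the change in w and b that reduces the loss.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")  # w -> ~3, b -> ~2
```

The point is not the math but the pattern: the programmer specifies what counts as success, and the optimization process figures out how to get there. That is the pattern Tegmark describes scaling up toward Artificial General Intelligence.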
Ultimately, AI is going to have a profound impact on human society in the next 10 years. I think it’s awesome that Congress is becoming educated on the issue. While I disagree with Buck’s characterization, it’s important that all sides of the debate are heard, so that regulators and lawmakers are educated on the topic and able to make informed policy decisions on whether and how the scientists should, when they can.