What could AI regulation look like?

And would it stifle research?

Artificial intelligence has long been viewed as one of the ultimate goals of computing, and the prospect of self-learning computers capable of thinking in a human-like manner is exciting. This week even Vladimir Putin got involved in the debate by predicting that the nation that dominates AI innovation will become “ruler of the world.”

Despite that optimism, most experts agree that the technology carries risks, and the prospect of negative effects has governments and other stakeholders asking how to regulate artificial intelligence properly, prioritizing safety without stifling innovation.

Ceding Control

When it comes to regulating artificial intelligence, managing personal information is likely a top priority. Nations around the world already impose strict requirements for handling health data, for example, and health organizations that fail to protect patients’ records risk significant fines and other sanctions. Because artificial intelligence can glean so much from health data, it makes sense for governments and regulatory agencies to provide guidelines for processing patient information in an automated manner. Such regulations can help the health industry as well: companies that comply can shield themselves from lawsuits and other actions by demonstrating that they followed the guidelines.

Automotive Artificial Intelligence

Another area where regulatory clarity can help is automotive systems, particularly self-driving cars. Driving is an inherently dangerous activity, and self-driving cars promise to improve safety by removing the most unreliable part of the driving equation: the human driver.

However, small bugs in the code can lead to accidents and even deaths, and regulation that holds developers to sound engineering practices can reduce the odds of failure. Here again, regulation can help the industry: determining who is at fault when a self-driving car crashes is legally difficult, and clear rules can give automakers confidence that they won’t face large lawsuits as long as they comply.

Mandating Ethics

Rapid developments in robotics are placing new emphasis on artificial intelligence, since the two technologies will no doubt be combined once each matures. At the heart of most conversations is ethics: in broad terms, how will robots, which will likely gain more autonomy over time, interact with the world? Lawmakers are already laying the groundwork for potential ethical codes, with Mady Delvaux and other MEPs taking the lead in Europe, where regulations will have a significant impact on products manufactured around the world. Although human-like intelligence doesn’t appear to be on the horizon, ensuring that future AI research and development follows basic ethical principles can help head off behavior and freedoms many would consider dangerous, and laying the groundwork early provides structure for future debates.

Artificial Intelligence Misuse

One trend in computing seems unavoidable: all technology will eventually be misused. Tools developed for legitimate purposes can end up in the hands of malicious actors. Malware, for example, already uses sophisticated techniques to spread between computers, and artificial intelligence could make such software even more dangerous. This doesn’t mean development should cease, but it’s worth considering the ethical ramifications of building certain types of intelligent systems. Any attempt to regulate what sorts of artificial intelligence can be developed, however, is likely to be met with significant backlash. Instead, it might be more productive to focus on system security to prevent malware from spreading in the first place.

Monitoring

Because generalized artificial intelligence systems are still in their infancy, it’s difficult to guess what problems will arise in the real world. That doesn’t mean a proactive approach is impossible, however. Instead of issuing regulations today, governments and other entities can begin building reporting systems that let researchers share points of concern. Although companies and research organizations will want to protect their proprietary information, a shared reporting pool that safeguards intellectual property while surfacing potentially harmful outcomes can inform future ethical guidelines and regulations. Research is safest when it’s done in the open, and encouraging openness at research institutions and within companies can help prevent dangerous outcomes.

Stifling Innovation?

One concern about regulation is that it stifles innovation, and companies tend to fight rules that affect the bottom line. Large companies wield significant influence over government entities, so those drafting and voting on regulations will need to weigh how new rules would affect the industry as a whole. Before passing strict regulations on artificial intelligence, it might be worth building a regulatory framework first, one to which specific rules can be added as needed. Doing so lets regulatory agencies and companies stay ahead of future artificial intelligence breakthroughs and respond with smart regulations that protect people while allowing companies to innovate freely and ethically.

Robots are coming, and more generalized artificial intelligence systems may come online sooner than many people realize. While rapid development is cause for concern, there’s also reason for hope: artificial intelligence can benefit society tremendously, perhaps in ways nobody has yet imagined. Either way, it’s important to be proactive, ensuring research and development proceed ethically while avoiding regulations that stifle the industry’s growth.