With the new executive order signed by Biden, AI could soon be regulated. (Image Credit: Gerd Altmann/Pixabay)
This feels like season two of Pantheon. I suppose any sci-fi story dealing with AI or similar digital intelligence is similar.
AI regulation is the talk of the world. Some of the top tech leaders, including Bill Gates, Elon Musk, and OpenAI CEO Sam Altman, are advising Congress on how best to address it. On October 30th, Joe Biden signed an executive order intended to regulate AI. The order strengthens administrative initiatives designed to keep the use of powerful AI systems safe and responsible, and it outlines legal consequences the government can impose on AI powerhouses that fail to comply.
The executive order addresses important issues like worker displacement, potential discrimination, and dishonest or unethical use cases arising from AI systems. It even proposes solutions, such as introducing reporting mechanisms to help combat questionable AI uses.
Biden invoked the Defense Production Act, requiring AI companies like OpenAI, Microsoft, and Google to notify the federal government when an AI model undergoing training could threaten national security or public health and safety. The executive order also states that companies must submit risk assessment test results to the government. If companies don’t meet those requirements, a government body could file a lawsuit against them. The Department of Commerce will set the technical criteria determining which AI models the rule applies to; in practice, it will likely cover models that consume large amounts of computational power.
Additionally, the executive order covers other noteworthy areas. For example, the Departments of Energy and Homeland Security will address AI-related nuclear, biological, and chemical risks. Meanwhile, the National Institute of Standards and Technology will focus on developing red-team testing standards.
However, the order lists few specific actions. The administration promised to develop tools and best practices without offering further detail, and since AI technology progresses quickly, these goals could be outdated within a few months. Even so, the government can still use these measures to deal with issues as they arise.
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell