Biden’s EOs on AI are more CYA than action

President Joe Biden’s issuance of executive orders on AI development was all the news this week. It may not be all that important in the larger scheme.

It certainly set the cybersecurity world all a-twitter about what it would mean for AI security. Journalists’ inboxes filled up with extensive quotes from company executives highlighting their companies’ contributions to the effort.

Michael Leach, Compliance Manager for Forcepoint, said in a sentence worthy of a Joycean stream of consciousness, “The new Executive Order provides valuable insight for the areas that the U.S. government views as critical when it comes to the development and use of AI, and what the cybersecurity industry should be focused on moving forward when developing, releasing and using AI such as standardized safety and security testing, the detection and repair of network and software security vulnerabilities, identifying and labeling AI-generated content, and last, but not least, the protection of an individual’s privacy by ensuring the safeguarding of their personal data when using AI.”

We are not sure if he took a breath for that sentence.
 
Platform’s VP of Data Privacy & Compliance, Joey Stanford, was less breathless, more cautious, and used effective punctuation. “President Biden’s executive orders is a step in the right direction and the most comprehensive to date. However, it’s unclear how much impact it will have on the data security landscape. AI-led security threats pose a very complex problem and the best way to approach the situation is not yet clear. The orders address some of the challenges but may end up not being effective or quickly becoming outdated in the approach.”

Lacking detail

Stanford pointed out that while a few AI developers like Google and OpenAI have agreed to use watermarks, they fail to detail how that would work.

In the discussion of the importance of the executive orders, it seems clear this is a CYA attempt to disguise the United States’ lack of movement on effective regulation of AI.

The US trails far behind China, the EU, and the UK in addressing the potentially harmful aspects of AI use, and general AI in particular. That fact was in stark evidence at this week’s Global AI Safety Summit in London, where every major power gathered to discuss a global effort in AI regulation.

Vice President Kamala Harris was thrown into this event and needed something to talk about, which gave the administration a reason to issue the orders. With the US Congress deadlocked on almost every issue, substantial tech legislation still waits to be taken off the back burner. While what President Biden signed appears to be a step in the right direction, it is a tentative step that could result in several steps backward, depending on the political climate.


The former president, currently under four indictments, complained about the use of executive orders but still produced more than 200 during his term. Biden has overturned about a quarter of those. And that’s the problem with executive orders: they are not meant to be permanent or even effective.

In place of action

The primary purpose of executive orders is to bring clarity to legislation intended to become national policy. For example, executive orders have directed the Environmental Protection Agency on how to implement signed legislation. Those orders have been challenged not only in Congress but in the judicial system, sometimes with catastrophic results.

The second purpose of the orders is to enact policy when Congress and the judiciary have been silent. Such is the case with the orders issued this week. In fact, the executive branch is the only part of the US government that has done anything close to regulating AI, and that effort has amounted to extracting promises from Alphabet, Microsoft, and Facebook to do good things and not bad things with AI. And we all know how those promises have gone in the past.

What eventually came out of the UK’s summit this week was more a statement that something should be done, without much detail about what will be or is being done. Yet that seems to be more than anything we will see out of the US government in the next few years. So we will have to wait and see what finally arises from this effort.

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
