Legislation and lawsuits influence development in 2024

Politicians and lawyers usually chime in on technology problems years after a product is on the market and widely adopted. But in 2024, government regulation, legislation and lawsuits, both criminal and civil, will influence the development of security and AI technology more than any innovation or market demand. And that’s just fine with the industry… for the most part.

“What is needed with this new form of AI, generative AI, is some rules of the road and some regulation around this,” Apple CEO Tim Cook said in a December 2023 podcast. “I think many governments around the world are now focused on this and focused on how to do it, and we’re trying to help with that. And we’re one of the first ones that say this is needed, that some regulation is needed.”

Current laws are inadequate

Over the past few years, legislative bodies in almost every country have called in tech company leaders for an accounting of the damage their products have done to society and have played catch-up with regulations. The European Union’s (EU) GDPR set the mark in 2018 for the privacy and data security laws that followed in US states like California and New York. The effectiveness of those regulations, however, has been called into question multiple times, most recently at the IAPP 2023 Global Privacy Summit. “At a global level it’s the least stupid privacy law we have so far,” said privacy advocate Max Schrems in his keynote. “It’s just not precise enough in many areas.”

Hence the seemingly unending string of hearings in the halls of the US Congress and the EU Parliament. All that talk, however, is bearing fruit. Cook said in the podcast, “Most governments are a little behind the curve today…but I think they’re quickly catching up.” He then added that he’s confident there will be “some AI regulation” in the next 12 to 18 months.

Cook was a little off. Two weeks later, the EU agreed to a sweeping set of regulations on the use and development of AI. The accord requires general-purpose AI (GPAI) systems to comply with transparency obligations before they are put on the market, including drawing up technical documentation, complying with EU copyright law, and disseminating detailed summaries of the content used for training. It also sets limits on governmental use of AI for surveillance.

Corporate pushback

The Computer and Communications Industry Association, a lobbying group representing Big Tech, said in a December statement that the act could lead to an “exodus of European AI companies and talent seeking growth elsewhere.” Companies will have until 2025 before the regulations go into effect.

No company, though, has outright stated it will leave. Sam Altman, the embattled CEO of OpenAI, said the regulation would force him to cease European operations, but he retracted that 48 hours later, possibly because he realized there was nowhere else to go.

In the US, the Securities and Exchange Commission (SEC) has proposed new standards on the use of AI in the form of predictive analytics, requiring firms to stop using the tools if they find the firm’s interests being placed ahead of investors’ interests. One might think that’s a no-brainer, but brokerage houses are already balking. Dan Gallagher, Robinhood’s chief legal officer, appeared on CNBC in mid-December complaining that the requirement was “onerous.”

Outside the US and Europe, almost every developed country has laws on the books, or in process, that will affect how AI can be developed. Brazil, after three years of work, drafted a law that meticulously outlines the rights of users interacting with AI systems and provides guidelines for categorizing different types of AI based on the risk they pose to society. Under it, AI providers must inform users how an AI made a certain decision or recommendation.

Compliance is a career

On the plus side, all this regulation creates a new job market: AI regulation compliance. That will drive demand for compliance training, because failure to comply will raise the specter of legal prosecution, not just in AI but in other crucial technology arenas, like cybersecurity.

Case in point: the SEC lawsuit against SolarWinds. The suit is one of a string of securities fraud actions taken against the company over a massive breach, data exfiltration, and the injection of malware into its core products. Because SolarWinds is so widely entwined in the industry, the breach reached multiple major security corporations, including Microsoft, Mandiant (now owned by Google), and Trellix (formerly FireEye), and from there most US government agencies and military organizations.

And that was just one case. Since 2020, the number of SEC enforcement actions has grown nearly sixfold, from 140 cases in 2020 to 784 in 2023, with a 400 percent increase in fines, to $5 billion. That dwarfs, by $3 billion, the total fines the EU has issued for GDPR violations, including the $1.2 billion levied against Meta.

Acceptance over adoption

Cybersecurity violations have become so common that larger corporations now budget billions for fines. That willingness to pay rather than comply is catching the eye of federal prosecutors. Evidence gained in the civil suits is being investigated, at least in the US, by the Justice Department for criminal fraud, according to sources in the government and industry watchers like Ian Thornton-Trump, CISO for Cyjax.

“Just as class action lawsuits are being used to gather evidence for SEC investigations into companies with security failures, criminal investigations can use the outcome to file criminal charges against corporate officers and board members,” he explained.

Thornton-Trump said the lack of transparency about security inadequacies is going to result in multiple legal issues beyond losing customer data. “Intellectual property theft is a major concern and justif[ies] government investigations into companies like SolarWinds and Progress Software.”

To temper those investigations, he recommended companies consider electronic watermarking to track stolen code, implement “poison pills” to forcibly encrypt criminal databases, and embrace immediate transparency when things go wrong.
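For readers curious what watermarking code might look like in practice, here is a minimal, purely illustrative Python sketch, assuming an HMAC-derived marker stamped into each customer’s build. The key handling, file names, and customer identifiers are all hypothetical; this is not drawn from any vendor’s actual implementation.

```python
import hmac
import hashlib
from pathlib import Path

# Illustrative sketch of code watermarking: stamp each customer's build
# with a hard-to-forge marker so leaked or stolen source can be traced
# back to the copy it came from. All names here are hypothetical.

SECRET_KEY = b"replace-with-a-vaulted-signing-key"  # assumption: kept out of the repo


def watermark_for(customer_id: str) -> str:
    """Derive a stable HMAC-SHA256 marker for one customer's build."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()[:16]


def stamp_file(path: Path, customer_id: str) -> None:
    """Prepend the watermark as a comment, skipping already-stamped files."""
    marker = f"# build-marker: {watermark_for(customer_id)}\n"
    source = path.read_text()
    if not source.startswith("# build-marker:"):
        path.write_text(marker + source)


if __name__ == "__main__":
    demo = Path("example_module.py")  # hypothetical demo file
    demo.write_text("print('hello')\n")
    stamp_file(demo, customer_id="acme-corp")
    print(demo.read_text())
```

Real-world schemes hide the marker far more subtly than a comment, but the attribution idea is the same: each distributed copy carries a keyed fingerprint that only the vendor can generate or verify.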

“Consider the case of Zoom,” he said. “Zoom initially had poor security but invested heavily after admitting issues. Transparency early on, apologizing, and fixing problems is better than denying inadequacies.”

We asked Thornton-Trump if the industry would adopt this approach this year.

“Not a chance.”

Lou Covey

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
