Generative AI

Solons scrambling to save AI

State legislatures are scrambling to regulate the cybersecurity and AI industries to protect them from themselves. And the leaders of those industries object to the efforts, like drug abusers forced into rehab.

For the past 10 years, the investor world has shoveled money into any company that claimed a focus on AI, but that support is starting to waver. Many AI startups that have received billions in investment are struggling financially, not least the elephant in the room, OpenAI. The most successful AI company in the world is on pace to lose $5 billion this year and, according to CEO Sam Altman, needs more than $8 billion in additional investment this year or will face bankruptcy within 12 months.

Part of the loss of confidence in AI is the growing number of failures. The AI Incident Database, which chronicles incidents dating back to 1983, now contains 629 incidents. An even bigger reason is that the self-governing rules the industry says it has adopted either don’t work or are ignored altogether.

The industry has generally acknowledged its weaknesses. More than a year ago, Altman sat before the US Senate essentially begging for the government to regulate the industry. Support for that legislation has waned, however, as 15 U.S. state legislatures are considering dozens of bills to regulate the development and use of artificial intelligence.

In a letter from OpenAI Chief Strategy Officer Jason Kwon to California Senator Scott Wiener (author of SB 1047), the company highlighted several reasons it opposed the bill, including the recommendation that regulation should be "shaped and implemented at the federal level. A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards."

The “patchwork” argument has been used to oppose proposed laws in nine states. The problem with that argument is that most federal laws come only after a critical mass of laws at the state level. Historically, when two thirds of the states pass similar laws, the US Congress considers standardizing them nationally. The US is less than halfway through that process.

The legislators authoring these bills seem to understand that they are not “experts” in technology and have been working with tech companies to make the bills more palatable. In California’s SB 1047, Wiener removed provisions for criminal prosecution and for an entirely new state bureaucracy to enforce the bill before it went to the governor’s desk last week. Instead, the bill merely directs the state attorney general to file civil charges when companies violate its mandates.


Media training offered for cyber industry

“Over the years, the content of news releases, websites and other marketing materials has become formulaic. We know what that formula is, and it hurts company credibility,” said Covey. “The repetition in that content obscures the real story of these companies, and its sheer volume overwhelms the few qualified journalists still working. The use of generative AI makes the problem worse. Generative AI uses the same repetitive marketing language because that’s what it’s trained on. The result is homogenized messaging that destroys differentiation. This program will restore differentiation and, in the process, make it easier for us to accept and report on industry news. It’s a win-win.”


Mining data is daunting but crucial

The cybersecurity industry seems addicted to research but isn’t all that good at it. Mining the massive amount of data produced is daunting but crucial to everyone.

Surveys and studies are an important part of marketing for the cybersecurity industry. Cyber Protection magazine receives a lot of them, and we read them all. In the two months before the RSA Conference, more than one a day came into our inbox. However, they are not a great source of independent data and insight.

Setting aside the cherry-picked data highlighting a particular company’s product or service, there are a few nuggets that, taken together, produce some interesting insights. Out of 60-plus reports, we passed on any that were repetitive, methodologically suspect, or effectively plagiarized from another source. We chose to look at seven with solid methodology, representation of industry-wide concerns, and originality. The reports came from Dynatrace, Black Kite, SlashNext, Metomic, Originality AI, Logicgate, and Sophos. We found three common themes: the impact of AI on security, compliance with government regulation, and understanding of security concerns at the C-suite and board levels.

Understanding security issues.

Almost every study has a common complaint. CISOs say application security is a blind spot at the CEO and board levels. They say increasing their CEO’s and board’s visibility into application security risk is urgently needed to enable more informed decisions to strengthen defenses.

However, Dynatrace’s study said CISOs fail to provide the C-suite and board members with clear insight into their organization’s application security risk posture. “This leaves executives blind to the potential effect of vulnerabilities and makes it difficult to make informed decisions to protect the organization from operational, financial, and reputational damage.”

Recent news suggests the study may have a point. Marriott Hotels admitted that a 2018 breach was the result of inadequate encryption of customer data. In 2018 the company claimed its data was protected by 128-bit AES encryption, when customer identity was in fact protected only by an outdated hashing protocol. One can imagine the discussion between the CEO and the IT department:

CEO: is our data encrypted?
IT manager: Yeah, sort of.
CEO: OK, good enough

If the CEO doesn’t understand the difference between a hash and AES encryption, that’s a problem.
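The distinction is easy to show in code. A hash is a one-way fingerprint: it cannot be reversed to recover the data, so it is not "encryption" at all. Encryption is two-way: anyone holding the key can recover the plaintext. The sketch below uses Python's standard-library `hashlib` for the hash and a toy XOR stream as a stand-in for AES (real AES requires a vetted third-party library such as pycryptodome; the XOR cipher here is purely illustrative and offers no real security):

```python
import hashlib

# One-way: the same input always yields the same digest, but the
# digest cannot be reversed to recover the customer data.
digest = hashlib.sha256(b"guest-credit-card-4111").hexdigest()

# Two-way: the same key both encrypts and decrypts.
# (Toy XOR stream standing in for AES -- illustration only.)
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
ciphertext = xor_cipher(b"guest-credit-card-4111", key)
recovered = xor_cipher(ciphertext, key)  # same function decrypts

print(len(digest))   # 64 hex characters, regardless of input size
print(recovered)     # b'guest-credit-card-4111'
```

A hash is fine for verifying data (if the algorithm is modern), but if the business needs the original customer data back, only encryption will do; calling one the other is exactly the confusion the Marriott case illustrates.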

And there may be evidence that this ignorance is widespread. Apricorn reported that the share of encrypted devices in surveyed companies dropped from 80 percent to 20 percent between 2022 and 2023. Some of that could be attributed to work-from-home (WFH) growth in companies. It is also likely that companies over-reported what was encrypted simply because they did not understand what “encryption” meant. Once they learned the meaning, the numbers were adjusted.

That lack of a foundational security technology could be a reason for the devastating growth in ransomware in the past two years.


Third-party security is almost impossible

There are many themes emerging for the RSA Conference next week, including tools and services to protect against attacks originating with unsecured third parties in the supply chain. That is a crucial issue in every industry, especially with almost every company doing business with a supplier in the cloud. But the scope of the problem is almost impossible to resolve. The reasons are myriad.

With every Fortune 1000 business and government agency doing business with tens of thousands of third-party suppliers, the odds of finding one chink in the security protocols are very good for the criminals and state actors looking to do damage.

Social engineering can easily bypass the strongest technical defenses. It only takes a single lapse in digital hygiene to open the door to man-in-the-middle attacks, invite malware injections, and launch credential stuffing. It is also the favorite strategy of ransomware gangs.

Ransomware grabs headlines and remains highly lucrative for ransomware gangs. Compared to other forms of cybercrime, however, ransomware is a relatively minor issue. There are more than 33 million small businesses (under $100 million in revenue) operating in the United States alone, representing 99 percent of all businesses. However, according to a study produced by the Black Kite Research and Intelligence Team, fewer than 5,000 of them experienced a successful ransomware attack in the last 12 months...


Ethics in AI should not be an afterthought

Ethics in AI is an afterthought in development, making adoption a risky proposition. New industry standards, such as ISO/IEC 42001, and rigorous testing for generative AI models, guided by established ethical principles in AI management, can ease apprehension surrounding the advancement of this truly transformative technology.

Huge hazards and pitfalls loom in AI adoption without adequate safeguards and guardrails. There is potential for perpetuating stereotypes, spreading misinformation, advancing hate speech, hallucinating, exposing private data, and causing unforeseen harm. The risk of legal and financial consequences from the inappropriate use of generative AI is genuine, with potentially devastating outcomes.


Google at loggerheads over support for journalism

Google and the state of California have come to loggerheads over legislation designed to require Google to provide financial support for local journalism. Naturally, Google is fighting this with a PR and lobbying blitz. They and their allies may be missing the point. Whatever the outcome, it could have a profound impact on the democratic process.

The legislation, the California Journalism Preservation Act (CJPA), has been wending its way through the California legislature for about a year. The text of the law says, "This bill … would require … a covered platform (as in Google) to remit a … payment to each eligible digital journalism provider … The … payment would be a percentage, as determined by a certain arbitration process, of the covered platform's advertising revenue generated during that quarter."

History of dispute

A bit of history provides context. Google launched Google News in 2002
