Getting serious about PQC

It seems like everyone should be concerned, based on the level of urgency the companies project, but in the end, no one has yet built a quantum computer capable of breaking even standard 256-bit encryption. To that statement, the industry responds with, “Yet.”

This year, however, the National Institute of Standards and Technology (NIST) issued the first approved algorithm standards for encryption capable of fighting off quantum computing attacks. So we thought it would be a good idea to put together a panel of experts to explain why the rest of us should care.

The invitation went out to a dozen experts at PQC companies, as well as to the companies tasked with implementing their products across the internet. Unfortunately, none of the PQC companies accepted once they learned they would be on the same platform discussing their approaches. But we did get acceptances from representatives of the other group. Our final panel was Karl Holqvist, CEO of Lastwall; Tim Hollebeek, industry strategist for DigiCert; and Murali Palanisamy, chief solutions officer of AppViewX.

The three companies compete with and complement one another’s services, but all were active with NIST in the development of the standards. Our conversation is available on our podcast, Crucial Tech.

However, questions remain about the urgency, the timing, and whether quantum computing at an encryption-busting level is even possible in the near future.
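
For readers who want a concrete picture of what the new standards mean in practice, here is a minimal sketch of a post-quantum key exchange. It assumes the open-source liboqs-python bindings from the Open Quantum Safe project (our choice for illustration, not any panelist’s product) and uses ML-KEM-768, the key-encapsulation mechanism standardized in NIST’s FIPS 203.

```python
# Minimal sketch of post-quantum key encapsulation, assuming the
# liboqs-python bindings ("pip install liboqs-python") from the
# Open Quantum Safe project. Illustration only, not production code.
import oqs

ALG = "ML-KEM-768"  # FIPS 203 name; older liboqs builds call the same scheme "Kyber768"

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()    # receiver publishes its public key
    # Sender derives a shared secret plus a ciphertext to transmit.
    ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)
    assert secret_receiver == secret_sender     # both sides now hold one shared key
```

The shared secret would then key a conventional symmetric cipher such as AES-256. The quantum resistance lives in the key exchange itself, which is where today’s public-key methods are considered vulnerable.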

Solons scrambling to save AI

State legislatures are scrambling to enact regulations on the cybersecurity and AI industries to protect them from themselves. And the leaders of those industries object to the efforts, like drug abusers forced into rehab.

For the past 10 years, investors shoveled money into any company that said it was focused on AI, but that support is starting to waver. Many AI startups that have received billions in investment are struggling financially, not least the elephant in the room, OpenAI. The most successful AI company in the world is on pace to lose $5 billion this year and, according to CEO Sam Altman, needs more than $8 billion in additional investment this year or it will face bankruptcy within 12 months.

Part of the loss of confidence in AI is the number of failures, which seems to be increasing. The AI Incident Database, which chronicles incidents dating back to 1983, now contains 629 incidents. An even bigger reason is that the self-governing rules the industry says it has adopted either don’t work or are ignored altogether.

The industry has generally acknowledged its weaknesses. More than a year ago, Altman sat before the US Senate essentially begging the government to regulate the industry. Support for federal regulation has waned, however, while 15 U.S. state legislatures consider dozens of bills to regulate the development and use of artificial intelligence.

In a letter from OpenAI Chief Strategy Officer Jason Kwon to California Senator Scott Wiener (author of SB 1047), the company highlighted several reasons it opposed the bill, including the recommendation that regulation should be "shaped and implemented at the federal level. A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards."

The “patchwork” argument has been used to oppose proposed laws in nine states. The problem with that is most federal laws come after a critical mass of laws at the state level. Historically, when two-thirds of the states pass similar laws, the US Congress considers standardizing them nationally. With roughly 33 states needed to reach that threshold and only 15 so far even considering bills, the US is less than halfway through that process.

The legislators authoring these bills seem to understand that they are not “experts” in technology and have been working with tech companies to make the bills more palatable. In California’s SB 1047, Wiener removed provisions for criminal prosecution and for an entirely new state bureaucracy to enforce the bill before it went to the governor’s desk last week. Instead, the bill merely directs the state attorney general to file civil charges when companies violate its mandates.

RSAC Reporter’s Notebook: Change is coming

The cybersecurity industry is absolute chaos, and rightly so. This is the industry charged with plugging dikes during the Category 5 hurricane that the internet has become. Nowhere is that chaos more evident than at RSAC, if only from a marketing perspective. Everyone has “ground-breaking,” “industry-leading,” and “first ever” product offerings, and this year was no different. But if you can look past the Macho Man impersonations, Formula One cars, and the mesmerizing miasma of the website and show floor, you can see an order forming in the chaos. Change is coming.

Back to step one

RSA CEO Rohit Ghai said we have missed a step in AI development. “We’ve seen it first as a co-pilot alongside of a human pilot and then see it taking over flying the plane.” He said the first step should instead be an advanced cockpit, one that makes it easier for less-trained and less-experienced people to do the work. He pointed out that cybersecurity is an industry with negative unemployment, making it difficult to find experienced technicians to do the work.

Last year, any discussion of ethical development was met with confused stares. This year, the need for ethical AI development is taken seriously, but few can see a profit in it. Cybersecurity VC Rob Ackerman (DataTribe) and Carmen Marsh, CEO of the United Cybersecurity Alliance, were open to suggestions.

“From the perspective of (companies like OpenAI), I understand the reasons to go as fast as they can to develop a true artificial intelligence. The question is, who are the people in the room guiding the process?” said Ackerman. “Once you get a diverse set of advisors working on the problem, then you do the best you can to create something ethical. But right now, we aren’t even doing the best we can.”

Ethics in AI should not be an afterthought

Ethics in AI is an afterthought in development, making adoption a risky proposition. New industry standards, such as ISO/IEC 42001, and rigorous testing of generative AI models, guided by established ethical principles in AI management, can ease apprehension surrounding the advancement of this truly transformative technology.

Huge hazards and pitfalls loom in AI adoption without adequate safeguards and guardrails. There’s potential for perpetuating stereotypes, spreading misinformation, advancing hate speech, hallucinating, exposing private data, and causing unforeseen harm. The risk of legal and financial consequences from inappropriate use of generative AI is real, and the outcomes can be devastating.

Google at loggerheads over support for journalism

Google and the state of California have come to loggerheads over legislation designed to require Google to provide financial support for local journalism. Naturally, Google is fighting this with a PR and lobbying blitz. They and their allies may be missing the point. Whatever the outcome, it could have a profound impact on the democratic process.

The legislation, the California Journalism Preservation Act (CJPA), has been wending its way through the California Legislature for about a year. The text of the bill reads, "This bill … would require … a covered platform (as in Google) to remit a … payment to each eligible digital journalism provider … The … payment would be a percentage, as determined by a certain arbitration process, of the covered platform's advertising revenue generated during that quarter."

History of dispute

A bit of history provides context. Google launched Google News in 2002.
