
Pig butchering: Proving the Luddites right

Pig butchering may be proving the Luddites right. The social-engineering scam surpassed ransomware as the most profitable form of cybercrime approximately two years ago. And after government regulation and law enforcement took a big bite out of ransomware returns this past year, public-private partnerships are now taking aim at the new champ.

TL;DR
* Pig butchering eclipses losses from ransomware
* Top targets are tech-savvy people under 50
* Human error trumps cyber awareness
* Public/private partnerships making inroads at dismantling scam operations
* Tips to avoid scams
* Podcast with Arkose CEO

Between 2020 and 2023, scammers reaped more than $75 billion from victims around the world. Approximately 90 percent of the losses came from fraudulent cryptocurrency purchases, according to the US Treasury Department's Financial Crimes Enforcement Network (FinCEN). By comparison, ransomware attacks over the same period harvested $20 billion in ransoms worldwide and cost approximately another $20 billion in recovery.

Getting serious about PQC

It seems like everyone should be concerned, given the level of urgency these companies project, but the fact remains that no one has yet built a quantum computer capable of breaking even standard 256-bit encryption. To that, the industry responds, "Yet."

This year, however, the National Institute of Standards and Technology (NIST) issued the first approved algorithm standards for encryption capable of withstanding quantum-computing attacks. So we thought it would be a good idea to assemble a panel of experts to explain why the rest of us should care.

We extended the invitation to a dozen experts in the PQC industry, as well as to the companies tasked with implementing their products across the internet. Unfortunately, none of the PQC companies accepted once they learned they would be sharing a platform to discuss their approaches. But we did get acceptances from the other group. Our final panel was Karl Holqvist, CEO of Lastwall; Tim Hollebeek, industry strategist for Digicert; and Murali Palanisamy, chief solutions officer of AppviewX.

The three companies both compete with and complement each other's services, but all were active in developing the standards with NIST. Our conversation is available on our podcast, Crucial Tech.

However, there are still questions regarding the urgency, timing, and whether the introduction of quantum computing on an encryption-busting level is even possible in the near future.

Solons scrambling to save AI

State legislatures are scrambling to regulate the cybersecurity and AI industries to protect them from themselves. The leaders of those industries object to the efforts, like drug abusers forced into rehab.

For the past 10 years, the investor world has shoveled money into any company that claimed a focus on AI, but that support is starting to waver. Many AI startups that have received billions in investment are struggling financially, not least the elephant in the room, OpenAI. The most successful AI company in the world is on pace to lose $5 billion this year and, according to CEO Sam Altman, needs more than $8 billion in additional investment this year or it will face bankruptcy within 12 months.

Part of the loss of confidence in AI is the number of failures, which appears to be increasing. The AI Incident Database, which chronicles incidents dating back to 1983, now contains 629 entries. An even bigger reason is that the self-governing rules the industry says it has adopted either don't work or are ignored altogether.

The industry has generally acknowledged its weaknesses. More than a year ago, Altman sat before the US Senate essentially begging the government to regulate the industry. Support for federal legislation has waned, however, and 15 US state legislatures are now considering dozens of bills to regulate the development and use of artificial intelligence.

In a letter from OpenAI Chief Strategy Officer Jason Kwon to California Senator Scott Wiener (author of SB 1047), the company highlighted several reasons it opposed the bill, including the recommendation that regulation should be "shaped and implemented at the federal level. A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards."

The “patchwork” argument has been used to oppose proposed laws in nine states. The problem with that argument is that most federal laws come only after a critical mass of laws at the state level. Historically, once two-thirds of the states pass similar laws, the US Congress considers standardizing them nationally. The US is less than halfway through that process.

The legislators authoring these bills seem to understand that they are not “experts” in technology and have been working with tech companies to make the bills more palatable. In California’s SB 1047, Wiener removed provisions for criminal prosecution and for an entirely new state bureaucracy to enforce the bill before it went to the governor’s desk last week. Instead, the bill merely directs the state attorney general to file civil charges when companies violate its mandates.

Media training offered for cyber industry

“Over the years, the content of news releases, websites and other marketing materials has become formulaic. We know what that formula is, and it hurts company credibility,” said Covey. “The repetition in that content obscures the real story of these companies, and its sheer volume overwhelms the few qualified journalists still working. The use of generative AI makes the problem worse: it produces the same repetitive marketing language because that’s what it was trained on. The result is homogenized messaging that destroys differentiation. This program will restore differentiation and, in the process, make it easier for us to accept and report on industry news. It’s a win-win.”
