In the past few weeks, as various security companies have published studies about the state of cybersecurity, a common theme has emerged: executives at the companies that purchase security tools and services are not sure their purchases have made them any safer. This widespread market sentiment confirms the results of a months-long investigation by Cyber Protection Magazine, which found that marketing practices in the industry are failing to do the job and, in the process, making society less safe.
Every report skews its data toward convincing customers to add that company's tools and services to their budgets. Yet every report also finds that between 60 and 90 percent of managers have significant doubts that the tools they have, and the tools they are considering, will do the job that needs doing. The reasons for that lack of confidence are threefold.
Three reasons for lack of trust
First, the industry is moving fast. Governments are legislating controls and protections faster than normal. Sometimes these rules don't make sense, and many in the industry believe they are holding back innovation and adoption. Criminals and nation-states are stepping up attacks that bypass established protections, and lawsuits for negligence are growing. Second, while understanding of the need for security best practices is at an all-time high, that is mainly because awareness of weaknesses created by work-from-home, generative AI, and news of data breaches is also high. In other words, while understanding of the need is high, inexperience and ignorance are creating new opportunities for attacks.
“Many executives may not exactly understand how (the tools) work,” said Cache Merrill, founder of software outsourcing company Zibtek. “When there is a concern about the functionality of the tools, or when attention is on what the tech teams understand without listening to them, anxiety is experienced. To put it simply, if they cannot see it, they will not put faith in it.”
Carl DePrado, an SMB IT consultant based in New York, said, “The sheer number of cybersecurity products and services can be overwhelming. This contributes to a sense of vulnerability, as they may not feel confident that they have covered all their bases.”
Pig-butchering may be proving the Luddites were right. The social-engineering scam overtook ransomware as the most profitable cybercrime approximately two years ago. After government regulations and law enforcement took a big bite out of returns for ransomware this past year, public-private partnerships are taking aim at the new champ.
TL;DR
* Pig butchering eclipses losses from ransomware
* Top targets are tech-savvy people under 50
* Human error trumps cyber awareness
* Public/private partnerships are making inroads in dismantling scam operations
* Tips to avoid scams
* Podcast with Arkose CEO
Between 2020 and 2023, scammers reaped more than $75 billion from victims around the world. Approximately 90 percent of the losses came from the purchase of fraudulent cryptocurrency, according to the US Treasury Department's Financial Crimes Enforcement Network. In comparison, ransomware attacks in that same period harvested $20 billion worldwide in ransoms and cost approximately another $20 billion in recovery costs.
As we prepare for the annual October holiday season that is Cybersecurity Awareness Month, there is an important question to ask. Are we as a society at the point of fatigue over every new security breach, or are the companies getting breached just too big to fail?
Security giant Fortinet announced a data breach this week that was remarkable in two ways. One was how small the breach was (less than 500GB). The other was how calm Fortinet seemed to be about it. Security gadfly Dr. Chase Cunningham posted a flippant comment about the breach on LinkedIn, encouraging his followers to “buy on the breach.” He pointed out that big public companies, in security or not, generally take a hit on their stock for a day or two after a breach, but the stock rises to new highs as the dust clears. And no one seems to care about the downstream customers whose data might have been stolen.
A 2010 study published in the Journal of Cost Management concluded that a company could be more profitable if it annoyed unhappy customers more than they already were. The success of that strategy increased with the size of the company, according to the study, and when there were fewer competitors for a customer to turn to.
The reasons for the success were simple. If an angry customer decided to go to a smaller provider, new customers always signed up to replace them, simply because the company was the biggest. If there were no smaller competitors, the customer never went away. In the process, the offending company rarely had to pay out to make the customer whole. The study pointed out that companies like United Airlines have notoriously bad customer service, but they rarely lose market share because of it.
Kevin Szczepanski, co-chair of Barclay Damon's data security practice, is much more forgiving.
On Mastodon last week, a poster asked, “Looking for an article or blog or text, that succinctly describes, at grade 1 level English, why ‘if you have nothing to hide, you have nothing to fear’ is a crazy and bad argument, and perhaps also includes what some good arguments are.” We thought that was an excellent idea for a Scam Bucket post. Let's get to the biggest argument against that philosophy.
It may not be scandalous, like a drug addiction, pornography, or drug dealing, but there is personal information that everyone wants to keep from someone: passwords, the account and routing numbers for your bank account, and Social Security numbers.
People who subscribe to the philosophy will readily agree to those limits on what should be public knowledge. What they may not be willing to admit is that they have done something in their life that they are ashamed of. As the Bible proclaims, no one is without sin. No, not one.
Sometimes the error is made in ignorance. Clicking on a link in an email that connects to a porn site. Being rude to a waiter or failing to leave a tip. A road-rage incident someone recorded without your knowledge or consent. Sometimes it was a mistake they made when they were younger and didn't know any better… or knew better and did it anyway.
Then there are things people are totally innocent of but were accused of anyway. An average of 200 to 300 people are arrested for felonies every year and later exonerated, according to the National Registry of Exonerations. If the arrest was reported in the news, it is likely the exoneration was not. So the news of the arrest still exists even though the person did not commit the crime.
John Gilmore, director of research at the data-scrubbing service DeleteMe, related the story of Jordan Greene, a journalist who covered a neo-Nazi rally in North Carolina. Members of the group picked out his face in a photo of the rally, ran it through facial recognition, found where he lived, and showed up at his house holding burning flares.
It seems like everyone should be concerned, based on the level of urgency the companies present, but in the end, no one has yet built a quantum computer capable of breaking even the most standard 256-bit encryption. To that statement, the industry responds with, “Yet.”
This year, however, the National Institute of Standards and Technology (NIST) issued the first approved algorithm standards for producing encryption capable of fighting off quantum computing attacks. So we thought it would be a good idea to put together a batch of experts to explain why the rest of us should care.
The invitation went out to a dozen experts in the PQC industry, as well as to the companies tasked with implementing PQC products into the internet. Unfortunately, none of the PQC companies ended up accepting the invitation once they learned they would be on the same platform discussing their approaches. But we did get acceptances from representatives of the other group. Our final panel was Karl Holqvist, CEO of Lastwall; Tim Hollebeek, industry strategist for DigiCert; and Murali Palanisamy, chief solutions officer of AppViewX.
The three companies both compete with and complement each other's services, but all were active in the development of the standards with NIST. Our conversation is available on our podcast, Crucial Tech.
However, there are still questions regarding the urgency, timing, and whether the introduction of quantum computing on an encryption-busting level is even possible in the near future.
State legislatures are scrambling to enact regulations on the cybersecurity and AI industries to protect them from themselves. And the leaders of those industries object to the efforts, like drug abusers forced into rehab.
For the past 10 years, the investor world has shoveled money into any company that said it was focused on AI, but that support is starting to waver. Many AI startups that have received billions in investment are struggling financially, not the least of which is the elephant in the room, OpenAI. The most successful AI company in the world is on pace to lose $5 billion this year and, according to CEO Sam Altman, needs more than $8 billion in additional investment this year or it will face bankruptcy within 12 months.
Part of the loss of confidence in AI is the growing number of failures. The AI Incident Database, which chronicles incidents dating back to 1983, now contains 629 incidents. An even bigger reason is that the self-governing rules the industry says it has adopted either don't work or are ignored altogether.
The industry has generally acknowledged its weaknesses. More than a year ago, Altman sat before the US Senate essentially begging for the government to regulate the industry. Support for that legislation has waned, however, as 15 U.S. state legislatures are considering dozens of bills to regulate the development and use of artificial intelligence.
In a letter from OpenAI Chief Strategy Officer Jason Kwon to California Senator Scott Wiener (author of SB 1047), the company highlighted several reasons it opposed the bill, including the recommendation that regulation should be, "shaped and implemented at the federal level. A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards."
The “patchwork” argument has been used to oppose proposed laws in nine states. The problem with that argument is that most federal laws come after a critical mass of laws at the state level. Historically, when two-thirds of the states pass similar laws, the US Congress considers standardizing them nationally. The US is less than halfway through that process.
The legislators authoring these bills seem to understand that they are not “experts” in technology and have been working with tech companies to make the bills more palatable. Before California's SB 1047 went to the governor's desk last week, Wiener removed provisions for criminal prosecution and for an entirely new state bureaucracy to enforce the bill. Instead, the bill merely directs the state attorney general to file civil charges when companies violate its mandates.