Before you post, would you bet your job on it?

In a world awash in AI-generated, intentional misinformation and urban myths, would you bet your job on the reliability of the information you want to share? You might be betting someone’s life on it.

Disinformation (intentional misinformation) has become a major weapon for both sides of every conflict in the world. What was once called propaganda has been transformed by technology, mostly social media, into a virtually immortal beast that can end up turning on its creator.

Take Donald Trump, for example. He turned falsified information into a business model for decades before making it a political weapon. That disinformation is now evidence in multiple civil and criminal cases against him. Even if he loses every case, the information will outlast him and his progeny for decades.

Disinformation means death

On a broader scale, the Russia/Ukraine and Israel/Palestine conflicts weaponized disinformation over social media, and it gained legitimacy when legacy news outlets carried it without vetting it. People in both conflicts have died because of the use of disinformation.

“Modern warfare extends beyond the battlefield,” said Michal Gil, head of product management for the security training company CybeReady. “The attacks we’re seeing can spread beyond what they were intended for and beyond the targets they initially intended to harm.”

Gil said that hackers are no longer just people seeking to create mischief for entertainment or for financial profit. “Now they’re doing this for political as well as financial gain. It is crucial that people, and employees specifically, are aware of these threats.”

According to the Union of Concerned Scientists, the ultimate goal of this kind of disinformation is to eliminate all nuance in any controversial topic; to set the discussion in black-or-white terms, diminishing one side’s complicity in a conflict while expanding the other’s.

“You can’t really know for sure what is real and what is not, especially with the advances in technology,” Gil said. “But you can apply some rules that will help make sense of what you’re getting.”

Trusted sources

Gil said certain sources are better than others, such as government publications, “trusted channels, news sites that you know are normally legit and always, always, always try to avoid forwarding fake news.”

Gil singled out social media as being highly unreliable because “you really don’t know what’s fake and what’s not (on social media) so don’t spread things that might be fake and cause hysteria. Just understand that most of the social media rumors are just unreliable.”

That statement required a specific response and line of questions. What follows is an excerpt of that conversation.

CPM – When you say rely on legitimate sources, trusted channels and widely accepted news sites, that’s not an absolute. That’s a very fuzzy area for a lot of people, because some people will say InfoWars is a trusted source. Some people will say Fox News is a trusted source. Some people say MSNBC is a trusted source. And excuse my language, that’s b—–t, because they’re not. But to some people they are. How do you get to that point where you can attach a label of legitimate?

Michal Gil – Yeah, well, I don’t think anyone can really do that, right? You can only try to do it. So you try and rely on sources that, in the past, you knew were reliable. You can try and rely on government publications and channels that people consider reliable. Other things you definitely know have a good chance of not being reliable, like stuff running on social media. Also, try and Google things. If you see something and you’re not sure about it, try and search for it in other places as well.

CPM – Can I mix in some input here?

Michal Gil – Sure.

CPM – The number one thing, the very first question you need to ask, even before you read it, is, “Would I bet my life on the reliability of this source?” That question brings up a number of others:

Do you recognize and trust the source of the information?

Do others you trust recommend the source?

Is the source close to the origin of the information or is it third/fourth hand?

Is the information accurate?

Does the information reference extreme or balanced positions, and does it avoid using absolute terms (e.g. all, none, only)?

Does the information require you to take a side, or does it allow you to make a decision based on the facts alone?

If the answer to any of these is “No,” then the information is questionable.


For most people, going into that level of detail is a significant pain, but disinformation campaigns have been ramping up to skew results in high-profile 2024 elections, according to Andy Patel, a researcher at WithSecure. “This will include synthetic written, spoken, and potentially even image or video content. Disinformation is going to be incredibly effective now that social networks have scaled back or completely removed their moderation and verification efforts. Social media will become even more of a cesspool of AI and human-created garbage.”

More needed than only training

Failing to train team members to question this information is a problem, according to Gil, and training isn’t a one-time event; it has to be continuous. Gil said it could mean just a few minutes a week, but it must be taken seriously. Still, expecting perfect responses to disinformation from employees at all times is unrealistic. Some dangerous things are going to be shared within organizations and between individuals. Disinformation often carries serious security problems beyond shaping opinions: the shared information itself can contain malware, and sharing spreads it far beyond a single recipient. That means constant monitoring of systems for infiltration rather than a one-off penetration test (pentest).

Erik Holmes, CEO of Cyber Guards, said many companies restrict how often and how deep security testing goes to ensure business operations aren’t affected. “But threat actors don’t have those limitations, right?”

Just as CybeReady recommends constant disinformation training, Cyber Guards recommends daily threat testing.

“Your environment changes over time. So, the two weeks that I was in testing it are very different from today,” Holmes said. “So time is a big issue. Another is just basic humanity. Pentesters are human. Some days they don’t feel well, some days they are tired, and sometimes they are distracted by family issues. So that one-day or one-month or once-a-year pentest can be flawed. Testing consistently, persistently, every day, all year long, removes constraints.”

As this year closes and the new one begins, the philosophy of zero trust extends beyond who we allow into our networks. What we allow into our minds and lives needs to be scrutinized.

Lou Covey

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
