The Cyber Protection Magazine team (Lou Covey, Patrick Boch and Joe Basques) discussed Wired magazine’s list and decided to make their own. Among the culprits named were Peter Thiel for his influence on elections, generative AI for deepfakes, the creators of disinformation, and Mark Zuckerberg for Facebook’s spread of misinformation and his inaction to stop it.
Lou Covey (LC) – This is Lou Covey with Crucial Tech. Today we have the entire Cyber Protection Magazine crew on this call, with Patrick Boch in Germany and Joe Basques in Texas. And we’re talking about the most dangerous people on the internet. Back in December, Wired magazine published their annual list of what they call “the most dangerous people on the internet.” They chose Elon Musk, Donald Trump, Sam Altman, the Israel Defense Forces, Hamas, Volt Typhoon, Sandworm, ALPHV and Cl0p. Those seem like the obvious choices, but we’ve got some different ideas.
Billionaires
Patrick, who do you think are the most dangerous people on the internet?
Patrick Boch (PB) – If I had to pick one individual, I’d go for Peter Thiel.
LC – Wow.
PB – Yeah. The reason is, he’s a big investor, probably the most important investor in Silicon Valley, which also makes him one of the most important investors in the world when it comes to tech. And his views are, well, let’s put it this way: I don’t agree with the views that he has. And that’s actually my point, because it’s not only his personal opinions that I have a problem with, or that I think make him dangerous. He also represents the concepts and philosophies of quite a few people in Silicon Valley, things like long-termism, I think it’s called. And he’s financing some of the right-wing groups in the States and probably also in other parts of the world. So I think his influence behind the curtain is what makes him dangerous, because he’s not as prominent as, for example, the people listed by Wired magazine.
LC – OK. What is this long-termism?
PB – Essentially, I think it’s making sure the human race survives in the long run, which is a great goal. But for the hardcore long-termism people, that also means they’re willing to sacrifice anything in the short term, and that’s what makes it dangerous from my point of view.
LC – Yeah, it kind of changes the impact of the long-term solution, because if you’ve killed most of the planet by the time the long-term solution arrives, you’ve pretty much defeated your own purpose.
PB – And that’s basically what they’re saying. I think climate change is a topic they have on their agenda, and for them it obviously means saving the planet. But it also means: we can save the planet and live another hundred years, and if we have to kill a billion people now, so what?
LC – And then you have people like Peter Thiel and Jeff Bezos and Mark Zuckerberg who are building bunkers to save themselves. Okay, anything else other than rapacious venture capitalists?
PB – Yes, but not an individual. I think generative AI will become a much bigger problem as the technology evolves. Right now, it’s because of the hallucinations generative AI produces, and people using it might not know it’s hallucinating. But even worse, if you look, say, only a year into the future, things like deepfake audio and video will become a really big problem. Not only because you can do it, but look at the upcoming election in the States in November. If some video of Trump shows up doing something stupid … well, okay, that’s essentially every video he does, but never mind … he can always say it’s a fake.
LC – And you know, that is the problem with long-termism. The whole attitude about the growth of this industry was summed up by Sam Altman at a conference he was at in San Francisco, where he said, the first thing we’ve got to do is build it. Then we’ve got to figure out what to do with it. And then we can figure out how to make it safe. That sounds to me like ready, fire, aim.
LC – So, okay, we’ve got Peter Thiel, and let’s lump the other VCs in underneath him. And I think you’re talking about generative AI in general, not necessarily individual applications of AI, because those have been around for a long time. So, Joe, what have you got?
Disinformation industry
Joe Basques (JB) – Yeah, mine actually piggybacks a lot off of Patrick’s last one there, AI, but for me it was really hard to pick one person. It’s more of a trend, and it’s a trend that a lot of people are involved in. For me, I guess I would use the word deception. There are different versions of it, whether intentional or not, and I think it’s become very, very dangerous. There are nation-state actors. There are individual people. But in general, as I’ve talked with you about a lot of times, there are other parallels, right?
Our publication obviously is focused on security, and I have often said that one of the biggest problems with security is that it gets too complicated. And when it gets complicated, people say, forget it, I’m going to assume my information is already out there, and I’m going to find other ways to just keep an eye on it. I think we’re coming to the same stage with deception: people either don’t have the time or don’t want to put in the energy to fact-check things, to make sure things are true before they share them. And there’s absolutely zero barrier to entry for anyone to do this. They could literally find something they know to be true and simply say, I’m going to put out an article or an opinion in the other direction, and that takes hold and takes off. And with what Patrick was saying about AI, the barrier gets even lower. At that point, you don’t even need people to do it. I think that’s a huge, huge problem, because at some point it’s just human nature. Nature in general, really: we all know water takes the path of least resistance down a hill. People in general, I think, will tend to give up when they have to do the work to dig these things out and figure them out. And I think that’s a huge problem.
SEO industry
LC – That’s a good one. And I’ve got to jump in here with mine. My first group is the SEO industry. As important as the SEO industry is to marketing, advertising, communication and making sure your websites are working right, it’s primarily dedicated to gaming the system. Google started this thing where you could use keywords to help boost your readership on the internet. But SEO is all about fooling people into finding things. It’s gotten to the point where the first two pages of a Google search are virtually useless, because we know for a fact that the first several results are always going to be paid for. So you may be looking for something, but there could be keywords in there that drove you to a result you weren’t looking for.
For example, when we were first building the website for the PR firm I was at 20 years ago, we got a call from an SEO guy. I’d never heard of SEO before, which is search engine optimization, for those who don’t know. And he said, here’s a free lesson on how you can boost your traffic: put the words Barack Obama in your keywords, because Barack Obama is the most searched term on the internet.
“But we don’t do anything about politics. We’re public relations.”
“Yeah, but it’ll get more people coming to your site.”
So that was essentially the pitch, and we didn’t do it, by the way. But it is a deceptive way of boosting your traffic, and that deception is integral to the information generative AI uses when it scrapes the internet. It’s not necessarily about what you’re actually talking about, and that’s where the hallucinations come from. It is what I call an enabling technology, and I don’t think that’s good. You’ve got to be able to have some sort of assurance that what you’re searching for is what you’re actually coming up with, and SEO makes it difficult, if not impossible, to do that. So I’m naming them from an organizational standpoint. I think SEO is more dangerous than ransomware groups or Peter Thiel or VCs, because they enable the process of deceiving people.
Zuckerberg beats Pichai
From an individual standpoint, I had a hard time choosing between two people: either Sundar Pichai, the CEO of Google, or Mark Zuckerberg. And the reason I’m looking at them is because they too are enablers. Those companies know that they’re being used to spread disinformation, and they’re using a very weak argument about freedom of speech to justify that inaction. But I think in Sundar’s position, Alphabet is just way too big for him to have any control over what’s going on in all its various areas. You’ve got Google, you’ve got YouTube, you’ve got the advertising, you’ve got all of the stuff going on under Alphabet. I think Sundar has kind of lost control of it, so he’s not necessarily the most dangerous person.
Mark Zuckerberg just might be the most dangerous person on the internet, simply because he doesn’t take the same road as Elon Musk. He doesn’t build himself up as a major source. He hardly says anything about it; I’ve read his stuff and it’s a bunch of gobbledygook. He keeps a low profile, as opposed to Elon Musk. But the reach that Meta has through Facebook and Instagram and everything else they’re doing is massive. It’s so much bigger than what Elon Musk is doing on X. It’s bigger than what Alphabet is doing, even with YouTube. And he knows it. He’s been given that information again and again, but he refuses to do anything substantial about it because it might interfere with his money. And that’s why I call him the most dangerous man on the internet.
PB – Yeah, between the two choices, I would agree that Mark Zuckerberg is a lot more dangerous, for many of the same reasons you gave. The only question I have is this: is he keeping a low profile just because he can, or is he keeping a low profile because somebody else has taken over in the meantime?
LC – I actually think he’s doing it because he can. There’s the old Asian proverb: the nail that sticks out will be hammered down. And Elon Musk is getting hammered, not only by the people who don’t like him, but on a regular basis by illegal drugs and alcohol. Elon Musk’s problem is, yes, he’s a horrible human being, but he’s an addict, according to Walter Isaacson. He has mental health problems, and I can’t entirely dislike him because of that. But Mark Zuckerberg is a health nut. He doesn’t do drugs. He just makes a lot of money off the pain and suffering of the world. Joe, you got any input on that?
JB – I really can’t follow up on that. I think that’s a period on the end of the statement, so there’s not much left to say.
Nandini Jammi and Check My Ads
LC – Okay. Well, I want to end this on a positive note. I was thinking about this this morning, and I wanted to call out people who are doing God’s work in making the internet safe. The one individual I want to name is Nandini Jammi. She is the head of a nonprofit organization called Check My Ads.
She first came to prominence when she organized an advertiser boycott of Tucker Carlson’s show on Fox that got just about every advertiser to drop him, other than the people selling pills and pillows on his show. She was the one who started taking him down, and in the years since, she has gone after absolutely everybody.
She works primarily with companies in the ad tech space, which businesses hire to spread their ads across social media and the internet. But those businesses don’t necessarily know where the ads go. That’s why some companies we might think of as the good guys end up advertising on InfoWars. They didn’t know, because the ad tech companies were pushing their ads out to all the different places indiscriminately. And they could only stop it when they were made aware of where their ads were showing up, which is what Jammi has been doing for the past few years.
She was also instrumental in taking down Alex Jones and InfoWars. So I just want to give a shout out to her. I think she’s doing great work there.
Mozilla’s Privacy Not Included
The other one is the Mozilla Foundation. We’ve reported on their Privacy Not Included list that they put out every year before Christmas, which is a great service, but they’re doing a lot of stuff under the radar to make the internet safe again. And one of the things they’ve just recently spun out is an open source company called Open Measures. It’s limited in its scope because it takes a look at the normal sources of misinformation or disinformation. Let’s make that differentiation. Disinformation is information that is intentionally created and distributed and the distributor knows it’s wrong. Misinformation is when somebody receives that disinformation and believes it and they start to share it.
What Mozilla has done is put together this website at Open Measures, where you can use the tools to search about a dozen different sources, including Truth Social and Gab and InfoWars and that sort of thing. You can type in a phrase or the name of something that you suspect might be wrong, or that you’ve actually determined is wrong. I was playing with it yesterday, and it will show where it originated within those dozen sources, whether they are sharing that stuff, and how often they do it. I just want to give that shout-out to Mozilla, because they’re thinking outside the box on how to solve the problems of the internet. So those are my two. Anybody got some more?
JB – If anyone’s looking to verify information like that, what’s the website for them to go check it out?
LC – openmeasures.io.
JB – Perfect. That’s good.
LC – So anybody got anybody else?
Open Source and white hats
PB – I’ll take two groups as well. One is the Chaos Computer Club over here in Germany. They’re a bunch of hackers slash activists who point out every company, individual or group doing something dangerous or insecure, what have you, on the internet. Obviously they’re most active over here in Germany and Europe, but they’ve been uncovering a lot of things, which then eventually led, hopefully, to better security or data privacy, which they also look into a lot. The other, pretty large group I’d like to mention is the open source community, because I think hardly any software, even commercial software, would be in existence without the open source community.
LC – Good point.
PB – You know, the downside is what we saw with the Log4j vulnerability two years ago.
LC – Yeah.
PB – When basically an open source component was responsible for one of the biggest security incidents. But the other side of the coin is that they’re doing a lot of good stuff, and they’re helping a lot of people get good software at essentially no cost.
LC – Excellent. And you know, the thing I like about open source, too, is that while there are some bad actors in open source — look at ChatGPT, for example: OpenAI is supposed to be open source, but it’s not really — there are more instances of good open source projects than bad ones. That’s a good one to shout out. Joe, you got one?
JB – In general, I just wanted to say thank you to all the people who do that kind of work to make it easier for other people. Because society at large, like I said, I think we get tired of it. For anybody trying to simplify it, kudos to you. Keep up the battle, keep up the fight. What you do is important.

LC – Good one. Okay, folks, that’s it for our list of the most dangerous people on the internet.
I do want to point out that we are going to have a special issue near the end of this month at Cyber Protection Magazine, where we are going to make predictions for what’s going to happen in cybersecurity in 2024. But here’s the catch: we’re moving to a subscription format, so if you want to get that special issue, you’re going to have to pony up. How much is it, Patrick?
PB – 25 bucks a year.
LC – That’s 25 bucks a year to hear not only what we are predicting, but what some major sources in the cybersecurity community are predicting. So if you want to find that out, get a subscription. And that’s it for this week. This has been Crucial Tech with the entire Cyber Protection Magazine crew. This has been a Footwashing Media production. Thanks for listening.
Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.