Social media security

DDoS on X was avoidable, but inevitable

The DDoS attack on X.com this week provided a certain amount of schadenfreude for people less than enamored of Elon Musk. It also rang alarm bells in the cybersecurity community, as that style of attack seems to be making a comeback, and not for financial gain. All indications are that corporations and, in particular, government institutions are not ready to repel attacks motivated by political revenge.

Security intelligence company Fletch.ai this week identified multiple ongoing attacks around the world targeting corporations for a variety of political positions, depending on which side the entities supported. Issues include the Ukraine/Russia war, Palestine/Israel, immigration, tariffs and just plain political leanings.

Musk blamed Ukrainian hackers for the attack on X (aka Xitter), but because DDoS attacks use multiple servers across the globe, it is difficult to identify a particular source. However, Fletch and other analysts identify pro-Russian and pro-Chinese hacktivist groups, using tried-and-true botnets, as being behind most of the attacks.

Cheap and easy

Mithilesh Ramaswamy, a senior security engineer at Microsoft, said the cost of compute and cloud infrastructure is now so low that it creates a low barrier to entry. “Even renting a botnet or using a DDoS-for-hire service is relatively simple and inexpensive.”

Dependency on cloud services also makes organizations vulnerable when they rely heavily on third-party services or microservices architectures, he explained, allowing attackers to exploit integration weak points and unleash large-scale disruptions with targeted floods of traffic.

Cloudflare reported blocking a record-breaking 5.6 Tbps DDoS attack carried out by a Mirai-variant botnet. DDoS attacks rose significantly in 2024, a 53% increase over the previous year, underscoring the growing threat. Fletch reported that the BadBox botnet infected over one million Android devices in 2024: “Despite efforts to disrupt it, the botnet continued to grow, indicating the persistent and evolving nature of DDoS threats.”

A pro-Palestinian hacktivist group known as Dark Storm claimed responsibility for the attack on X.com, which caused major outages on the platform over the course of 48 hours. But that claim has not been verified.

Lax security

Ian Thornton-Trump, a well-respected security expert and current CISO of Inversion6, blamed lax security standards at X.com for the breach. He pointed out that the section of the X.com servers that was hit was not covered by the company's Cloudflare subscription. Cloudflare is primarily a third-party service that provides robust protection against DDoS attacks. The rise of such services helped drive down the popularity of these attacks over the past few years, but an organization still has to turn on the protection as it brings new data services online. X apparently did not do that.
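Even with an upstream service like Cloudflare in place, application-level rate limiting is a common last line of defense for endpoints the edge protection misses. The sketch below is a minimal token-bucket limiter in Python, purely illustrative; the class, parameters, and thresholds are assumptions for the example, not taken from X's or Cloudflare's actual implementation.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter, a common building block in
    application-level flood mitigation (illustrative sketch only)."""

    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity        # maximum burst size
        self.refill_rate = refill_rate  # tokens replenished per second
        self.tokens = float(capacity)   # bucket starts full
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, False if it should be dropped."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # budget exhausted: client is flooding


# A burst of 7 near-instant requests against a bucket of 5:
bucket = TokenBucket(capacity=5, refill_rate=1.0)
results = [bucket.allow() for _ in range(7)]
```

In this sketch the first five requests pass and the rest are dropped until the bucket refills, which is the basic shape of the per-client throttling that edge services apply at far larger scale. The point Thornton-Trump makes still holds: none of this helps if it is never enabled for the service in question.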

Read more...

How social media moderation works

There has been a lot of debate regarding the imposition of moderation on social media and whether that constitutes censorship and a violation of the right to free speech. That argument is specious at best. Offending content on commercial social media is removed only when it threatens profitability.

Most humans moderate their speech. Sometimes we are tempted to speak impulsively in reaction to something that incites strong emotions. People who do not react have what is called “self-control.” Some people don’t have that filter (I’m looking at you, Elon) and blurt out offensive, nonfactual, or dishonest responses. Sometimes they aren’t atypical; they are just selfish people without manners (still looking at you, Elon). Moderating your speech is just a civilized attitude.

Profit motive

When it comes to social media, however, especially for-profit social media, the primary factor is profit. That has been the guiding principle of all social media moderation.

Read more...

Editorial: Jog on, Meta

Mark Zuckerberg made two announcements about major changes at Meta in the past two weeks. The first was the revelation that the company would be creating hundreds of AI-driven bots to interact with users. The second was the announcement that it would stop moderating content, “except for dangerous stuff,” according to a video posted by Zuckerberg. With a certain amount of schadenfreude, we note that Meta had to pull the accounts it had already made as users started engaging with them, finding their inherent flaws and raking them over the coals for how piss-poor their execution was.

Both of these announcements validated a decision I had made earlier this year to start divesting myself of Meta platform accounts. I made the request to deactivate all the accounts (Facebook, Instagram and Messenger) a week before both announcements. I would have done it sooner if I had known it would take Meta 30 days from my request to deactivate everything. This morning, however, I received a text from my partners in Cyber Protection Magazine asking if I thought we should deactivate our Facebook account.

Frankly, I had forgotten we had one, basically because we received zero engagement from the platform despite the amount of content we put up there. That, too, is a result of Meta de-emphasizing legacy media. Of course, I concurred with the team. Sometime in February, we will disappear from Facebook.

Read more...