Commentary: Move fast and break AI

In 2012, Mark Zuckerberg wrote a letter to investors that explained the ethos of the tech sector: “Moving fast enables us to build more things and learn faster. However, as most companies grow, they slow down too much because they’re more afraid of making mistakes than they are of losing opportunities by moving too slowly. We have a saying: ‘Move fast and break things.’ The idea is that if you never break anything, you’re probably not moving fast enough.”

It really wasn’t a new idea for technology. Zuckerberg merely admitted what the sector has been doing for half a century. The ethos was even celebrated by investors; as Roger McNamee, an early Facebook investor, put it in a 2018 Frontline documentary, “It wasn’t that they intended to do harm so much as they were unconcerned about the possibility that harm would result.”

Breaking society

Facebook did break things. It broke governments, democracy, mass media, advertising, capitalism, interpersonal relationships, small businesses, and civil rights. That ethos drove Google from “don’t be evil” to “do the right thing (for Alphabet).” Self-preservation, at any cost, is now the ethos of all for-profit technology companies. (Yes, you can say that of any capitalist effort, but I’m trying to stay focused here.)

That ethos is now moving into society as a whole. People want to tear down basic societal structures as fast as possible with little available to replace them. But that might not be a bad thing, overall, especially with artificial intelligence (AI).

In the 40 years I’ve been working in tech, I have never seen a “hockey stick” like generative AI, and it has been with us for less than two years. The difference is that previous AI technology had specific applications, like semiconductor design. Today’s iteration has no real focus other than to “move fast and break things.” It isn’t even profitable yet, and investors are pumping billions into it. Now may be the time to apply that ethos to AI itself and break it. It seems to be happening.

Regulation looming

First, we have governments seriously considering regulations that would hobble the growth of AI. The idea is not without proponents in the AI industry; even Sam Altman of OpenAI has asked for government regulation, albeit without specifics. The problem is that government is filled with people who barely have college educations (some without high school educations), none of them holding degrees in any technological discipline. What they produce will likely reflect that lack of insight and understanding, harming the innocent while protecting the guilty. But even before it is put into effect, it does impose some barriers to development.

Next is the nascent deepfake and disinformation detection industry. Like application-specific AI, these companies have been around for a while doing other things, such as plagiarism checking. The current tools are about as effective as what ChatGPT and Google have produced, but without the devastating effects of hallucinations and the spread of misinformation, so it’s a start.

Tech to the rescue?

Then we have companies trying to ensure the data used to train an AI is clean and safe. I recently interviewed Venkat Rangapuram at Centific about the company’s efforts, an interview that prompted this commentary, and I found its approach to be fairly devastating for the generative AI industry. The current growth of the Microsoft, OpenAI and Google efforts is based on the ability of noxious individuals to do harmful things with AI. I’m not just thinking about deepfakes, but about the notion that AI can replace humans in creative endeavors and personal interaction. I’m just waiting for someone to produce a personal chatbot to talk to corporate chatbots so we don’t have to deal with them. But a clean-data AI will be as boring as an unbiased news report, which would mean a deep cut in the profits of a big generative AI like ChatGPT.


Fourth, we have legacy industries putting the brakes on AI through the hardware that houses it. I have limited information, but researchers are investigating the potential of putting governors on GPUs that would limit the kind of information an AI can scrape from the internet. From my admittedly uneducated position, that seems a devastating possibility for the industry, and a well-deserved one.

Swifties can do it

But finally, the best possible effort to break the technology paradigm lies with us. You’ve no doubt read about the AI-generated deepfake porn featuring Taylor Swift. She’s already considering legal action, which is justified, but there are better (although more difficult) ways of stopping that kind of debauchery: just quit the platforms spreading the filth.

I was kicked off X for dissing Elon Musk shortly after he bought a major interest in the platform. I also deleted the Facebook app on my devices about a year ago and pretty much ignore it now. My small group of friends and family still use Instagram and Messenger to communicate with me, but I am seriously considering taking the plunge and shutting that down, too. I’m much more comfortable with LinkedIn and Mastodon. 

Imagine what would happen if a quarter of the Swifties on Facebook, Instagram, WhatsApp, and X decided to do the same thing. Ad revenues would tank. Well, in X’s case, it would have to start paying advertisers. The platforms might also decide that the AI crap they host is a very bad idea and do something about it.

So in the end, we can’t wait for technology to break technology in the short term. As the Bard said (not the AI, but Billy Shakes), “The answer… is in ourselves.”

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
