AI bubble about to pop for cybersecurity?

The artificial intelligence (AI) industry may disappear as quickly as it appeared. That could have significant ramifications for cybersecurity, according to industry watchers, as the technology falls into the trough of disillusionment.

When OpenAI burst onto the scene more than two years ago, Microsoft was a significant instigator of its growth and adoption. Microsoft invested billions in the not-for-profit enterprise for early access to cutting-edge AI technologies and to help accelerate OpenAI's research. It transformed its Azure cloud platform into a leading infrastructure provider for AI development, offering specialized hardware like GPUs and services tailored for machine learning workloads. AI capabilities were embedded across its product suite, and Microsoft Research contributed significantly to AI advancement in computer vision, natural language processing, and deep learning.

All of that came with extreme demands on computing resources. Microsoft began a data center buying spree, both securing existing capacity and building new centers. It even entered into a deal to reopen the notorious Three Mile Island nuclear power plant.

Spree ends

That has all come to an end. As Bloomberg reported last week, the company decided to scale back data center projects in the UK, Australia, and Indonesia. Data center development in Illinois, North Dakota, and Wisconsin is also canceled. All told, Microsoft has walked away from more than 2 GW of planned capacity. That's on top of the news that Microsoft had abandoned two data center projects in the US and Europe, piling onto a February announcement that it was canceling data center leases.

The speed of this pullback was remarkable. In January, the company announced plans to invest $80 billion in AI data centers in 2025. Last week represented a 180-degree turn from that. But what drove the decision?

One reason was the US imposition of draconian tariffs, finalized in the same week, which place significant costs on data center equipment. But the signs of a pullback predated the tariffs.

Not just tariffs

Gartner reported last year that 85% of AI projects fail due to poor-quality or irrelevant data, meaning most companies paying for AI services are not getting what they paid for.

In 2024, free use of AI tools was still growing faster than paid subscriptions, according to reports from Anthropic and OpenAI. Providers are increasingly restricting free use to encourage subscriptions. That may increase revenue, but at present it is nowhere near enough to cover the expense of running the platforms, hence Microsoft's decision to pull back on infrastructure.

Increased usage fees will hit the cybersecurity companies touting AI as a feature. Walking the exhibit halls of any cybersecurity conference shows how integral AI is to their marketing messaging.

That doesn't mean AI is going away, but the prediction that generative AI will take over everyone's job (and eliminate humanity) becomes slightly less plausible.

It's not just happening in the US, either. The MIT Technology Review reported in late March that China's AI infrastructure is largely unused. The country is shutting down data centers and selling off surplus GPUs to stanch the flow of red ink.

Investors leery

Several venture capitalists Cyber Protection Magazine has talked to in the past year have grown increasingly skeptical of the current universal application of AI. Bob Ackerman, speaking on a Crucial Tech podcast, predicted a wholesale collapse of the industry, possibly within 12 months. "What rises from the ashes will be what to invest in."

Danny Jenkins, CEO of the endpoint protection company ThreatLocker, agreed with Ackerman's assessment.

"Yes, it's a bubble, and it's going to burst," he said. "If you look at Intel's market cap, the valuation of a company that sells chips, they're trading at about 1.5 times annual revenue. It's not recurring revenue, it's hardware sales revenue. Compare that to Nvidia, which is 30+ times revenue. To be worth what they're worth, they would have to grow 30 times. The problem is it's not even recurring revenue. It's one-time revenue. So it's definitely a bubble. The world is overreacting. It has no tangible way of getting to the valuation. Eventually that's going to come down."
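Jenkins' comparison boils down to the price-to-sales multiple: market cap divided by annual revenue. A minimal sketch of that arithmetic is below; the dollar figures are hypothetical placeholders chosen for illustration, and only the resulting multiples (roughly 1.5x for Intel, about 30x for Nvidia) reflect his quote.

```python
def price_to_sales(market_cap: float, annual_revenue: float) -> float:
    """Price-to-sales multiple: how many years of current revenue
    the market is pricing into the company's valuation."""
    return market_cap / annual_revenue

# Hypothetical figures in billions of USD, chosen only so the
# multiples land near the ones Jenkins cites (~1.5x vs. ~30x).
intel_multiple = price_to_sales(market_cap=80, annual_revenue=53)
nvidia_multiple = price_to_sales(market_cap=1800, annual_revenue=60)

print(f"Intel:  trading at about {intel_multiple:.1f}x annual revenue")
print(f"Nvidia: trading at about {nvidia_multiple:.0f}x annual revenue")
```

His point is that a multiple that high implies revenue growth the underlying business (one-time hardware sales, not recurring revenue) has no tangible way to deliver.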


Jenkins points out that even a company like Google will have to rethink how it moves forward with AI. Right now, he said, Google makes its money by selling placement on its search pages. However, "I find myself clicking into websites less because Google's giving me the answer I want with a gen AI box on top. I don't need to go into the search below, and that is going to hurt Google's advertising revenue."

FOMO driving interest

Jenkins said many companies are moving ahead with AI investment out of fear of missing out.

"They think, 'I missed out on the internet boom.' That's caused AI stocks to go through the roof. But at the end of the day, when you look at a company that's trading at 30 times, you start questioning what is going on."

Ben Colman, CEO of the deepfake detection company Reality Defender, has a more measured view. He agrees that AI technology is not where the industry wants us to believe it is, but that doesn't mean it won't get there.

Making the case

"We thought that the 2020 election was going to be a deepfake election, but there weren't any deepfakes. We had a lot of institutions asking us to prove this is even a problem. The data wasn't there. And that was because, for the most part, the tools needed to make a great deepfake required a lot of cloud compute." Colman said the technology has now evolved to the point that it can run on a high-end laptop without the cloud, and is therefore a lot cheaper. Potential customers are no longer saying it isn't a problem, he said, but they want proof that a defensive technology exists.

There is a general awareness that deepfakes developed using generative AI are a potential concern, with an emphasis on potential.

Recent threat assessment reports indicated a spike in attempted fraud using deepfakes in November and December 2024, but there have been fewer than half a dozen successful attempts since the technology became viable. Colman's company represents one of the few defensive technologies that has developed alongside the malicious use of AI. The lack of successful breaches using AI may explain why criminals are resorting to tried-and-true methods: simply catching a victim not paying attention.

If, or when, the current AI industry collapses, that is not the end of the technology. AI remains an effective tool for bridging skills gaps when human workers are in short supply. However, implementing AI tools demands skills of its own, slowing any effective adoption.

Lou Covey

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women's fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
