Generative AI tools are a blessing and a curse, but cybersecurity may lead the way to reason

Generative AI is still generating press describing it as a blessing, a curse, or both. A recent announcement of GPT-4 improvements was less than stellar and indicates the hype is still overblown. Cooler heads, however, are finding balance and benefit, especially in the cybersecurity field.

OpenAI trumpeted massive improvements over version 3.5, including “40 percent” more factual accuracy. However, since the previous version was found to be inaccurate 9 times out of 10, a 40 percent gain on that baseline still leaves the output wrong most of the time. That inaccuracy stems from the poor quality of what is fed into the AI’s training data: essentially, everything on the internet.
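A quick back-of-the-envelope check makes that gap concrete. The sketch below assumes the “40 percent” figure is a relative improvement and takes “inaccurate 9 times out of 10” as a 10 percent baseline; both readings are assumptions, since OpenAI published no precise baseline.

```python
# Back-of-the-envelope check of the accuracy claim.
# Assumptions (not from OpenAI's announcement): the prior model is
# treated as ~10% accurate ("inaccurate 9 times out of 10") and the
# "40 percent" gain is read as a relative improvement.

baseline_accuracy = 0.10   # assumed: right 1 time out of 10
relative_gain = 0.40       # assumed reading of "40 percent"

new_accuracy = baseline_accuracy * (1 + relative_gain)
print(f"New accuracy: {new_accuracy:.0%}")                   # -> 14%
print(f"Still wrong:  {1 - new_accuracy:.0%} of the time")   # -> 86%
```

Even under the more generous absolute reading, 10 points plus 40 points of accuracy, the output would still be wrong half the time.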

On the hype side of the narrative, fueled by a largely tech-ignorant general press, are reports of people monetizing generative AI to do creative work. For example, marketing communication consultant Larry Lundstrom was reported to be using ChatGPT to produce presentation slide decks for start-up companies at $1,000 a pop. Contacted directly, the consultant told a much different story.

A tool, not a replacement

Lundstrom said he charges the same amount for the decks as he did before he started using AI. All the AI tools do is save him time at the start of the build by generating a framework for the final product.

“It’s producing, in some cases, 50% of the copy that’s used. I would say it builds the platform for you. I still have to go in and massage it to the point where I want the deck to go.”

While some predictions claim that generative AI will destroy demand for human creative work, Lundstrom expects that demand to triple, “at a minimum because this tool is not going to replace it. It’s actually going to allow producers to be able to provide more services.”

The human element in creation is something AIs can’t yet duplicate. Comedian John Oliver, on his weekly HBO news commentary, distinguished between “narrow” and “general” AI. The former has a specific purpose but lacks the truly creative and rational abilities of the human mind. That is where a relatively narrow focus, such as cybersecurity, can be a real winner for these tools.

Cybersecurity leads the way

AI has been such an integral product feature of cybersecurity companies that they have been using .ai domains for almost a decade, including Fletch.ai. The company feeds its proprietary AI thousands of articles to deliver personalized advice on how to protect your organization from relevant threats. On the hardware side, Axiado Corporation is integrating an AI into a security co-processor that sits in a server to monitor, detect and defend against enterprise attacks coming from the edge or the network to the server.
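Neither company has published its internals, so the following is only a hypothetical sketch of the general idea behind such narrowly focused tooling: scoring incoming threat write-ups against a profile of the technology an organization actually runs. Every name, tag, and scoring rule here is invented for illustration; this is not Fletch’s actual method.

```python
# Hypothetical sketch of "narrow" threat-intelligence matching: score
# threat write-ups against an organization's tech stack. Names, tags,
# and the scoring rule are invented for illustration.

ORG_PROFILE = {"windows", "exchange", "vpn", "postgresql"}  # assumed stack

articles = [
    {"title": "New Exchange zero-day exploited in the wild",
     "tags": {"exchange", "windows", "rce"}},
    {"title": "macOS stealer campaign targets crypto wallets",
     "tags": {"macos", "infostealer"}},
]

def relevance(article: dict, profile: set) -> float:
    """Fraction of the article's tags that touch the org's stack."""
    overlap = article["tags"] & profile
    return len(overlap) / len(article["tags"])

# Rank the feed so the most relevant threats surface first.
for a in sorted(articles, key=lambda a: relevance(a, ORG_PROFILE), reverse=True):
    print(f"{relevance(a, ORG_PROFILE):.2f}  {a['title']}")
```

The point of the design is the narrow scope: a small, curated signal matched against a known environment is easier to keep accurate than a model trained on the open internet.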

However, while it may seem a duplicative effort, the two companies do not compete. Fletch focuses on smaller teams with few resources, while Axiado operates at the enterprise level, so their efforts could be complementary. Axiado is currently reaching out to the entire security ecosystem, asking for attack scenarios to enhance its “data lake”. The company reports that since making the announcement last week, the community response has been remarkable.

As mentioned above, gathering vast amounts of data without proper vetting can render a generative AI useless. OpenAI, in its joint announcement with Microsoft, mentioned that it is improving data quality, an effort that wasn’t evident in the first releases. That is why the narrow focus of Fletch and Axiado on existing and potential threats makes their AI potentially more valuable than what Microsoft and Google are promoting.
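In code, the difference between a curated data lake and “everything on the internet” amounts to a vetting gate at ingestion. Here is a minimal, hypothetical sketch of such a gate; the allow-list, document fields, and thresholds are all invented for illustration.

```python
# Hypothetical ingestion gate: admit a document into the "data lake"
# only if its provenance checks out. Domains and fields are invented.

TRUSTED_SOURCES = {"cisa.gov", "nvd.nist.gov"}   # assumed allow-list

incoming_docs = [
    {"source_domain": "cisa.gov",    "text": "Advisory: patch CVE-2023-1234 immediately."},
    {"source_domain": "random.blog", "text": "10 AI tools that will shock you"},
]

def vetted(doc: dict) -> bool:
    """Keep only documents from trusted sources with non-trivial content."""
    return doc["source_domain"] in TRUSTED_SOURCES and len(doc["text"]) > 20

corpus = [d for d in incoming_docs if vetted(d)]
print([d["source_domain"] for d in corpus])   # -> ['cisa.gov']
```

A filter this crude obviously isn’t production-grade, but it illustrates why a threat-focused corpus can stay cleaner than a general-purpose one.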

Microsoft and Google may have learned that lesson after their initial product stumbles. Their latest announcements limit AI use to business applications, like writing the “long emails” everyone loves reading. (Really, doesn’t everyone love curling up with a 5,000-word email generated by a machine?)

Making generative AI unaffordable

Regulations, the desire of internet users to maintain control of their data, and plain old copyright law may put a significant crimp in, if not deal a death blow to, effective, accurate and profitable generative AI products. Facebook and Alphabet learned that lesson when Apple’s decision to let users block tracking cut into their advertising profits. That came in concert with US and EU laws allowing users to require that their data, including proprietary content, be expunged from corporate databases. If applied to generative AI, and that is not out of the question, such laws would shrink the “data lake” available to Microsoft’s and Google’s products, destroying their profit margins in short order.

An alternative proposed in a new book, Data for All, by renowned data scientist John Thompson, might ameliorate the problem of access to data. He proposes that corporations pay users for the use of their data. For some prolific users, including media companies, that could end up being a significant source of income. It also, however, adds another cost center for companies distributing AI products for profit. Short of paying for the data, the companies would have to convince data suppliers that some benefit would arise from their unremunerated donation. To date, that benefit has only been an amorphous promise of “exposure.”

Here come the lawyers

That latter scenario seems unlikely, as large content producers like Getty Images are pursuing infringement lawsuits against generative AI companies that use protected content. It won’t be long before publishers and authors follow suit.

That makes efforts like Axiado’s more likely to succeed, since the company cooperates within a given community toward the mutual goal of protecting customers and clients. But even if massive corporations like Alphabet and Microsoft can embrace such cooperative efforts, gaining the trust of the general public could be tricky.

A recent episode of the irreverent animated show South Park highlighted the unethical use of ChatGPT by using it to write part of the episode, complete with a writer’s credit. Whatever the validity of the argument, the show made a point. Technologies like the internet and social media were introduced as boons to humanity and society, and while they were run by non-profits they proved beneficial. But corporate commercialization of the technology opened the door to assaults on privacy, democracies, and the mental health of users, and not just by criminal elements. What assurances can we expect for the general commercialization of generative AI? Probably none, if history is any measure. But with the focused work of industries like cybersecurity, and a regulatory infrastructure that seems to be only slightly behind the curve, maybe the future isn’t so dire.

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
