The future of AI is distributed and democratic, not controlled by corporations, according to renowned security technologist Bruce Schneier. In his talk he revealed work on developing “public” LLMs. Developments in the past few weeks suggest he may be right, and that would be good news for verifiable information.
Speaking at the RSAC Conference in San Francisco last week, Schneier talked of trust and how we give it to people, strangers, organizations, and technology. His description of that process predicted the development of artificial intelligence controlled almost exclusively by the user, rather than the dystopian corporate AI replacing humanity.
Two levels of trust
Schneier broke trust into two categories: interpersonal and social. The first is trust we give to people we know, based not on their specific actions but on what we know about them. Social trust, however, is based on reliability and predictability. This is the trust that strangers, organizations, and technology engender.
Both categories are controlled by constraining mechanisms, including common morality, reputation, laws, and security technology. While there are outliers that violate these mechanisms, by and large this trust is what keeps society functioning. “The first two (morality and reputation) are person-to-person constraints,” he said. “The last two compel trustworthiness.”
He gave an example of the difference between interpersonal and social trust. “You can give a package to a friend and ask him to deliver it across town, or you can put it into a post office. The first is personal trust and the latter is social trust.”
Schneier said trouble arises when we confuse the two. “Corporations benefit from this confusion.”
Not our friends
“We might think of corporations as friends, but they provide a service for profit. They will be as immoral as they can get away with. We are about to make that categorical error with AI. We will think of them as a friend when they are not. They are, in fact, working for their owners to make a profit.”
This reality is problematic for what is known as Web 2.0, Schneier explained, because people are using AI applications for search instead of search engines. In fact, Google has found that the AI box at the top of a search page is getting more clicks than the search results themselves. Schneier pointed out that corporate websites will disappear because information about companies will be more readily available from an AI agent, presented in a conversational format. That format makes us more likely to trust an AI agent, even though it is not trustworthy.
“Did your AI chatbot recommend an airline because it was best for you, or because the airline paid a kickback to the AI company? When you ask it to explain a political issue, did it bias the explanation to a political party that gave the most money? Unlike search, the conversational interface will help the AI hide its agenda.”
Personal AI
The next step in the development of AI, he said, is agentic AI as a personal assistant to the user, creating a greater intimacy with AI than with any other technology. Because of that, Schneier said we need trustworthy AI where we understand its behavior, limitations, biases, and goals, “that won’t secretly betray your trust to somebody else.”
He pointed back to the original premise that trust is based on reliability and predictability and is enforced by laws and technology. That is at risk with current AI products. They are not only manipulated for corporate interests but are also vulnerable to adversarial players, a.k.a. hackers. “If we can’t guarantee it can’t be hacked, it can’t be trusted.”
He proposed a research question for the next version of the web. “Can we build an integrous system out of non-integrous parts?”
He pointed to the digital wallet many people use on their phones, which holds payment systems, event tickets, and more. “Instead of your data being spread through multiple organizations… it is all on your device. And you can give read-write permission to various apps on your phone.” This distributes power to users rather than to organizations that may or may not have the best interests of the user in mind.
He said he and his team are already experimenting with demonstration products along these lines. Much remains to be done before this becomes widely available, including convincing corporations to give up control of our data and enacting government regulations that enforce that shift.
This proposal could be a technological advancement greater than the invention of the internet itself. It would face enormous pushback from organizations that profit off the personal data of users, but two fairly recent events show it is possible.
Public LLM
First, DeepSeek showed that AI does not have to be expensive or massively large. Second, and most recently, OpenAI’s Sam Altman announced that he would abandon the effort to convert OpenAI into a for-profit company. For the company to continue, it will have to find a way to convince individual users to buy subscriptions. Schneier’s system is a personal AI the user trains and controls, rather than a massive corporate server. This increases the security of the AI by creating multiple small targets for hackers, rather than a few large targets with single points of failure.
It also undercuts the ability of large organizations to manipulate data to move people in predetermined directions. They will have to rely on telling the truth, or on people and organizations dedicated to finding and reporting truth. That could result in a renaissance of the news media industry.
News media revival
There is a common trope that people don’t trust “the media,” as though it were some monolithic source of misinformation. While people do not fully trust certain forms of media, they trust corporations, politicians, and social media even less. Pew Research and the American Press Institute report that people between the ages of 16 and 40 trust print news over other sources.
The digital “active wallet” Schneier describes would be able to access the sources users trust most, bypassing traditional advertising. Instead of paying for advertising on search engines, companies would need to associate themselves with trustworthy sources of information, as they do with public broadcasting. Advertising would return to its original purpose of showing support for democratic institutions rather than just trying to increase sales. Truth would become profitable.
The move away from search engines to LLM apps is already underway. Apple announced plans this week to replace Google search in Safari with a commercial AI, either from OpenAI or Perplexity. Those options are “secret agents” for corporations, as Schneier describes them, but the move opens the door to Schneier’s LLM. Altman’s announcement provides a clear path for his company to market a “personalizable” LLM.
Lou Covey is the Chief Editor of Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.