A brief history of bots

Bots have been around for more than half a century, automating repetitive tasks and providing services on early internet platforms. The first was ELIZA, developed as a research project at the Massachusetts Institute of Technology (MIT) in 1966 with the goal of simulating conversation with a human being. ELIZA conversed with users even though it did not understand what they were saying. Today's artificial intelligence chatbots are far more sophisticated versions of ELIZA, but they still lack human comprehension.
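ELIZA's trick was simple pattern matching: it scanned the user's input for keywords and reflected the user's own words back as a question. A minimal illustrative sketch of that technique (the rules below are made up for illustration, not Weizenbaum's original script):

```python
import re

# ELIZA-style responder: match a keyword pattern, then echo the
# user's own words back inside a canned template. There is no
# understanding of meaning anywhere in this loop.
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I),     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no rule matches

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I need a vacation"))  # -> Why do you need a vacation?
```

Users in 1966 found this convincing enough to confide in, which is precisely the point: fluent output does not require comprehension.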

Bots not replacements

The purpose of ELIZA was to determine whether computers could replace psychoanalysts. Consequently, it was the first time the prediction that computers could replace humans had some hard evidence behind it. Today, mental-health AI applications deliver results not much better than ELIZA's, yet their market is projected to reach $8 billion by 2032.

The earliest broad use of bots came in 1988 on Internet Relay Chat (IRC), where they automated user list management, ran searches, and provided services like weather updates or game scores. But they were not known as bots at the time. They were called automations, and they still required a human interface to operate.
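Those IRC automations typically watched channel messages for a trigger command and replied with a canned service. A hypothetical sketch of that dispatch pattern (command names and stub replies are invented; a real bot would also speak the IRC protocol over a socket, omitted here):

```python
from typing import Optional

# Stub service handlers; a real automation would query a weather
# source or the channel's user list instead of returning fixed text.
def handle_weather(args: str) -> str:
    return f"Weather for {args or 'your area'}: unavailable (stub)"

def handle_users(args: str) -> str:
    return "Current users: alice, bob (stub)"

COMMANDS = {"!weather": handle_weather, "!users": handle_users}

def dispatch(message: str) -> Optional[str]:
    """Return a reply if the message starts with a known command, else None."""
    parts = message.split(maxsplit=1)
    if not parts:
        return None
    handler = COMMANDS.get(parts[0].lower())
    if handler is None:
        return None  # ordinary chat; the automation stays silent
    return handler(parts[1] if len(parts) > 1 else "")
```

The same trigger-and-respond loop still underlies most chat-service bots today.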


Zero Trust: easy concept, hard to implement

Last week, Dr. Zero Trust, AKA Dr. Chase Cunningham, posted on LinkedIn that he was fed up with people who say they don't understand Zero Trust. To a certain extent, I feel his frustration.
Journalists understand the concept. We have a decades-old saying, “If your mother says she loves you, check it out.” It doesn’t get more zero trust than that.
The problem is that while it's easy to understand as a concept, it isn't easy to build a zero trust infrastructure, especially with the misleading gobbledygook most cybersecurity companies put out. Cunningham says there are hundreds of books and articles on the subject. He's right, of course. The question is, which one do you choose?
At the RSAC Conference, we sat down for a brief talk with Dale Hoak, CISO of RegScale, about how easy it is to understand Zero Trust.


Schneier predicts “public” LLMs

The future of large language models (LLMs) will be distributed and democratic, according to renowned security technologist Bruce Schneier, not controlled by corporations. Developments in the past few weeks indicate he may be right.

Speaking at the RSAC Conference in San Francisco last week, Schneier talked of trust and how we give it to people, strangers, organizations, and technology. His description of that process predicted the development of artificial intelligence controlled almost exclusively by the user, rather than the dystopian corporate AI replacing humanity.


An encryption primer: Don’t wait

Encryption became a hot topic in the news over the past month. The United Kingdom, Sweden, France, and the EU are considering requiring "back doors" in encryption protections. The "Signalgate" scandal in Washington, DC has most people asking, "What is this encryption stuff?" So we decided to provide a primer on the state of encryption today.

While the technology behind encryption is complex, it is not new. The basic algorithms have been with us for decades, silently running on devices and servers, invisible to the user. The purpose is basic: to keep data safe from prying eyes, like criminals and nation-states.

Encryption is also a good way of saving money and not just in avoiding ransoms. Insurance companies often offer up to 15% premium discounts to businesses demonstrating strong security practices, including proper data encryption. Encryption significantly reduces the risk of data breaches and their associated costs.


AI bubble about to pop for cybersecurity?

As quickly as the artificial intelligence (AI) industry appeared, it may disappear just as quickly. That may have significant ramifications for cybersecurity, according to industry watchers, as the technology falls into the trough of disillusionment.

When OpenAI burst on the scene more than two years ago, Microsoft was a significant instigator in its growth and adoption. Microsoft invested billions in the not-for-profit enterprise in exchange for early access to cutting-edge AI technologies, helping accelerate OpenAI's research. It transformed its Azure cloud platform into a leading infrastructure provider for AI development, offering specialized hardware (like GPUs and custom accelerators) and services tailored for machine-learning workloads. AI capabilities were embedded across its product suite, and Microsoft Research contributed significantly to AI advancement in computer vision, natural language processing, and deep learning.

All of that came with extreme demands on computing resources. Microsoft went on a data center buying spree, both securing existing resources and building new centers. It even entered into a deal to reopen the notorious Three Mile Island nuclear power plant.

Spree ends

That has all come to an end. As Bloomberg reported last week, the company decided to scale back data center projects in the UK, Australia, and Indonesia. Data center development in Illinois, North Dakota, and Wisconsin is also canceled. All told, Microsoft has walked away from more than 2 GW of planned capacity. That's on top of the news that Microsoft had walked away from two data center projects in the US and Europe, piling onto a February announcement that it was canceling data center leases.


EU’s DORA: Who will stand up for protection?

The EU's Digital Operational Resilience Act (DORA) went live in January. The legislation's goals seem to conflict with the US administration's willingness to ignore technology security standards. The question is: Who will stand up to protect corporate and consumer data?

DORA squarely targets the stability and resilience of the financial services sector. It ensures financial institutions can respond to, withstand, and recover from ICT-related threats and disruptions, and it requires robust strategies and policies for managing ICT risk.
Arnaud Treps, chief information security officer at Odaseva, said, “DORA is very different from previous regulation where you have to change where you operate. DORA is about having proper backups, the capability to restore quickly, and building redundancy.”

Europe takes the lead

But does the US rejecting data privacy regulation mean walling America off from the rest of the world? Meta has threatened to potentially limit


How social media moderation works

There has been a lot of debate over the imposition of moderation on social media and whether it constitutes censorship and a violation of the right to free speech. That argument is specious at best. Offending content on commercial social media is removed only when it threatens profitability.

Most humans moderate their speech. Sometimes we are tempted to speak impulsively in reaction to something that incites strong emotions. People who do not react have what is called "self-control." Some people don't have that filter (I'm looking at you, Elon) and blurt out offensive, nonfactual, or dishonest responses. Sometimes they aren't atypical; they are just selfish people without manners (still looking at you, Elon). Moderating your speech is just a civilized attitude.

Profit motive

When it comes to social media, however, especially for-profit social media, the primary factor is profit. That has been the guiding principle of all social media moderation.
