A brief history of bots

Bots have been around for more than half a century, automating repetitive tasks and providing services on early internet platforms. The first was ELIZA, developed as a research project at the Massachusetts Institute of Technology (MIT) in 1966, with the goal of simulating conversation with a human being. ELIZA conversed with users, although it did not understand what they were saying. Artificial intelligence chatbots are far more sophisticated versions of ELIZA, but they still lack human comprehension.
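
ELIZA's "conversation" was not comprehension but simple pattern matching: match a keyword, reflect the user's pronouns, and slot the fragment into a canned response. Below is a minimal sketch of that technique in Python; the rules and reflections are illustrative examples, not ELIZA's actual DOCTOR script.

```python
import re

# Pronoun swaps let the bot mirror a user's words back (illustrative subset).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: a regex plus a response template. ELIZA's real script
# was far larger, but the mechanism was the same.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # fallback when no keyword matches

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

The bot never models meaning; it only rearranges the user's own words, which is why ELIZA could hold a conversation without understanding one.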

Bots, not replacements

The purpose of ELIZA was to determine whether computers could replace psychoanalysts. Consequently, it was the first time the prediction that computers could replace humans had some hard evidence behind it. Today, there are mental-health AI applications with results not much better than ELIZA's, yet the market for them is projected to reach $8 billion by 2032.

The earliest broad use of bots came in 1988 on Internet Relay Chat (IRC), where they automated user-list management and searches and provided services like weather updates and game scores. But they were not known as bots at the time. They were called automations, and they still required a human interface to operate.
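
Those early IRC automations were thin loops over IRC's line-oriented text protocol: read a line, answer the server's keepalives, and reply to a trigger word. A minimal sketch in Python follows; the server name, channel, and !weather trigger are hypothetical, and a production bot would wait for the server's welcome message before joining.

```python
import socket

SERVER, PORT, CHANNEL = "irc.example.net", 6667, "#demo"  # illustrative values

sock = socket.create_connection((SERVER, PORT))
# Register a nickname and join a channel, per the IRC protocol.
sock.sendall(b"NICK demobot\r\nUSER demobot 0 * :demo bot\r\nJOIN #demo\r\n")

buffer = ""
while True:
    buffer += sock.recv(4096).decode("utf-8", errors="replace")
    while "\r\n" in buffer:                      # process complete lines only
        line, buffer = buffer.split("\r\n", 1)
        if line.startswith("PING"):              # keepalive the server requires
            sock.sendall(("PONG" + line[4:] + "\r\n").encode())
        elif "PRIVMSG" in line and "!weather" in line:
            # A real bot would fetch a forecast here; the reply is canned.
            sock.sendall(f"PRIVMSG {CHANNEL} :Sunny and 22C\r\n".encode())
```

Everything such a bot does is string matching on protocol text, which is why these automations needed a human to handle anything the patterns missed.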


AI bubble about to pop for cybersecurity?

The artificial intelligence (AI) industry appeared almost overnight, and it may disappear just as quickly. That could have significant ramifications for cybersecurity, according to industry watchers, as the technology falls into the trough of disillusionment.

When OpenAI burst on the scene more than two years ago, Microsoft was a significant instigator in its growth and adoption. Microsoft invested billions in the not-for-profit enterprise for early access to cutting-edge AI technologies and to help accelerate OpenAI's research. It transformed its Azure cloud platform into a leading infrastructure provider for AI development, offering specialized hardware (such as GPUs and custom AI accelerators) and services tailored for machine learning workloads. AI capabilities were embedded across its product suite, and Microsoft Research contributed significantly to AI advances in computer vision, natural language processing, and deep learning.

All of that came with extreme demands on computing resources. Microsoft began a buying spree in data centers, both to secure existing capacity and to build new centers. It even entered into a deal to reopen the notorious Three Mile Island nuclear power plant.

Spree ends

That has all come to an end. As Bloomberg reported last week, the company decided to scale back data center projects in the UK, Australia, and Indonesia. Data center development in Illinois, North Dakota, and Wisconsin has also been canceled. All told, Microsoft has walked away from more than 2 GW of planned capacity. That's on top of the news that Microsoft had abandoned two data center projects in the US and Europe, piling onto a February announcement that it was canceling data center leases.


AI making life hard for consumers and cybersecurity

The AI industry is supposed to make life easier for humanity. Since it first burst onto the scene, it has, arguably, made life more difficult. Consumers and the cybersecurity industry, in particular, are struggling professionally, emotionally, and mentally to understand the value, if not the efficacy, of the technology.

Cyber Protection Magazine evaluated three surveys, from Armorcode, Arkose Labs, and Appdome, over the past few weeks. All three agreed that the public sees AI as untrustworthy, full of false promises, and something to be feared. In spite of that image, customers believe they must adopt and adapt to the technology, even if they don't want to.


Breach fatigue or too big to fail?

As we prepare for Cybersecurity Awareness Month, the annual October observance, there is an important question to ask. Are we as a society at the point of fatigue over every new security breach, or are the companies getting breached just too big to fail?

Security giant Fortinet announced a data breach this week that was remarkable in two ways. One was how small the breach was (less than 500 GB). The other was how calm Fortinet seemed to be about it. Security gadfly Dr. Chase Cunningham posted a flippant comment about the breach on LinkedIn, encouraging his followers to "buy on the breach." He pointed out that big public companies, in security or not, generally take a hit to their stock for a day or two after a breach, but the stock rises to new highs as the dust clears. And no one seems to care about the downstream customers whose data might have been stolen.

A 2010 study published in the Journal of Cost Management concluded that a company could be more profitable by annoying unhappy customers even further. According to the study, the success of that strategy increased with the size of the company and when there were fewer competitors for a customer to turn to.

The reasons for the success were simple. If a pissed-off customer decided to go to a smaller provider, there were always new customers who signed up simply because the company was the biggest. If there were no smaller competitors, the customer never went away. In the process, the offending company rarely had to pay out to make the customer whole. The study pointed out that companies like United Airlines have notoriously bad customer service, but they rarely lose market share because of it.

Kevin Szczepanski, co-chair of Barclay Damon's data security practice, is much more forgiving


Crossing the Compliance Chasm

There is a wide gap between regulatory compliance mandates and their practical implementation and enforcement, a gap I like to call the "Compliance Chasm". That chasm is defined by the tension between activity to protect consumers and consideration of the economic and operational impact on business enterprises. Finding that balance requires thought, not the more popular whack-a-mole enterprise strategy that merely reacts to each new compliance mandate.

The frequency and size of regulatory fines for non-compliance are rising. In January 2023, Meta was fined $418 million for GDPR violations by its Facebook and Instagram properties. Ireland's Data Protection Commission followed up in May of that same year with a $1.3 billion fine for additional violations. And those were just the latest in a string of fines imposed on web giants, including Google and Amazon.

The targets of those fines might be justified in saying compliance is an impossible task. By 2025, the volume of data created, captured, copied, and consumed worldwide is forecast to reach 181 zettabytes. Nearly 80% of companies estimate that 50%-90% of their data is unstructured: text, video, audio, web server logs, or social media activity.


Election security is not a technology problem. It is how naive we are

When it comes to election security, the technology we use to vote and count those votes is not the problem. The problem is how naive we are.

Election security has been at the forefront of daily news cycles for more than a decade. The concerns about illicit use of technology to input and count votes turned out to be largely overblown. Every U.S. state other than Louisiana uses paper ballots, matching the practice of every other western democracy. Lawsuits have bankrupted people and organizations that claimed the technology was changing votes. Those who complained the loudest about election interference are now facing prosecution for those crimes.

Now the tech focus is on the use of artificial intelligence to create deepfake video and audio. A recent pitch from Surfshark,
