ChatGPT One Year On: The good, the bad and the unknown

ChatGPT first launched on 30th November 2022, reaching a record-breaking one million users within five days. Since then it has only continued to climb to new heights, with estimates putting its user base at around 180.5 million as of November 2023.

Despite its popularity, ChatGPT has been a contentious topic. On one side, you have advocates who argue that generative AI can boost efficiency and productivity, identifying trends to protect us from cyber threats better than any human ever could. In the opposite corner, you have voices hugely concerned about data privacy and the mass layoffs we could experience, and in some cases already are experiencing, as a result of the implementation of generative AI.

With ChatGPT only turning one year old this week, the technology is still in its infancy. There is much more for us to discover about its capabilities, uses and drawbacks, leaving a large grey area around generative AI that many remain cautious of.

In celebration of the technology’s first year, we spoke to six security experts about the good, the bad and the unknown of generative AI to determine what the next year of ChatGPT’s life could have in store. 

The good

Jason Keirstead

AI isn’t a new conversation in cybersecurity – many vendors have had AI integrated into their solutions for years. But generative AI has brought huge new benefits to the industry over the past 12 months.

Jason Keirstead, VP of Collective Threat Defense at Cyware, explains that “AI remains a powerful tool for cybersecurity defenders. AI, for example, can be used to generate new detections based on historical data and threat intelligence, helping organisations keep up with the evolving threat landscape. AI can also be used to orchestrate detections across multiple security tools, such as intrusion detection systems (IDS), security information and event management (SIEM) systems, and firewalls. This can help organisations get a more complete picture of their security posture and identify threats that may be missed by individual tools.”
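
To make that orchestration idea concrete, here is a minimal sketch that normalises alerts from an IDS, a SIEM and a firewall into one schema and asks a model to correlate them. It assumes the official openai Python package and an OpenAI-style chat completions endpoint; the Alert schema and the sample alerts are invented for illustration, not Cyware's implementation.

```python
# Minimal sketch (illustrative only): normalise alerts from several
# tools into one schema, then ask an LLM to correlate them.
# Assumes the official `openai` package and an OPENAI_API_KEY env var.
import json
from dataclasses import dataclass, asdict

from openai import OpenAI


@dataclass
class Alert:
    source: str       # hypothetical: "ids", "siem" or "firewall"
    timestamp: str    # ISO 8601
    severity: str
    description: str


def correlate_alerts(alerts: list[Alert]) -> str:
    """Ask the model whether disparate alerts form one attack pattern."""
    client = OpenAI()
    payload = json.dumps([asdict(a) for a in alerts], indent=2)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a SOC analyst. Correlate the alerts below "
                        "and flag likely multi-tool attack patterns."},
            {"role": "user", "content": payload},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    alerts = [
        Alert("ids", "2023-11-30T09:14:00Z", "medium",
              "Port scan from 203.0.113.7"),
        Alert("firewall", "2023-11-30T09:16:30Z", "low",
              "Blocked outbound connection to 203.0.113.7:4444"),
        Alert("siem", "2023-11-30T09:18:02Z", "high",
              "Repeated failed logins for an admin account"),
    ]
    print(correlate_alerts(alerts))
```

The point is the normalisation step: once the IDS, SIEM and firewall speak the same schema, a single triage pass can surface cross-tool patterns that each tool would miss on its own.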

Matt Hillary

With this complete picture of a company’s security posture, misconfigurations and vulnerabilities can be detected and reported “much faster than human beings ever could,” adds Matt Hillary, Chief Information Security Officer of Drata. “When configured and trained accordingly, AI can help suggest and even support the remediation of vulnerabilities and response to security alerts. Using AI in this manner also helps mitigate the risks associated with potentially missed analyses in routine tasks and exhaustive manual processes that too often plague traditional methods.”
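
As a flavour of what machine-speed detection can look like, the sketch below runs a classic anomaly detector over login telemetry and flags outliers for analyst review. The feature set, sample data and contamination threshold are invented for the example; this is one common pattern, not Drata's method.

```python
# Minimal sketch (illustrative only): flag anomalous login events far
# faster than manual review, using a classic anomaly detector.
# Features and thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-event features (hypothetical):
# [hour_of_day, failed_attempts_last_hour, new_device (0/1), geo_distance_km]
events = np.array([
    [9,  0, 0,    2],
    [10, 1, 0,    5],
    [11, 0, 0,    1],
    [3, 12, 1, 8400],   # 3 a.m., many failures, new device, far away
])

model = IsolationForest(contamination=0.25, random_state=0).fit(events)
for event, label in zip(events, model.predict(events)):
    if label == -1:  # -1 marks an outlier
        print("Flag for analyst review:", event)
```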

Even with the efficiencies that AI brings, cybersecurity must be a team effort. The technology helps here too, enabling easier collaboration within cybersecurity teams and with other stakeholders.

Michal Lewy-Harush

As Michal Lewy-Harush, Chief Information Officer at Aqua Security, shares: “Generative AI and ChatGPT is already proving to be a very powerful tool to foster collaboration and bridge the gap of resources between Dev and Security teams. GenAI can be utilised to automatically generate prescriptive remediation steps for misconfigurations and vulnerabilities across container images and other artefacts, multiple clouds, and multiple workload types. This means that developers and security teams no longer need to spend countless hours manually reading advisories, searching for patches, and building verification steps before taking action. Instead, AI guides them with clear, concise instructions on how to complete the fix, and in addition it helps the security teams focus on the most critical vulnerabilities that create the highest risks.”
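
To illustrate what prescriptive remediation could look like in practice, here is a minimal sketch that turns a hypothetical container-scanner finding into step-by-step fix instructions via an LLM, again assuming the official openai package; the finding fields and the prompt are placeholders rather than Aqua Security's pipeline.

```python
# Minimal sketch (illustrative only): turn a container-scanner finding
# into prescriptive remediation steps via an LLM.
# Assumes the official `openai` package; the finding is a placeholder.
import json

from openai import OpenAI

finding = {
    "image": "registry.example.com/payments:1.4.2",
    "package": "openssl 1.1.1k",
    "cve": "CVE-2022-0778",
    "severity": "HIGH",
}

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("Give concise step-by-step remediation for this "
                    "container image finding, ending with a verification "
                    f"step: {json.dumps(finding)}"),
    }],
)
print(response.choices[0].message.content)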

The bad

Gary Lynam

Despite these great benefits to security teams, ChatGPT does have its downfalls. After its launch a year ago, Gary Lynam, Managing Director, EMEA at Protecht, recalls that “it didn’t take long before major organisations like BT were announcing plans to replace thousands of workers with artificial intelligence (AI), while newspapers were quickly filled with stories about AI already doing a better job than trained humans at various business tasks and applications.”

He also warns that “AI’s current limitations must not be overlooked, such as the ChatGPT models that invent fake case studies identified as ‘hallucination bias’ by the Financial Conduct Authority (FCA).”

Lynam recommends that “in order to make the most of AI’s vast potential without falling foul of its current limitations, organisations must build risk and compliance capabilities within teams to ensure the AI operating models and outcomes are fully understood and avoid any bias. Ensuring that rigorous validation, testing, and audit processes are in place along with continuous monitoring is vital.”
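
One small example of the kind of validation Lynam describes: before acting on model output, check that every CVE identifier it cites can be verified against a trusted vulnerability feed, which catches the hallucinated-reference failure mode. The sketch below stubs the feed with a hard-coded set; a real control would query NVD or an internal database.

```python
# Minimal sketch (illustrative only): hold LLM output for human review
# if it cites CVE identifiers that a trusted feed cannot verify.
# TRUSTED_CVES is a stub; a real control would query NVD or similar.
import re

TRUSTED_CVES = {"CVE-2022-0778", "CVE-2021-44228"}  # stub feed


def unverified_citations(model_output: str) -> set[str]:
    """Return CVE IDs in the output that the trusted feed cannot confirm."""
    cited = set(re.findall(r"CVE-\d{4}-\d{4,7}", model_output))
    return cited - TRUSTED_CVES


bad = unverified_citations(
    "Patch CVE-2022-0778 immediately; also see CVE-2099-9999."
)
if bad:
    print("Hold for human review; unverified references:", bad)
```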

The unknown

Chris Denbigh-White

Whilst you would have to have been living under a rock over the last year not to have heard about ChatGPT, there is still much that is uncertain about the technology. As Chris Denbigh-White, Chief Security Officer at Next DLP, summarises: “Since ChatGPT entered the public’s consciousness, it has been cited as both a dream for employees and a nightmare for organisations that are trying to protect sensitive data.”

He expresses concern around the uncertainty of trust: “The question is: do we trust LLMs? Just like the friend in the pub quiz who is totally convinced of an answer even though there’s no guarantee he’s right, LLMs are still a black box – and regulation that surrounds it is still a bone of contention and unlikely to be solved anytime soon.”

Chris Dickens

We have discussed the cybersecurity benefits of generative AI for organisations, but Chris Dickens, Sr Solutions Engineer at HackerOne, opens our eyes to how cybercriminals can also use the technology to their advantage: “There is a constant battle between organisations that rely on Generative AI use cases to safeguard their security systems and the threat actors that use it to conduct even more sophisticated and prevalent ransomware and phishing campaigns.

“However, in the hands of ethical hackers, looking at an outsider mindset and an understanding of how GenAI can be exploited, it has also become a powerful tool for them to seek out vulnerabilities and protect organisations at even more speed and scale. HackerOne’s latest Hacker-Powered Security Report highlighted that 53% of hackers use GenAI in some way, with 61% of hackers looking to use and develop hacking tools from GenAI to find more vulnerabilities in 2024.”

As we look to 2024, we can expect to see even greater applications of ChatGPT in cybersecurity strategies, reinforcing the fact that a successful cybersecurity program isn’t about replacing human ingenuity with AI, but augmenting it.

The verdict

ChatGPT has certainly taken the world by storm in its first year, and it’s safe to say that it isn’t going anywhere any time soon. Whilst there are still many issues and uncertainties to iron out, and doing so will take considerable work over the coming years, the benefits of generative AI are too great to give up. In just 12 months, it has had a profound impact on many walks of life. So, we can expect to be celebrating ChatGPT’s birthday for many years to come.
