How end-to-end encryption can protect businesses from the increased risks of ChatGPT

As the use of AI continues to grow in the business world, so too do the risks of cybercriminals exploiting these technologies for their own gain. A recently published research analysis by Darktrace revealed a 135% increase in social engineering attacks using generative AI. One such threat comes from generative AI tools like ChatGPT, a language model trained by OpenAI, which hackers can use to penetrate corporate systems and disrupt business operations.

CISOs must be aware of every vulnerability in their systems, so the potential for tools like ChatGPT to let threats permeate those systems is a serious risk that must be addressed before implementation. With the added complexity of distributed workforces, which makes it harder to educate and support employees, putting tools and technologies in place to prevent staff from unwittingly taking inappropriate actions is the only way forward.

How AI tools like ChatGPT increase risk

AI chatbots have the potential to improve cybersecurity, for example by detecting suspicious patterns in documents and network activity to prevent breaches. However, what makes AI tools like ChatGPT particularly concerning for UK businesses is their ability to mimic human communication and generate convincing messages. This gives cybercriminals an easy way to craft sophisticated phishing attacks, social engineering scams, and other types of malicious messages that can trick employees into handing over sensitive information or downloading malware.

It’s important to be aware of the many ways in which a chatbot like ChatGPT can create AI security issues when exploited by malicious actors. These include data theft, where attackers use ChatGPT to impersonate others and generate code that exfiltrates data. Hackers might also use ChatGPT to write phishing emails that read as if they were written by a professional, or to produce malicious code that could encrypt an entire system in a ransomware attack.

Spam text generated by ChatGPT can also carry malware or lead users to malicious websites. One specific type of social engineering attack, business email compromise (BEC), tricks recipients into sharing data or sending money, and when powered by ChatGPT it can slip past an organisation’s usual security filters.

How end-to-end encryption defends against breaches

Businesses are under pressure to adopt ChatGPT at pace, and this heightens the risk that many of the fundamental security protocols needed to secure confidential and sensitive files and data go unchecked and overlooked. End-to-end encryption can be implemented to ensure the confidentiality and integrity of business communications.

While it is not a panacea for all cybersecurity threats, end-to-end encryption can help mitigate the risks of AI-powered attacks and threats like those enabled by ChatGPT. By encrypting all data that travels between devices, end-to-end encryption can prevent cybercriminals from intercepting and manipulating messages sent using ChatGPT or other AI tools.
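
To make this concrete, here is a minimal sketch of the idea using the open-source PyNaCl library (Python bindings to libsodium); the names and message are illustrative only, not a production design. A message is encrypted on the sender’s device with the recipient’s public key, so only the recipient’s private key can decrypt it:

    from nacl.public import PrivateKey, Box

    # Each party generates a key pair on their own device; private keys never leave it.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # The sender encrypts with their private key and the recipient's public key.
    sender_box = Box(alice_key, bob_key.public_key)
    ciphertext = sender_box.encrypt(b"Q3 forecast attached - internal only")

    # Anything in transit (mail relay, chat server, cloud storage) sees only ciphertext.
    # The recipient decrypts with their private key and the sender's public key.
    receiver_box = Box(bob_key, alice_key.public_key)
    plaintext = receiver_box.decrypt(ciphertext)
    assert plaintext == b"Q3 forecast attached - internal only"

Because decryption happens only at the endpoints, an attacker who intercepts the traffic or compromises a server along the way obtains nothing readable.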

Encryption ensures that data is secure and protected from unauthorised access by converting it into an unreadable format that can only be decrypted by authorised users. Anonymisation, on the other hand, removes any identifying information from the data, such as names or email addresses, to prevent it from being linked to specific individuals.
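
As a simple illustration of the anonymisation side, the hypothetical snippet below strips email addresses from a prompt before it is sent to an external chatbot. The pattern, function name and placeholder are assumptions made for the sketch; real deployments would rely on a dedicated PII-detection tool rather than a single regular expression.

    import re

    # Naive illustration only: a production system should use a proper PII-detection tool.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def redact_emails(text: str) -> str:
        """Replace email addresses with a placeholder before the text leaves the organisation."""
        return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

    prompt = "Summarise the thread between jane.doe@example.com and finance@example.com"
    print(redact_emails(prompt))
    # Summarise the thread between [REDACTED_EMAIL] and [REDACTED_EMAIL]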

Other ways to secure data while using ChatGPT

In addition to adopting end-to-end encryption to maximise their defences, organisations must adopt the following tactics to keep their data secure from any conversational AI system:

  • Educate using ChatGPT: ChatGPT itself can be used to help close the cybersecurity knowledge gap by offering insights on preventative measures in concise text. It can also offer fast advice on setting strong passwords, password resets, and more.
  • AI systems audits and assessments: Organisations should conduct regular assessments of their AI systems to identify vulnerabilities. This reduces the risk of insider threats, accidental or intentional data breaches and other security incidents.
  • Up-to-date software: It’s essential to ensure employees are using the most up-to-date versions of software. Ensure the company has a policy to proactively patch security vulnerabilities that a threat actor might use to attack data.
  • Firewalls: The operating system’s firewall is the first line of defence that polices traffic, with the power to block malicious activity, and this must be enabled.
  • Multi-factor Authentication (MFA): Using MFA controls access to the AI language model and its users’ accounts by requiring multiple forms of identification before access is granted, such as a password, a fingerprint scan or a one-time passcode (see the sketch after this list).
  • Password security: A worker’s password is one of the most basic yet essential lines of defence against a data breach. Creating a strong, unique password for every account is essential.
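
To illustrate the MFA point above, the sketch below uses the open-source pyotp library to enrol a user in time-based one-time passcodes (TOTP) and verify a code at login; the user name, issuer and secret handling are placeholders rather than a recommended architecture.

    import pyotp

    # Enrolment: generate a per-user secret and hand it to the user's authenticator
    # app via a provisioning URI (usually rendered as a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    uri = totp.provisioning_uri(name="jane.doe@example.com", issuer_name="ExampleCorp")

    # Login: after the password check, require the current six-digit code as a second factor.
    code = totp.now()            # in practice, entered by the user from their device
    print(totp.verify(code))     # True while the code is within its validity window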

Proactively mitigating ChatGPT issues

While hailed as the future of automation and personalisation, the proliferation of advanced AI tools like ChatGPT in business can also introduce security issues that must be addressed. As threat actors use these tools to create more dangerous malware, scammers may use them to execute more daring social engineering attacks. CISOs, CIOs and teams that are prepared with a robust cybersecurity plan incorporating end-to-end encryption will be best placed to keep data and systems safe and secure.

István Lám is a cryptographer, computer scientist, and entrepreneur. He is co-founder and CEO of Tresorit, the cloud encryption company. István earned his MSc degree with highest honours at the Budapest University of Technology and Economics. As a researcher of cryptographic key sharing and distributed systems at CrySyS Lab, Hungary, and the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, he specialised in cryptography engineering. While still at university, István co-founded Tresorit, which now provides end-to-end encrypted collaboration and file sync for more than 10,000 businesses globally.
