Standards bodies doing the heavy lifting in AI regulation

As generative AI (GAI) platforms become more commonplace, concern over their security is growing. As with any digital product, security rests on four arenas: user responsibility, corporate accountability, government regulation, and industry standards. The first two are unreliable, because users resent having to protect themselves and corporations dislike spending money on security upfront. That leaves the third arena, legislation written by people who don’t know the difference between a thumb drive and a thumbtack.

That puts much of the load on industry standards, and one of the most active bodies is the European Telecommunications Standards Institute (ETSI). Cyber Protection Magazine’s (CPM) editors Lou Covey and Patrick Boch sat down with Scott Cadzow, chair of ETSI’s Specification Group for Securing Artificial Intelligence, to discuss the progress and problems of standardizing safe GAI. The full interview is linked below; the transcript that follows was edited using ChatGPT.

CPM – To set the stage: I asked for this interview because I read the press release on securing artificial intelligence. It’s obviously a big topic everywhere these days, especially around securing and regulating AI. Could you give us a general introduction to what you do and what ETSI is doing in terms of securing artificial intelligence? I’m also curious: did ETSI start working with AI only after ChatGPT’s release in November 2022?

Scott Cadzow – No, not at all. We’ve been working with AI for over three years now, almost four. Initially, it was an academic exercise to identify AI’s security issues and their impact on systems. We began with problem statements and defining artificial intelligence’s implications for system security. Since then, we’ve expanded our work, as mentioned in the press release. We’re addressing topics like AI testing, providing assurance of AI’s presence in a system, characterizing AI hardware, and more. Our reports have contributed to a wider understanding of AI applications. The recently introduced AI Act has shifted our focus toward bringing rationality to the AI debate. We want to counter the dystopian narratives from movies and novels and focus on the positive aspects, such as using AI in medical applications or data verification. We aim to add security features that verify data authenticity and integrity, and ensure proper processing within AI systems.

CPM – Okay. I have a question along those lines. Have you been surprised by the lack of quality in the output of generative AI?

Scott Cadzow – Not really surprised; more surprised that people consider it important. Generative AI’s quality is generally low, comparable to the work of a schoolchild with a reading age of around nine or ten. While the grammar and spelling are often fine, the meaning is often nonsense, and it’s easily identifiable. Many worry about generative AI’s impact, particularly on less educated populations. However, we need to understand that generative AI is not much better or worse than the ravings of someone intoxicated. People overestimate AI’s potential and focus on the wrong aspects. Instead of worrying about AI being used to manipulate audiences, we should concentrate on the positive aspects, like AI assisting in medical diagnoses. Our goal is to bring rationality to the AI debate and distinguish the genuine benefits from the potential risks.


You get what you pay for

CPM – I read an article claiming that generative AI’s low quality actually has a positive impact, as it makes producing low-quality content more affordable. I don’t see that as a significant issue, though. We’ve been using AI checkers to evaluate article submissions. Is that the kind of work you’re doing?

Scott Cadzow – Yes, that is part of our work. We want to identify if AI has been involved in generating content and make it obvious whether something is AI-generated. We’re developing tools to detect AI usage, enabling better verification. We aim to prevent the falsification of texts, where people use generative AI to bulk up their original content artificially. AI can be a shortcut for those unwilling to put in real effort. But our focus is on finding techniques to filter AI-generated content and promote AI’s positive use. We want AI to assist humans in safer, less dangerous tasks and make it a valuable asset rather than a manipulative tool.
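
Editor’s note: ETSI has not published detection code, but one common screening heuristic is perplexity under a language model: machine-generated text tends to be more statistically predictable than human prose. The Python sketch below illustrates the idea; the GPT-2 model and the threshold value are illustrative assumptions, not ETSI tooling, and perplexity alone is an imperfect signal.

```python
# Minimal sketch: perplexity-based screening for AI-generated text.
# Lower perplexity = more predictable text, one (weak) hint of machine origin.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text`."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean cross-entropy
    return torch.exp(out.loss).item()

def looks_generated(text: str, threshold: float = 40.0) -> bool:
    # The threshold is a hypothetical tuning parameter, not a standard value.
    return perplexity(text) < threshold
```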

CPM – Interesting. Security of the AI platform is a crucial aspect since hackers might try to manipulate the AI’s outcomes. Have you seen any evidence of such attacks in the wild?

Scott Cadzow – While we haven’t seen much of it in the wild yet, research has shown that it’s a viable attack vector. Hackers can modify the neural network weightings by feeding queries and manipulating feedback loops, potentially altering the AI’s decision-making. If attackers can directly target the hardware, the risks increase significantly. Our goal is to gain a deeper understanding of specialized AI processors and their role in the attack surface. We want to secure the entire supply chain of AI, including hardware, software, and intent, to minimize potential attack opportunities.
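
Editor’s note: to make the feedback-loop risk concrete, the toy sketch below shows how a classifier retrained on attacker-supplied “feedback” drifts away from its original decisions. The data, labels, and model are all illustrative; this is a conceptual demonstration of poisoning, not an attack recipe or ETSI’s threat model.

```python
# Toy illustration of feedback-loop poisoning: a model retrained on
# user "feedback" drifts when an attacker supplies biased labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: two well-separated classes.
X_clean = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_clean = np.array([0] * 200 + [1] * 200)
model = LogisticRegression().fit(X_clean, y_clean)

# Attacker submits queries near the decision boundary, always "correcting"
# the model toward class 1 -- a crude stand-in for manipulated feedback.
X_poison = rng.normal(0, 0.5, (200, 2))
y_poison = np.ones(200, dtype=int)

poisoned = LogisticRegression().fit(
    np.vstack([X_clean, X_poison]), np.concatenate([y_clean, y_poison])
)

probe = np.array([[-0.5, -0.5]])  # a point the clean model calls class 0
print("clean model:   ", model.predict(probe))     # expected: [0]
print("poisoned model:", poisoned.predict(probe))  # likely flipped to [1]
```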



The challenges to small businesses

CPM – So, with AI being used for both offensive and defensive purposes, do you think small and medium-sized businesses or private individuals are adequately prepared to defend against AI-driven attacks?

Scott Cadzow – Small and medium-sized businesses and private individuals might face challenges in defending against AI-driven attacks, especially if they lack the resources to invest in advanced cybersecurity solutions. Our aim is to make security solutions more accessible and easier to implement for everyone. Raising awareness about potential AI threats and educating people to verify sources and recognize AI-generated content can improve overall security. Zero trust principles, where nothing is trusted until verified, are essential. We need a mindset shift and proactive measures to ensure broader cybersecurity readiness.
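
Editor’s note: “nothing is trusted until verified” can be as simple as refusing content whose integrity check fails. The sketch below uses an HMAC from Python’s standard library as a minimal stand-in; a production system would use public-key signatures (e.g., Ed25519) and proper key management, and the hard-coded key is purely illustrative.

```python
# Minimal sketch of "verify before trusting": content is accepted only
# if its authentication tag checks out against a shared key.
import hmac
import hashlib

KEY = b"shared-secret-key"  # illustrative only; never hard-code keys

def sign(content: bytes) -> str:
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(content), signature)

doc = b"Quarterly report, final version."
tag = sign(doc)

assert verify(doc, tag)                     # untampered: accepted
assert not verify(doc + b" (edited)", tag)  # modified: rejected
```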

CPM – Some discussions around AI have raised concerns about the singularity, where AI takes over the world. What’s your take on this? Are you worried about it, or do you think we can manage AI’s impact on our lives?

Scott Cadzow – While some discussions revolve around AI singularity, I’m more optimistic about humanity’s adaptability and social nature. We’ve survived previous technological revolutions and challenges. Panic and fear can spread quickly, but so can confidence and good behavior. We need to promote confidence and good behavior in dealing with AI, rather than fearing it. I believe that if we remain open to the issues and work alongside AI developers, we can make AI work for us, rather than against us. AI is just one more challenge in our journey, and by maintaining rationality, we can achieve a safe and beneficial coexistence.


Quality degradation

CPM – There’s an interesting study that suggests future generations of AI may be trained on content generated by current generative AI. This could lead to a decrease in AI quality over time. What do you think about this?

Scott Cadzow – It’s an intriguing perspective. As AI generates more content, there’s a possibility that the quality of training data may decline. However, I have more faith in human adaptability than in machines. Human researchers can still ensure the AI’s training data is of high quality. By remaining vigilant, we can continue to improve and develop AI responsibly.
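
Editor’s note: this risk has been studied under the name “model collapse”. The toy simulation below is a loose analogue rather than a real training pipeline: each generation fits a Gaussian to samples drawn from the previous generation’s fit, so small-sample noise compounds and the learned distribution loses its tails over time.

```python
# Toy "model collapse" simulation: each generation trains on the previous
# generation's output. Sampling noise compounds and variance decays.
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # the original "real data" distribution
n_samples = 50         # small samples make the drift visible

for generation in range(10):
    data = rng.normal(mu, sigma, n_samples)  # train on prior model's output
    mu, sigma = data.mean(), data.std()      # refit and become the new model
    print(f"gen {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")

# sigma tends to shrink toward 0: later generations lose the tails of the
# original distribution -- an analogue of quality degradation over time.
```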

CPM – That makes sense. We have a long way to go, but with proper awareness and training, we can mitigate potential risks. Thank you for your valuable insights, Scott.

Scott Cadzow – You’re welcome. Remember, our goal is to build trustworthy and secure AI systems. By being proactive and responsible, we can maximize AI’s benefits while minimizing risks.

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.


Patrick Boch has been working in the IT industry since 1999. He has been dealing with the topic of cybersecurity for several years now, with a focus on SAP and ERP security.

In recent years, he has published several books and articles on SAP security. With extensive knowledge and experience in SAP compliance and security, he has served as product manager for several companies in the IT security sector since 2013. Patrick is co-founder and editor of Cyber Protection Magazine.
