Getting a handle on generative AI, before it gets a handle on us

You cannot spit without hitting a news story about generative AI (AKA ChatGPT, Bard, etc.). Some of the news is good, some of it bad, and all of it fairly confusing. Cyber Protection Magazine has been digging through the detritus to find what really is good or bad about it, and today we continue that with an interview with a very smart man: Dr. James Norrie, a full-time professor in the Management, Marketing, and Entrepreneurship department at York College of Pennsylvania and founder of the cybersecurity company cyberconIQ. He holds advanced degrees in cybersecurity and intelligence analysis, copyright law, and project management. And he has a very specific take on generative AI.

CPM

Thank you for joining us today. So, ChatGPT is a big deal.

Dr. Norrie
It is indeed.

CPM
It could be a big deal for a good reason, or it could be a big deal for a bad reason. And I’ve had a basic problem with technology since I got involved with it, going back to when I was working with nuclear missiles. It seems like we develop stuff saying it’s going to be good, and then it gets changed. And I think that process has gone faster with generative AI, which really isn’t an AI, it’s a large language model. But it’s gone faster because even before it became commercially available, cyber criminals were using it to create malware. ChatGPT was announced, I think, in early December, and two weeks later there was a story about how somebody used it to create a new form of ransomware malware.

Dr. Norrie
Yeah. Well, let’s linger there for a second for your listeners.

So in one of my other jobs, as you know, I also happen to be the CEO and founder of cyberconIQ. And one of the things we do is help companies combat ransomware and malware and other kinds of human-factors attacks. And we were astonished by the speed with which these new tools are being put to use to create new memes and much more targeted social engineering attacks. Now, what your listeners might want to be aware of is something that I think you and I would agree on. Frequently, what happens is we have a tech innovation and we release it with great promise, and we’re all anticipating the good. But like many things that involve social, political, or legal progress, we can’t absorb the pace of the change. Inevitably, it begins to pose new problems, new challenges, new threats. And then eventually what happens is the good is perhaps overwhelmed by the bad, or maybe they coexist. So in this case, the frightening statistic that I read recently is that there is a brand-new ransomware-related threat every 1.7 seconds. And it’s all being machine generated. In addition, the positioning of that is automatically done to amp it up from generic phishing, by taking individual contextual information that’s easily available by mining most of our social profiles. You put the two together, and the bad guys are going to be able to scale this before we are able to scale our defenses. And so what you just mentioned to me is exactly why the promise of technology always underwhelms: because the promise is never matched to the threat, and we never have an opportunity as a society to get ahead of the really fundamental issues that this new technology delivers for all of us.

Getting ahead of the problem

CPM
Yeah, and how do we fix that? I mean, we’re talking about legislation now, you know, that will actually control this, and it’s being written by people who have no understanding of what it is.

Dr. Norrie
Right, so don’t get me started on politics. We’ll try and leave that aside, but you’re right. We do not have leadership in Washington in either house that truly, fundamentally understands this. So they rely on information that is primarily delivered through advocacy and all of the things that happen when large tech companies try to set the agenda. And I can understand, as the CEO of a large tech company, why you would want to influence government policy, because it is fundamentally critical to your destiny as a private company. So I understand that. But the idea of a generative AI is one that we should tread very carefully about as a society. And we should tread carefully because you mentioned the law, and that is one of my areas of expertise, really a deep love and passion. And the law can’t keep up with IT’s pace of change. By definition, in common-law jurisdictions like the United States, Canada, the UK, Australia, and a whole bunch of other countries around the world, we rely on cases coming through to help define how to interpret existing law and potentially to shape new laws. Well, you can imagine it is going to take months or years before the first of those cases start to make their way through trial courts and appellate courts. In the meantime, what do we all do? Let me see if I can use an example. So, Lou, we’re going to pretend that you were a painter. What kind of painting do you really like when you go to a museum?

CPM
I really don’t like painting. But I’m of the same opinion as Michelangelo: I dislike painting in that it resembles sculpture, and I dislike sculpture in that it resembles painting.

Dr. Norrie
Okay, but if you were in a museum, what might you be looking at as a genre of art? Would it be contemporary?
Would it be European masters? What might you look at?

CPM
Probably European masters.

Dr. Norrie
Okay, so you’re in a museum, and you see a piece of art displayed. That piece of art of course belongs to the museum, and it is in a fairly public space, right? You go and look at it, and you absorb it as a human being. It provides some sensory input to you, in the form of the effect it has on what you’re seeing and what you’re thinking and creating. And when you then go, after that stimulation and input, to create your own painting as a human, you haven’t done anything with those influences; they have only inspired you. Would that be fair?

CPM
Yeah.

Dr. Norrie
Okay, so there’s no question of ownership.

CPM
Right.

The question of ownership

Dr. Norrie
Okay, now let’s go into the world of generative AI. We know that digitally, in order to have an image appear inside a computer, it has to be digitized, right? We would agree. And in digitizing it, we pixelate it. As we pixelate it, at some level of detail, we are breaking that image down into teeny tiny dots. Now, for any resolution of pixelation, we can calculate what happens if we take those pixels and start to combine them into something new. And we have a very intriguing legal problem, don’t we, Lou? Because the image is not an inspiration to the artificial intelligence generator, is it? It’s digital input. And the amalgam of those hundreds of thousands of pixelated images is what allows these new tools to create something: digital art. We’ve had this conversation because this is starting to happen. The real question is, who owns that? Is it the company whose language model creates the image, so the software company? Or do we all have a bit of ownership, in the sense that you could trace back each of those individual pixels and say what teeny tiny percentage of each of those original, quote-unquote, digitized images actually resulted in the final image, right? And you could do some sort of calculation that would say, I own one one-thousandth of it or something. It would be really intriguing legally. But the idea of ownership, when you’re dealing with something that belongs to somebody else, even in the public domain, whether we’re talking about the printed word, visual images, or anything else, really opens up a whole new question about ownership and intellectual property. We’ve had intellectual property rights as defined law for the better part of 300 years; we long ago realized that human endeavor and what we create has value. What do we do when machines take over that process? That is both a social and a legal question, and I don’t know that anybody in Washington is even thinking that broadly. They are more concerned about what it may mean, without understanding the fundamental ways society is going to be asked to resolve the issues of how this technology will evolve and be used in the future. And so it’s going to overtake us before we have a chance to catch up.
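To make Dr. Norrie’s back-of-the-envelope ownership arithmetic concrete, here is a minimal sketch in Python. Everything in it is hypothetical: real generative models do not expose per-source pixel attribution this cleanly, and the file names and counts are invented purely to illustrate the fractional calculation he describes.

```python
# Hypothetical per-source pixel counts contributing to one generated image.
# Real models do not report attribution like this; the numbers are invented
# to illustrate the "what teeny tiny percentage do I own?" calculation.
source_pixels = {
    "museum_masterpiece.jpg": 412,
    "public_domain_portrait.jpg": 1038,
    "stock_landscape.jpg": 2550,
}

total = sum(source_pixels.values())
for source, count in source_pixels.items():
    share = count / total
    print(f"{source}: {share:.4%} notional ownership of the output")
```

If attribution like this were ever computable in practice, it is the arithmetic a court or licensing scheme would need; the legal question Dr. Norrie raises is what, if anything, those fractions would entitle anyone to.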

CPM
Well that’s depressing.

Dr. Norrie
Well, it can be. So, for instance, one of the classes I teach at York College in Pennsylvania is cybersecurity and national security. We did a really interesting thing. We were looking at this very question, so I had the students put into ChatGPT a request for it to create a rap lyric about the dangers of AI. Lou, I was amazed, the students were amazed, not only at how thoroughly it could self-describe its own threats, which was quite interesting, and that goes to your point about this really being a deep language learning model, but it genuinely produced a lyric. Then we were able to go and actually feed that lyric into a tool that synthesized the voice of Snoop Dogg. And we had Snoop Dogg doing a rap about the dangers of AI. It took the class no more than 15 minutes to do that. Now, that endeavor would previously have been an enormous undertaking. So there is something to be said for the speed and the capability of these machines to do certain things. And that example is kind of benign; it’s just fun. But imagine that on something that really is a human problem. Think about the impact on things like radiology and radiography, and reading and analyzing CAT scans, PET scans, X-rays: the speed and the clarity with which we’re going to be able to get those results goes well beyond the human eye. I mean, there’s a great use, right? Then let’s think about what happens when you’re dealing in professions like my own. If you’re thinking about law or medicine or engineering, where massive amounts of written information need to be synthesized and validated, imagine what it would be like if, instead of having a lawyer poring through hundreds or thousands of cases, you could do that in a blink and identify those that are the perfect fit for your case pattern, and move to the really important question of how you’d use that information to defend your client or to advance your client’s interests, rather than the grueling work of sifting and sorting through it all. So tasks that we take for granted today will change overnight. That’s the promise of AI. I think we would agree, Lou.

CPM
Yeah, with some provisos. I mean, as you’ve been talking, I’ve been thinking of a lot of different issues. And I’ve been writing about AI for about five years now.

Dr. Norrie
Yeah.

Missing pieces

CPM
So I’ve talked to a lot of experts, and while this could be debatable to a certain degree, there are basically three components to any AI. One is the data source. Some people call it a data lake or a database, but it’s a lot of data.

Dr. Norrie
Okay.

CPM
The next is the machine learning component, which in this case is the large language model. Finally, there’s deep learning.

Dr. Norrie
Right.

CPM
Deep learning is what’s missing from almost everything that’s identified as an AI. That’s why a lot of people are saying that ChatGPT and Bard and whatever Microsoft is doing is not really an AI. It’s a large language model, because all you’re doing is feeding a lot of data into the machine learning model and it’s pumping out stuff based on the data. The deep learning…

Dr. Norrie
Well, even before three, let’s go back to one and two for a second, because I think you’re right that that’s really fundamentally important. But even before you get to deep learning, I have a question. How many of these are capable of figuring out the provenance of the data? And this is the issue that I think people don’t understand about your points one and two. No model, if you stop at one and two, is of any value unless it can sort out whether the information sources that it’s actually using are reliable, reputable, and informative to the question.

And there is the line that I think you’re trying to draw, which is really critically important for listeners to understand. Unless you can mimic judgment, which is the deep learning part, the first two are nothing more than enormous data models, exactly what you described. But the fear that so many researchers have, and my colleagues share this fear, is that it scoops up information from all kinds of sources, not all of which should be scooped up and treated equally. So we get into the question of what the rules are, which is where you’ve been hearing all this stuff about programming. You’ve probably heard the tragic story of the suicide in Belgium of a man who was depressed. You’ve probably heard about the New York Times reporter who was encouraged to leave his wife. So we know that the boundaries of machine learning, especially in a large language model, need to be framed by a set of rules. And the instant you’re into a set of rules, absent judgment, then you are into something that is not number three.
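As a purely illustrative sketch of the provenance problem Dr. Norrie raises, here is what weighting sources by reliability before they enter a model might look like. The scores, threshold, and source categories are all invented; no production training pipeline is this simple.

```python
# Hypothetical provenance filter: admit a document into a training corpus
# only if its source meets a minimum reliability score. The scores and
# threshold are invented for illustration.
RELIABILITY = {
    "peer_reviewed_journal": 0.95,
    "major_newspaper": 0.80,
    "personal_blog": 0.40,
    "anonymous_forum": 0.15,
}

def vet(documents, threshold=0.6):
    """Yield only documents whose source scores above the threshold."""
    for doc in documents:
        if RELIABILITY.get(doc["source"], 0.0) >= threshold:
            yield doc

docs = [
    {"source": "peer_reviewed_journal", "text": "..."},
    {"source": "anonymous_forum", "text": "..."},
]
print([d["source"] for d in vet(docs)])  # only the journal survives
```

The hard part, of course, is the scoring itself: deciding what counts as reliable is exactly the judgment the first two components lack.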

CPM
Right

Dr. Norrie
Okay. And so really, when we get to number three, it’s going to change some of the opportunity and some of the risk. But getting to three from one and two is not as obvious, as you know from your previous conversations.

CPM
I was actually commissioned to help an engineer write a book on how to create unbiased AIs.

Dr. Norrie
Good luck.

The issue of bias

CPM
Yeah, well, that’s the thing: six months into it, we both came to the conclusion that you cannot create an AI that is unbiased.

Dr. Norrie
Well, because human judgment has bias.

CPM
Yeah, and that’s the thing. As long as you’ve got humans creating it, regardless of where they’re getting the data from, because the data was created by humans, you’re going to put that bias in. And the problem is that the data on the internet that is being fed into these AIs is not vetted. Well, as the old saying goes, “90% of everything is crap.” So 90% of everything being fed into these AIs is crap.


Dr. Norrie
So, Lou, how long have we had the phrase “garbage in, garbage out” in technology?

CPM

The first time I heard it was when I was taking a look at a failure analysis of a Trident II nuclear missile.

Dr. Norrie
Oh, there you go.
30 years or more.

CPM
Oh yeah, more. Like 50.

Dr. Norrie
Okay, exactly. So when people react to the current dilemmas by asking who wrote the rule, or who didn’t program this right, or who allowed this to happen, we need to take what you just said and understand, at a software level, what’s really happening. You have code, and that code is going to accept, as an example, the prompt. You put a question in, and the code has been taught how to break down that prompt and sort out, from the huge universe of possible data sources it has, which ones it thinks are relevant to crafting a conversational reply. That is all ChatGPT is doing. Nothing more, nothing less. So it’s more conversational, and it’s able to produce output that is, therefore, more usable in a human context. Hence, in higher education, one of the fears my colleagues express to me all the time is, what are we gonna do when students just use ChatGPT to generate all of their essays or research papers? And my answer is, we’re already there. So what are you doing about it? And how are you embracing it and bringing it into the classroom? Because you are quickly going to get to the point where, although there are things you can do to detect it, they too will eventually fail.
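As a toy sketch of that software-level view, and emphatically not how ChatGPT is actually built, here is the prompt-in, relevant-sources, reply-out loop reduced to a few lines. A real LLM predicts tokens from billions of learned weights rather than ranking documents by word overlap; the sources and logic here are invented for illustration.

```python
# Toy illustration of "accept a prompt, sort out which sources are
# relevant, craft a conversational reply." Real LLMs generate from
# learned weights, not lookup; this only mimics the shape of the process.
SOURCES = [
    "Ransomware encrypts files and demands payment.",
    "Phishing emails impersonate trusted senders.",
    "Sourdough bread needs a long fermentation.",
]

def relevance(prompt: str, doc: str) -> int:
    """Crude relevance: count words the prompt and document share."""
    return len(set(prompt.lower().split()) & set(doc.lower().split()))

def reply(prompt: str) -> str:
    best = max(SOURCES, key=lambda d: relevance(prompt, d))
    return f"Based on what I know: {best}"

print(reply("How does ransomware work?"))
# -> Based on what I know: Ransomware encrypts files and demands payment.
```

The point of the caricature is Dr. Norrie’s: nothing in the loop judges whether a source deserves to be in SOURCES at all.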

What are we measuring?

But here’s the question: do we summarize a college degree by the ability to write a properly sourced academic paper, or by the ability to think critically, with an eye to the quality, caliber, and authenticity of the information? What we should be teaching students is data literacy. We shouldn’t be too concerned, because I’m gonna tell you, outside of a college or university atmosphere, in the real world, people plagiarize and steal all the time. It is a natural part of being a professional. I’ll bet you there are a bunch of people asking colleagues, hey, what’s your AI policy? Have you got any idea what you’re gonna do about AI? How’s AI affecting your business? And that’s how it’s gonna start, right? What’s natural about being a human being is that we connect, we share, we’re social creatures, and the same thing is true professionally. So we need to be teaching students how to use these tools with a keen eye to the critical analysis of what they produce, so that we can take the benefits and leave the negatives on the side. And we need to teach how to do that. ’Cause if we don’t, Lou, we are headed for a disaster.

In particular, in our current political climate, there are those who would destroy us from the inside out, and that’s a whole other podcast I’d love to do with you sometime, but there are mortal enemies of this shining beacon of democracy on the hill who would like to see it slide into an abyss. So they are disrupting us from the inside out, and artificial intelligence is going to give them a voice and a platform that is going to be incredibly persuasive. Let me just warn your listeners: if it was able to cause a depressed man in Belgium to take his life, is it possible that it would also very credibly contribute to false partisanship, to false memes, to interference in our election processes, to advocacy for ideas that would stoke domestic terrorism? I could go down the list of things it could do at scale, and I don’t believe that our educational system, our legal system, our political masters, I don’t think any of them have really thought through that kind of a threat.

CPM
Well, I don’t think they have. And I think part of the problem is what you alluded to: essentially, applying critical thinking to the development of new technology.

Dr. Norrie
And to data sources that feed it.

CPM
Yeah. As I mentioned, the engineer and I came up with the resolution that you cannot create an unbiased AI. But what followed in our conversation, which we didn’t get into with the publisher, who decided to reject our opinion altogether, was that you really don’t want an unbiased AI. You want a system that’s going to protect you. Going back to Isaac Asimov’s rules of robotics: Asimov wasn’t saying that it was impossible to design a robot that would kill people. No, you have to design a robot that has been given the directive not to kill people. That is a very specific ethic. And ethics isn’t taught in our colleges anymore.
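To make the Asimov point concrete, here is a deliberately “biased” guardrail in miniature, a sketch only. The action names and rule list are invented, and real safety systems are vastly more involved; the point is that the system remains capable of the harmful action, and an explicit directive, checked before execution, is what refuses it.

```python
# Toy Asimov-style directive: the machine could perform any action;
# a standing rule, not an incapacity, is what blocks the harmful ones.
# Action names and the forbidden list are hypothetical illustrations.
FORBIDDEN = {"harm_human", "deceive_user"}

def execute(action: str) -> str:
    if action in FORBIDDEN:
        return f"refused: '{action}' violates a standing directive"
    return f"executed: {action}"

print(execute("mow_lawn"))    # executed: mow_lawn
print(execute("harm_human"))  # refused: violates a standing directive
```

That refusal is not neutrality; it is a deliberately engineered bias toward protecting people, which is exactly the kind of ethic the conversation says should be designed in.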

Dr. Norrie
We do teach it as part of our law class.

CPM
OK, but it’s not taught in the engineering departments.

Dr. Norrie
There you go. So I think that is truly an issue. Or if they teach it, they teach it as professional ethics, which is probably different from the kind of ethics you are talking about. But it’s funny you should mention ethics, because one of the assignments in one of my law classes is to go back and watch 2001: A Space Odyssey. And why?

CPM
Because of HAL.

Ethics

Dr. Norrie
You got it. And no spoiler alert here, because I bet tons of listeners have never stopped to see this now quite dated movie. But it is so topical and timely because of the ethics of what we’re about to do. As we move closer to number three, we enable machines or robots to mimic, even at some substantive level, the kind of interaction, social discourse, and, if you like, thinking or judgment, in some ways, not always, but in some ways, that we experience when we’re engaging with one another. That can become very dangerous, because it can mimic us. It isn’t us. It isn’t human, but it can mimic human. And the extent to which we’re going to allow mimicry of humans, and in what context, is very important. One of my dear friends owns a robotics company, and one of the things that he resists, and says he will never do, is create a robot in human form. He will never participate in that, because it would become too dangerous. He wants robots to be robots and humans to be humans. And he says, it’s not that I don’t believe in robots. He thinks robots, like AI, and the combination of robots and AI, are going to unlock humans from having to do an awful lot of drudgery. He said you’ll have a robot that will do the lawn for you. This is the future dream that we’ve always had as futurists, about how those good technologies can free humans from drudgery. But if you take something that mimics a human being and turn a machine into a mimicked human, you cross, in my view, an ethical boundary from which there may never be a return.

CPM
That’s kind of where we are right now, isn’t it?

Dr. Norrie
It sure is. And we’re on the cusp of having enough experience to know that this is going to unleash some significant challenges, as a society, for our political and legal leaders. They have to get ahead of this curve. Now, that’s very hard, because technology curves move well ahead of social acceptance. But this is not one we can wait on, I think, Lou, and I’m not being alarmist here at all, because it’s here. It doesn’t matter whether we’re alarmist or not, so there’s no point in panicking. But we really ought to get on with the discussions inside society about what needs to change to equip people to be able to use this for its good, and to start to see for themselves, using critical methods and tools that need to be developed and taught to people of all ages, by the way, from children all the way up, so that we can sort it out: maximize the good while recognizing that it has negative applications that are quite evil and dangerous. It’s like everything else. When early man began to use fire, everything was good while they could contain fire, right, Lou? Then when they couldn’t, oops. Okay, AI is like that. We have the flickering small flame of early AI. Right now it’s somewhat contained, but it’s spreading fast and running fast. What we don’t want is a raging inferno. So why not try to get ahead of it on this particular advance, where so much discussion is occurring with so many very thoughtful people? And I know so many people on this topic, as I’m sure you do, who share our concern and who I think can be part of a solution that is not alarmist. But it requires that we begin to think about it now. And we need to think about it at a human level, not a machine level. The machine level we’ve proven we can create. Now the question is, how do we catch up on the human side of this equation?

CPM
And that’s kind of where I want to wrap this up, with maybe a bit of optimism, because for about 50 years now, technology has been seen as something that, yeah, it can be bad, but generally we need to adopt it and move forward with it. But we’re in an age now where people don’t trust technology like they did over the past 50 years. And maybe people are starting to get to the point of saying, well, wait a minute, maybe I don’t want this. I mean, we’re even seeing it in social media, where people are leaving Twitter in droves and Facebook has become pretty much flat. And I was actually talking to a gentleman this last week on my podcast who used the phrase “the collapse of the metaverse.” I’m going, wait a minute, has that collapsed already?

Dr. Norrie
Sounds like a black hole or something, you know.

Getting time to prepare

CPM
Yeah, well, essentially what he’s saying is that trillions of dollars have been put into the metaverse and it has resulted in nothing. And investors are getting frustrated, which is probably why things are starting to slow down: the people who have the desire to make a lot of money are saying, our money is going away. We’ve seen it with Bitcoin. We’ve seen it with the metaverse. And I think we’re starting to see it with generative AI, where people are finally getting to the point of saying, wait a minute, one step at a time, you’re going to do what with my job?

Dr. Norrie
Yeah. Well, and you know what, a little bit of shameless self-promotion for just a second, Lou. My latest book is now about three years old, and it has a limited claim to being an Amazon best seller, but the book, which is called CyberCon: Protecting Ourselves from Big Tech and Bigger Lies, addresses exactly what you’re saying, but with some optimism, you’ll be glad to know. Because in the book, what I said is, we can’t trust big tech to do what’s right by society. That would be foolish. They are private enterprises. They are tasked with, to your point, making money for their investors. That’s their job. A capitalist society rests on the idea of investors making capital available to management teams to put to use making money. If I’m talking about cyberconIQ, I understand that. If you’re talking about your media firm, you understand that: you’ve got to have money in to get profits out. That’s just the way it goes. It’s a well-understood thing. But therefore, they’re not really tasked with doing what’s right for society, and we would be foolish, foolish as a society, to trust them. That would make no sense to me. And the proof of this would be all of the businesses from the late 1800s right up until now. We have watched industry after industry prioritize profit at the expense of society.

CPM
Yeah.

Dr. Norrie
Right. So then, this is why we have regulation. And so, you know, one of the things we have to start to imagine is holding big tech somewhat more accountable. And I think the interesting thing about the current issue of AI is also going to be this: I don’t know that AI will get the same shield as we currently have with social media. And again, maybe a topic for a future podcast, but we have allowed social media platforms to have a shield: they are not considered publishers. AI is going to challenge that, because I would argue that as Google and Microsoft and others go down the path of modifying search from simply linking to other people’s original content to amalgamating that content and publishing something original, they will become publishers. And when they do…

CPM
I think they’ve actually jumped that shark.

Dr. Norrie
Absolutely. And that will fundamentally change the regimes of legal consequence and regulation for them, in a good way. I think it’s about time we got there. So listen, here’s the optimism. I do believe that ultimately AI will serve society. Like many technologies, it will take us time to face and understand what we need to do as a society to respond to the advance, and there’s going to be fear and uncertainty. But the conversation, for me, has happened way ahead of the curve. We did not see this same level of engagement and conversation with early social media, because it wasn’t quite as powerful. I do hear all kinds of great dialogue. And so for your listeners, the optimism is: let’s get our entire society engaged in the conversation you and I had today. Let’s spread the word. Let’s have this awareness. Let’s build the capacity as human beings to manage this technology for the greater good. That, to me, is the mantra of optimism around generative AI.

CPM
That sounds good to me. Dr. Norrie, thank you for your time.

Lou Covey is the Chief Editor of Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
