Commentary: Deep Fakes, Facial Recognition are much ado about nothing

For a year now, facial recognition and deep-fake video technology have caused a great deal of handwringing in the news. The concern is largely unwarranted because neither has become an actual problem. Rather, both are riding a premature hype cycle.

There are several reasons to be unconcerned.

Both technologies require artificial intelligence (AI) with access to extensive databases to do their thing. Setting aside the reality that no true AI exists (these are mostly machine learning engines, not AI), there is an extreme dearth of facial data. Take last year's facial recognition darling, ClearviewAI, with its 3 billion photos scraped from social media. According to Statista, there are 2.65 billion social media users. Only 85 percent of millennial users have ever posted a selfie, and over the age of 35 that number drops to 65 percent. Throw in the reality that most of those 3 billion photos are redundant. But wait! There's less!

Gartner places AI and deep-fake technology still on the upswing of hype over reality.

The average user has taken 4,000 selfies, with a high of 20,000 and a low of four. That means the number of people in the ClearviewAI database could be as low as 250,000 worldwide. Approximately 600 police departments worldwide are evaluating the technology, yet it can identify only a very small number of people. At a security conference in 2019, representatives from six law enforcement agencies agreed the tools have no value.
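The arithmetic behind that estimate can be sketched as a simple back-of-the-envelope calculation. The figures below are the ones quoted in this article; dividing total photos by photos per person (assuming, for illustration, that most scraped photos are selfies of their posters) brackets the number of distinct individuals in the low hundreds of thousands, consistent with the "as low as 250,000" figure above.

```python
# Back-of-the-envelope estimate of distinct people in a scraped photo
# database. Figures come from the article; the arithmetic is illustrative,
# not a claim about ClearviewAI's actual methodology.

TOTAL_PHOTOS = 3_000_000_000   # photos ClearviewAI claims to have scraped
AVG_SELFIES = 4_000            # average number of selfies per user
MAX_SELFIES = 20_000           # reported high end of selfies per user

# If each person contributes many redundant photos, unique individuals
# is roughly total photos divided by photos per person.
estimate_at_average = TOTAL_PHOTOS // AVG_SELFIES   # people, if 4,000 each
estimate_at_high_end = TOTAL_PHOTOS // MAX_SELFIES  # people, if 20,000 each

print(estimate_at_average)   # 750000
print(estimate_at_high_end)  # 150000
```

Under these rough assumptions the database covers somewhere between roughly 150,000 and 750,000 people, a tiny fraction of the 2.65 billion social media users cited above.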

At this point, you might want to bring up China and the expanding use of facial recognition there. Yes, it is much more advanced, but the purpose of the tool there is not to catch criminals. It’s to keep an eye on foreigners and ethnic minorities. That’s a much smaller data set to develop and maintain.

Bias promotes failure

Another problem is bias in the development of an AI. It is extremely difficult to keep personal bias out of an AI, and that bias can render the tool useless. Let's look at ClearviewAI again.

Gizmodo published a story saying right-wing blogger and serial founder of several failed startups Chuck Johnson claimed to be a ClearviewAI founder. This is plausible because the face and CEO of the company, Hoan Ton-That, is on record as having a friendly relationship with Johnson and has made several supportive statements regarding white nationalism. (Let's stop and think about the irony of a Vietnamese immigrant to Australia supporting white nationalism.) Then there is the primary investor in the company, VC Peter Thiel, who regularly meets with white nationalist leadership.

Now, tell me that there is no racial or political bias in the development of ClearviewAI.


Deep Fakes

Now let's look at facial recognition's sister technology: the ability to make videos using someone's image and voice without their involvement. Deep fakes suffer the same problems as facial recognition, a lack of comprehensive data and a wealth of bias, but no nascent industry has attracted as much effort to kill it before it actually starts.

Last year, Microsoft announced its own tool for detecting deep-fake videos. That announcement was followed by dozens of academic research efforts and apps to detect, label, and remove such videos from the internet. Some of the efforts focus merely on reading lips. But common sense remains the best detector of deep fakes.

A miscreant might create a video of President Barack Obama extolling the benefits of the KKK, but no reasonable person would believe he would say anything like that. The effort would serve, primarily, to elicit a laugh. In the end, deep-fake videos have two purposes: entertainment and crime. The first is protected speech. The second is stopped by existing technology.

Realistic thinking

The fear of a worldwide police state using technology to watch everyone makes entertaining fiction for movies, TV, and books, but it is far from reality. In digital security, a biased AI may actually help protect a network by being overly cautious about who and what gets in, but it isn't that valuable to law enforcement or surveillance. We should make an effort to develop AI for limited, specific applications with restricted data silos and keep deep fakes focused on video games. That's what they are good for.

Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women's fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.
