The SAG-AFTRA strike in Hollywood is about more than the livelihoods of actors and writers. It is also about the personal security and privacy of everyone, and how it is resolved may set the future paradigm for generative AI's use of personal data and biometrics.
Generative AI has arisen faster than perhaps any other major industry, and it has done so without legislative or industry controls; what regulation exists remains in the hands of the companies creating the platforms. Remarkably, the technology was commercialized long before the general public became aware of it.
Ethical AI
In central Kyiv, amid daily air raids from Russian assaults, Respeecher has been creating AI-generated voices for Hollywood for most of the past decade. The posthumous voice of Carrie Fisher and the younger voice of Mark Hamill in the most recent Star Wars productions were produced by the Ukrainian company. But the company does more than produce the voices. It also protects the intellectual property of those actors and their estates, providing an ongoing revenue stream under contract to the Disney Corporation. That service is at the core of the Hollywood strikes.
The major entertainment studios, like Disney, have hoarded vast amounts of intellectual property that they jealously protect. The fact that Sony owns the rights to Spider-Man held up the inclusion of the character in the Marvel Universe for many years before Sony and Disney could reach a cooperative agreement.
Who’s the disruptor?
Bob Iger, CEO of Disney, said in a CNBC interview that what the actors and writers are asking for is too "disruptive" to the industry. But because the industry does not – yet – own the technology to reproduce the voices of iconic characters who have aged or died out of their roles, the studios must purchase the content from companies like Respeecher. So it is the studios, not the artists, who are asking to disrupt the status quo; the unions merely want to codify what already exists. The protection of intellectual property that Respeecher and other companies offer the original artists is the only ethical control placed on the content.
“It’s not a tough question where I’m coming from,” said Anna Bulakh, head of ethics and partnerships with Respeecher. “This is essential because the person on which we train that particular voice model owns that AI voice model, and should be able to receive revenue from that.”
“Beyond the Pale.” – John Thompson
Effectively, the Alliance of Motion Picture and Television Producers (AMPTP) has agreed to that principle and, in its offer, guaranteed that actors would be compensated for each production in which their voice and image are used. The problem is what the studios want to do next: bring an actor in for a day, digitize their image and voice, pay them for a day of work, and then use that likeness in any scene, and in as many scenes as they want, effectively reducing what they have to pay the actors.
We have reached out to the Disney Corporation, the AMPTP, SAG-AFTRA, and the Writers Guild multiple times in the past two weeks for comment on this analysis but have yet to receive any response.
The AMPTP appears to be following the lead of social media companies, which require users to give up the rights to the content they produce, as well as their personal data. But Meta, Twitter, TikTok, and the others have a bargaining chip the studios don't: you must agree to the terms in order to access the platform. The production studios have no such leverage unless they can get the unions to acquiesce, and that is not the direction the trend is moving.
John Thompson, a data scientist and best-selling author on the subject of data ownership, believes the unions’ position on content ownership is the wave of the future.
“What Mr. Iger is asking for is obviously beyond the pale. The world is not moving in the direction he wants. It’s moving in the other direction. Everybody is going to own their own data. I’m going to own my own data. You’re going to own your own data. Your mother, your brother, your aunt, your uncle… we’re all going to own our own data. Those companies are going to have to pay royalties or license fees to use that data to train their models. The world is going in the opposite way.”
“Protecting one’s likeness is akin to protecting intellectual property,” Bulakh added. “That is the ethical application of (AI) technology going forward and our ethical conduct is a cornerstone of our business.”
Business opportunities in ethical AI
Bulakh sees a significant business opportunity for AI companies in protecting the rights of voice and image owners. “Anyone can take and create your voice, so that use becomes a question of accountability: where you should go, how you can create legal procedures so that you can reclaim ownership of your likeness if it was created illegally.”
There are several companies, like Pindrop and Copyleaks, that are working on defensive technologies to identify fraudulent use of voice and image content, but she sees that as a reactive posture. “Detection is a reactive tool useful in investigating existing cases. How do you prevent those cases from being created in the first place? It’s when you disclose the authorship of digital content that is created by artificial intelligence.”
She suggested that undetectable and undeletable “watermarks” will facilitate the industry self-regulation that both the United States and the European Union have succeeded in getting AI companies to agree to. That technology is not yet generally available, but it will probably involve encrypting metadata in the models. Until that issue is resolved, Respeecher and its competitors will remain the best protection for personal data.
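To make the metadata idea concrete, here is a deliberately naive sketch of a provenance tag: a record that declares a clip AI-generated and binds that declaration to the clip's bytes with a keyed hash. All names here are hypothetical illustrations, not Respeecher's actual scheme, and it also shows the scheme's weakness: metadata that travels alongside a file can simply be stripped, which is why robust, undeletable watermarks embedded in the signal itself are still being developed.

```python
import hashlib
import hmac

# Hypothetical signing key held by the AI vendor that produced the clip.
SECRET_KEY = b"producer-signing-key"

def tag_clip(audio_bytes: bytes, model_id: str) -> dict:
    """Return a metadata record declaring the clip as AI-generated,
    bound to the exact bytes via an HMAC."""
    digest = hmac.new(SECRET_KEY, audio_bytes, hashlib.sha256).hexdigest()
    return {"ai_generated": True, "model_id": model_id, "hmac": digest}

def verify_clip(audio_bytes: bytes, record: dict) -> bool:
    """Check that the metadata record matches the clip's bytes."""
    expected = hmac.new(SECRET_KEY, audio_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["hmac"])

clip = b"\x00\x01fake-pcm-samples"          # stand-in for real audio data
record = tag_clip(clip, "voice-model-v1")    # hypothetical model name
print(verify_clip(clip, record))             # untouched clip verifies
print(verify_clip(clip + b"x", record))      # tampered clip does not
```

The verification only works while the record stays attached to the file; delete the record and nothing in the audio itself reveals its origin, which is the gap true watermarking aims to close.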
Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.