TBS Newsbot

These fake videos are causing real problems

With AI now able to create extremely convincing fake images and videos, lawmakers are scrambling to catch up. 

This week, we learned that one AI was able to create extremely convincing pictures of people who aren’t real. Don’t believe me? Check out these hotties.

Yeah, they don’t exist. These faces were created using deep learning techniques that produce realistic portraits from a database of existing photos. If you want to be mildly freaked out, head over to This Person Does Not Exist.
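For the technically curious: systems like this are typically built on generative adversarial networks, where a “generator” invents images and a “discriminator” tries to spot the fakes, each forcing the other to get better. The toy PyTorch sketch below shows that adversarial loop in miniature. It is only an illustration, not the actual model behind This Person Does Not Exist; the layer sizes, dimensions and stand-in “real” data are all made up for the example.

```python
import torch
import torch.nn as nn

# Toy dimensions, purely illustrative. Real systems use huge photo datasets
# and far larger networks; here "images" are just small vectors.
IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# Discriminator: guesses whether an "image" is real or generated.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(BATCH, IMG_DIM)   # placeholder for real photos
    noise = torch.randn(BATCH, NOISE_DIM)
    fake = generator(noise)

    # 1. Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    opt_g.step()
```

Run long enough on real photographs instead of random vectors, and the generator learns to produce faces the discriminator (and eventually you) can’t tell from the real thing.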

I don’t want to fan the flames of Luddite paranoia, but this isn’t the first time this kind of technology has been put to similar use. Convincing imitation videos known as ‘deepfakes’ are on the rise.

According to The Wrap, “…deepfakes are about to get a lot tougher to identify. Within the next two years, digitally manipulated videos will likely reach a point where they’re undetectable to the naked eye, according to Siwei Lyu, head of the computer vision and machine learning lab at the University at Albany. And that threatens to create a headache for nearly everyone from Hollywood to Washington, D.C. ‘This is about whether we can trust individual media that is propagated on the internet,’ Lyu told TheWrap. ‘We’ll have information, but we cannot trust it, so it’s the same as not having any information at all. This is an issue that everyone should be concerned about.’”

For those who have yet to experience it, here is Jennifer Lawrence facing the media at the Golden Globes.

I don’t want to state the obvious, but totes would bang.

My desires aside, this represents a fairly prickly legal issue. What if one of these videos depicted a company’s CEO (or a political candidate) doing nefarious deeds? Clearly, the current environment of sharing first and fastest (read: Jussie Smollett) would see that particular company tank on the market, and way-hey, our apocalypse may be an economic one.

University of Maryland Francis King Carey School of Law professor and privacy expert Danielle Citron believes that criminalising deepfakes and tightening the current legal system is the way forward. “The law is a modest and blunt tool,” Citron admitted, “but we have to try.”

Citron is pushing for a change to the Communications Decency Act that would make immunity for internet platforms conditional. A particular section of the law, Section 230, currently “provides immunity from liability for providers and users of an ‘interactive computer service’ who publish information provided by third-party users,” according to the legislation.

“Right now, it’s a free pass. It provides no incentives for platforms to protect the vulnerable,” said Citron, who added that there are platforms whose business model is based on abuse and destruction. “They make money off of eyeballs. They make money when stuff goes viral.”

Brave new (weird-looking) world.
