
Popular GIF hosting and creation site Gfycat has come out against the rising trend of artificial intelligence-generated fake pornography. Colloquially known as "deepfakes," after the Reddit user who popularized the technique back in December, these short-form videos typically feature porn stars with the faces of celebrities, with the face swapping achieved by feeding a neural network thousands of photos and videos and training it using popular, open-source machine learning techniques.

News of Gfycat’s stance on deepfakes was first reported by Motherboard, which also first reported on the existence of deepfakes, as well as the subreddit dedicated to the practice and its growing popularity.

“Our terms of service allow us to remove content that we find objectionable. We are actively removing this content,” a Gfycat spokesperson told The Verge. A little under two months after deepfakes effectively went viral, the practice has become more widespread and even easier to perform, with another Redditor creating a user-friendly piece of software known as FakeApp that lets basically anyone start training a neural network to perform these face swaps.

Gfycat, like Discord, is taking a stand against AI-generated fake porn shared on Reddit

As Motherboard reports, users on the deepfakes subreddit have started noticing dead Gfycat links in what some users are calling a purge. Gfycat, one of the most popular GIF platforms on the web, is a common way of translating video into an easily digestible and shareable format, giving it a Gfycat-hosted link to live on that can then be shared easily on Reddit and other platforms. That the company is taking a stand against the practice — which is done without the consent of the celebrities involved, many of whom are women — speaks to the larger question of legality around deepfakes and the future of the now-accessible technology used to create them.

It’s still not entirely clear whether pasting someone’s face onto another person’s naked body is illegal, although it does tread into defamation and copyright territory, as The Verge reported just yesterday. However, platform technology companies often take stances against practices that are unsavory and harmful but otherwise legal, like verbal harassment and abuse. Gfycat isn’t even the first platform to effectively ban deepfakes: voice chat app Discord shut down dedicated deepfakes servers last week as the practice gained more exposure.

Still, Reddit remains the core pillar of the movement, and the site has been noticeably silent on the subject even as the deepfakes community has ballooned from around 15,000 members last week to 55,000 today. The company has often walked a fine line between protecting freedom of speech and hosting questionable, cruel, and objectionable content.

In rare instances over the years, the site has taken stances against some content, like when it shut down a subreddit dedicated to hosting creepshots of women in public taken without their consent in an infamous case back in 2012. (The site acted only in response to a bombshell investigation from former Gawker writer Adrian Chen, who now writes for The New Yorker.) More recently, Reddit took action against the subreddit used to host hacked celebrity nude photos and shut down a number of communities dedicated to hate as part of a broader anti-harassment initiative. Reddit did not respond to a request for comment.

As it stands right now, the proverbial genie is out of the bottle, and deepfakes are only going to get more sophisticated and more widespread as AI technology is better understood, improved upon, and made even more accessible. The implications are far-reaching, into the realms of news and politics, and there is a real possibility that counterfeit video becomes as commonplace as Photoshopped images in the near future. Companies like Reddit, with its defense of anonymity, are central to the growing popularity of using AI for nefarious purposes. So it’s only a matter of time before a broader cultural and legal definition of where the line is with deepfakes is more firmly established.
