Stability AI plans to let artists opt out of Stable Diffusion 3 image training #5789
Replies: 3 comments 5 replies
-
Seems sensible. Nothing wrong with this in my opinion. It covers Stability's ass against legal action, dodges possible regulators, and keeps copyright holders happy. Aside from the still-open issue of validating the actual copyright status of whoever claims the opt-out, I can only see this as a positive thing. Those who for some fucking reason are upset about this should just skip SD3 and focus on some other base model not made by Stability, or make their own; all the tools are available at huggingface. Honestly, more base models is better for this field. However, as we know, nothing is worse than giving the collective internet power over you. There will be trolls who build a bot to flag every image they can on LAION just out of spite.
-
Hello, your email has been received.
-
On Wednesday, Stability AI announced it would allow artists to remove their work from the training dataset for an upcoming Stable Diffusion 3.0 release. The move comes as an artist advocacy group called Spawning tweeted that Stability AI would honor opt-out requests collected on its Have I Been Trained website. The details of how the plan will be implemented remain incomplete and unclear, however.
As a brief recap, Stable Diffusion, an AI image synthesis model, gained its ability to generate images by "learning" from a large dataset of images scraped from the Internet without consulting any rights holders for permission. Some artists are upset about it because Stable Diffusion generates images that can potentially rival human artists in an unlimited quantity. We've been following the ethical debate since Stable Diffusion's public launch in August 2022.
To understand how the Stable Diffusion 3 opt-out system is supposed to work, we created an account on Have I Been Trained and uploaded an image of the Atari Pong arcade flyer (which we do not own). After the site's search engine found matches in the Large-scale Artificial Intelligence Open Network (LAION) image database, we right-clicked several thumbnails individually and selected "Opt-Out This Image" in a pop-up menu.
Once flagged, we could see the images in a list of images we had marked as opt-out. We didn't encounter any attempt to verify our identity or any legal control over the images we supposedly "opted out."
Other snags: To remove an image from the training, it must already be in the LAION dataset and must be searchable on Have I Been Trained. And there is currently no way to opt out large groups of images or the many copies of the same image that might be in the dataset.
Full news: https://arstechnica.com/information-technology/2022/12/stability-ai-plans-to-let-artists-opt-out-of-stable-diffusion-3-image-training/