Is NSFW AI Dangerous?

Worries over not-safe-for-work (NSFW) AI are growing around its ability to manage and curate content, and the harm it can inadvertently cause. A 2022 Gartner report found that content over-filtering affected almost 60% of companies using NSFW AI systems, resulting in complaints from a large number of influencers and even significant financial losses. According to one report, more than $50 million in advertising revenue was lost because of overzealous blocking.

AI-powered content moderation relies on algorithms trained on vast datasets, a machine learning approach in which companies like OpenAI or Meta feed billions of labeled images to neural networks. However, as these systems are tuned more aggressively, they can start removing content that never actually crosses the "NSFW" threshold. A 2021 Stanford study found that AI systems misflag non-pornographic content about 25% of the time, including artistic or educational material, further evidence that independent testing methodologies may not be working effectively to separate genuinely explicit material from everything else.
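To make the over-filtering mechanism concrete, here is a minimal sketch of threshold-based moderation. The scores, image names, and the `moderate` function are hypothetical, not any platform's real pipeline; the point is simply that a stricter cutoff sweeps borderline artistic or educational content into the flagged pile.

```python
# Hypothetical sketch: how tightening an NSFW score threshold creates false positives.

def moderate(images, threshold=0.8):
    """Flag any image whose NSFW score meets or exceeds the threshold."""
    flagged, allowed = [], []
    for image in images:
        (flagged if image["nsfw_score"] >= threshold else allowed).append(image)
    return flagged, allowed

# Made-up batch: an art nude scores near, but below, truly explicit content.
batch = [
    {"id": "selfie",        "nsfw_score": 0.05},
    {"id": "art_nude",      "nsfw_score": 0.62},  # artistic/educational
    {"id": "explicit_post", "nsfw_score": 0.97},
]

strict_flags, _ = moderate(batch, threshold=0.5)   # stricter cutoff
lenient_flags, _ = moderate(batch, threshold=0.8)  # more permissive cutoff
print([i["id"] for i in strict_flags])   # ['art_nude', 'explicit_post'] -> false positive
print([i["id"] for i in lenient_flags])  # ['explicit_post']
```

The trade-off is unavoidable: lowering the threshold catches more explicit material but also removes more legitimate content, which is exactly the over-filtering pattern the Gartner and Stanford figures describe.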

Misclassifications are only the tip of the iceberg when it comes to how AI can go wrong: critics argue that bias within these systems creates far-reaching dangers. In a widely covered incident from 2020, for example, an algorithmic content moderation tool disproportionately flagged content from minority groups. These errors demonstrated that biases in the training data translate directly into discriminatory outcomes, including gender discrimination. As Cathy O'Neil wrote in her influential book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, "Algorithms are opinions embedded in code" (O'Neil).
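One way to surface this kind of bias is a disparity audit: compare how often the moderator flags content from different creator groups. The sketch below uses made-up log entries and a hypothetical `flag_rate_by_group` helper; a real audit would run over production moderation decisions.

```python
# Illustrative sketch of a simple disparity audit over moderation decisions.
from collections import defaultdict

def flag_rate_by_group(moderation_log):
    """moderation_log: list of (group, was_flagged) tuples."""
    totals, flags = defaultdict(int), defaultdict(int)
    for group, was_flagged in moderation_log:
        totals[group] += 1
        flags[group] += int(was_flagged)
    return {g: flags[g] / totals[g] for g in totals}

# Hypothetical log entries.
log = [("group_a", True), ("group_a", False), ("group_a", False),
       ("group_b", True), ("group_b", True), ("group_b", False)]
print(flag_rate_by_group(log))  # group_b flagged at twice group_a's rate: a gap worth auditing
```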

The danger only increases when we look at the scaling problem. With billions of images uploaded to social media each month, automated moderation becomes a critical necessity once platforms grow past hundreds of millions of users. A 2023 University of Edinburgh study found that accuracy at this scale falls to only about 85%, and AI-based moderation tools still depend heavily on human staff for edge cases. Companies that fail to regularly audit their AI models, as Facebook learned in 2019, will keep shooting themselves in the foot and opening themselves up to million-dollar lawsuits.
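That reliance on human staff for edge cases usually takes the form of confidence-based escalation: the model acts on its own only when it is very sure, and everything in between goes to a reviewer. The thresholds and the `route_decision` function below are assumptions for illustration, not a documented platform policy.

```python
# Sketch of human-in-the-loop routing: auto-act only on confident scores,
# escalate the ambiguous middle band to human reviewers.

def route_decision(nsfw_score, auto_remove_at=0.95, auto_allow_below=0.20):
    """Return the action for a single piece of content."""
    if nsfw_score >= auto_remove_at:
        return "remove"
    if nsfw_score < auto_allow_below:
        return "allow"
    return "send_to_human_review"

for score in (0.03, 0.55, 0.98):
    print(score, "->", route_decision(score))
# 0.03 -> allow, 0.55 -> send_to_human_review, 0.98 -> remove
```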

Still, the propensity of NSFW AI to over-restrict content and reinforce prejudice deserves significantly more attention. This was a known risk from the beginning, especially for platforms like TikTok that rely on automated moderation at scale, and those platforms will have to keep updating their models regularly so they do not overreach. In the meantime, development of nsfw ai continues in pursuit of a digital world that is safer yet more open and inclusive.
