Advanced as it is, NSFW AI is not 100% accurate at determining the presence of explicit content. Even well-trained systems, which are non-trivial to build and deploy, fall short of perfect accuracy: the best typically reach around 90–95%, and within that range the leading systems perform roughly on par with one another. That means a small share of content inevitably goes misclassified, often because it is unusual or underrepresented in the training data compared with what the model encounters in live traffic. Some imagery may simply be too nuanced for current state-of-the-art machine-learning algorithms to classify reliably. And despite the name, the mistakes these filters make are not always harmless: research has demonstrated that they falsely flag artwork, medical images, and even entirely non-explicit family photos as inappropriate.
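An accuracy figure like 95% can be misleading on its own: at platform scale, even a highly accurate filter produces large numbers of false positives when explicit content is rare. A minimal sketch, with all counts fabricated for illustration:

```python
# Minimal sketch: why "95% accuracy" still means many mistakes at scale.
# All numbers below are illustrative assumptions, not real platform data.

def moderation_stats(true_positives, false_positives, true_negatives, false_negatives):
    """Compute standard classification metrics from a confusion matrix."""
    total = true_positives + false_positives + true_negatives + false_negatives
    accuracy = (true_positives + true_negatives) / total
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return accuracy, precision, recall

# Hypothetical day of moderation: 1,000,000 images, 2% truly explicit.
acc, prec, rec = moderation_stats(
    true_positives=18_000,   # explicit images correctly flagged
    false_positives=40_000,  # benign images wrongly flagged (art, medical, family photos)
    true_negatives=940_000,  # benign images correctly passed
    false_negatives=2_000,   # explicit images missed
)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```

In this hypothetical, accuracy is about 0.958, yet fewer than a third of flagged images are actually explicit (precision ≈ 0.31), which is exactly how art and family photos end up wrongly removed.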
A well-known instance of NSFW AI causing offense happened on Facebook, where automated filters flagged images that are not actually explicit, such as breastfeeding photos. Facebook acknowledged that its algorithms had detected nudity but were unable to weigh context before taking action, which triggered complaints and revisions to its moderation tools. Cases like this reveal just how tricky it is to maintain uniform accuracy, especially when the same image can be deemed explicit or benign depending on its context. Constant updates have made the algorithms considerably more accurate, as companies invest tens of millions of dollars per year to retrain models on user-feedback data from use cases across the digital world.
Another issue is false negatives, when NSFW AI overlooks content that should in fact be flagged as explicit. Adversarial methods, such as subtly altering pixel patterns, can evade AI filters, reducing detection rates by as much as 20%. A 2020 MIT study showed how slight alterations to images could deceive existing NSFW AI systems, with the filters correcting fewer than half of their detection errors on manipulated content. This susceptibility highlights a core limitation of current AI models: powerful, but fundamentally exploitable through evasion.
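The mechanism behind this kind of evasion can be sketched in a few lines. The "classifier" below is a hypothetical stand-in (a mean-brightness threshold), not any real NSFW model; it only illustrates how a small, nearly invisible perturbation can flip a decision that sits near the model's boundary:

```python
# Toy demonstration of pixel-level adversarial evasion.
# toy_classifier and the 0.5 threshold are illustrative assumptions,
# not part of any real moderation system.

def toy_classifier(image, threshold=0.5):
    """Flag an image as 'explicit' if mean pixel intensity exceeds a threshold."""
    return sum(image) / len(image) > threshold

def perturb(image, epsilon=0.02):
    """Nudge each pixel down by at most epsilon, keeping values in [0, 1]."""
    return [min(1.0, max(0.0, p - epsilon)) for p in image]

# Hypothetical image whose mean sits just above the decision boundary.
image = [0.51] * 64

print(toy_classifier(image))           # flagged as explicit
print(toy_classifier(perturb(image)))  # tiny perturbation evades the filter
```

Real attacks use gradient-based optimization against deep networks rather than a threshold, but the principle is the same: a change too small for humans to notice moves the input across the model's decision boundary.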
AI accuracy also concerns tech industry leaders. Elon Musk, for example, has remarked that artificial intelligence is only as good as the data it trains on. This underscores the need for large, efficient, and diverse datasets to train more accurate models. To that end, Google and Microsoft work on strengthening their datasets with more varied examples of content, including explicit material, to improve their algorithms' ability to distinguish what is and isn't nudity.
AI bias is another factor affecting accuracy: some training sets do not represent all demographics equally, which can leave certain groups with a higher false positive rate. This bias has led companies like Facebook and Instagram to invest heavily in reducing bias within their models, dedicating as much as 10 percent of their AI development budgets to fixing biases that can have a disparate impact on particular user segments. These investments improve accuracy but do not eliminate bias entirely, and because user behavior keeps evolving, the systems require continuous updates to stay effective.
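Disparate impact of the kind described above is typically measured by comparing false positive rates across groups. A minimal sketch, with fabricated audit records and hypothetical group labels:

```python
# Minimal sketch: measuring false-positive-rate disparity across demographic
# groups. The sample records and group labels are fabricated for illustration.

from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, actually_explicit, flagged) tuples."""
    fp = defaultdict(int)   # benign items wrongly flagged, per group
    neg = defaultdict(int)  # total benign items, per group
    for group, actually_explicit, flagged in records:
        if not actually_explicit:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Hypothetical audit sample: group B's benign content is flagged more often.
sample = (
    [("A", False, False)] * 90 + [("A", False, True)] * 10 +
    [("B", False, False)] * 80 + [("B", False, True)] * 20
)
rates = false_positive_rate_by_group(sample)
print(rates)  # group B's false positive rate is twice group A's
```

Tracking a per-group metric like this over time is one way a platform can tell whether bias-reduction spending is actually narrowing the gap.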
For more on emerging advances in nsfw ai accuracy, see a comprehensive piece on AItake about how AI-powered moderation technology has evolved over time. At the end of the day, NSFW AI will remain imperfect even at high accuracy, and content moderation will continue to have room for improvement.