Microsoft’s Azure AI Content Safety service includes image and text detection to identify and grade content based on the likelihood that it will cause harm.

Microsoft has announced the general availability of Azure AI Content Safety, a new service that helps users detect and filter harmful AI- and user-generated content across applications and services. The service includes text and image detection and identifies content that Microsoft terms “offensive, risky, or undesirable,” including profanity, adult content, gore, violence, and certain types of speech.

“By focusing on content safety, we can create a safer digital environment that promotes responsible use of AI and safeguards the well-being of individuals and society as a whole,” wrote Louise Han, product manager for Azure Anomaly Detector, in a blog post announcing the launch.

Azure AI Content Safety can handle a variety of content categories, languages, and threats to moderate both text and visual content. It also offers image features that use AI algorithms to scan, analyze, and moderate visual content, providing what Microsoft terms 360-degree comprehensive safety measures.

The service moderates content in multiple languages and uses a severity metric that grades content on a scale from 0 to 7. Content graded 0-1 is deemed safe and appropriate for all audiences, while content that expresses prejudiced, judgmental, or opinionated views is graded 2-3, or low severity. Medium-severity content, graded 4-5, contains offensive, insulting, mocking, or intimidating language, or explicit attacks against identity groups. High-severity content, graded 6-7, contains the harmful and explicit promotion of harmful acts, or endorses or glorifies extreme forms of harmful activity toward identity groups.

Azure AI Content Safety also uses multicategory filtering to identify and categorize harmful content across a number of critical domains, including hate, violence, self-harm, and sexual content. (A short code sketch at the end of this article shows how these categories and severity scores might surface to developers.)

“[When it comes to online safety] it is crucial to consider more than just human-generated content, especially as AI-generated content becomes prevalent,” Han wrote. “Ensuring the accuracy, reliability, and absence of harmful or inappropriate materials in AI-generated outputs is essential. Content safety not only protects users from misinformation and potential harm but also upholds ethical standards and builds trust in AI technologies.”

Azure AI Content Safety is priced on a pay-as-you-go basis. Interested users can check out pricing options on the Azure AI Content Safety pricing page.
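To make the severity scale and category filters concrete, here is a minimal sketch of a text-analysis call, assuming the azure-ai-contentsafety Python SDK. The endpoint, key, and sample text are placeholders, and the exact response fields can vary across SDK and API versions.

```python
# pip install azure-ai-contentsafety
# Minimal sketch, assuming the azure-ai-contentsafety Python SDK;
# the endpoint and key are placeholders for your own Azure resource.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

def severity_band(severity: int) -> str:
    """Map the 0-7 scale described above onto the four bands Microsoft uses."""
    if severity <= 1:
        return "safe"
    if severity <= 3:
        return "low"
    if severity <= 5:
        return "medium"
    return "high"

# Analyze a piece of user- or AI-generated text; the service scores it
# across the hate, self-harm, sexual, and violence categories.
response = client.analyze_text(AnalyzeTextOptions(text="<text to moderate>"))

# An application would typically block or flag content whose severity
# in any category crosses a threshold the developer chooses.
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity} ({severity_band(item.severity)})")
```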
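Image moderation follows the same pattern. A similar sketch, again assuming the same SDK and reusing the client created above; the file path is a placeholder:

```python
# Similar sketch for image moderation; "photo.jpg" is a placeholder path.
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Read the image bytes and wrap them in the request object.
with open("photo.jpg", "rb") as f:
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

# Reuses the `client` from the text-analysis sketch above.
response = client.analyze_image(request)

for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```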