Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detecting hate speech, violence, sexual content, or self-harm, or managing custom blocklists.
Use this skill when the task matches the description above or the source path clearly applies. Start with this concise entrypoint; open ../../../../../skills/by-category/cloud-azure-microsoft-sdks/official-vendor-reference/azure-ai-contentsafety-ts/SKILL.md only when implementation details, commands, assets, or references are needed.
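As a minimal sketch of how text-analysis results from this service are typically consumed: the API returns per-category severities (categories `Hate`, `SelfHarm`, `Sexual`, `Violence`, severity 0–7), and the caller gates content against its own policy. The threshold values and the `isBlocked` helper below are hypothetical illustrations, not part of the SDK.

```typescript
// Shape of one entry in the `categoriesAnalysis` array returned by text:analyze.
interface CategoryAnalysis {
  category: string;
  severity: number; // 0 (safe) .. 7 (most severe)
}

// Hypothetical per-category severity thresholds; tune per application policy.
const thresholds: Record<string, number> = {
  Hate: 2,
  SelfHarm: 0, // stricter: block any detected self-harm content
  Sexual: 2,
  Violence: 2,
};

// Block when any category's severity exceeds its threshold (default 2).
function isBlocked(analysis: CategoryAnalysis[]): boolean {
  return analysis.some((a) => a.severity > (thresholds[a.category] ?? 2));
}

console.log(isBlocked([{ category: "Hate", severity: 4 }])); // true
console.log(isBlocked([{ category: "Violence", severity: 1 }])); // false
```

Open the SKILL.md referenced above for the actual client calls that produce the `categoriesAnalysis` input.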
AGENTS.md: use one AI session only.

Related skills: cloud-azure-and-microsoft-sdks-kubernetes-examples, cloud-azure-and-microsoft-sdks-opentelemetry-demo, cloud-azure-and-microsoft-sdks-sec-edgar-companyfacts.

Do not claim this skill passed a runtime benchmark until a validated artifact exists.