Jump to content

Comprehensive AI Safety and Security with defense in depth for Enterprises


Recommended Posts

Guest Jaswant_Singh
Posted

largevv2px999.png.bd027e540730de1a6f3075715bb90d90.png

 

largevv2px999.png.faedb5fd643fac91122f65179e800aba.png

 

Azure AI Content Safety APIs

 

Azure AI Content Safety is a new service that helps detect hateful, violence, sexual, and self-harm content in images and text, and assign severity scores, allowing businesses to limit and prioritize what content moderators need to review. Unlike most solutions used today, Azure AI Content Safety can handle nuance and context, reducing the number of false positives and easing the load on human content moderator teams.

 

 

 

Prompt Shields (preview)
Identify and block direct and indirect prompt injection attacks before they impact your model, scans text for the risk of a User input attack on a Large Language Model. Quickstart
Groundedness detection (preview)
It detect model “hallucinations” so you can block or highlight ungrounded responses, detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Quickstart
Protected material text detection (preview)
Blocks copyrighted or known content like song lyrics, scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). Quickstart
Custom categories (rapid) API (preview)
Create and deploy your own content filters, lets you define emerging harmful content patterns and scan text and images for matches. How-to guide
Analyze text API Scans text for sexual content, violence, hate, and self harm with multi-severity levels.
Analyze image API Scans images for sexual content, violence, hate, and self harm with multi-severity levels.

 

largevv2px999.png.e07cb4ab40f5b22673fb849284ebebc2.png

 

 

 

largevv2px999.png.009e9f9bc9d21b9ef3f1d6730e0eee17.png

 

largevv2px999.png.31ea05c33bf746e9a677996917c0b75b.png

 

largevv2px999.png.af3384945e7463c0742db44a1dc4112a.png

 

largevv2px999.png.dcf6417fd3fd83950564b7c1147113a5.png

 

largevv2px999.png.b39168d6beb9a3c3cdd32de2dcadb279.pngResources

 

Designing and implementing a gateway solution with Azure OpenAI resources

 

GitHub - Azure-Samples/genai-gateway-apim: sample repo for APIM + Gen AI

 

apim-landing-zone-accelerator/scenarios/workload-genai/README.md at main · Azure/apim-landing-zone-accelerator

 

Develop AI apps using Azure AI services

 

Türkiye's stony spires

 

Secure your AI applications from code to runtime with Microsoft Defender for Cloud

 

Secure and Govern Your Custom-Built AI Apps with Microsoft Purview

 

https://github.com/Azure/PyRIT

 

azure-sdk-for-python/sdk/contentsafety/azure-ai-contentsafety/samples at main · Azure/azure-sdk-for-python (github.com)

 

azure-sdk-for-net/sdk/contentsafety/Azure.AI.ContentSafety/samples at main · Azure/azure-sdk-for-net (github.com)

 

Microsoft Threat Modeling Tool overview - Azure | Microsoft Learn

 

Configure GitHub Advanced Security for Azure DevOps features - Azure Repos | Microsoft Learn

 

Enterprise AppSec with GitHub Advanced Security

 

What is Azure AI Content Safety? - Azure AI services | Microsoft Learn

 

Continue reading...

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.

Guest
Reply to this topic...

×   Pasted as rich text.   Paste as plain text instead

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.

×
×
  • Create New...