April Speight
Last week at our first Microsoft AI Tour stop in Mexico City, we announced Trustworthy AI, our One Microsoft approach to delivering privacy, safety, and security through both commitments and capabilities.
Every AI innovation at Microsoft is grounded in a comprehensive set of AI principles, policies, and standards. This includes foundational commitments, such as our Secure Future Initiative, AI Principles, and Privacy Principles. These commitments give you confidence that you control your data and that your data is secure in any state, whether at rest or in transit. We're transparent about where data is located and how it's used, and we're committed to making sure AI systems are developed responsibly. These commitments also ensure that the AI systems we build are designed with privacy, safety, and security in mind from the start. We use our own best practices and learnings to provide you with capabilities and tools to help you build your own AI applications that share the same high standards we strive for. Whether you are an enterprise leader, an AI developer, or a Copilot enthusiast, Microsoft provides the foundation you need to build and use generative AI that you can trust.
In true fashion, with new announcements come new products and features! In his blog post, Takeshi Numoto, Executive Vice President and Chief Marketing Officer, shares more about Trustworthy AI and all of the product announcements we shared in Mexico City. However, for our Responsible AI blog series, we want to take a moment to dive deeper into the announcements for Responsible AI!
Evaluations
Evaluating your generative AI application is a key part of the Measure stage of the generative AI development lifecycle. While you might be tempted to rely on intuition or to apply mitigation strategies based on sporadic feedback about your app's output, running evaluations in a methodical, systematic way provides signals that inform targeted mitigation steps.
We announced four new capabilities in public preview to help you evaluate and improve your application's outputs with greater ease:
- Risk and safety evaluations for indirect prompt injection attacks
- Risk and safety evaluations for protected material (text)
- Math-based metrics: ROUGE, BLEU, METEOR, and GLEU (see the sketch after this list)
- Synthetic data generation and simulator for non-adversarial tasks
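To make the math-based metrics concrete, here's a minimal sketch of running them with the new Azure AI Evaluation SDK. The evaluator names below come from the azure-ai-evaluation package's public surface; double-check them against the reference docs linked later in this section before relying on them.

```python
# A minimal sketch of the new math-based metrics, assuming the
# azure-ai-evaluation package (pip install azure-ai-evaluation).
from azure.ai.evaluation import (
    BleuScoreEvaluator,
    GleuScoreEvaluator,
    MeteorScoreEvaluator,
    RougeScoreEvaluator,
    RougeType,
)

response = "Tokyo is the capital of Japan."
ground_truth = "The capital of Japan is Tokyo."

# Each evaluator is a callable that compares a response against a
# ground truth and returns a small dict containing its score.
evaluators = {
    "BLEU": BleuScoreEvaluator(),
    "ROUGE-L": RougeScoreEvaluator(rouge_type=RougeType.ROUGE_L),
    "METEOR": MeteorScoreEvaluator(),
    "GLEU": GleuScoreEvaluator(),
}

for name, evaluator in evaluators.items():
    print(name, evaluator(response=response, ground_truth=ground_truth))
```

Because each evaluator is a simple callable, you can loop over them as above and log the scores side by side for a given dataset row.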
In her blog post, my colleague Minsoo shares more details about these new capabilities, along with step-by-step tutorials that you can try today!
One key change you may have noticed is that we've migrated our evaluators from the promptflow-evals package to the new Azure AI Evaluation SDK! I'd highly recommend putting together a plan to migrate your existing evaluations to the new SDK. If you continue to run existing evaluations with the promptflow-evals package, you may hit an error about missing inputs: we've renamed some properties, so your existing dataset may be using outdated names.
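If your evaluation datasets are stored as JSONL with the old property names, a small one-off script can handle the rename. This is just a sketch: the old-to-new mapping below (question to query, answer to response) is the most common rename to watch for, the file names are hypothetical, and you should check the new SDK's evaluator signatures for the full list of expected inputs.

```python
# Hypothetical one-off migration script: rewrite a JSONL evaluation
# dataset from the old promptflow-evals property names to the names
# the Azure AI Evaluation SDK expects. Adjust FIELD_RENAMES to match
# the fields your evaluators actually consume.
import json

FIELD_RENAMES = {"question": "query", "answer": "response"}

with open("eval_dataset.jsonl") as src, open("eval_dataset_migrated.jsonl", "w") as dst:
    for line in src:
        row = json.loads(line)
        # Rename known keys, pass everything else through unchanged.
        migrated = {FIELD_RENAMES.get(key, key): value for key, value in row.items()}
        dst.write(json.dumps(migrated) + "\n")
```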
Looking for the reference docs? Don’t worry, I’ve got you covered! You can explore the new evaluation package here: azure.ai.evaluation package | Microsoft Learn.
Azure AI Content Safety
Azure AI Content Safety provides a robust set of guardrails for generative AI. We have a growing list of features and capabilities, which you can explore within our RAI Playlist. Not to mention, we also have a new Operationalize AI Responsibly with Azure AI Studio Learn Path, which provides guided instruction on applying these features through either a UI-based or code-first approach. Now joining the list of Content Safety capabilities are some amazing new features:
- Correction capability in Groundedness Detection (see the sketch after this list)
- Protected material detection for code
- Embedded content safety
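To give a feel for the correction capability, here's a hedged sketch of a Groundedness Detection request over REST with correction enabled. The route and header follow the existing groundedness detection preview API; treat the api-version, the correction flag, and the llmResource block as assumptions to verify against the Content Safety documentation.

```python
# A sketch of calling Groundedness Detection with correction enabled.
# Assumptions to verify against the docs: the preview api-version, the
# "correction" flag, and the shape of the "llmResource" block.
import os
import requests

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

body = {
    "domain": "Generic",
    "task": "Summarization",
    "text": "The quarterly report shows revenue grew 40%.",  # model output to check
    "groundingSources": ["The quarterly report shows revenue grew 14%."],
    "correction": True,  # assumption: opt-in flag for the new correction capability
    # Correction (like the reasoning option) is expected to need an
    # Azure OpenAI resource to generate the corrected text:
    "llmResource": {
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": os.environ["AOAI_ENDPOINT"],
        "azureOpenAIDeploymentName": os.environ["AOAI_DEPLOYMENT"],
    },
}

resp = requests.post(
    f"{endpoint}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumption: preview version
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
resp.raise_for_status()
print(resp.json())  # ungroundedness result, plus a suggested correction when detected
```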
As a former XR developer, I was pleasantly surprised to learn that Unity uses our content filtering models for Muse Chat! Carlotta recently shared more in her Content Filtering with Azure AI Studio post. I'd suggest giving the article a read to learn more!
Data Protection
For those developing AI solutions in highly regulated sectors, data protection is a paramount consideration. Data privacy is a universal concern, but the challenge of processing sensitive or regulated data in the cloud, where it must remain encrypted at all times, including during processing, is one we are actively addressing. Explore our latest features and capabilities designed to tackle this concern.
As a heads up, we’re starting with a limited preview for Azure AI Confidential Inferencing. Have a use case in mind? We want to hear from you! Fill out this form to sign up for a preview of our confidential inferencing service: Azure AI Confidential Inferencing Preview Sign-Up (office.com).
Next Steps
Many of our existing products and Azure services support our approach to Trustworthy AI. As a developer, if you're wondering where best to start, I'd suggest assessing your existing generative AI solution(s) and pinpointing how you've integrated features that support security, privacy, and safety. Maybe you'll uncover an opportunity to leverage one of the new capabilities I shared in this post? Or maybe an idea will spark while exploring our new Operationalize AI Responsibly with Azure AI Studio Learn Path!
If you're still in the ideation phase and haven't quite put keystroke to code editor, check out Pablo's lesson on the Generative AI Application Lifecycle, which is part of our Generative AI for Beginners course. As you review the lesson, pinpoint areas of opportunity to integrate privacy, safety, and security. There are more ways than one, and I'm excited to see what you decide!
Whichever path you pursue, just know that you're taking a great stride towards building AI solutions that are trustworthy!