5 Ways to Implement Enterprise Security with Azure AI Studio

meera-kurup

Azure is an innovation platform trusted by millions of customers, and today more than 60,000 of them are using Azure AI to bring their ambitious ideas to life. Azure AI Studio is a trusted, enterprise-ready platform to build, test, deploy, and manage generative AI applications at scale. Enterprise readiness encompasses the capabilities enterprise customers expect from Azure, from securing resources with virtual networks to encrypting data.



In this blog, we’ll recap five capabilities that every IT Admin, Security Engineer, or Developer shifting left should know when planning their enterprise applications in Azure AI Studio.



#1 Use hubs and projects to streamline secure workspace setup


In Azure AI Studio, teams work across two layers: hubs and projects. A hub is typically managed by an IT admin or technical lead, who uses it to govern infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) and to configure the relevant Azure AI services.



Each hub typically contains one or more projects. Projects function as isolated development spaces, allowing developers and data scientists to build, test, and deploy AI systems. Each time a new project is created within a hub, it automatically inherits that hub’s security settings. This means developers can create their own projects and dive into their ideas quickly, knowing the correct security guardrails are already in place. It also means IT admins aren’t stuck feeling like a bottleneck for rapid innovation. Get started with hubs and projects: Manage, collaborate, and organize with hubs.
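
For illustration, here is a minimal Bicep sketch (not one of the official templates referenced in section #5) of a hub and a project linked to it. The names, API version, and dependency parameters are assumptions to adapt to your environment.

```bicep
// Hypothetical sketch: a hub plus one project that inherits its security settings.
// Resource names, the API version, and the dependency parameters are assumptions.
param storageAccountId string   // resource ID of an existing Storage account (hub dependency)
param keyVaultId string         // resource ID of an existing Key Vault (hub dependency)

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' = {
  name: 'my-ai-hub'
  location: resourceGroup().location
  kind: 'Hub'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: 'Contoso AI hub'
    storageAccount: storageAccountId
    keyVault: keyVaultId
  }
}

resource project 'Microsoft.MachineLearningServices/workspaces@2024-04-01' = {
  name: 'my-ai-project'
  location: resourceGroup().location
  kind: 'Project'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: 'Contoso AI project'
    hubResourceId: hub.id   // the project inherits the hub's security settings
  }
}
```

Because the project points at the hub through hubResourceId, it picks up the hub’s network, encryption, and policy configuration at creation time rather than defining its own.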



#2 Secure hubs, projects, and chat playgrounds with private endpoints


A private endpoint is a network interface for a specific resource, providing it with a private IP address assigned from your virtual network (VNET). When enabled, inbound and outbound communication with that resource is only available via your VNET. This setup is an increasingly popular approach among enterprises that want granular, resource-level controls to improve their security posture.



To secure inbound access to your hubs and projects, disable public network access for all resources. Disabling the public network access flag ensures that the hub can be reached only through a private endpoint. For the most secure setup, ensure the public network access flag is disabled for the following resources (a Bicep sketch of a hub private endpoint follows the list):

  • AI Studio
  • AI Services
  • Azure OpenAI
  • AI Search
  • Default resources: Storage, Key Vault, and Azure Container Registry (optional)
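
As a hedged sketch of what the private-endpoint side of this looks like in Bicep (the API version, names, and the 'amlworkspace' group ID are assumptions based on how hub and project workspaces expose Private Link sub-resources):

```bicep
// Hypothetical sketch: a private endpoint that gives an existing hub a private IP
// address in your VNET. The subnet and hub resource IDs are placeholder parameters.
param subnetId string   // resource ID of the subnet that will host the private endpoint
param hubId string      // resource ID of the AI Studio hub

resource hubPrivateEndpoint 'Microsoft.Network/privateEndpoints@2023-11-01' = {
  name: 'pe-my-ai-hub'
  location: resourceGroup().location
  properties: {
    subnet: {
      id: subnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'hub-connection'
        properties: {
          privateLinkServiceId: hubId
          groupIds: [
            'amlworkspace'   // Private Link sub-resource used by hubs and projects
          ]
        }
      }
    ]
  }
}
```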



To chat securely with your enterprise data in the AI Studio Chat Playground, ensure the following are configured on your resources:

  1. The public network access flag is disabled, and private endpoints are created from the AI Studio managed VNET to the Azure resources
  2. Microsoft Entra ID is selected for service authentication to AI Search, AI Services, Azure OpenAI, and Storage
  3. Azure role-based access control (RBAC) is enabled for service permissions
  4. Trusted services are enabled to securely send data between Azure resources

See our documentation for more details: Securely use playground chat - Azure AI Studio.
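
To make that checklist concrete, the following is a rough Bicep sketch of the AI Search and Storage portion: public network access disabled, local (key-based) authentication turned off so Microsoft Entra ID and RBAC are used instead, and the trusted-services bypass enabled on Storage. The API versions, names, and SKUs are assumptions; verify the exact settings against the documentation above.

```bicep
// Hypothetical sketch: lock down AI Search and Storage for secure playground chat.
resource search 'Microsoft.Search/searchServices@2023-11-01' = {
  name: 'my-ai-search'
  location: resourceGroup().location
  sku: {
    name: 'standard'
  }
  properties: {
    publicNetworkAccess: 'disabled'   // inbound only via private endpoints
    disableLocalAuth: true            // require Microsoft Entra ID instead of API keys
  }
}

resource storage 'Microsoft.Storage/storageAccounts@2023-05-01' = {
  name: 'myaihubstoragesec'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    publicNetworkAccess: 'Disabled'
    allowSharedKeyAccess: false       // Entra ID + Azure RBAC instead of account keys
    networkAcls: {
      defaultAction: 'Deny'
      bypass: 'AzureServices'         // let trusted Azure services reach the account
    }
  }
}
```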



Securely use the Chat Playground in AI Studio with disabled public access and private endpoints on your AI hub, AI Search, AI Services, and Storage.



To secure outbound access from your hub and projects, we recommend enabling managed virtual network isolation. With managed virtual networks, you can centrally manage compute egress network isolation for all projects within a hub using a single managed network. This allows for more streamlined network configuration and management. To get started, read our documentation: How to configure a managed network for Azure AI Studio.
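
A minimal sketch of enabling managed virtual network isolation on a hub in Bicep, assuming the same workspace resource type and API version as above (the isolation mode shown is one of the supported values; 'AllowInternetOutbound' is the less restrictive alternative):

```bicep
// Hypothetical sketch: enable managed VNET isolation on a hub so every project
// under it shares a single managed network for compute egress.
param storageAccountId string   // resource ID of an existing Storage account (hub dependency)
param keyVaultId string         // resource ID of an existing Key Vault (hub dependency)

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' = {
  name: 'my-ai-hub'
  location: resourceGroup().location
  kind: 'Hub'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: 'Contoso AI hub'
    storageAccount: storageAccountId
    keyVault: keyVaultId
    publicNetworkAccess: 'Disabled'
    managedNetwork: {
      isolationMode: 'AllowOnlyApprovedOutbound'   // or 'AllowInternetOutbound'
    }
  }
}
```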



#3 Go credential-less with Microsoft Entra ID


For more control when granting and restricting project users, you can create connections with Microsoft Entra ID as the authentication type. Entra ID allows admins to assign specific roles and access permissions to users. It is the most secure way to access a resource, and it is now enabled for Azure OpenAI Service, AI Services, and AI Search connections in Azure AI Studio. With Entra ID, you can consistently enforce access controls and permissions, reducing the risk of unauthorized access to your resources and projects. Get started with code samples or check our documentation: Role-based access control in Azure AI Studio.
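
As a rough sketch (the workspace connections child resource and the 'AAD' authType value below are assumptions drawn from the workspace connection schema; confirm against the documentation above), an Azure OpenAI connection that authenticates with Microsoft Entra ID might be declared like this:

```bicep
// Hypothetical sketch: an Azure OpenAI connection on an existing hub that
// authenticates with Microsoft Entra ID instead of stored keys.
param aoaiEndpoint string   // the Azure OpenAI endpoint URL of your resource

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' existing = {
  name: 'my-ai-hub'
}

resource aoaiConnection 'Microsoft.MachineLearningServices/workspaces/connections@2024-04-01' = {
  parent: hub
  name: 'aoai-entra-connection'
  properties: {
    category: 'AzureOpenAI'
    target: aoaiEndpoint
    authType: 'AAD'   // credential-less: Microsoft Entra ID authentication
  }
}
```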



Set your workspace connection authentication type to Microsoft Entra ID in AI Studio.



#4 Bring your own encryption keys (BYO keys)


To support the most confidential data workloads in Azure AI Studio, you can secure your data using customer-managed key encryption. Using customer-managed key encryption in AI Studio ensures that you have full control over your data security, allowing you to manage keys according to your organization’s policies. These controls enhance compliance with regulatory requirements and provide an additional layer of protection. Get started with our documentation on customer-managed keys for Azure AI services or follow our documentation steps to use the Azure AI Studio Chat Playground securely.
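
A hedged sketch of declaring customer-managed key encryption on a hub in Bicep (the encryption block mirrors the Azure Machine Learning workspace schema that hubs share; key vault and key identifiers are placeholders):

```bicep
// Hypothetical sketch: a hub encrypted with a customer-managed key from Key Vault.
param storageAccountId string   // resource ID of an existing Storage account (hub dependency)
param keyVaultId string         // resource ID of an existing Key Vault (hub dependency)
param cmkKeyVaultArmId string   // resource ID of the Key Vault holding your encryption key
param cmkKeyIdentifier string   // URI of the key version used for encryption

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' = {
  name: 'my-ai-hub'
  location: resourceGroup().location
  kind: 'Hub'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: 'Contoso AI hub'
    storageAccount: storageAccountId
    keyVault: keyVaultId
    encryption: {
      status: 'Enabled'
      keyVaultProperties: {
        keyVaultArmId: cmkKeyVaultArmId
        keyIdentifier: cmkKeyIdentifier
      }
    }
  }
}
```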



Set customer-managed keys for encryption of your new Azure AI hub.



#5 Start strong with Bicep templates for Azure AI Studio


Ready to build, but not sure where to begin? Azure AI provides Bicep templates that cover enterprise security capabilities, making it easier to build on best practices right away. Bicep offers a first-class authoring experience for your infrastructure-as-code solutions in Azure. In a Bicep file, you define the infrastructure you want to deploy, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure in a consistent manner. Bicep provides concise syntax, reliable type safety, and support for code reuse. Check out the Azure AI Studio Bicep templates to get started.
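
For example, once you have chosen a template, you can consume it as a module from your own Bicep file; the module path and parameter names below are placeholders rather than the templates' actual interface:

```bicep
// Hypothetical sketch: consume a downloaded hub template as a module in your own
// deployment. The module path and parameter names are placeholders.
module secureHub './modules/ai-studio-hub.bicep' = {
  name: 'secure-hub-deployment'
  params: {
    hubName: 'my-ai-hub'
    location: resourceGroup().location
  }
}
```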




Build secure, production-ready GenAI apps with Azure AI Studio


Security is our top priority at Microsoft, and our expanded Secure Future Initiative (SFI) underscores the company-wide commitments and the responsibility we feel to make our customers more secure. SFI is guided by three principles: secure by design, secure by default and secure operations. We apply these principles to AI services like Microsoft Copilot, and the AI development platforms that our engineers and our customers use to build custom apps, like Azure AI Studio.



To recap, here are five ways you can implement enterprise security in Azure AI Studio:

  1. Use hubs and projects to streamline secure workspace setup
  2. Secure hubs, projects and chat playgrounds with private endpoints
  3. Go credential-less with Entra ID
  4. Bring your own encryption keys (BYO keys)
  5. Start strong with Bicep templates for Azure AI Studio

Ready to go deeper? Explore more ways to enable enterprise-grade security capabilities and controls in Azure AI Studio:

  1. Azure security baseline for Azure AI Studio
  2. Customer enabled disaster recovery - Azure AI Studio
  3. Vulnerability management - Azure AI Studio
