Securing AI in Microsoft Azure: A Risk-Based Approach to AI Governance and Controls
As AI adoption accelerates, so do the risks. From prompt injection attacks to unauthorized model access, enterprises must adapt their security strategy to address the unique threats introduced by AI systems. Informed by the SANS Critical AI Security Guidelines v1.1, this post explores how to implement risk-based AI security using Microsoft Azure and Azure OpenAI services.
Key AI threats in the cloud
Several threats can impact the security of using AI in a cloud environment, including:
- Model manipulation via unauthorized access.
- Prompt injections and adversarial inputs that cause hallucinations or leakage.
- Data poisoning in training pipelines.
- Compliance gaps due to a lack of transparency or monitoring.
The 6 AI security control categories and how to implement them in Azure
Access controls
Goal: Restrict unauthorized access to models, APIs, and data.
Azure Implementation:
- Use Azure Role-Based Access Controls (RBAC) to limit access to OpenAI endpoints, Machine Learning (ML) workspaces, and storage.
- Enforce Azure AD Conditional Access with Multi-Factor Authentication (MFA).
- Use Private Endpoints for Azure OpenAI and Azure ML.
- Monitor and rotate API keys; use Managed Identities where possible.
Data protection
Goal: Safeguard prompt, training, and inference data.
Azure Implementation:
- Classify and protect data with Microsoft Purview.
- Encrypt data with Azure Storage Encryption and Azure Key Vault.
- Use Defender for Cloud for Data Loss Prevention (DLP) and alerts.
- Sanitize input/output using Azure Content Safety in Azure OpenAI.
Inference security
Goal: Prevent prompt injection, jailbreaks, and output manipulation.
Azure Implementation:
- Use system prompts/templates to prevent injection.
- Enable Azure AI Content Filters in Azure OpenAI.
- Add input validation and rate limits with API Management.
- Apply prompt engineering best practices.
Monitoring
Goal: Detect suspicious activity or model misuse.
Azure Implementation:
- Track usage with Azure Monitor and Application Insights.
- Alert on anomalies with Microsoft Sentinel.
- Log OpenAI request/response data.
- Monitor model drift with Azure ML monitoring.
Deployment Strategies
Goal: Harden infrastructure and pipelines.
Azure Implementation:
- Secure Continuous Integration and Continuous Deployment (CI/CD) with Azure DevOps or GitHub Actions.
- Isolate environments in Azure ML.
- Apply Defender for Cloud recommended AI-Security Posture Management (AI-SPM) settings.
- Use Azure ML Registries to track the AI Bill of Materials (AIBOM).
Governance, Risk & Compliance (GRC)
Goal: Ensure ethical, compliant, and transparent AI operations.
Azure Implementation:
- Use Azure Policy and Blueprints for governance.
- Track lineage with ML Registries and Data Catalogs.
- Conduct assessments with the Responsible AI dashboard.
- Document model usage with metadata and prompt tracking.
Recommendations for organizations
- Start with non-critical AI workloads.
- Centralize AI governance using Azure tools.
- Develop AI incident response playbooks using Sentinel and Defender for Cloud.
ArcherPoint can help
Securing AI is not just a technical issue; it’s a governance and risk challenge. With the proper Azure-native controls and SANS-aligned strategy, organizations can confidently adopt AI while staying secure, compliant, and resilient.
Contact ArcherPoint by Cherry Bekaert for help securing your Azure AI for Business Central.