check before: 2025-12-01
Product:
Copilot, Purview, Purview Communication Compliance, Purview compliance portal, Purview Information Protection, Purview Insider Risk Management
Platform:
Online, Web, World tenant
Status:
Launched
Change type:
New feature, User impact, Admin impact
Links:
Details:
Summary:
Microsoft Purview Insider Risk Management will extend to detect and manage risky AI agent activities in enterprise environments. Features include integration with Copilot Studio and Azure AI Foundry, AI-specific risk policies, and governance of agent workflows. Public preview starts December 2025; general availability by December 2026.
Details:
[Introduction]
As AI agents become deeply embedded in enterprise ecosystems, they are evolving beyond simple tools or workflows into an autonomous digital workforce. These agents can interpret user intent, access and manipulate enterprise data, execute actions on behalf of users, and even make real-time decisions. In many ways, they operate like human insiders, only with machine-speed data-processing capabilities.
To govern and protect these agents effectively, organizations require visibility into their activities, contextual understanding of their actions, and the ability to flag or block risky behavior. Now, Insider Risk Management can be extended to detect and remediate potentially risky agent activities.
Features:
Copilot Studio & Azure AI Foundry Integration: Detect potentially risky activities of agents hosted on the Copilot Studio, Azure AI Foundry, and Agent 365 platforms.
Risky AI Usage Policies: Define and enforce policies specific to AI agents accessing sensitive data or performing high-risk actions.
IRM for Agent Users: Extend IRM in Purview to govern agent-driven workflows and protect organizational data.
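As an illustration of the policy concept described above, a "risky AI usage" policy conceptually pairs a scope of monitored agents with thresholds for sensitive-data access and a set of disallowed actions. The sketch below is purely hypothetical: none of the field or class names come from Purview, whose policy schema is not shown in this message.

```python
# Hypothetical sketch of a "risky AI usage" policy concept.
# Field names are illustrative assumptions, NOT the Purview schema;
# the code only demonstrates scoping agents and flagging
# high-risk activity against a threshold.

from dataclasses import dataclass, field


@dataclass
class RiskyAgentPolicy:
    name: str
    monitored_platforms: list = field(default_factory=list)  # e.g. "Copilot Studio"
    sensitive_access_threshold: int = 10  # sensitive-data accesses before an alert
    blocked_actions: set = field(default_factory=set)

    def evaluate(self, agent_activity: dict) -> list:
        """Return alert strings for activity that exceeds policy limits."""
        alerts = []
        if agent_activity.get("sensitive_accesses", 0) > self.sensitive_access_threshold:
            alerts.append(f"{agent_activity['agent']}: sensitive-data access above threshold")
        for action in agent_activity.get("actions", []):
            if action in self.blocked_actions:
                alerts.append(f"{agent_activity['agent']}: blocked action '{action}'")
        return alerts


policy = RiskyAgentPolicy(
    name="Risky Agents (example)",
    monitored_platforms=["Copilot Studio", "Azure AI Foundry"],
    sensitive_access_threshold=5,
    blocked_actions={"external_share"},
)
activity = {"agent": "invoice-bot", "sensitive_accesses": 12, "actions": ["external_share"]}
print(policy.evaluate(activity))
```

The real feature is configured in the Purview portal rather than in code; this sketch only conveys the shape of the governance idea: scope, threshold, and blocked actions per agent.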
This message is associated with Microsoft 365 Roadmap ID 516032.
[When this will happen:]
Public Preview: We will begin rolling out early December 2025 and expect to complete by mid-January 2026.
General Availability (Worldwide): We will begin rolling out early December 2026 and expect to complete by late December 2026.
Release Phase:
General Availability, Preview
Created:
2025-12-20
Updated:
2025-12-20
Direct effects for Operations
Risk of Data Breaches
Without proper preparation, the deployment of AI agents may lead to unauthorized access to sensitive data, increasing the risk of data breaches.
- roles: Data Protection Officer, IT Security Manager
- references: https://www.microsoft.com/microsoft-365/roadmap?filters=&searchterms=516032, https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection
User Experience Disruption
The introduction of AI agents without adequate training and policy enforcement may confuse users, leading to decreased productivity and frustration.
- roles: End User, IT Support Specialist
- references: https://learn.microsoft.com/purview/insider-risk-management-policies, https://learn.microsoft.com/purview/insider-risk-management-policy-templates
Compliance Violations
Failure to implement and monitor AI-specific risk policies may result in non-compliance with data protection regulations, leading to legal repercussions.
- roles: Compliance Officer, Legal Advisor
- references: https://learn.microsoft.com/purview/insider-risk-management-policies, https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection
Opportunities
Enhanced Monitoring of AI Agent Activities
Implementing Microsoft Purview Insider Risk Management for AI agents allows organizations to monitor agent activities in real-time, identifying potentially risky behaviors before they escalate. This can lead to improved security posture and reduced risk of data breaches.
- next-steps: Integrate the Insider Risk Management solution with existing security protocols and conduct training sessions for IT and security teams on how to interpret and act on alerts generated by the system.
- roles: IT Security Manager, Compliance Officer, Data Protection Officer
- references: https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection, https://learn.microsoft.com/purview/insider-risk-management-policies
Customizable Risk Policies for AI Agents
The ability to define and enforce AI-specific risk policies allows organizations to tailor their risk management strategies according to their unique operational requirements, enhancing the governance of AI agents' interactions with sensitive data.
- next-steps: Review existing organizational policies and adapt them to the new AI agent risk management capabilities, ensuring that all stakeholders are involved in the policy creation process.
- roles: Compliance Officer, Risk Manager, IT Administrator
- references: https://learn.microsoft.com/purview/insider-risk-management-policy-templates, https://www.microsoft.com/microsoft-365/roadmap?filters=&searchterms=516032
Automated Risk Alerts for Proactive Management
The automatic deployment of the Risky Agents policy enables organizations to receive timely alerts when AI agents exhibit risky behaviors, facilitating proactive risk management and timely interventions.
- next-steps: Set up a notification system for relevant stakeholders to ensure they receive alerts and establish a response protocol for managing flagged activities of AI agents.
- roles: IT Operations Manager, Security Analyst, Business Continuity Manager
- references: https://learn.microsoft.com/purview/insider-risk-management-policies, https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection
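A response protocol like the one suggested above typically begins with a triage step that routes flagged agent activity to the right stakeholders. The following sketch is a hypothetical illustration: the severity levels, routing table, and alert fields are assumptions for demonstration, not Purview output.

```python
# Hypothetical alert-triage sketch: route flagged AI-agent activity
# to stakeholders by severity. The severity names, routing table,
# and alert structure are illustrative assumptions.

ROUTING = {
    "high": ["security-analyst@example.com", "it-ops@example.com"],
    "medium": ["security-analyst@example.com"],
    "low": [],  # low-severity alerts are only logged, not routed
}


def triage(alerts: list[dict]) -> dict[str, list[dict]]:
    """Group incoming alerts by the recipients who should be notified."""
    notifications: dict[str, list[dict]] = {}
    for alert in alerts:
        for recipient in ROUTING.get(alert.get("severity", "low"), []):
            notifications.setdefault(recipient, []).append(alert)
    return notifications


sample = [
    {"agent": "invoice-bot", "severity": "high", "detail": "bulk download"},
    {"agent": "faq-bot", "severity": "low", "detail": "unusual hours"},
]
print(triage(sample))
```

In practice the notification side would hook into whatever channel the organization already uses (email, Teams, a ticketing system); the point here is simply that a documented severity-to-stakeholder mapping should exist before the alerts start arriving.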
explanation for non-techies
Microsoft is expanding its Purview Insider Risk Management to include AI agents, which are becoming a significant part of enterprise environments. Think of these AI agents as digital employees. Just like human employees, they can understand instructions, access company data, perform tasks, and make decisions. Unlike human employees, however, they operate at machine speed and can process data far faster than any person.
To ensure these AI agents act responsibly and don't pose a risk to the organization, Microsoft is introducing tools to monitor and manage their activities. This is similar to how a company might have policies and systems in place to oversee employee actions to prevent data breaches or other security issues.
The new features will allow organizations to integrate with platforms like Copilot Studio and Azure AI Foundry, where these AI agents are hosted. They can set specific policies for AI agents, much like setting rules for employees, to control what data they can access and what actions they can perform. If an AI agent's activity seems risky or exceeds certain thresholds, alerts will be generated, allowing the organization to take action.
For example, imagine a security guard at a museum. The guard monitors visitors to ensure they don't touch or damage the exhibits. Similarly, these new tools will act as digital security guards, watching over AI agents to ensure they don't misuse data or perform unauthorized actions.
Organizations will have the ability to customize these policies based on their specific needs and can even choose not to track agent activities if they prefer. The rollout for these features will begin in December 2025, with full availability expected by December 2026. This development aims to provide organizations with the necessary tools to manage the growing presence of AI agents in their digital workforce effectively.