MC1200579 – Microsoft Purview | Insider risk management – Insider risk management for risky agents


check before: 2025-12-01

Product:

Copilot, Purview, Purview Communication Compliance, Purview compliance portal, Purview Information Protection, Purview Insider Risk Management

Platform:

Online, Web, World tenant

Status:

In development

Change type:

New feature, User impact, Admin impact

Links:

516032

Details:

Summary:
Microsoft Purview Insider Risk Management will extend to detect and manage risky AI agent activities in enterprise environments. Features include integration with Copilot Studio and Azure AI Foundry, AI-specific risk policies, and governance of agent workflows. Public preview starts December 2025; general availability by December 2026.

Details:
[Introduction]
As AI agents become deeply embedded in enterprise ecosystems, they are evolving beyond simple tools or workflows into an autonomous digital workforce. These agents can interpret user intent, access and manipulate enterprise data, execute actions on behalf of users, and even make real-time decisions. In many ways, they operate like human insiders, only with machine-speed data processing capabilities.
To govern and protect these agents effectively, organizations require visibility into their activities, contextual understanding of their actions, and the ability to flag or block risky behavior. Now, Insider Risk Management can be extended to detect and remediate potentially risky agent activities.
Features:
Copilot Studio & Azure AI Foundry Integration: Detect potentially risky activities of agents hosted on the Copilot Studio, Azure AI Foundry, and Agent 365 platforms.
Risky AI Usage Policies: Define and enforce policies specific to AI agents accessing sensitive data or performing high-risk actions.
IRM for Agent Users: Extend IRM in Purview to govern agent-driven workflows and protect organizational data.
This message is associated with Microsoft 365 Roadmap ID 516032.
[When this will happen:]
Public Preview: We will begin rolling out early December 2025 and expect to complete by mid-January 2026.
General Availability (Worldwide): We will begin rolling out early December 2026 and expect to complete by late December 2026.

Release Phase:
General Availability, Preview

Created:
2025-12-20

Updated:
2025-12-20

Direct effects for Operations**

Risk of Data Breaches
Without proper preparation, the deployment of AI agents may lead to unauthorized access to sensitive data, increasing the risk of data breaches.
   - roles: Data Protection Officer, IT Security Manager
   - references: https://www.microsoft.com/microsoft-365/roadmap?filters=&searchterms=516032, https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection

User Experience Disruption
The introduction of AI agents without adequate training and policy enforcement may confuse users, leading to decreased productivity and frustration.
   - roles: End User, IT Support Specialist
   - references: https://learn.microsoft.com/purview/insider-risk-management-policies, https://learn.microsoft.com/purview/insider-risk-management-policy-templates

Compliance Violations
Failure to implement and monitor AI-specific risk policies may result in non-compliance with data protection regulations, leading to legal repercussions.
   - roles: Compliance Officer, Legal Advisor
   - references: https://learn.microsoft.com/purview/insider-risk-management-policies, https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection

Opportunities**

Enhanced Visibility into AI Agent Activities
With the integration of Insider Risk Management for AI agents, organizations can gain deeper insights into the activities of AI agents. This visibility allows for better monitoring of potentially risky behaviors, which is crucial for compliance and security.
   - next-steps: Implement the Risky Agents policy and customize it according to organizational needs. Train IT staff on monitoring and interpreting the alerts generated by the system.
   - roles: IT Security Manager, Compliance Officer, Data Protection Officer
   - references: https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection, https://learn.microsoft.com/purview/insider-risk-management-policies

Customized Risk Policies for AI Agents
The ability to define and enforce AI-specific risk policies enables organizations to tailor their risk management strategies to the unique behaviors and capabilities of AI agents. This can lead to improved data protection and compliance with internal policies.
   - next-steps: Develop a cross-functional team to identify key risks associated with AI agent activities and draft customized policies that address these risks. Pilot these policies with a small group of agents before full deployment.
   - roles: Risk Management Officer, AI Operations Manager, Compliance Officer
   - references: https://learn.microsoft.com/purview/insider-risk-management-policy-templates, https://www.microsoft.com/microsoft-365/roadmap?filters=&searchterms=516032

Governance of Agent Workflows
Extending Insider Risk Management to govern agent-driven workflows helps in ensuring that sensitive data is handled appropriately, thereby minimizing the risk of data breaches or misuse.
   - next-steps: Review existing workflows involving AI agents and identify areas where governance can be improved. Utilize the new features in the Purview portal to establish oversight mechanisms for these workflows.
   - roles: IT Operations Manager, Data Governance Officer, Compliance Officer
   - references: https://learn.microsoft.com/purview/insider-risk-management-policies, https://learn.microsoft.com/purview/insider-risk-management-adaptive-protection

explanation for non-techies**

Imagine you have a team of digital assistants, like a group of virtual employees, working within your organization. These AI agents are becoming more advanced, much like hiring new team members who can process information at lightning speed and make decisions in real-time. Just as you would want to monitor and manage the activities of human employees to ensure they are following company policies and not engaging in risky behavior, the same level of oversight is necessary for these AI agents.

Microsoft Purview's Insider Risk Management is being extended to include these AI agents. Think of it as setting up a security system that not only watches over your human employees but also keeps an eye on your digital workforce. This system will help detect any potentially risky activities by these AI agents, such as accessing sensitive data or performing actions that could pose a security threat.

To make this work, Microsoft is integrating this risk management with platforms like Copilot Studio and Azure AI Foundry. It's similar to having a specialized team that understands the unique behaviors and risks associated with these digital assistants. The system will automatically deploy policies that act like guidelines for these AI agents, ensuring they operate within safe and acceptable boundaries.

For instance, if an AI agent starts accessing more data than usual or attempts to perform an unauthorized action, the system will generate an alert. It's like having a notification system that lets you know when something unusual is happening, allowing you to take action before any potential issues escalate.

If you decide that monitoring these AI agents isn't necessary for your organization, you have the option to turn off this feature. It's akin to choosing whether or not to install security cameras in certain areas of your office.

Overall, this development aims to provide organizations with the tools needed to manage the risks associated with AI agents, ensuring they remain valuable assets rather than potential liabilities.

** AI generated content. This information must be reviewed before use.
