411568 – Microsoft Copilot (Microsoft 365): Copilot inference and evaluation policy for Copilot in Teams meetings (archived)


*A more relevant or more recent entry exists for this item: MC916990

Check before: 2024-11-01

Product:

Copilot, Teams

Platform:

Android, iOS, Mac, Windows Desktop, World tenant

Status:

Launched

Change type:

Links:

MC916990

Details:

This policy enables IT admins to block Copilot responses that might infer emotions, make evaluations, discuss personal traits, or use context to deduce answers. When this setting is applied, it restricts Copilot's ability to make inferences or evaluations about people or groups when prompted to do so by users. Users cannot disable this setting. The setting applies to Copilot in Teams meetings.
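The entry describes a tenant-level admin control rather than a documented programmatic interface, so the following is only an illustrative sketch of how such a tenant-wide toggle could be applied via a REST call. The endpoint path `admin/copilot/settings` and the property name `blockPersonalInferences` are hypothetical placeholders, not a documented Microsoft Graph API; consult the linked message center entry (MC916990) and current Microsoft documentation for the actual admin controls.

```python
import requests

# Purely illustrative sketch: the real admin surface for this policy is
# Microsoft's admin tooling. The endpoint and property below are
# hypothetical assumptions, not a documented API.

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def block_copilot_inferences(access_token: str) -> None:
    """Apply a (hypothetical) tenant-wide setting that blocks Copilot in
    Teams meetings from inferring emotions or evaluating people."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    resp = requests.patch(
        f"{GRAPH_BASE}/admin/copilot/settings",   # hypothetical path
        headers=headers,
        json={"blockPersonalInferences": True},   # hypothetical property
        timeout=30,
    )
    resp.raise_for_status()
```

Consistent with the entry above, any such setting would be enforced tenant-wide: individual meeting participants cannot override it.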


Release Phase:
General Availability

Created:
2024-09-10

Updated:
2024-12-11



Direct effects for Operations**

Please see the most relevant linked item for details.

Explanation for non-techies**

In the ever-evolving world of technology, it's important to understand how new tools and settings can impact our daily work routines. Recently, Microsoft introduced a new policy for its Copilot feature in Microsoft 365, specifically for Teams meetings. This policy is designed to help IT administrators control how Copilot interacts during these meetings.

Think of Copilot as a helpful assistant in your virtual meetings. Just like a real-life assistant, Copilot can take notes, summarize discussions, and help with various tasks. However, just like you wouldn't want your assistant to make personal comments or assumptions about your colleagues, this new policy ensures that Copilot doesn't either.

Imagine you're in a meeting, and someone asks Copilot to summarize a discussion. Without this policy, Copilot might inadvertently make comments that infer emotions or personal traits about the participants. For example, it might say, "John seemed frustrated when discussing the project deadline." While this might be accurate, it's not always appropriate or helpful in a professional setting.

The new policy allows IT admins to block Copilot from making such inferences or evaluations. This means Copilot will stick to the facts and avoid any comments that could be seen as personal or subjective. It's like setting boundaries for your assistant to ensure they remain professional and neutral.

This setting is applied across all Teams meetings and cannot be turned off by individual users. It's a bit like having a company-wide rule that everyone must follow, ensuring consistency and professionalism in all interactions.

In summary, this new policy for Microsoft Copilot in Teams meetings helps maintain a professional and respectful environment by preventing Copilot from making personal inferences or evaluations. Just as you would guide a human assistant to stay neutral and factual, this policy ensures that Copilot does the same, creating a more comfortable and focused meeting experience for everyone involved.

** AI generated content. This information must be reviewed before use.


Change history

Date        Property         Old               New
2024-12-11  RM Status        Rolling out       Launched
2024-11-16  RM Status        In development    Rolling out
2024-10-03  RM Product Tags  Microsoft Teams   Microsoft Copilot (Microsoft 365), Microsoft Teams
2024-09-28  RM Release       October CY2024    November CY2024

