check before: 2024-11-01
Product:
Copilot, Entra, Microsoft 365 Groups, Microsoft Graph, Teams
Platform:
Android, Developer, iOS, Mac, Online, Windows Desktop, World tenant
Status:
In development
Change type:
New feature, User impact, Admin impact
Links:

Details:
Summary:
This message describes a policy that allows IT admins to prevent Copilot in Teams from making inferences or evaluations in response to sentiment-related prompts. Users cannot disable the policy. The message includes instructions for IT admins on enabling or disabling Copilot limited mode through the Microsoft Graph API, with rollout starting in November 2024.
Details:
This policy enables IT admins to block Copilot responses that might infer emotions, make evaluations, discuss personal traits, or use context to deduce answers. When applied, it prevents Copilot from making inferences or evaluations about people or groups when users prompt it to do so. Users cannot disable the policy, which applies to Copilot in Teams meetings.
This message is associated with Microsoft 365 Roadmap ID 411568.
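For orientation, the sketch below shows what configuring this setting through the Microsoft Graph API might look like. It is a minimal example, not a definitive implementation: it assumes the beta endpoint /copilot/admin/settings/limitedMode (the copilotAdminLimitedMode resource from Microsoft's beta documentation, which may change before general availability) and an app registration with the corresponding application permission; the tenant ID, client ID, secret, and group ID are placeholders.

```python
# Minimal sketch: enable Copilot limited mode for one Microsoft 365 group
# via Microsoft Graph (beta). Endpoint path and property names are taken
# from the beta copilotAdminLimitedMode documentation and may change.
import requests

TENANT_ID = "<tenant-id>"        # placeholder
CLIENT_ID = "<app-client-id>"    # placeholder
CLIENT_SECRET = "<app-secret>"   # placeholder
GROUP_ID = "<m365-group-id>"     # group whose members get limited mode

# Acquire an app-only token with the client-credentials flow.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# PATCH the limited-mode setting: Copilot stops answering sentiment-style
# prompts for members of GROUP_ID. Individual users cannot override this.
resp = requests.patch(
    "https://graph.microsoft.com/beta/copilot/admin/settings/limitedMode",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"isEnabledForGroup": True, "groupId": GROUP_ID},
)
resp.raise_for_status()
print(resp.json())
```

Setting "isEnabledForGroup" to False in the same request would lift the restriction again; because this is tenant-level configuration, only an admin with suitable Graph permissions can change it.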
[Image: In-meeting Copilot experience UI]
When this will happen:
General Availability (Worldwide): We will begin rolling out early November 2024 and expect to complete by early December 2024.
Change Category:
XXXXXXX ... free basic plan only
Scope:
XXXXXXX ... free basic plan only
Release Phase:
General Availability
Created:
2024-10-24
Updated:
2024-10-24
Task Type
XXXXXXX ... free basic plan only
Docu to Check
XXXXXXX ... free basic plan only
MS How does it affect me
XXXXXXX ... free basic plan only
MS Preparations
XXXXXXX ... free basic plan only
MS Urgency
XXXXXXX ... free basic plan only
MS workload name
XXXXXXX ... free basic plan only
linked item details
XXXXXXX ... free basic plan only
Pictures
XXXXXXX ... free basic plan only
Summary for non-techies**
XXXXXXX ... free basic plan only
Direct effects for Operations**
Restriction of Copilot Functionality
Users in the specified group will not receive Copilot responses to sentiment-related prompts, limiting their ability to gain insights during meetings.
- roles: End Users, Team Leaders
- references: https://www.microsoft.com/microsoft-365/roadmap?rtc=1&filters=&searchterms=411568
User Experience Degradation
The inability to use Copilot for emotional or evaluative insights may lead to a less engaging and interactive meeting experience.
- roles: End Users, Facilitators
- references: https://www.microsoft.com/microsoft-365/roadmap?rtc=1&filters=&searchterms=411568
Increased IT Admin Workload
IT admins will need to manage and configure the new policy settings, which may increase their workload and require additional training; a scripted check such as the sketch after this list can reduce the manual effort.
- roles: IT Administrators, Support Staff
- references: https://www.microsoft.com/microsoft-365/roadmap?rtc=1&filters=&searchterms=411568
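To keep that added workload manageable, the current setting can be verified or logged from a script rather than checked by hand. A minimal sketch under the same assumptions as the earlier example (beta copilotAdminLimitedMode endpoint; the access token is assumed to be acquired as shown above):

```python
# Minimal sketch: read back the current Copilot limited-mode setting so it
# can be verified or recorded as part of routine administration. Assumes
# the same beta endpoint and an already-acquired app-only access token.
import requests

def get_limited_mode(access_token: str) -> dict:
    """Return the tenant's copilotAdminLimitedMode setting (beta API)."""
    resp = requests.get(
        "https://graph.microsoft.com/beta/copilot/admin/settings/limitedMode",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    resp.raise_for_status()
    return resp.json()

# Example: print whether limited mode is on and which group it targets.
setting = get_limited_mode("<access-token>")  # placeholder token
print("Enabled for group:", setting.get("isEnabledForGroup"))
print("Target group id:  ", setting.get("groupId"))
```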
Configuration Options**
XXXXXXX ... paid membership only
Opportunities**
XXXXXXX ... free basic plan only
Potential Risks**
XXXXXXX ... paid membership only
Data Protection**
XXXXXXX ... paid membership only
IT Security**
XXXXXXX ... paid membership only
Hypothetical Work Council Statement**
XXXXXXX ... paid membership only
DPIA Draft**
XXXXXXX ... paid membership only
Explanation for non-techies**
Imagine you have a personal assistant who helps you with various tasks, like scheduling meetings or summarizing documents. Now, let's say this assistant also tries to guess how you or others are feeling based on the conversations you have, or it tries to make judgments about people based on what it hears. Sometimes, this might be helpful, but other times, it could lead to misunderstandings or privacy concerns.
In the world of Microsoft Teams, there's a digital assistant called Copilot that can help users during meetings. However, it has the capability to make inferences or evaluations about people's emotions or traits based on the conversations it processes. To address potential concerns about privacy and accuracy, Microsoft is introducing a policy that allows IT administrators to limit this feature.
Think of it like putting a filter on your assistant so it only focuses on the factual tasks and doesn't try to read between the lines or make assumptions about people. This policy ensures that Copilot will not provide responses to prompts that require it to infer emotions or make evaluations about individuals or groups. It's like telling your assistant to stick to the facts and not guess how someone is feeling.
This change is managed through the Microsoft Graph API, a tool that IT administrators use to control various settings. By logging into this tool, they can set Copilot to a "limited mode" for specific groups, meaning Copilot will not engage in making inferences or evaluations for those groups. This setting is controlled by the organization and cannot be turned off by individual users, ensuring consistent application across the team.
This approach is similar to how a company might decide to use certain communication tools but restrict features that could lead to privacy issues or misinterpretations. By controlling these settings, organizations can better manage how technology interacts with sensitive information and maintain a focus on clear, factual communication.
** AI generated content. This information must be reviewed before use.