Microsoft Security Community Blog

How to deploy Microsoft Purview DSPM for AI to secure your AI apps

Sophie_Ke
Apr 15, 2025

Learn how to secure and govern the interactions of Microsoft Copilot experiences, Enterprise AI apps, and all other AI apps with Microsoft Purview Data Security Posture Management (DSPM) for AI

Microsoft Purview Data Security Posture Management (DSPM for AI) is designed to enhance data security for the following AI applications:

  • Microsoft Copilot experiences, including Microsoft 365 Copilot.
  • Enterprise AI apps, including ChatGPT enterprise integration.
  • Other AI apps, meaning all other AI applications accessed through the browser, such as ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini.

In this blog, we will dive into the policies and reporting available to discover, protect, and govern these three types of AI applications.

Prerequisites

Please refer to the prerequisites for DSPM for AI in the Microsoft Learn Docs.

Login to the Purview portal

To begin, log into the Microsoft Purview portal with your admin credentials:

Figure 1. DSPM for AI in the Microsoft Purview portal

1. Securing Microsoft 365 Copilot

Be sure to check out our blog on How to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot.

Discover potential data security risks in Microsoft 365 Copilot interactions
  1. In the Overview tab of DSPM for AI, start with the tasks in “Get Started” and activate Purview Audit if you have not yet done so in your tenant, to gain insights into user interactions with Microsoft Copilot experiences.

     

    Figure 2. Get started with DSPM for AI and activate Microsoft Purview Audit

     

  2. In the Recommendations tab, review the recommendations that are under “Not Started”.
  3. Click into the following recommendation to create a data discovery policy that discovers sensitive information in AI interactions.
    • Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Learn more about the Risky AI usage policy in the Microsoft Learn documentation.

       

      Figure 3. Recommended policy to detect risky interactions in AI apps
  4. With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky. Filter to Microsoft Copilot experiences and review the following reports:

     

      • Total interactions over time (Microsoft Copilot)
      • Sensitive interactions per AI app
      • Top unethical AI interactions
      • Top sensitivity labels referenced in Microsoft 365 Copilot
      • Insider Risk severity
      • Insider risk severity per AI app
      • Potential risky AI usage
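The report data above is backed by Purview Audit. If you want to pull the underlying Copilot interaction events programmatically, the Microsoft Graph Audit Log Query API is one option. The sketch below only builds the request body locally; the endpoint path and the `copilotInteraction` record-type value are assumptions to verify against the current Graph security API reference before use.

```python
from datetime import datetime, timedelta, timezone

# Endpoint to POST the query body to (verify against current Graph docs).
GRAPH_AUDIT_QUERIES = "https://graph.microsoft.com/v1.0/security/auditLog/queries"

def build_copilot_audit_query(days_back: int = 7) -> dict:
    """Build an audit log query body scoped to Copilot interaction records."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days_back)
    return {
        "displayName": f"Copilot interactions, last {days_back} days",
        "filterStartDateTime": start.isoformat(),
        "filterEndDateTime": end.isoformat(),
        # Limits results to Copilot interaction audit records.
        "recordTypeFilters": ["copilotInteraction"],
    }

body = build_copilot_audit_query()
# POST `body` to GRAPH_AUDIT_QUERIES with a token holding the audit-log
# query permission, then poll the created query's records collection.
```

This keeps the date math and filter construction in one place so the same helper can be reused for other record types.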
Protect sensitive data in Microsoft 365 Copilot interactions
  1. From the Reports tab, click “View details” on each report graph to view detailed activities in the Activity Explorer. Use the available filters to view activities from Microsoft Copilot experiences by Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. Then drill down into each activity to view its details, including the ability to view prompts and responses with the right permissions.

     

    Figure 4. Activity explorer in DSPM for AI
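If you export Activity Explorer events for offline analysis, the same category filtering can be reproduced in a few lines. This is an illustrative sketch only; the record fields below are hypothetical and do not represent the actual export schema.

```python
# Hypothetical event records -- field names are illustrative, not the
# real DSPM for AI Activity Explorer export schema.
def filter_by_app_category(events: list[dict], category: str) -> list[dict]:
    """Keep only events whose AI app category matches, like the portal filter."""
    return [e for e in events if e.get("aiAppCategory") == category]

events = [
    {"activityType": "AIInteraction", "aiAppCategory": "Copilot experiences"},
    {"activityType": "AIWebsiteVisit", "aiAppCategory": "Other AI apps"},
]
copilot_only = filter_by_app_category(events, "Copilot experiences")
```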

 

  2. To protect the sensitive data in interactions for Microsoft 365 Copilot, review the Not Started policies in the Recommendations tab and create these policies:
      • Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.
      • Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through the process of creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Learn more about Data Loss Prevention for Microsoft 365 Copilot in the documentation.

         

        Figure 5. Recommended policy to protect sensitive data referenced in Microsoft 365 Copilot

         

      • Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments! 

         

        Figure 6. Recommended action to protect sensitive data referenced in Copilot responses

         

      • Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org.

         

        Figure 7. Recommended action to use Copilot to improve your data security posture

         

  3. Once you have created policies from the Recommendations tab, go to the Policies tab to review and manage all the policies you have created across your organization in one centralized place, edit them, or investigate alerts associated with them in the corresponding solution. Note that policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that help secure and govern all AI apps.
Govern the prompts and responses in Microsoft 365 Copilot interactions
  1. Understand and comply with AI regulations by selecting “Guided assistance to AI regulations” in the Recommendations tab and walking through the “Actions to take”.

     

    Figure 8. Recommendation to get guided assistance to AI regulations using Compliance Manager

     

  2. From the Recommendations tab, create the “Control unethical behavior in AI” Purview Communication Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT Enterprise. This policy covers all users and groups in your organization.

     

    Figure 9. Recommended policy to control unethical behavior in AI

 

  3. To retain and/or delete Microsoft 365 Copilot prompts and responses, set up a retention policy by navigating to Microsoft Purview Data Lifecycle Management and finding Retention Policies under the Policies header.

 

Figure 10. Set up a data lifecycle policy with Purview DLM to retain and/or delete Microsoft 365 Copilot interactions

 

  4. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case.

 

Figure 11. Create an eDiscovery case to preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions
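Besides the portal, eDiscovery cases can be created programmatically through the Microsoft Graph security API. The sketch below only assembles the request payload; the endpoint path shown in the comment and the required permissions are points to verify against the current Graph eDiscovery documentation.

```python
import json

# Sketch: request body for creating an eDiscovery case to preserve and
# review Copilot interactions.
# POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases
case_body = {
    "displayName": "Copilot interaction review",
    "description": "Preserve, collect, and review Microsoft 365 Copilot prompts and responses",
}

payload = json.dumps(case_body)
# Send `payload` with an authorized Graph client, then add custodians and
# searches to the case as your investigation requires.
```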

2. Securing Enterprise AI apps

Please refer to the blog Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub for detailed information on how to integrate with ChatGPT Enterprise and the Purview solutions it currently supports: Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. You can also learn more about the feature in our public documentation.

3. Securing other AI

Microsoft Purview DSPM for AI currently supports the following list of AI sites.

Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser & network to secure other AI apps.

Discover potential data security risks in prompts sent to other AI apps
  1. In the Overview tab of DSPM for AI, go through these three steps in “Get Started” to discover potential data security risks in other AI interactions:

     Figure 12. Get started with steps to discover potential data security risks with other AI apps

    • Install the Microsoft Purview browser extension

      Figure 13. Install the Microsoft Purview browser extension

For Windows users:

The Purview extension is not necessary to enforce data loss prevention in the Edge browser, but it is required for Chrome to detect sensitive info pasted or uploaded to AI sites. The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy in both Edge and Chrome. Therefore, the Purview browser extension is required for both Edge and Chrome on Windows.

For MacOS users:

The Purview extension is not necessary to enforce data loss prevention on macOS devices, and detecting browsing to other AI sites through Purview Insider Risk Management is not currently supported on macOS. Therefore, no Purview browser extension is required for macOS.
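The platform rules above can be summarized as a small decision table. The sketch below simply restates the guidance from this post in code; it is not a Purview API.

```python
# The extension-requirement rules from this post, encoded as a lookup.
def purview_extension_required(os_name: str, browser: str) -> bool:
    """Return True if the Purview browser extension is needed on this platform."""
    if os_name == "macOS":
        # DLP on macOS does not need the extension, and IRM detection of
        # visits to other AI sites is not supported on macOS.
        return False
    if os_name == "Windows" and browser in ("Edge", "Chrome"):
        # Edge needs it for IRM site-visit detection; Chrome needs it for
        # both DLP paste/upload detection and IRM site-visit detection.
        return True
    raise ValueError("combination not covered in this post")
```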

 

    • Onboard devices to Microsoft Purview

      Figure 14. Onboard devices to Microsoft Purview
    • Extend your insights for data discovery – this one-click collection policy will set up three separate Purview detection policies for other AI apps:
      • Detect sensitive info shared in AI prompts in Edge – a Purview collection policy that detects prompts sent to ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only.
      • Detect when users visit AI sites – a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites.
      • Detect sensitive info pasted or uploaded to AI sites – a Purview Endpoint Data Loss Prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only.

        Figure 15. Extend your insights for data discovery for other AI apps
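To illustrate the kind of check behind “Detect when users visit AI sites”, here is a hypothetical URL matcher. The domain list is a made-up sample for illustration, not the supported-AI-sites list that Purview actually ships.

```python
from urllib.parse import urlparse

# Sample domains only -- not Purview's actual supported AI sites list.
SAMPLE_AI_DOMAINS = {"chatgpt.com", "gemini.google.com", "copilot.microsoft.com"}

def is_ai_site_visit(url: str) -> bool:
    """Return True if the URL's host matches a known AI domain or subdomain."""
    host = urlparse(url).hostname or ""
    return host in SAMPLE_AI_DOMAINS or any(
        host.endswith("." + d) for d in SAMPLE_AI_DOMAINS
    )
```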
  2. With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI Apps, and review the following for other AI apps:
      • Total interactions over time (other AI apps)
      • Total visits (other AI apps)
      • Sensitive interactions per AI app
      • Insider Risk severity
      • Insider risk severity per AI app
Protect sensitive info shared with other AI apps
  1. From the Reports tab, click “View details” on each report graph to view detailed activities in the Activity Explorer. Use the available filters to view activities by Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more.

     Figure 16. DSPM for AI activity explorer for visits to other AI apps
  2. To protect the sensitive data in interactions for other AI apps, review the Not Started policies in the Recommendations tab and create these policies:
      • Fortify your data security – This will create three policies to manage your data security risks with other AI apps:

        1)  Block elevated risk users from pasting or uploading sensitive info on AI sites – this will create a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in Data loss prevention.

        2)  Block elevated risk users from submitting prompts to AI apps in Microsoft Edge – this will create a Microsoft Purview browser data loss prevention (DLP) policy that uses adaptive protection to block elevated-, moderate-, and minor-risk users attempting to submit prompts to other AI apps in Microsoft Edge. This integration is built into Microsoft Edge. Learn more about adaptive protection in Data loss prevention.

        3)  Block sensitive info from being sent to AI apps in Microsoft Edge - this will create a Microsoft Purview browser data loss prevention (DLP) policy that detects a selection of common sensitive information types inline and blocks prompts containing them from being sent to AI apps in Microsoft Edge. This integration is built into Microsoft Edge.

         

        Figure 17. Recommended policies to fortify your data security for other AI apps
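The three policies above can be read as a decision table: given a user's adaptive-protection risk level, the browser, and whether the content is sensitive, which action applies. The sketch below restates those policy descriptions in code; it is illustrative only, not Purview's enforcement logic.

```python
# Decision table mirroring the three "Fortify your data security"
# policies described in this post. Illustrative only.
def policy_action(risk_level: str, browser: str, has_sensitive_info: bool) -> str:
    """Return the action ('block', 'warn_with_override', or 'audit')."""
    if browser == "Edge":
        if has_sensitive_info:
            # Policy 3: inline detection blocks sensitive prompts in Edge.
            return "block"
        if risk_level in ("elevated", "moderate", "minor"):
            # Policy 2: risk-based block on prompt submission in Edge.
            return "block"
    if browser in ("Edge", "Chrome", "Firefox") and has_sensitive_info:
        if risk_level == "elevated":
            # Policy 1: eDLP warn-with-override for elevated-risk users.
            return "warn_with_override"
    # Otherwise the discovery policies above simply audit the activity.
    return "audit"
```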
  3. Once you have created policies from the Recommendations tab, go to the Policies tab to review and manage all the policies you have created across your organization in one centralized place, edit them, or investigate alerts associated with them in the corresponding solution. Note that policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that help secure and govern all AI apps.

Conclusion

Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps.

We recommend you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and create policies to secure and govern those interactions as necessary. We also recommend you use the Activity Explorer in DSPM for AI to review events generated while users interact with AI, including the ability to view prompts and responses with the right permissions.

We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page!

Follow-up Reading

Updated May 30, 2025
Version 3.0
