ProArch Blogs

AI Security Trends from Microsoft Secure Conference | ProArch

Written by ProArch | May 7, 2025 1:32:08 PM

At Microsoft Secure Conference, one message came through loud and clear: AI security is no longer optional. With the widespread adoption of AI tools, organizations must rethink how they secure data, models, and user interactions. Microsoft unveiled new features, threat insights, and strategies designed to tackle this challenge head-on—and ProArch is helping businesses stay ahead.

Key Trends You Need to Know

57% of organizations reported a rise in security incidents from AI usage.

Whether it’s employees using AI tools without oversight or generative models making automated decisions, the reality is clear: unchecked AI expands the potential for compromise.

60% have yet to implement any AI-specific controls.

The gap between adopting AI and securing it is only widening. If AI is in use, but not governed, your organization is at real risk.

88% are concerned about indirect prompt injection attacks.

These attacks manipulate AI input (prompts) to change how a model behaves—often in subtle and undetectable ways. Microsoft is emphasizing the need for new threat detection strategies in this area.

Microsoft highlighted many more insights in the Microsoft Defense Report, emphasizing data and AI security, identity-based attacks on the rise, and APTs targeting IT systems and services.

Insights That Stood Out

  • Insider Threats Are on the Rise: Microsoft is seeing a rise in nation-state actors (like North Korea) creating fake companies and profiles to infiltrate organizations through fake employee interviews and onboarding processes.
  • The Secure Future Initiative (SFI): Microsoft’s SFI is now centered around threat intelligence and powers tools like the Microsoft Defender Suite and generally guides where Microsoft is heading.
  • Securing Agentic AI and Local LLMs: AI security in the face of the rise Agentic AI and ease of access to local LLM/AI models have substantially increased since even a few months ago. Microsoft, through Security Copilot and Security Copilot Agents, are developing the capabilities to secure AI workloads and mitigate data leak potential with commercial AI products.

New Capabilities & Features

Microsoft Purview for Generative AI Apps

Admins can provide role-based access (RBAC) to Gen AI apps like ChatGPT for research purposes and marketing but is capable of blocking that same access for more sensitive departments. The Purview browser extension can detect sensitive data uploads directly in the chat form by users who are not following policy.

Security Copilot Secures More LLM Models

Similar to Microsoft Defender for Cloud capabilities that secure cloud platforms outside Microsoft like GCP and AWS, Microsoft is implementing security controls for data security and access to other LLM models allowing administrators to support any business use case and continue to utilize the Microsoft stack.

Microsoft Purview Automated Security Policies

Provides automation capabilities directly in Purview to push out security control policies with a few clicks based on analysis of your organizations data by AI. Security controls have never been easier to push to your organization based on current analytics.

Incident AI Remediation now includes Verdict Analysis for the SOC

Now it’s easier to determine why an alert or incident is automatically resolved with additional investigation details on reasoning for AI actions directly in the Defender XDR portal.

New AI-Powered Tools Announced

Microsoft Purview Data Security Investigations

Purview Data Security

This new capability in Purview rapidly performs analysis of dense datasets utilized for incident response like Unified Access Logs (UAL) via an automated process supported by Agentic AI. This allows SOC analysts to focus on the remediation and response to the incident findings as opposed to spending hours of labor analyzing logs. These cases inside the insider risk management can be used to track incidents and understand the data risks associated with the incident.

Data Leak

AI-specific Detections in Microsoft Defender XDR

Defender Cloud Protection

  • Jailbreak, novel prompt injection, sensitive data and secret exposure, malicious URL’s, anomalies in access, wallet abuse and more!
  • Organizations looking to build their own internal AI tooling can integrate with Microsoft security tools and monitor for malicious interactions and attacks natively in the Microsoft stack and the Defender XDR portal.

Microsoft Security Copilot Agents

Threat Intel Agent: Provides threat intelligence information consistently and alerts on potential known attackers in your environment.

Alert Triage Agent: Helps Automate Alert investigation and mitigates alert fatigue for your SOC Analysts.

Entra ID Agent: Constantly scans conditional access policies in search of misconfigurations.

Vulnerability Remediation Agent: Attempts to automatically update applications on vulnerable endpoints

TL;DR: AI is evolving fast—and the bad actors are, too. Microsoft is giving security teams the tools they need to keep up, but it’s on every organization to act now. To learn more about detecting and responding to AI threat, contact ProArch.

Conclusion

AI is advancing—and so are cyber threats. Microsoft is delivering tools to meet this challenge. But security teams must act now to close the AI security gap.

Want to know how your organization can strengthen AI defenses?

ProArch Can Help You Stay Ahead

Our MXDR solution, powered by Microsoft Defender for Cloud and Defender XDR, helps you detect and respond to AI-driven threats in real time.