Leveraging Microsoft Copilot for Credential Access

Bottom Line Up Front

In a recent LinkedIn post, I highlighted a potential security concern with Microsoft Copilot: its ability to surface sensitive credentials stored across internal platforms like SharePoint and OneDrive.

AI is Here. Is Your Security Posture Ready?

Copilot isn’t malicious, but it’s effective at retrieving data based on prompts like:

"Find all documents that contain plaintext passwords"

This functionality can unintentionally (or intentionally) expose secrets, credentials, API keys, and database URLs if they’re stored without proper controls. It’s not an external breach, but it does map to the Credential Access tactic (TA0006) in MITRE ATT&CK.
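
To make the risk concrete, here is a minimal sketch of the kind of pattern matching involved. This is not Copilot’s retrieval logic (which is semantic, not regex-based); it’s a hypothetical stand-in that sweeps a locally synced folder for the same material. The folder path and patterns are illustrative assumptions.

```python
import os
import re

# Illustrative patterns only; production secret scanners ship far richer rule sets.
SECRET_PATTERNS = {
    "password assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "API key assignment": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*\S+"),
    "database URL": re.compile(r"(?i)\b(?:postgres|postgresql|mysql|mongodb)://\S+"),
}

def scan_tree(root):
    """Walk a directory tree (e.g., a locally synced OneDrive or SharePoint
    library) and report files that contain likely plaintext secrets."""
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip it
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings

if __name__ == "__main__":
    # "~/OneDrive" is a hypothetical sync location; point this at your own.
    for path, label in scan_tree(os.path.expanduser("~/OneDrive")):
        print(f"{path}: possible {label}")
```

If a twenty-line script can find these files, a tenant-wide semantic index will find them faster, and without needing the exact keyword.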

Following my post, several colleagues have shared that this issue is already being discussed in their organizations, which is always a good sign for security awareness. The conversation is shifting from “how AI helps productivity” to “how AI magnifies existing security hygiene problems.”

This also ties into Nabil Aitoumeziane’s post, which explores how user prompts can drive sensitive data exposure. When combined with permissive file access, Copilot can “show you where the risks already are.”

Nabil also references a great article written by Roman Avanesyan to support this idea.

What Security Teams Should Do
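
Most of the fixes are classic hygiene, applied before a Copilot rollout rather than after: audit SharePoint and OneDrive permissions for oversharing, sweep document libraries for plaintext secrets, move anything you find into a proper secrets manager and rotate it, and align Copilot’s scope and your DLP policies with the results.

As one starting point for the permissions audit, here is a minimal sketch against the Microsoft Graph API. It assumes you already hold an access token with suitable read permissions (e.g., Files.Read.All) and know the target drive ID; token acquisition, pagination, and recursion into subfolders are all omitted.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def flag_broad_links(token, drive_id):
    """List items at a drive's root and flag sharing links scoped to the
    whole organization or to anonymous users."""
    headers = {"Authorization": f"Bearer {token}"}
    children = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=headers
    ).json()
    for item in children.get("value", []):
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=headers,
        ).json()
        for perm in perms.get("value", []):
            scope = perm.get("link", {}).get("scope")
            if scope in ("organization", "anonymous"):
                print(f"{item.get('name')}: shared via {scope} link")

# Hypothetical usage: flag_broad_links(access_token, drive_id)
# Token acquisition (e.g., via MSAL) is left to your environment.
```

A real audit would recurse the full library and page through results, but even a root-level pass tends to surface the “shared with everyone” folders where credentials quietly accumulate.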

Conclusion

AI tools like Copilot don’t create new threats. They expose existing ones with unprecedented efficiency. It’s up to security teams to catch up before threat actors and red teamers inevitably take advantage.

Further exploration of everyday tools for Red Teaming

Check out how Anthony Fu leveraged DocuSign to catch volunteered credentials.