CalypsoAI’s numbers, he said, “show how strong that pull is. More than half of workers say they would use AI even if their organization’s policy says no, a third have already used it on sensitive documents, and almost half of surveyed security teams admitted to having pasted proprietary material into public tools. I’m not sure it’s as much about disloyalty as it is about how governance and enablement lag behind how people work today.”
The risk here is clear, added St-Maurice, because every unmonitored prompt can leak intellectual property, corporate strategy, sensitive contracts, or customer data into public tools. “And naturally,” he noted, “if IT blocks these AI services, it’ll drive users further underground to look for new ways to access them. The practical fix is structured enablement.”
A proper strategy, he said, is to provide a sanctioned AI gateway, connect it to identity, log prompts and outputs, apply redaction to sensitive fields, and publish a few clear, plain rules that people can remember. Paired with short, role-based training and a catalog of approved models and use cases, this gives employees a safe path to the same gains.
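To make that concrete, here is a minimal sketch of what such a gateway might look like as a simple Python service. The model catalog, redaction patterns, user identifiers, and the `call_model` stub are all illustrative assumptions rather than any particular vendor’s API; a production gateway would sit behind the identity provider and ship its audit records to a SIEM.

```python
import logging
import re
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit logger; a real gateway would forward these records to a SIEM.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_gateway.audit")

# Illustrative redaction patterns for a few common sensitive fields.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Hypothetical catalog of approved models and the use cases they are cleared for.
APPROVED_MODELS = {
    "general-assistant": {"drafting", "summarization"},
    "code-assistant": {"code review", "refactoring"},
}


@dataclass
class GatewayRequest:
    user_id: str   # resolved from the identity provider, not supplied by the user
    model: str
    use_case: str
    prompt: str


def redact(text: str) -> str:
    """Replace matches of each sensitive-field pattern with a placeholder tag."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text


def call_model(model: str, prompt: str) -> str:
    """Stand-in for the actual call to an approved model endpoint."""
    return f"(response from {model})"


def handle(request: GatewayRequest) -> str:
    # Enforce the published catalog: only approved model/use-case pairs pass.
    allowed = APPROVED_MODELS.get(request.model)
    if allowed is None or request.use_case not in allowed:
        raise PermissionError(f"{request.model} is not approved for {request.use_case}")

    clean_prompt = redact(request.prompt)
    response = call_model(request.model, clean_prompt)

    # Log the redacted prompt and the output, tied to the authenticated user.
    audit_log.info(
        "%s user=%s model=%s use_case=%s prompt=%r output=%r",
        datetime.now(timezone.utc).isoformat(),
        request.user_id, request.model, request.use_case,
        clean_prompt, response,
    )
    return response


if __name__ == "__main__":
    req = GatewayRequest(
        user_id="jdoe",
        model="general-assistant",
        use_case="summarization",
        prompt="Summarize this contract for jane.doe@example.com, card 4111 1111 1111 1111.",
    )
    print(handle(req))
```

The design choice to redact before forwarding, rather than after logging, keeps sensitive values out of both the model provider and the audit trail, which is the point of routing shadow-AI traffic through a sanctioned path in the first place.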