
Security teams are being asked to do two new jobs at once. They must secure a new generation of autonomous agents being deployed across the company—agents with access to internal systems and decision-making authority that didn't exist a year ago. Simultaneously, they must integrate AI into their own workflows to manage a volume of alerts and design reviews that is growing faster than their teams can scale, driven by the pace of new feature development.
While these are often treated as separate challenges, they are fundamentally the same engineering problem: the principles that make an AI agent safe to ship are the same principles that make it a reliable security partner.
Using Asana’s AI Teammates as a continuous case study, this session examines how familiar security engineering frameworks apply to both sides of the AI coin. We will walk through how to design agents that are trustworthy enough to do real security work and secure enough to operate within your ecosystem. By the end, attendees will have a unified set of principles to guide their AI strategy, grounded in engineering rigor rather than industry hype.
