Artificial intelligence isn’t just transforming business; it’s reshaping how industries compete. Finance, healthcare, retail, energy and manufacturing are all racing to stitch together multiple AI systems. In many cases, these deployments rely on emerging tools like the Model Context Protocol (MCP) or agent-to-agent (A2A) communication.
The problem? Security hasn’t kept pace.
According to IBM’s Cost of a Data Breach Report, 97% of organizations that suffered an AI-related breach lacked proper AI access controls. The rush to “move fast and deploy” means many teams are plugging AIs together like Lego blocks without guardrails. And much like stepping barefoot on an actual Lego, the results can be painful: sensitive data leaks, model hijacking and lost trust in outcomes.
We’ve seen this story before. Early cloud adopters loved the efficiency but skipped basics like identity checks, logging or encryption, at least until the breaches came. The good news is that security teams don’t have to wait for AI-native safeguards to mature. By applying the seven practical steps below, organizations can set themselves up to move quickly without sacrificing security.
1. Anchor Security in Frameworks Like MAESTRO
When cloud adoption took off, the NIST Cybersecurity Framework gave security teams a common playbook. The Cloud Security Alliance recently published something similar for multi-AI systems: the MAESTRO framework (Multi-Agent Environment, Security, Threat, Risk and Outcome). It layers threat modeling across the full agentic stack, from data operations and agent frameworks up to the broader agent ecosystem.
Take an insurer wiring an AI chatbot into underwriting. Without traceability, it’s impossible to know which AI or person approved a high-risk policy. MAESTRO calls for end-to-end accountability, which makes investigations possible when things go wrong. The same holds true for IoT or ICS/SCADA networks. If a rogue AI agent changes sensor readings or alters PLC commands, traceability is the only way to understand what happened and why.
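To make that concrete, here is a minimal sketch of the kind of append-only decision log such end-to-end traceability implies. The class and field names are hypothetical illustrations, not part of the MAESTRO specification:

```python
import json
import time
import uuid

class AuditTrail:
    """Append-only record of which AI agent or person did what, and why."""

    def __init__(self, path: str):
        self.path = path  # append-only log file

    def record_decision(self, actor_id: str, actor_type: str,
                        action: str, target: str, rationale: str) -> str:
        """Append one immutable decision record; return its ID."""
        event_id = str(uuid.uuid4())
        entry = {
            "event_id": event_id,
            "timestamp": time.time(),
            "actor_id": actor_id,      # which AI agent or person acted
            "actor_type": actor_type,  # "agent" or "human"
            "action": action,          # e.g. "approve_policy"
            "target": target,          # e.g. a policy or PLC identifier
            "rationale": rationale,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return event_id

# Every approval, sensor write or PLC command gets a record, so
# investigators can answer "which AI or person approved this?"
trail = AuditTrail("decisions.jsonl")
trail.record_decision("underwriting-bot-7", "agent",
                      "approve_policy", "policy-4411",
                      "risk score below threshold")
```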
2. Pick a Primary Platform
Security teams can’t protect every shiny AI tool. Gartner predicts that by 2026, 75% of enterprises will run multiple AI models in production, but few will standardize environments.
The smart move is to choose one primary platform (Amazon, Google or Microsoft) and let the rest run on a best-effort basis. This keeps monitoring and identity management manageable. It’s like choosing one main bank even if you experiment with fintech apps on the side. In IoT-heavy industries, centralization reduces the risk of sprawling, unmonitored edge devices running uncontrolled AI code.
3. Build Developer Sandboxes That Work
If you don’t give developers a safe playground, they’ll create their own, and that’s rarely secure. That’s why an easy-to-use sandbox on a hyperscaler platform is critical.
One global retailer offered developers a secure Google Cloud sandbox for MCP servers. It wasn’t just safer; onboarding sped up because developers didn’t waste time cobbling together environments. Security became an enabler, not a blocker. For ICS and IoT environments, dedicated sandboxes can let engineers test AI-driven automation without risking disruptions to live production lines or energy systems.
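As a rough illustration of the same isolation principles at a smaller scale, the sketch below launches an experimental MCP server in a container with no network access and capped resources. The image name is hypothetical, and a managed cloud sandbox like the retailer’s would enforce comparable controls with platform-native tooling:

```python
import subprocess

def run_sandboxed(image: str) -> subprocess.CompletedProcess:
    """Run an untrusted MCP server image with tight isolation."""
    return subprocess.run(
        [
            "docker", "run",
            "--rm",               # discard the container afterwards
            "--network", "none",  # no network egress from the sandbox
            "--read-only",        # immutable filesystem
            "--memory", "512m",   # cap memory
            "--cpus", "1",        # cap CPU
            image,
        ],
        check=True,
    )

if __name__ == "__main__":
    # "experimental-mcp-server" is a placeholder image name
    run_sandboxed("experimental-mcp-server:latest")
```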
4. Control Where MCP Servers Run
AI security unravels when MCP servers live on personal laptops or rogue servers. Just as production databases don’t belong on a MacBook, MCP and agentic code should only run in controlled environments with proper privilege management.
Mobile device management tools such as Microsoft’s Intune and Jamf’s Jamf Pro (for Apple devices) can block unauthorized AI software on endpoints. Enforcing these policies cuts down on shadow AI, where well-meaning developers accidentally expose sensitive data through unsanctioned tools. For critical infrastructure, ensuring MCP servers don’t appear on unprotected operator workstations is essential to preventing attacks that jump from IT into OT networks.
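The sketch below illustrates the idea behind such a policy: a simple allowlist audit of running processes. Real enforcement belongs in the MDM layer, and the process names and matching heuristics here are hypothetical:

```python
import psutil  # third-party: pip install psutil

# Hypothetical allowlist of sanctioned AI tooling on this endpoint
APPROVED_AI_PROCESSES = {"sanctioned-mcp-server", "approved-agent-runtime"}
AI_PROCESS_HINTS = ("mcp", "agent")  # crude heuristic for AI tooling

def find_shadow_ai() -> list[str]:
    """Return names of running AI-ish processes not on the allowlist."""
    suspects = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if any(hint in name for hint in AI_PROCESS_HINTS) \
                and name not in APPROVED_AI_PROCESSES:
            suspects.append(name)
    return suspects

if __name__ == "__main__":
    for name in find_shadow_ai():
        print(f"unsanctioned AI process detected: {name}")
```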
5. Treat MCPs Like APIs
MCP servers are essentially APIs with more intelligence. Luckily, we already know how to secure APIs: use gateways that enforce authentication, rate limiting and logging.
For example, a financial services firm put Azure API Management in front of its MCP servers. It gained visibility into requests, flagged anomalies and cut off abusive traffic. Without the gateway, those interactions would have stayed a mystery. Similarly, energy utilities can use API gateways to monitor AI commands sent to smart meters or industrial controllers, ensuring no rogue requests slip through.
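A minimal sketch of those three gateway controls, authentication, rate limiting and logging, might look like this. The limits and names are illustrative; a production deployment would use a managed gateway such as Azure API Management rather than hand-rolled code:

```python
import logging
import time
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-gateway")

MAX_REQUESTS_PER_MINUTE = 60          # illustrative limit
_request_times = defaultdict(list)    # client_id -> recent request times

def allow_request(client_id: str, auth_token: str | None) -> bool:
    """Gate one inbound MCP request: authenticate, rate-limit, log."""
    now = time.time()
    if not auth_token:                            # authentication
        log.warning("rejected %s: missing token", client_id)
        return False
    window = [t for t in _request_times[client_id] if now - t < 60]
    if len(window) >= MAX_REQUESTS_PER_MINUTE:    # rate limiting
        log.warning("rejected %s: rate limit exceeded", client_id)
        return False
    window.append(now)
    _request_times[client_id] = window
    log.info("allowed %s", client_id)             # visibility into requests
    return True
```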
6. Make OAuth Non-Negotiable
Every MCP interaction should carry a traceable identity. OAuth tokens make sure that whether the caller is a user or another AI, you know who did what. In fact, Verizon’s 2025 Data Breach Investigations Report found that stolen credentials were involved in 88% of basic web application attack breaches, highlighting how vital access controls like OAuth really are.
Think of OAuth like a boarding pass. Without it, anyone could stroll into the cockpit. With it, only the right person, at the right time, with the right access gets through. Combined with API gateways, OAuth creates a layered defense against impersonation and privilege misuse. In IoT and ICS settings, OAuth can help ensure that only verified maintenance AIs can adjust industrial sensors or robotics equipment.
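As a sketch of that boarding-pass check, assuming the OAuth provider issues JWT access tokens, validation in front of an MCP action might look like this. The audience, issuer and key-loading details are illustrative; production code would fetch signing keys from the provider’s JWKS endpoint:

```python
import jwt  # third-party: pip install PyJWT

EXPECTED_AUDIENCE = "mcp://underwriting-tools"  # hypothetical
EXPECTED_ISSUER = "https://auth.example.com"    # hypothetical
PUBLIC_KEY = open("idp_public_key.pem").read()  # provider signing key

def identify_caller(token: str) -> str:
    """Validate the token and return the caller identity (sub claim)."""
    claims = jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
    )
    return claims["sub"]  # who did what: user or agent identity

def handle_mcp_call(token: str, action: str) -> None:
    try:
        caller = identify_caller(token)
    except jwt.InvalidTokenError as exc:
        raise PermissionError(f"rejected: {exc}")  # no pass, no cockpit
    print(f"{caller} authorized for {action}")
```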
7. Secure Back-End Connections with SPIFFE
Don’t forget the back end. Data sources tied to MCP servers are prime targets. SPIFFE (the Secure Production Identity Framework for Everyone) defines short-lived identity documents, typically X.509 certificates called SVIDs, so MCP servers and data sources can mutually authenticate.
Large multi-cloud organizations already use SPIFFE to prevent unauthorized workloads from posing as legitimate ones. Think of it as two colleagues with a secret handshake: if you don’t know it, you’re not in. For ICS and IoT, SPIFFE can provide certificate-based trust between AI-driven monitoring tools and critical SCADA servers, reducing the chance of forged connections.
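Here is a rough sketch of that handshake, assuming X.509 SVIDs: a client makes a mutual-TLS connection and refuses to proceed unless the peer certificate carries the expected SPIFFE ID. The IDs and file paths are illustrative, and a real deployment would fetch its SVIDs from the SPIRE Workload API rather than from static files:

```python
import socket
import ssl

EXPECTED_SPIFFE_ID = "spiffe://example.org/scada/historian"  # hypothetical

def connect_with_spiffe_check(host: str, port: int) -> ssl.SSLSocket:
    """Open an mTLS connection and verify the peer's SPIFFE ID."""
    ctx = ssl.create_default_context(cafile="trust_bundle.pem")
    ctx.check_hostname = False  # identity comes from the SPIFFE ID, not DNS
    ctx.load_cert_chain("my_svid.pem", "my_svid_key.pem")  # our identity
    conn = ctx.wrap_socket(socket.create_connection((host, port)))
    cert = conn.getpeercert()
    # SPIFFE IDs live in the certificate's subjectAltName URI entries
    sans = cert.get("subjectAltName", ())
    spiffe_ids = [value for (kind, value) in sans if kind == "URI"]
    if EXPECTED_SPIFFE_ID not in spiffe_ids:  # wrong handshake: hang up
        conn.close()
        raise ConnectionError(f"unexpected peer identity: {spiffe_ids}")
    return conn
```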
The Bigger Picture: Security as a Business Enabler
Multi-AI deployments are accelerating. The winners won’t just deploy the most AI; they’ll deploy it securely.
This is especially urgent for industries like IoT and industrial control systems (ICS/SCADA). When multiple AI agents are tied into connected sensors, manufacturing lines or energy grids, weak access controls don’t just mean lost data; they can disrupt physical operations. A compromised AI agent in an oil refinery or power plant could manipulate safety systems, echoing the same kind of risks that made Stuxnet infamous. The stakes are higher when AI decisions interact directly with machines.
History shows that security eventually catches up with innovation, but waiting isn’t a strategy. Early cloud adopters learned to enforce identity and logging before it was common practice. Today’s AI leaders, especially in IoT-heavy sectors, need to bake in frameworks, gateways and strong identity controls from the start.
Securing AI isn’t about slowing progress. It’s about building trust: with customers, regulators, employees and (in the case of ICS and IoT) with the public, whose safety depends on resilient infrastructure. Because at the end of the day, what good is a faster AI if no one trusts the answers it gives? And let’s be honest, no one wants to explain to their board (or to regulators) that the company’s AI let a factory line or energy grid go offline because “we thought guardrails would slow it down.”
Interested in reading more articles like this? Subscribe to the ISAGCA blog and receive regular emails with links to thought leadership, research and other insights from the OT cybersecurity community.