Zero Trust · Security Architecture · Enterprise Security · Mar 25, 2026

5 Things About Zero Trust You Only Learn at Scale

Zero Trust was never just a security initiative. It was an architectural bet - and the companies that made it early are about to collect.

Nikola Novoselec

Founder & CTO

The cloud gave you velocity. Scaling. Innovation. The ability to ship in days what used to take quarters. Then legacy security strapped a metal weight to your leg and told you to run. Firewall change requests. VPN provisioning tickets. Manual reviews for every new integration. Security that existed to say no.

Now AI is offering that same exponential velocity - only faster, and the stakes are higher. Your employees are already feeding data into generative AI. Your competitors are deploying autonomous agents. And your security team’s first instinct? Ban it. Block it. Slow it down. Employees will find workarounds anyway - and now you have shadow AI. Same playbook, different decade.

That’s not a technology problem. That’s a framing problem. And it’s the reason most organisations still don’t understand what Zero Trust actually is.

After three years architecting Zero Trust for critical infrastructure - a user base the size of Switzerland, legacy systems older than the internet - here are the five things I wish more executives understood before their next strategy meeting.


1. It Was Never Just a Security Initiative.

This is the framing error that keeps Zero Trust boxed in as a cost centre instead of recognised as business infrastructure.

Zero Trust started as Kindervag’s network security model - and he was right. But at scale, it behaves like something much bigger: an architectural model built on granular enforcement, risk-based evaluation, and declarative policy that decouples organisations from legacy dependencies and adapts to anything. What started as a better way to protect networks became an integration superpower.

When Zero Trust gets pitched as security transformation, it gets a security budget, security KPIs, security sponsorship. Breach prevention. Risk reduction. Compliance scores. All real outcomes - and all reasons to underfund it.

The full picture only emerges at scale: when you build security into the architecture instead of bolting it on, the architecture starts accelerating everything around it. That vendor integration that used to require weeks of network segmentation meetings and routing table approvals? It becomes a policy definition in Git. Pull request, review, merge, deployed. Onboarding becomes configuration, not a project. Compliance review becomes repository review. Rollbacks are git reverts.
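The phrase "a policy definition in Git" can be made concrete with a minimal sketch. Everything below is hypothetical and vendor-neutral: `VendorPolicy`, the service names, and the classification labels are invented for illustration, not taken from any product.

```python
# A minimal sketch of "vendor integration as a policy definition".
# All names (VendorPolicy, acme-logistics, shipment-api) are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class VendorPolicy:
    """Declarative access policy, version-controlled like any other code."""
    vendor: str
    allowed_services: frozenset       # services this vendor may call
    data_classifications: frozenset   # classifications this vendor may receive

    def allows(self, service: str, classification: str) -> bool:
        # Default-deny: access is granted only if both checks pass.
        return (service in self.allowed_services
                and classification in self.data_classifications)

# "Onboarding becomes configuration": the pull request just adds this object.
acme = VendorPolicy(
    vendor="acme-logistics",
    allowed_services=frozenset({"shipment-api"}),
    data_classifications=frozenset({"public", "internal"}),
)

print(acme.allows("shipment-api", "internal"))      # True: explicitly granted
print(acme.allows("billing-api", "internal"))       # False: service not granted
print(acme.allows("shipment-api", "confidential"))  # False: classification not granted
```

Because the policy is an immutable value in a repository, the review, merge, and rollback workflow described above falls out of ordinary Git mechanics rather than a change-advisory board.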

In business terms: partner onboarding measured in days instead of quarters, lower integration cost per relationship, and less revenue delayed by control-plane bureaucracy.

Zero Trust is what happens when security becomes infrastructure instead of bureaucracy. The security is real. The velocity is the reason the CFO should care.


2. It Doesn’t Start With Clarity. It Creates It.

There’s a myth that you need perfect visibility before you begin - every flow mapped, every dependency documented, every exception understood.

You don’t.

As you move a system, a segment, or a workflow under tighter, more explicit policy, something interesting happens: ambiguity surfaces. That “temporary” firewall rule from 2014 turns out to support three business processes. That shared account exists because two teams never agreed on ownership. That exception no one could explain suddenly requires a clear decision.

Tribal knowledge becomes code. Implicit becomes declarative. Accidents become choices.

And once it’s a choice, you can improve it. For leadership, that means less key-person dependency, fewer mystery exceptions, and a system that becomes easier to change over time instead of harder.

At scale, the policy layer starts to look less like a collection of controls and more like an operating specification for how access, identity, and business exceptions are allowed to work.

In a large enterprise, this rarely happens bottom-up. Decades of decisions have created a spiderweb of dependencies - substrate on top of substrate of implicit choices that no single team can untangle from below. It requires an enterprise-wide architectural vision, executive alignment, and a unified policy model that covers all entity types. But full visibility isn’t a prerequisite for starting - it’s a consequence of building. The vision tells you where to go. The clarity emerges on the way there. Pick one integration path - a SaaS application, a partner connection, an internal API - and force it through a single policy decision point. That first migration will surface more hidden assumptions than any architecture review.
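One way to picture "force it through a single policy decision point" is a default-deny evaluator that also records everything it had to refuse. This is a toy sketch, not a real PDP implementation; the rule shapes and system names are invented. The point is the side channel: every unmatched request is an implicit assumption made visible.

```python
# Sketch of a single policy decision point (PDP) for one integration path.
# Unmatched requests are denied AND recorded: that backlog is where the
# hidden assumptions surface. All names here are illustrative.

def decide(request, policies, unexplained):
    """Default-deny PDP: allow only what an explicit declarative rule permits."""
    for rule in policies:
        if (rule["source"] == request["source"]
                and rule["destination"] == request["destination"]):
            return "allow"
    # Anything not covered by an explicit rule is a decision waiting to be made.
    unexplained.append(request)
    return "deny"

policies = [{"source": "erp", "destination": "partner-api"}]
unexplained = []

print(decide({"source": "erp", "destination": "partner-api"},
             policies, unexplained))                 # allow: covered by a rule
print(decide({"source": "legacy-batch", "destination": "partner-api"},
             policies, unexplained))                 # deny: nobody declared this flow
print(len(unexplained))                              # 1 implicit choice surfaced
```

In practice the `unexplained` list is a log pipeline, but the mechanism is the same: the first migration produces a concrete inventory of flows that nobody had ever written down.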

This is also the hidden prerequisite for AI. If your access logic is implicit and scattered, deploying autonomous agents just automates your chaos at machine speed. If your logic is declarative and defined, automation amplifies your control.


3. It’s the Control Plane Your AI Strategy Needs.

Your employees are already pasting corporate data into public AI chatbots, uploading code to coding assistants, feeding confidential documents into models because the summary feature is convenient.

Shadow IT was someone spinning up an unauthorized Dropbox. Shadow AI is someone feeding your acquisition strategy into a public model during lunch.

The instinct is familiar: restrict access, lock it down. It will fail for the same reason it always fails - the value is too high for users to comply. Workarounds are ungoverned by definition.

Zero Trust offers a different path: say yes, with governance.

The market is treating AI security as a net-new problem that demands net-new products. If your Zero Trust architecture is built properly, much of it is already embedded:

  • The same identity layer that verifies a human accessing a SaaS application can verify a human accessing an AI service.
  • The same data loss prevention that inspects SaaS traffic can inspect prompts flowing to AI endpoints.
  • The same behavioural baselines that detect anomalous user activity can detect someone uploading an entire codebase to a public model at 2 AM.

The policy fabric doesn’t care whether the destination is a SaaS app or a language model. Data is data. Exfiltration is exfiltration. The only difference is the user thinks they’re being productive while doing it.

Which users can access which AI services? What data classifications can flow to external models versus your private deployment? What gets logged, blocked, or allowed with conditions? That isn’t a separate governance model. It extends the same identity, policy enforcement, and inspection patterns Zero Trust already relies on.
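The three questions above can be answered by the same policy shape used for any SaaS destination. The sketch below is illustrative only: the service names, classification labels, and rule fields are assumptions, not any vendor's schema.

```python
# AI governance expressed as the same kind of rule as any SaaS policy.
# Service names and classification labels are invented for illustration.

AI_POLICY = {
    # External model: only non-sensitive data may flow out, always logged.
    "public-llm": {"allowed_classifications": {"public"}, "log": True},
    # Private deployment: broader classifications permitted, still logged.
    "private-llm": {"allowed_classifications": {"public", "internal", "confidential"},
                    "log": True},
}

def ai_decision(service, classification):
    rule = AI_POLICY.get(service)
    if rule is None:
        return ("block", "unknown AI service")            # default deny: shadow AI
    if classification not in rule["allowed_classifications"]:
        return ("block", "classification not permitted")  # data may not flow here
    return ("allow", "logged" if rule["log"] else "unlogged")

print(ai_decision("public-llm", "public"))        # allowed, with logging
print(ai_decision("public-llm", "confidential"))  # blocked: wrong destination for this data
print(ai_decision("shadow-llm", "public"))        # blocked: not a governed service
```

Nothing in this evaluator knows it is governing AI; swap the destination names for SaaS applications and it is the same fabric, which is the argument of this section.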

AI security without Zero Trust is a silo. AI security on top of Zero Trust is just another policy. The organisations that already built this don’t need to start from scratch. They need to extend the fabric they already have.


4. It Solved Agent Governance Before Agents Existed.

Point 3 was about governing humans who use AI. This one is about governing AI that acts on your behalf.

Chatbots answer questions. Agents take actions. The first governance question isn’t which model you’re using. It’s this: who is the agent?

What identity does this autonomous system carry? What permissions does it hold? When it does something wrong - and it will - can you trace the action to a specific agent, policy path, and confidence threshold?

Most enterprises still can’t. Agents use shared service accounts with broad data access, and their actions are logged as “system.” When something goes wrong, the forensic trail ends at a shrug. We force humans through MFA, conditional access, and behavioural analytics - yet we hand autonomous agents static API keys with broad permissions. The gap isn’t technical. Dedicated agent identity primitives already exist. It’s organisational.

Every primitive agentic AI demands, Zero Trust already defined: identity for non-human entities, scoped permissions, continuous verification, and full attribution. That lets you introduce autonomy gradually. Start with heavy oversight. Expand permissions only where the agent earns trust through performance and policy compliance. The agents that prove themselves get more room. The ones that don’t, don’t.

This isn’t just conceptual. In the Graduated Autonomy system I engineered, one agent proposes security actions; another validates against the Zero Trust policy fabric. Neither can act alone. Attribution is precise: not “the system blocked this” but “agent X approved action Y at timestamp, based on Z evidence, with N% confidence.”
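The proposer/validator pattern can be sketched in a few lines. This is a toy model of the idea, not the production system described above: the action names, confidence threshold, and attribution fields are all hypothetical.

```python
# Toy version of the two-agent pattern: one agent proposes, another validates
# against policy, and every decision yields a precise attribution record.
# Names, thresholds, and fields are illustrative only.
from datetime import datetime, timezone

POLICY = {"block-ip": {"min_confidence": 0.9}}  # fabric: permitted actions + thresholds

def propose(agent_id, action, confidence, evidence):
    """Proposer agent: suggests an action but cannot execute it alone."""
    return {"agent": agent_id, "action": action,
            "confidence": confidence, "evidence": evidence}

def validate(proposal):
    """Validator agent: checks the proposal against the policy fabric."""
    rule = POLICY.get(proposal["action"])
    approved = rule is not None and proposal["confidence"] >= rule["min_confidence"]
    # Attribution: not "the system blocked this", but who, what, when, and why.
    return {
        "approved": approved,
        "agent": proposal["agent"],
        "action": proposal["action"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "confidence": proposal["confidence"],
        "evidence": proposal["evidence"],
    }

record = validate(propose("agent-x", "block-ip", 0.95, "3 failed MFA + geo anomaly"))
print(record["approved"], record["agent"], record["action"])  # True agent-x block-ip
```

Graduated autonomy is then a policy change, not a code change: raising an agent's permitted actions or lowering its required confidence is an edit to `POLICY`, reviewed like any other.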

An AI agent without identity is an autonomous system with no accountability. You wouldn’t accept that from a contractor. Don’t accept it from software that makes a thousand decisions while you read this sentence.


5. It Rewards Practice. Not Receipts. Not Anymore.

Most organisations that claim to have “implemented” Zero Trust deployed a product and declared victory. Bought ZTNA, checked the box, moved on. That’s like buying a gym membership and framing the receipt.

The gap between “we have Zero Trust” and “we practice Zero Trust” used to be a vulnerability. Now it’s a capability gap - and it compounds.

Organisations with a real Zero Trust operating model - where policy lives in Git, every entity type flows through the same fabric, and attribution is architectural not aspirational - already have much of the control plane AI depends on. They’re extending a foundation, not inventing one under pressure.

Organisations without that foundation are trying to build identity, policy, attribution, and AI adoption simultaneously - because the board saw the competitor’s press release. That’s not impossible. It’s just expensive, slow, and politically fragile.

The organisations that skipped the foundation aren’t just less secure. They’re less capable. And in an economy where AI velocity is competitive advantage, “less capable” is existential.


The Bottom Line

For years, the question was “how do we implement Zero Trust?” That was always the wrong question. The right question is “what do we want our architecture to enable?” - and the answer keeps getting bigger. First it was secure access. Then it was velocity. Then clarity. Now it’s AI governance. Next year it will be something else.

The architecture doesn’t care. That’s the point. It was never scoped to a single problem.

Three years into designing this architecture, the thing that keeps surprising me isn’t the security model. That was the plan. It’s the side effects. An integration pattern that should take weeks reducing to a policy definition. A governance model that turns out to already have the primitives AI agents need. A new requirement arriving and the answer being “extend the fabric” instead of “start a new initiative.” At scale, the policy layer starts to resemble a software-defined discipline:

  • Policy fabric = the specification
  • Enforcement points = the execution
  • Continuous audit = the tests

Zero Trust, done right, is spec-driven development for your entire security architecture - and when the specification is precise enough, execution becomes mechanical. Nobody puts that in the business case.
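Taking "continuous audit = the tests" literally, the mapping above can be sketched as an assertion suite run against the enforcement layer's observed decisions. The roles, resources, and record shapes below are invented for illustration.

```python
# "Policy fabric = the specification, continuous audit = the tests":
# audit is an assertion suite over decisions the enforcement points
# actually made. All data here is invented.

SPEC = {  # the policy fabric: expected decision per (role, resource)
    ("analyst", "crm"): "allow",
    ("contractor", "crm"): "deny",
}

observed = [  # decisions recorded by enforcement points (the execution)
    {"role": "analyst", "resource": "crm", "decision": "allow"},
    {"role": "contractor", "resource": "crm", "decision": "deny"},
]

def audit(spec, decisions):
    """Return every decision where execution drifted from the specification."""
    return [d for d in decisions
            if spec.get((d["role"], d["resource"])) != d["decision"]]

drift = audit(SPEC, observed)
print(drift)  # [] -- enforcement matches the spec; any entry here is a finding
```

A non-empty `drift` list is the audit finding; an empty one is the mechanical execution the paragraph above describes.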

If your Zero Trust program only delivered what was scoped, the best part hasn’t started. The scope was breach prevention and compliance. What nobody scoped - because nobody knew to - was an architecture that absorbs every new wave without starting over. That’s the part worth building for.

