
The perimeter is gone.
For years, federal agencies and enterprises alike operated under a model of implicit trust, assuming that anything inside the network was safe, and that a strong firewall at the edge was enough to keep threats out.
That assumption has proven catastrophically wrong.
Between the rise of remote work, the explosion of cloud environments and SaaS applications, and the growing sophistication of insider threats and cyberattacks, the old network perimeter no longer maps to how organizations actually operate.
Zero Trust architecture is a response to this reality. Rooted in the principle of “never trust, always verify,” a Zero Trust security model eliminates implicit trust from the equation entirely. Instead of assuming users and devices are safe because they’re inside the network, Zero Trust requires continuous verification of every user identity, every device, and every request, regardless of where it originates. Access is granted on the basis of least-privilege access and scoped to exactly what’s needed, for exactly as long as it’s needed, nothing more.
Zero Trust is a strategic, iterative journey, and knowing where you stand on that journey is the first step toward meaningful progress. That’s what the CISA Zero Trust Maturity Model is designed to do. Developed by the Cybersecurity and Infrastructure Security Agency, the ZTMM defines what mature Zero Trust implementation looks like across five critical pillars and provides a structured path to reach a fully optimized security posture.
This guide walks through the CISA ZTMM pillar by pillar, covering every function across Identity, Devices, Networks, Applications & Workloads, Data, and the Cross-Cutting Capabilities that tie them all together. It will offer a clear picture of what each pillar demands, where gaps commonly emerge, and what the path to maturity actually looks like in practice.
Explaining the Four Stages of the Zero Trust Maturity Model
The ZTMM is grounded in guidance from the National Institute of Standards and Technology (NIST) and aligned with OMB M-22-09, which mandates Zero Trust adoption across federal civilian agencies. One of the most useful things about the model is that it doesn’t treat Zero Trust as a binary. Instead, it assesses maturity across four progressive stages that reflect how organizations realistically adopt and scale a Zero Trust strategy over time.
Traditional is where most organizations begin. At this level, security controls are static and largely manual. Zero Trust policies, if they exist at all, are configured once and rarely revisited. Least-privilege access isn’t dynamically enforced or regularly reviewed, perimeter-based defenses like firewalls and VPNs carry most of the weight, and the attack surface that exists beyond that perimeter is largely unmanaged.
Initial marks the early stages of adopting Zero Trust principles and automation solutions. Organizations at this stage have begun to apply automated controls to critical functions while still relying on manual processes elsewhere. Access policies are becoming more formalized, and some identity and device signals are beginning to inform access decisions, but coverage is incomplete and enforcement inconsistent.
Advanced represents a meaningful shift in operating posture. Controls are largely automated, policy enforcement is integrated rather than siloed, there is centralized visibility into most of the environment, and least-privilege access is dynamically adjusted based on real-time risk assessments rather than static rules. Threat detection capabilities are maturing, and the organization is actively closing its remaining gaps.
Optimal is the target state. Access is fully automated and just-in-time. Policies are dynamic, triggered by observed behaviors and real-time risk signals rather than static rules, and the whole organization exhibits holistic, always-on Zero Trust security.
The Five Pillars
The CISA ZTMM organizes Zero Trust implementation across five pillars: Identity, Devices, Networks, Applications and Workloads, and Data. Each pillar represents a distinct domain of the enterprise environment with multiple functions, and each is rated independently across the four maturity stages. Running through all five pillars are three cross-cutting capabilities — Visibility and Analytics, Automation and Orchestration, and Governance — that provide enterprise-wide connective tissue and make Zero Trust a coherent security strategy.
Each of the five pillars (and each of the functions within those pillars) is assessed for maturity separately. That means an organization may find itself at the Initial stage in one pillar but the Advanced or even Optimal stage in another.
Together, these five pillars and their many functions define the full scope of what a mature Zero Trust architecture looks like.
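Because each function is rated independently, a maturity snapshot is naturally a per-function mapping. The following Python sketch illustrates that structure; the stage names come from the ZTMM, but the sample ratings and the convention of reporting a pillar at its least mature function are illustrative assumptions, not CISA requirements.

```python
from enum import IntEnum

class Stage(IntEnum):
    """The four ZTMM maturity stages, ordered from least to most mature."""
    TRADITIONAL = 1
    INITIAL = 2
    ADVANCED = 3
    OPTIMAL = 4

# Illustrative snapshot: each (pillar, function) pair is rated independently.
assessment = {
    ("Identity", "Authentication"): Stage.ADVANCED,
    ("Identity", "Access Management"): Stage.INITIAL,
    ("Devices", "Resource Access"): Stage.TRADITIONAL,
    ("Networks", "Network Segmentation"): Stage.INITIAL,
}

def pillar_stage(assessment, pillar):
    """One possible roll-up: a pillar is only as mature as its least
    mature function (an assumption for illustration, not ZTMM guidance)."""
    return min(s for (p, _), s in assessment.items() if p == pillar)
```

A roll-up like this makes it easy to see, at a glance, which pillar is holding overall maturity back.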
1. The Identity Pillar
Identity is the foundation of Zero Trust. Every access decision, whether it involves a human user, a service account, or a non-person entity, must be grounded in a verified, continuously validated identity.
The CISA ZTMM defines a total of seven functions (including the three cross-cutting functions) within the Identity pillar. Each function addresses a different dimension of how organizations and agencies govern their identity management at the enterprise scale.
- Authentication. How does the system authenticate user identity? At the Traditional stage, systems rely on passwords or basic MFA with static access controls. Mature Zero Trust implementation moves well beyond this, first toward phishing-resistant MFA using FIDO2 or PIV credentials, and ultimately toward continuous identity validation that eliminates any implicit trust after initial authentication.
- Identity Stores. These are the repositories that underpin authentication. Traditional environments rely exclusively on self-managed, on-premises identity stores with minimal integration between systems. As maturity advances, organizations begin consolidating and federating those stores with hosted identity providers using standard protocols like SAML and OAuth.
- Risk Assessments. How dynamically does your system respond to identity-based threats? At lower maturity levels, identity risk is evaluated using manual methods and static rules, with limited ability to respond to emerging signals. Advanced and Optimal implementations use automated, real-time continuous analysis and dynamic rules to evaluate identity compromise and adjust access.
- Access Management. This is where the principle of least-privilege access is operationalized. Traditional systems grant permanent access with only periodic review, a model that creates significant exposure to insider threats and credential-based attacks. Mature access management replaces permanence with precision through automated review, customized need-based and session-based access, and fully automated just-in-time, just-enough access that is continuously scoped to exactly what a user or entity requires in the moment.
- Visibility and Analytics. This function ensures that identity activity across the enterprise is observable and actionable. Early-stage implementations collect logs for privileged credentials and perform routine manual analysis. As maturity increases, automation is applied across a broader range of user and entity activity log types until comprehensive visibility is achieved across the enterprise.
- Automation and Orchestration. How are identities managed throughout their lifecycle, including onboarding, offboarding, role changes, and everything in between? Traditional environments handle this manually, with little integration across systems. Mature implementations progressively automate orchestration for non-privileged users, then privileged users, then external identities, until all identity lifecycle events are handled automatically and consistently across all environments based on behaviors, enrollments, and deployment needs. This is the function that makes identity governance scalable.
- Governance. This function ties the entire Identity pillar together. It covers how authentication requirements, credential standards, access policies, and lifecycle rules are defined and enforced across the enterprise. At the Traditional stage, these policies exist but are enforced through static technical mechanisms and manual review. By the Optimal stage, identity policies are fully automated, continuously enforced, and dynamically updated so that as the threat landscape shifts, your Zero Trust policies shift with it.
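The interplay between phishing-resistant authentication, risk assessment, and just-in-time access described above can be sketched in a few lines of Python. This is a conceptual illustration only; the field names, risk threshold, and 15-minute session window are assumptions, not part of the ZTMM.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessRequest:
    user: str
    resource: str
    mfa_phishing_resistant: bool  # e.g. FIDO2 or PIV, not SMS codes
    risk_score: float             # 0.0 (low) to 1.0 (high), from analytics

def grant_jit_access(req: AccessRequest, max_risk: float = 0.3):
    """Grant short-lived, least-privilege access only when the request
    passes phishing-resistant MFA and the current risk signal is low."""
    if not req.mfa_phishing_resistant:
        return None  # no implicit trust: strong authentication is required
    if req.risk_score > max_risk:
        return None  # a dynamic risk signal overrides otherwise-valid access
    expires = datetime.now(timezone.utc) + timedelta(minutes=15)
    return {"user": req.user, "resource": req.resource, "expires": expires}
```

The key design point is that access is never permanent: every grant carries an expiry, and every request is re-evaluated against live risk signals.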
2. The Devices Pillar
Every device that connects to your network — whether it’s a managed laptop, a mobile phone, a printer, an IoT device, or a cloud-hosted virtual asset — represents a potential entry point for attackers. The Devices pillar of the CISA ZTMM addresses how organizations establish visibility into those endpoints.
This pillar underpins the way that companies enforce compliance, manage supply chain risk, and ensure that access decisions take device health into account. Like the Identity pillar, it encompasses seven functions.
- Asset and Supply Chain Risk Management. The Devices pillar begins with knowing what you have, and this function establishes the baseline visibility that everything else depends on. Traditional environments track physical devices through labeled inventories and ad hoc processes, with limited visibility into virtual assets and no systematic approach to supply chain risk. Maturing organizations build toward a comprehensive, near real-time view of all physical and virtual assets across vendors and service providers. This includes automating supply chain risk management, verifying acquisitions, tracking development cycles, and incorporating third-party assessments.
- Device Threat Protection. How are security capabilities deployed and maintained across the company’s assets? At lower maturity levels, threat protection is manually deployed to some devices, with limited policy enforcement and compliance monitoring. More advanced implementations consolidate threat protection into centralized solutions covering both physical devices and virtual assets across the entire enterprise. The Optimal stage includes managing endpoints that have historically been difficult to manage, such as IoT devices and BYOD assets.
- Resource Access. This function governs the relationship between device health and access decisions. Traditional systems don’t require visibility into devices at all; access is granted regardless of endpoint posture. As maturity increases, device characteristics begin informing access decisions. At the Optimal stage, resource access decisions incorporate real-time risk analytics from within devices and virtual assets, meaning a device that becomes non-compliant mid-session can trigger an immediate access response.
- Visibility and Analytics. This function, implemented at the device level, is what transforms a static asset inventory into a dynamic security tool. Early-stage implementations use physical labels and limited software monitoring, with some manual analysis. As maturity advances, digital identifiers and automated scanning expand, until ultimately all network-connected devices and virtual assets are under continuous automated status collection.
- Automation and Orchestration. This function covers the full device lifecycle: provisioning, configuration, registration, monitoring, isolation, remediation, and deprovisioning. Traditional environments manage this manually. Mature implementations progressively automate each stage, first using scripts and tools, then adding monitoring and enforcement mechanisms to detect and isolate non-compliant devices, and finally achieving fully automated processes across the entire lifecycle. This is particularly critical at scale, where manual device management can cause lags between when a vulnerability is identified and when the device is actually remediated.
- Policy Enforcement and Compliance Monitoring. How are device lifecycle policies (including procurement, configuration, patching, and decommissioning) applied and enforced across the enterprise? At lower maturity levels, policies exist but rely on manual maintenance. Mature implementations move toward automated, enterprise-wide enforcement of device lifecycle policies for all network-connected devices and virtual assets, eliminating gaps from manual processes.
- Governance. Finally, this function ensures that the policies governing your endpoint environment are themselves governed, i.e. defined, documented, enforced, and continuously updated as the device landscape evolves. The technological landscape for devices is changing rapidly, particularly with the proliferation of IoT devices and cloud-hosted virtual assets, and governance in the Devices pillar ensures that your Zero Trust policies keep pace.
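The Resource Access function above describes how device posture should drive access decisions, including mid-session responses. A minimal Python sketch of that decision logic follows; the posture attributes and response labels are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    device_id: str
    managed: bool      # enrolled in enterprise device management
    patched: bool      # OS and software at required patch level
    edr_running: bool  # endpoint threat-protection agent healthy

def access_decision(posture: DevicePosture, mid_session: bool = False) -> str:
    """Map real-time device posture to an access response. A device that
    falls out of compliance mid-session triggers isolation, not just a
    denial of the next request."""
    compliant = posture.managed and posture.patched and posture.edr_running
    if compliant:
        return "allow"
    return "isolate" if mid_session else "deny"
```

In a mature implementation this check runs continuously, so the `mid_session` path is what closes the window between a device becoming non-compliant and access being revoked.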
3. The Networks Pillar
While traditional security focused on building a strong perimeter and trusting everything inside it, Zero Trust assumes that threats already exist inside the network. The Networks pillar defines seven functions that describe how organizations can move from perimeter-dependent architectures toward dynamic, micro-segmented environments where every traffic flow is authenticated, encrypted, and continuously monitored.
- Network Segmentation. This is the structural foundation of a Zero Trust network. Traditional environments use large perimeter-based macro-segmentation with minimal internal restrictions, a design that allows attackers who breach the perimeter to move around with little resistance once they’re inside. At the Optimal stage, the network consists of fully distributed ingress and egress micro-perimeters with extensive micro-segmentation built around application profiles. At this level, connectivity is dynamic: just-in-time and just-enough, scoped to specific service interactions. This reduction in attack surface is one of the highest-impact changes an organization can make on its Zero Trust journey.
- Traffic Encryption. This function ensures network communications cannot be intercepted or tampered with, regardless of origin or destination. Traditional implementations encrypt minimal traffic and manage keys through manual processes, leaving significant exposure. At the Optimal stage, encryption is enforced across all applicable internal and external traffic protocols, with enterprise-wide least-privilege key management and cryptographic agility built in.
- Network Traffic Management. How do organizations control and adapt traffic flows across their environment? Traditional systems implement static rules at service provisioning with limited monitoring. At the Optimal stage, network rules and configurations continuously evolve in response to application profile needs, reprioritizing traffic based on mission criticality and real-time risk.
- Network Resilience. This function ensures that availability demands are met across all workloads, not just mission-critical ones. Traditional environments configure resilience mechanisms on a case-by-case basis with limited coverage for lower-priority workloads. At the Optimal stage, holistic delivery awareness continuously adapts to changes in availability demands enterprise-wide, providing proportionate resilience across all workloads.
- Visibility and Analytics. At the network level, this function allows security teams to detect threats that have bypassed perimeter controls. As maturity advances, monitoring expands from limited boundary-focused monitoring toward anomaly-based detection. At the Optimal stage, advanced monitoring automates telemetry correlation across all detection sources, enabling continuous monitoring and enterprise-wide situational awareness.
- Automation and Orchestration. How are network configuration and resource lifecycle changes managed? Traditional environments rely on manual processes with periodic policy integration, while at the Optimal stage, network infrastructure is defined as code and managed entirely by automated change management methods.
- Governance. Finally, this function in the Networks pillar ensures that the security policies governing network segmentation, access, protocols, alerting, and remediation are consistently defined, enforced, and updated. Traditional implementations rely on static, perimeter-focused policies, while Optimal ones implement enterprise-wide network policies that adapt fluidly to application and user workflows.
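The core idea behind micro-segmentation is that connectivity is an explicit allowlist of application-profile flows rather than an open interior. A toy Python sketch makes the default-deny posture concrete; the service names and ports are illustrative, not from the ZTMM.

```python
# Micro-segmentation expressed as an explicit allowlist of flows between
# application profiles; any flow not listed is denied by default.
ALLOWED_FLOWS = {
    ("web-frontend", "api-gateway", 443),   # browser tier to API tier
    ("api-gateway", "orders-db", 5432),     # API tier to its database
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny: only explicitly profiled service interactions pass."""
    return (src, dst, port) in ALLOWED_FLOWS
```

Note that the frontend has no path to the database at all: an attacker who compromises the web tier cannot move laterally, because no rule exists for that flow.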
4. The Applications and Workloads Pillar
Applications and workloads encompass the systems, programs, and services that execute on-premises, on mobile devices, and across cloud environments. This is where users and data actually interact.
The Applications and Workloads pillar of the CISA ZTMM defines the eight functions that together address how organizations govern access to applications, protect them from threats, secure how they are built and deployed, and maintain the visibility needed to detect and respond to application-specific risks.
- Accessible Applications. How are applications made available to authorized users? Traditional implementations use private networks or protected connections like VPNs, limiting access for remote work and creating friction for legitimate users while doing little to stop determined attackers with compromised credentials. Maturing organizations, however, move applicable mission-critical applications to open public network connections, brokered through Zero Trust controls that verify identity and device posture rather than relying on network location as a proxy for trust. At the Optimal stage, all applicable applications are available over open public networks to authorized users and devices as needed while keeping the attack surface minimal (because access is controlled by identity and context rather than by network perimeter).
- Application Access. How are authorization decisions made for individual application requests? The more mature the implementation, the more contextual information (e.g. user identity, device compliance, and other dynamic attributes) will be incorporated into access decisions. The most advanced implementations enforce least-privilege principles automatically, with application access continuously authorized using real-time risk analytics.
- Application Threat Protections. This function ensures that security controls are integrated directly into application workflows rather than applied as a separate layer around them. At lower maturity levels, threat protections are minimally integrated with applications, applying only general-purpose defenses against known threats. As maturity increases, protections are integrated across all applications, covering both known threats and application-specific vulnerabilities and offering content-aware defenses against sophisticated, targeted attacks.
- Secure Application Development and Deployment Workflow. This function addresses the security of the software development lifecycle. Traditional environments have ad hoc development, testing, and production environments with informal code deployment mechanisms. Maturing organizations establish formal CI/CD pipelines with requisite access controls, separate development and production environments, and enforce least-privilege principles across development infrastructure. This is where DevSecOps principles become operational rather than aspirational.
- Application Security Testing. This function ensures that security validation is integrated throughout the development lifecycle rather than treated as a final gate before deployment. Traditional implementations perform security testing primarily through manual methods prior to deployment. Maturing organizations progressively introduce static and dynamic testing methods (including automated SAST, DAST, and software composition analysis) into the development and deployment process.
- Visibility and Analytics. At the application level, this function provides security teams with the situational awareness needed to detect application-specific threats and respond before they escalate. Traditional implementations perform limited performance and security monitoring of mission-critical applications with minimal aggregation or analytics. As maturity advances, automated monitoring expands across a broader application portfolio, with heuristics identifying application-specific and enterprise-wide trends. At the Optimal stage, continuous and dynamic monitoring spans all applications, maintaining the kind of enterprise-wide comprehensive visibility that allows security teams to detect even the most subtle behavioral anomalies.
- Automation and Orchestration. How are application configurations managed and optimized over time? Traditional environments establish hosting location and access at provisioning and change them infrequently. Maturing organizations periodically modify configurations to meet security and performance goals, then automate those modifications to respond to operational and environmental changes.
- Governance. This function ensures that the full scope of application security — access controls, development practices, deployment processes, software asset management, security testing, patching, and dependency tracking — is governed through consistent, enforceable policies. Traditional implementations rely primarily on manual enforcement, while maturing organizations progressively automate policy enforcement across development and deployment lifecycles by using tools like Software Bills of Materials. At the Optimal stage, policies governing application development and deployment are fully automated, with dynamic updates flowing through the CI/CD pipeline — ensuring that security policies are as agile as the applications they govern.
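A concrete example of automated policy enforcement in the CI/CD pipeline is a security gate that fails a build when scanner findings exceed an allowed severity. The sketch below is a generic illustration; in practice the findings would come from SAST, DAST, or software composition analysis tools, and the severity scale is an assumption.

```python
def security_gate(findings, max_severity="medium"):
    """Pass the pipeline stage only when no finding exceeds the allowed
    severity. `findings` is a list of dicts with a "severity" key, as a
    stand-in for real scanner output."""
    order = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    worst = max((order[f["severity"]] for f in findings), default=0)
    return worst <= order[max_severity]
```

Gates like this are what make security policy "flow through the CI/CD pipeline": tightening `max_severity` in one place changes enforcement for every subsequent deployment.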
5. The Data Pillar
While the other ZTMM pillars work to ensure that data is accessed only by the right people, for the right reasons, under the right conditions, the Data pillar addresses how organizations inventory, categorize, protect, and govern their data across its entire lifecycle.
- Data Inventory Management. This is the starting point, since you can’t protect data you don’t know exists. Traditional implementations manually inventory only the most critical assets, leaving significant portions of the data estate untracked. At the Optimal stage, on the other hand, inventory is continuous and dynamic, with robust data loss prevention strategies that automatically detect and block suspected exfiltration in real time.
- Data Categorization. This function ensures that once data is inventoried, it is classified in a way that allows security controls to be applied proportionately to its sensitivity. Traditional implementations rely on ad hoc categorization with no consistent labeling framework. At the Optimal stage, categorization and labeling are fully automated enterprise-wide, using granular structured formats that address all data types, including unstructured data.
- Data Availability. How is data stored and made accessible to authorized users? Traditional implementations rely on on-premises stores with limited backup, creating availability risks in the event of infrastructure failure. Moving toward maturity means implementing dynamic methods to continuously optimize availability according to user and entity need, ultimately balancing accessibility with security.
- Data Access. This function governs user and entity permissions, the operational expression of least-privilege access at the data layer. Organizations in early stages manage this through static access controls that are rarely revisited; more mature organizations have permissions that are fully automated and dynamic, granted just-in-time and just-enough enterprise-wide and continuously reviewed.
- Data Encryption. This function ensures data is protected regardless of where it resides or how it travels. Traditional implementations encrypt minimal data and manage keys through manual or ad hoc processes. At the Optimal stage, encryption is applied to data in use where appropriate, and least-privilege principles govern key management enterprise-wide.
- Visibility and Analytics. This function transforms raw data activity logs into actionable security intelligence. Traditional implementations rely primarily on manual analysis with limited visibility into data location, access, and usage. At the Optimal stage, robust analytics, including predictive capabilities, provide comprehensive views of system data and support continuous security posture assessment across the full data lifecycle.
- Automation and Orchestration. Are data lifecycle and security policies (including access, storage, encryption, categorization, backup, and sanitization) implemented consistently and at scale? Traditional environments handle this through manual, often ad hoc processes. At the Optimal stage, data lifecycles and security policies are automated to the maximum extent possible across all system data and all environments.
- Governance. This function ties the Data pillar together, ensuring that protection, categorization, access, storage, recovery, and removal policies are unified and dynamically enforced across the enterprise. Organizations move from the Traditional stage, which relies on ad hoc governance with manual enforcement, to the Optimal stage, where data lifecycle policies are as consistent and automated as the data controls they govern.
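The link between categorization and proportionate controls can be sketched simply: a label drives the handling policy applied to a record. The labels, policy table, and the toy SSN rule below are all illustrative assumptions; real categorization would use content inspection and a formal labeling framework.

```python
# Sensitivity labels drive proportionate controls: the higher the label,
# the stricter the handling requirements applied automatically.
POLICY = {
    "public":       {"encrypt_at_rest": False, "dlp_monitor": False},
    "internal":     {"encrypt_at_rest": True,  "dlp_monitor": False},
    "confidential": {"encrypt_at_rest": True,  "dlp_monitor": True},
}

def categorize(record: dict) -> str:
    """Toy auto-labeling rule: any record carrying an SSN-like field is
    confidential; owned records default to internal."""
    if "ssn" in record:
        return "confidential"
    return "internal" if record.get("owner") else "public"

def controls_for(record: dict) -> dict:
    """Look up the handling controls implied by a record's label."""
    return POLICY[categorize(record)]
```

The point of the pattern is that adding a new control (say, stricter retention for confidential data) means editing one policy table, not hunting down every system that touches the data.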
Building Toward Zero Trust: A Journey, Not a Destination
Taken pillar by pillar, the Zero Trust Maturity Model can seem overwhelming to organizations that are just starting out on their Zero Trust path. But the good news is that the ZTMM expects progress, not perfection.
Across all five pillars and three cross-cutting capabilities, the approach is the same: start where you are, prioritize your highest-value assets, and advance deliberately. As functions mature, manual processes are automated, and static policies are replaced with dynamic ones, your attack surface will be reduced and your security posture will be strengthened in measurable ways.
That progress doesn’t happen in isolation, and it doesn’t have to happen without support. RegScale’s Continuous Controls Monitoring platform is built to operationalize exactly the kind of governance, visibility, and policy management that the ZTMM demands across all five pillars. Rather than replacing the security tools your organization already relies on, RegScale serves as the authoritative system of record for risk and compliance data across every pillar of the ZTMM. The platform gives security teams, ISSOs, and leadership a unified, continuously updated picture of where they stand.
- In the Identity pillar, RegScale enforces role-based and attribute-based access controls through Azure Active Directory integration, supports just-in-time privileged access elevation via Azure PIM with full audit trails, and automates access revocation tied to identity lifecycle events.
- In the Devices, Networks, and Applications pillars — where RegScale functions as a governance layer rather than a direct enforcement tool — the platform maintains control documentation, monitors compliance evidence, tracks risks and remediation activities, and integrates with external tools while remaining a neutral, always-current source of truth.
- In the Data pillar, RegScale’s robust access control model operationalizes least-privilege data access, and detailed history logs are retained for every record and user action across the platform.
- Across the three cross-cutting capabilities — Visibility and Analytics, Automation and Orchestration, and Governance — RegScale’s Compliance-as-Code foundation allows organizations to define policies and requirements within the platform and monitor their enforcement continuously across integrated security tools.
The ZTMM is ultimately a governance challenge as much as a technical one — and that is precisely where RegScale is designed to help. Whether your organization is just beginning its Zero Trust journey or working toward the Optimal stage across multiple pillars, RegScale provides the Continuous Controls Monitoring, policy management, and cross-pillar visibility that makes progress possible.
To learn more about how RegScale supports Zero Trust, contact us today.
Ready to get started?
Choose the path that is right for you!
Skip the line
My organization doesn’t have GRC tools yet and I am ready to start automating my compliance with continuous monitoring pipelines now.
Supercharge
My organization already has legacy compliance software, but I want to automate many of the manual processes that feed it.