Cloud Maturity Model: The Definitive 2026 Guide
The Flexera State of the Cloud Report 2025 found that 41% of European enterprises with more than 250 employees still operate at the two lowest levels of cloud maturity: no consistent Infrastructure as Code, no formal cost management, and incidents resolved reactively by whoever is available. At the other end, only 8% reach the Optimized level — where infrastructure is an internal product and development teams provision environments autonomously. The remaining 51% are caught in the middle: the technical migration to the cloud is done, but the operational and organizational problems that determine whether that migration generates real value remain unsolved.
Gartner’s 2024 cloud infrastructure research puts the cost of this stall in concrete terms: organizations at Levels 1–2 spend an average of 35% more on cloud than organizations at Levels 3–4 when total cost of operations is measured, not just the cloud bill itself. The excess comes from manual processes, undetected idle resources, incidents that take hours to resolve, and the cost of the engineers required to keep the chaos manageable.
This guide provides a precise reference framework — with criteria, tools, metrics, and per-stage roadmaps — for technical teams and executives to diagnose their current position and define concrete next steps.
What is a Cloud Maturity Model?
A cloud maturity model is a reference framework that describes an organization’s progression from unstructured use of cloud services to fully automated, business-value-oriented operations. Unlike certification frameworks (ISO 27001, SOC 2), a maturity model does not certify a state: it describes a continuous improvement path across measurable dimensions.
The three principal reference frameworks are:
- AWS Cloud Adoption Framework (CAF): organizes maturity across six perspectives (Business, People, Governance, Platform, Security, Operations) and places organizational transformation on equal footing with technical change. AWS CAF uses phases (Envision, Align, Launch, Scale) rather than numbered maturity levels, but its capabilities map directly to the five standard stages.
- Microsoft Cloud Adoption Framework (Azure CAF): more prescriptive about network architecture and identity management. Introduces the concept of a Landing Zone as the core unit of cloud adoption — a preconfigured Azure environment with governance, security, and network controls serving as the foundation for each workload. Most useful as an architectural reference for Azure-first environments.
- Google Cloud Maturity Assessment: more diagnostic than prescriptive. It evaluates four domains (Infrastructure, Applications, Data and ML, Security) through binary questions and generates a maturity profile per domain, making it a useful starting point for organizations that want a structured gap analysis before committing to a roadmap.
All three frameworks reach the same fundamental conclusions: cloud maturity requires organizational and cultural change, not just technical change; progress is observable through objective metrics; and there are dependencies between dimensions that make skipping stages ineffective in practice.
The 5 Maturity Stages
The five-level model has its roots in the CMM (Capability Maturity Model) from the Software Engineering Institute, adapted to the cloud context. Each level describes an observable state, not an aspiration.
| Level | Infrastructure | Operations | Security | Cost management | Typical team size |
|---|---|---|---|---|---|
| 1 · Ad hoc | Resources created manually, no IaC | Reactive incidents, no runbooks | Broad permissions, no formal policy | No tagging, no spending visibility | 1–3 people with admin access |
| 2 · Repeatable | Partial IaC (new resources), modules not yet standardized | On-call documented, basic runbooks | Least privilege policy started, MFA enforced | Partial tagging, basic budget alerts | 2–5 infrastructure engineers |
| 3 · Defined | 100% IaC in git, validation pipelines, policy-as-code | SLOs defined, blameless postmortems, MTTR < 60 min | CSPM active, secrets management, DR tested | FinOps monthly cycle, showback by team | Dedicated SRE or shared DevOps team |
| 4 · Managed | Internal reusable modules, automatic drift detection | 360 observability (correlated metrics, logs, traces), automated remediation | Automatic anomaly detection, continuous compliance | Chargeback by product, automated rightsizing | SRE + FinOps engineer |
| 5 · Optimized | Platform engineering, self-service infrastructure, AI-assisted operations | Proactive reliability engineering, zero-touch deployments | Security-as-code, threat modeling integrated in dev cycle | ML-based cost forecasting, continuous commitment optimization | Dedicated platform team |
Self-Assessment Checklist
Mark the statements that are true for your organization today. The concentration of marks in each block indicates your predominant maturity level.
Level 1 — Ad hoc
- Every cloud resource has an identified and documented owner
- There is an up-to-date inventory of all active cloud services
- Admin access is restricted to a small number of people with documented justification
- MFA is enforced on all accounts with cloud console access
- At least one budget alert is configured per cloud account
- Secrets (API keys, passwords, certificates) are not stored in code repositories
- A periodic review (at least quarterly) of active resources and their cost is performed
Level 2 — Repeatable
- At least 50% of infrastructure resources are managed with IaC (Terraform, Pulumi, CDK)
- There is a documented on-call process with defined escalation paths
- Runbooks for the most frequent incidents are written and accessible to the whole team
- Regular backups are performed with documented retention and tested restoration
- Resource tagging covers environment, team, and project for more than 70% of resources
- CI/CD pipelines exist for at least half of production services
- Security reviews are performed before each significant deployment
Level 3 — Defined
- 100% of production infrastructure is managed via IaC in version control
- Policy-as-code (Open Policy Agent, Sentinel, AWS SCPs) validates every change before it is applied
- SLOs (Service Level Objectives) are defined and visible to the business team
- MTTR for high-priority incidents is under 60 minutes
- A formal blameless postmortem process exists for each high-severity incident
- The DR strategy is documented, tested, and has known RTO/RPO targets
- FinOps has a monthly cycle with anomaly review and showback by team
- Security posture is monitored continuously with CSPM (Prisma Cloud, Wiz, AWS Security Hub)
Level 4 — Managed
- Metrics, logs, and distributed traces are correlated in a single platform (Datadog, Grafana + Tempo, Elastic)
- Remediation of common incidents is automated (auto-scaling, pod restarts, failover)
- DORA metrics (deployment frequency, lead time, MTTR, change failure rate) are measured and reported monthly
- Resource rightsizing runs automatically or on a bi-weekly review cycle
- Cloud cost chargeback is allocated by product or business line
- Automatic anomaly detection is active for cost and security
- Development teams have direct visibility into their services’ cloud costs
Level 5 — Optimized
- Developers can provision new environments in self-service in under 30 minutes
- An internal platform portal (Backstage or equivalent) documents and exposes all internal services
- Cloud commitments (Reserved Instances, Savings Plans, CUDs) are managed with forecasting tools
- Chaos engineering (Chaos Monkey, LitmusChaos) is part of the resilience validation process
- Security threat modeling is performed as part of the design cycle for each new service
- Infrastructure is deployed via GitOps (ArgoCD, Flux) without manual intervention in production
Level 1: Ad hoc — What It Looks Like and What to Fix First
At the ad hoc level, cloud resources are created when someone needs them, in the fastest way available at the time. Infrastructure exists in the cloud provider’s console but nowhere else: nobody knows with certainty what is active, who created it, or why. Incidents are resolved from individual memory, not documented process. The cloud bill is a monthly surprise.
What it looks like day-to-day: Slack messages asking “do you know who has access to that database?”, S3 buckets or Azure blobs created for a project that no longer exists but that nobody dares delete, and cloud invoices growing 20–30% each month without anyone being able to explain the increase.
What to fix first (in this order):
1. Emergency inventory and tagging. Without knowing what you have, you cannot manage anything. Run `aws resourcegroupstaggingapi get-resources` or the equivalent in Azure/GCP to get all active resources and their tagging status. Goal: within 30 days, all resources have at least three tags: environment, team, project.
2. Least privilege and identity. Eliminate all shared access accounts. Implement IAM/Entra ID with least-privilege principals. Enforce MFA without exception. Recommended tools: AWS IAM Access Analyzer or Microsoft Entra Privileged Identity Management for detecting excessive permissions.
3. First IaC repository. There is no need to migrate the entire environment. The goal is that every new resource created from this point forward has a definition in Terraform or Pulumi. This establishes the habit without creating a multi-month migration project.
Exit metric: 100% of resources created during the last calendar month are documented in IaC and have complete tagging.
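The tagging half of that exit metric is easy to automate. A minimal sketch, assuming the inventory has already been exported from `aws resourcegroupstaggingapi get-resources` (or the Azure/GCP equivalent) into a list of dicts — the resource format below is illustrative, not the raw API response:

```python
# Sketch: compute tagging coverage for the Level 1 exit metric.
# The inventory format is an assumption — adapt it to your provider's export.
REQUIRED_TAGS = {"environment", "team", "project"}

def tagging_coverage(resources):
    """Return the fraction of resources carrying all required tags."""
    if not resources:
        return 0.0
    tagged = sum(1 for r in resources if REQUIRED_TAGS <= set(r.get("tags", {})))
    return tagged / len(resources)

inventory = [
    {"arn": "arn:aws:s3:::app-assets",
     "tags": {"environment": "prod", "team": "web", "project": "shop"}},
    {"arn": "arn:aws:ec2:eu-west-1:123456789012:instance/i-0ab",
     "tags": {"environment": "dev"}},  # incomplete tagging
]
print(f"{tagging_coverage(inventory):.0%}")  # 1 of 2 fully tagged → 50%
```

Run it weekly against a fresh export and the trend toward 100% (or away from it) becomes visible before month-end.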
Level 2: Repeatable — IaC, Baseline Monitoring, Cost Tagging
The Repeatable level is the most common among European mid-market companies: processes exist, but they depend on specific individuals. If the engineer who set up the CI/CD pipeline is unavailable, the process stops. If the senior SRE goes on holiday, on-call becomes chaos.
The goal of Level 2 is to eliminate dependency on specific people through documentation and basic automation.
Consistent Infrastructure as Code. The work at this level is migrating the existing environment to IaC, not just new resources. The recommended strategy is incremental import: use terraform import to bring existing resources into Terraform state without recreating them. Priority: highest-cost and highest-operational-risk resources first (databases, load balancers, security groups).
Baseline monitoring. Implement infrastructure dashboards (CPU, memory, disk, network latency) and alerts on critical thresholds. Tools: Datadog (best cloud integration), Grafana + Prometheus (more control, lower cost), CloudWatch/Azure Monitor (sufficient to start without additional cost). The tool is less important than the discipline: every alert must have an associated runbook that any team member can follow.
Cost tagging and governance. Implement mandatory tagging policies via policy-as-code. In AWS: Service Control Policies that reject resource creation without required tags. In Azure: Azure Policy in Deny mode. Enable AWS Cost Explorer or Azure Cost Management with views by tag.
The cost benchmark at Level 2 exit: "I know how much I spend on cloud, I know which environment and team each cost belongs to, and I receive alerts when spending exceeds expected thresholds."
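The "alerts when spending exceeds thresholds" part can start as a plain threshold check over tagged spend, before any dedicated tooling. A sketch — the scope names, budgets, and the 10% tolerance are all illustrative assumptions:

```python
# Sketch: flag (team, environment) scopes whose month-to-date spend
# exceeds the expected budget by more than a tolerance. Illustrative data.
def over_budget(spend_by_scope, budgets, tolerance=0.10):
    """Return scopes spending more than budget * (1 + tolerance)."""
    return sorted(
        scope
        for scope, spend in spend_by_scope.items()
        if spend > budgets.get(scope, 0) * (1 + tolerance)
    )

spend = {("web", "prod"): 4800.0, ("data", "prod"): 2100.0, ("web", "dev"): 900.0}
budgets = {("web", "prod"): 4500.0, ("data", "prod"): 2500.0, ("web", "dev"): 700.0}
print(over_budget(spend, budgets))  # [('web', 'dev')]
```

Note that a scope absent from `budgets` defaults to 0 and is always flagged — deliberate, so unbudgeted spend surfaces instead of slipping through.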
For a detailed analysis of how cost visibility compounds with managed operations, see the managed services cost model.
Key tools: Terraform 1.x with remote state in S3+DynamoDB or Terraform Cloud, Datadog or Grafana Cloud for monitoring, AWS Cost Anomaly Detection or Azure Cost Alerts for cost surveillance.
Level 3: Defined — FinOps, DR, CI/CD Maturity, Compliance Baseline
Level Defined is where organizations shift from managing incidents to managing services. The difference is fundamental: at Level 2, you respond to what happens; at Level 3, you define what you expect (SLOs) and measure the gap between expectation and reality.
FinOps as a formal process. The FinOps cycle at Level 3 has three phases, borrowed directly from the FinOps Foundation lifecycle:
- Inform: complete cost visibility by team, product, and environment, updated daily.
- Optimize: identifying and executing savings opportunities (rightsizing, removing idle resources, shifting to spot/preemptible instances where appropriate).
- Operate: continuous governance with anomaly alerts and monthly commitment reviews.
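The Inform phase, reduced to its core, is rolling billing line items up by the `team` tag — with untagged spend surfaced explicitly rather than hidden. A sketch with illustrative line items:

```python
# Sketch of Inform-phase showback: per-team cost attribution from tags.
# Line items are illustrative, not a real billing export format.
from collections import defaultdict

def showback(line_items):
    """Sum cost per team tag; orphan resources land in 'untagged'."""
    totals = defaultdict(float)
    for item in line_items:
        team = item.get("tags", {}).get("team", "untagged")
        totals[team] += item["cost"]
    return dict(totals)

items = [
    {"cost": 120.0, "tags": {"team": "web"}},
    {"cost": 80.0, "tags": {"team": "data"}},
    {"cost": 45.0, "tags": {}},  # orphan resource — surfaces as "untagged"
]
print(showback(items))
```

The size of the "untagged" bucket is itself a governance metric: when it stops shrinking, tag enforcement has a gap.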
The State of FinOps 2024 report found that organizations at the Walk stage of FinOps maturity save 18–28% of annual cloud spend compared to organizations without a formal process.
Tested Disaster Recovery. A DR plan that has never been exercised is not a plan: it is a document. At Level 3, the disaster recovery process must have been executed at least once in a non-production environment, with results that match the documented RTO/RPO. Tools: AWS Elastic Disaster Recovery, Azure Site Recovery, or Velero for Kubernetes workloads.
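Checking an exercise against its targets is simple arithmetic over the timestamps recorded during the test: actual RPO is the age of the last restorable backup at failure time, actual RTO is the time until service is restored. A sketch with illustrative timestamps and targets:

```python
# Sketch: evaluate a DR exercise against documented RTO/RPO targets.
# All timestamps and targets below are illustrative.
from datetime import datetime, timedelta

def dr_results(last_backup, failure, service_restored, rpo_target, rto_target):
    actual_rpo = failure - last_backup          # data loss window
    actual_rto = service_restored - failure     # downtime window
    return {"rpo_met": actual_rpo <= rpo_target,
            "rto_met": actual_rto <= rto_target}

result = dr_results(
    last_backup=datetime(2026, 3, 1, 2, 0),
    failure=datetime(2026, 3, 1, 5, 30),
    service_restored=datetime(2026, 3, 1, 6, 45),
    rpo_target=timedelta(hours=4),
    rto_target=timedelta(hours=1),
)
print(result)  # RPO of 3h30m met; RTO of 1h15m missed the 1h target
```

A result like this one — RPO met, RTO missed — is precisely the finding a paper-only DR plan never produces.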
Mature CI/CD pipelines. Level 3 pipelines include IaC validation (checkov, tfsec), static application security testing (SAST), automated integration tests, and deployment approvals with full auditability. ArgoCD for Kubernetes environments with GitOps brings the deployment definition into version control and enables drift detection from day one.
Compliance baseline. Implement CSPM (Cloud Security Posture Management) for continuous security posture monitoring. The most widely used tools are Prisma Cloud (Palo Alto Networks), Wiz, and AWS Security Hub with AWS Config Rules. The goal is not to pass an audit: it is to have continuous visibility of deviations from a security baseline, so that the gap between “current state” and “required state” is always known.
Exit metrics: MTTR < 60 min, SLOs defined and measured, 100% of resources in IaC, monthly FinOps review with documented showback.
Level 4: Managed — SRE Practices, 360 Observability, Automated Remediation
The Managed level marks the transition from reactive operations to engineering operations. At this level, the team does not just respond to incidents: it prevents them through error budgets, manages reliability as an engineering function, and automates response to the most frequent anomalous conditions.
SRE practices in concrete form. Site Reliability Engineering at Level 4 means: SLOs per service visible to the business team, error budgets that govern deployment velocity (when the error budget is exhausted, non-urgent deploys pause), and blameless postmortems that generate measurable improvement actions with owners and deadlines.
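The error-budget rule is a few lines of arithmetic. A sketch, assuming a 30-day window and a 99.9% SLO — the downtime figure is illustrative:

```python
# Sketch: error-budget accounting. With a 99.9% SLO over 30 days, the
# budget is 0.1% of the period (~43 minutes); non-urgent deploys pause
# when consumption reaches 100%. Downtime figure is illustrative.
def error_budget_status(slo, period_minutes, downtime_minutes):
    budget = (1 - slo) * period_minutes      # allowed minutes of failure
    consumed = downtime_minutes / budget     # fraction of budget spent
    return {"budget_minutes": budget,
            "consumed": consumed,
            "deploys_paused": consumed >= 1.0}

status = error_budget_status(slo=0.999,
                             period_minutes=30 * 24 * 60,
                             downtime_minutes=35)
print(status)  # ~81% of the 43.2-minute budget consumed; deploys continue
```

The useful property is that the same number governs both sides: engineering spends the remaining budget on deploys, and the business sees exactly how much reliability headroom is left.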
360 observability. The difference between monitoring (Levels 2–3) and observability (Level 4) is the ability to answer unanticipated questions about system behavior. This requires all three pillars correlated: business and technical metrics (Prometheus + Grafana or Datadog), structured logs with trace context (OpenTelemetry + Elasticsearch or Datadog Logs), and distributed traces that allow reconstructing the full flow of a request across all services (Jaeger, Tempo, Datadog APM).
Implementing OpenTelemetry as a standardized instrumentation layer avoids vendor lock-in in observability and simplifies migration between analysis platforms. For a practitioner-level look at observability in microservices environments, see microservices observability: lessons from the field.
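The correlation mechanism underneath OpenTelemetry is the W3C `traceparent` header, which carries the trace and span identifiers between services; a log line that records the trace_id from this header becomes joinable with the distributed trace. A minimal sketch of the header format, without any library — real instrumentation would use the OpenTelemetry SDK's propagators instead:

```python
# Sketch: W3C Trace Context `traceparent` header, format
# "00-<32 hex trace id>-<16 hex span id>-<2 hex flags>".
import secrets

def new_traceparent(sampled=True):
    trace_id = secrets.token_hex(16)   # 32 hex chars, shared by all spans
    span_id = secrets.token_hex(8)     # 16 hex chars, this hop only
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    version, trace_id, span_id, flags = header.split("-")
    return {"trace_id": trace_id, "span_id": span_id, "sampled": flags == "01"}

header = new_traceparent()
print(parse_traceparent(header)["trace_id"] == header.split("-")[1])  # True
```

Because the format is a vendor-neutral standard, switching analysis platforms does not require re-instrumenting services — which is exactly the lock-in avoidance argument above.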
Automated remediation. Recurring, well-understood incidents should have automated response. Concrete examples: pod auto-scaling in Kubernetes on latency increase (HPA + KEDA), automatic restart of services that stop responding to health checks, automatic database failover with Amazon RDS Multi-AZ or Azure SQL Business Critical.
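The auto-scaling case follows the rule Kubernetes' HorizontalPodAutoscaler applies: desired replicas scale with the ratio of observed metric to target, rounded up. A sketch with illustrative numbers (the real HPA also applies stabilization windows and tolerances not modeled here):

```python
# Sketch of the HPA scaling rule:
# desired = ceil(current_replicas * current_metric / target_metric),
# clamped to [min_replicas, max_replicas]. Values are illustrative.
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods observing 180 ms p95 latency against a 100 ms target → 8 pods
print(desired_replicas(4, current_metric=180, target_metric=100))  # 8
```

The same ratio drives scale-down: 4 pods at 40 against a target of 100 yields 2 replicas, which is why a sensible `min_replicas` floor matters.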
Cost: chargeback and rightsizing. At Level 4, Level 3 showback evolves to actual chargeback: product teams are allocated a cloud budget and are accountable for managing it. Rightsizing runs with automated tools such as AWS Compute Optimizer or Datadog Cloud Cost Management, on bi-weekly review cycles.
Exit metrics: MTTR < 30 min for P1 incidents, weekly or higher deployment frequency, distributed traces available for 80% of critical services, per-product-team chargeback operational.
Level 5: Optimized — Platform Engineering, Self-Service, Continuous Infrastructure Delivery
The Optimized level is where infrastructure stops being a bottleneck for product development. The platform team is no longer managing servers: it is managing an internal product that enables development teams to provision, deploy, and operate their services autonomously.
Platform engineering. The central concept is the Internal Developer Platform (IDP): an abstraction layer over cloud infrastructure that exposes high-level services (databases, queues, caches, CI/CD pipelines) through a self-service interface. The reference tool is Backstage (open-source, created by Spotify), which acts as a software catalog, self-service portal, and centralized documentation hub simultaneously.
A team that reaches Level 5 can measure the outcome concretely: the time from when a developer requests a new environment to when it is available drops from days (Level 2–3) to under 30 minutes (Level 5). That compression is what unlocks genuine engineering velocity.
GitOps and continuous infrastructure delivery. ArgoCD or Flux manage the state of Kubernetes clusters through continuous reconciliation with the Git repository. Any drift between desired state (Git) and actual state (cluster) is detected and corrected automatically. Production deployments require no manual intervention: they execute through automated validation pipelines triggered by pull requests.
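The reconciliation idea can be sketched in a few lines: diff the desired state from Git against the actual state reported by the cluster, and surface drift in both directions. The resource specs below are illustrative:

```python
# Sketch of GitOps reconciliation: compare desired state (Git) with
# actual state (cluster) and report drift. Specs are illustrative.
def drift(desired, actual):
    common = set(desired) & set(actual)
    return {
        "missing": sorted(set(desired) - set(actual)),    # in Git, not deployed
        "unmanaged": sorted(set(actual) - set(desired)),  # deployed, not in Git
        "changed": sorted(k for k in common if desired[k] != actual[k]),
    }

desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
actual = {"web": {"replicas": 3}, "api": {"replicas": 5},
          "debug-pod": {"replicas": 1}}  # someone's manual experiment
print(drift(desired, actual))
```

A tool like ArgoCD runs this loop continuously and then applies the Git side as the correction — "unmanaged" and "changed" entries are reverted, not merely reported.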
Chaos engineering as standard practice. LitmusChaos or Chaos Monkey inject failures in a controlled way in production (or high-fidelity staging environments) to validate system resilience under real conditions. The results of each chaos experiment feed the reliability improvement backlog.
Continuous cloud commitment optimization. At Level 5, capacity commitments (Reserved Instances, Savings Plans in AWS; Reserved VM Instances in Azure; Committed Use Discounts in GCP) are managed with forecasting tools such as CloudHealth (Broadcom) or Spot.io (NetApp) to maximize discount without overprovisioning. Flexera 2025 found that Level 5 organizations achieve 85–92% utilization of cloud commitments versus 45–60% at Level 3.
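The two numbers a forecasting tool optimizes against are commitment utilization and net savings versus on-demand. A sketch — the assumption that on-demand pricing runs ~40% above the committed rate, and the hourly figures, are illustrative:

```python
# Sketch: commitment utilization and net hourly savings versus on-demand.
# `on_demand_ratio` (on-demand costs ~40% more than committed rate) and
# all dollar figures are illustrative assumptions.
def commitment_report(committed, used, on_demand_ratio=1.4):
    """committed/used are $/hour of commitment value consumed."""
    covered = min(used, committed)
    utilization = covered / committed
    # what the covered usage would cost on demand, minus what we pay
    net_savings = covered * on_demand_ratio - committed
    return {"utilization": utilization, "net_savings_per_hour": net_savings}

high = commitment_report(committed=100.0, used=90.0)   # ~Level 5 utilization
low = commitment_report(committed=100.0, used=50.0)    # ~Level 3 utilization
print(high, low)
```

At 90% utilization the commitment nets a saving; at 50% it costs more than buying on demand — which is why the Flexera utilization gap between Level 3 and Level 5 translates directly into money.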
Representative toolchain: Backstage (IDP), ArgoCD (GitOps), LitmusChaos (chaos engineering), Crossplane (infrastructure as Kubernetes API), Datadog or Grafana + OpenTelemetry (observability), Spot.io or CloudHealth (commitment optimization).
How to Advance from One Stage to the Next: 6–12 Month Plans
Progression between levels does not happen on its own. It requires a project with executive sponsorship, allocated budget, and success metrics defined before the work starts. Organizations that “improve gradually” without an explicit plan tend to stall at the boundary between Levels 2 and 3 for years.
Level 1 → Level 2 (6 months)
| Month | Action |
|---|---|
| 1–2 | Complete inventory, emergency tagging, IAM with least privilege |
| 2–3 | First IaC repository, import the 10 highest-cost resources |
| 3–4 | Documented on-call, runbooks for the 5 most frequent incidents |
| 4–5 | Baseline monitoring alerts, budget alerts |
| 5–6 | Mandatory tagging policy, quarterly access reviews |
Level 2 → Level 3 (9–12 months)
The Level 2 to Level 3 jump is the most demanding in organizational terms. It requires introducing SLOs (which change how the operations team is accountable to the business), FinOps as a formal process (which introduces deliberate friction into resource creation), and tested DR (which requires time outside normal sprint cycles).
Recommended structure: establish a transition team with members from infrastructure, development, and the business side, with monthly progress review against Level 3 exit metrics.
Level 3 → Level 4 (12–18 months)
The transition to Level 4 is primarily about instrumentation and culture. Full observability requires instrumenting all services with OpenTelemetry — in legacy environments, this can be a months-long project. Adopting SRE practices (error budgets, SLO reviews) requires changes to how engineering and the business negotiate development velocity.
Level 4 → Level 5 (12–24 months)
Level 5 requires creating a platform team that works for all other teams as their internal customer. This organizational change is the hardest of all because it means redefining who works on what and how infrastructure engineers measure their own success.
Mistakes That Block Progression
The following anti-patterns appear repeatedly in organizations that have spent years trying to advance without success:
- FinOps without prior tagging governance. Implementing cost dashboards without first solving tagging and attribution produces data that nobody trusts and nobody uses to make decisions.
- Kubernetes without SRE. Adopting Kubernetes as a deployment platform without the corresponding operational practices (on-call, runbooks, measured MTTR) results in a more complex platform with the same reliability as before.
- IaC only for new resources. The unmanaged legacy environment continues to generate operational debt and hidden cost. IaC migration must have a completion plan and an end date.
- Measuring VM uptime instead of service SLOs. A server that is active 99.9% of the time can be serving errors on 30% of requests. Infrastructure uptime is not a valid proxy for user experience.
- Security as an external audit function. When the security team only reviews systems after changes are deployed, problems are detected late and corrections are expensive. Security must be integrated into CI/CD pipelines through SAST, container image scanning, and policy-as-code.
- Confusing infrastructure dashboards with observability. Having CPU and memory metrics is not observability. Real observability means being able to reconstruct system behavior for any failure mode without having added specific instrumentation for that failure in advance.
- Skipping postmortems. Organizations that do not analyze their incidents make the same mistakes repeatedly. A blameless postmortem process that generates measurable actions with owners and dates is the most effective learning mechanism available.
- Solving process problems with headcount. Adding engineers to a team with immature processes multiplies the chaos. Process first, then capacity.
Cloud Maturity vs DORA Metrics vs FinOps Maturity
The three frameworks are measured separately but advance interdependently. The table below shows the expected correlations between levels:
| Cloud maturity level | Expected DORA performance | Equivalent FinOps level | Key correlation indicators |
|---|---|---|---|
| Level 1 · Ad hoc | Elite: impossible. Low performer: common | Crawl (FinOps Foundation) | No tagging, no pipeline, no cost visibility |
| Level 2 · Repeatable | Low to medium performer | Crawl → Walk | Partial tagging, basic CI/CD, budget alerts |
| Level 3 · Defined | Medium performer (weekly deploys, MTTR < 1h) | Walk | Showback operational, tested DR, measured SLOs |
| Level 4 · Managed | High performer (daily deploys, MTTR < 30 min) | Walk → Run | Real chargeback, error budgets, anomaly detection |
| Level 5 · Optimized | Elite (multiple deploys/day, MTTR < 10 min) | Run | Optimized cloud commitments, platform self-service |
The DORA reference is Google's Accelerate State of DevOps Report. Elite performance in 2024 means deploying multiple times per day, change lead time under one hour, MTTR under one hour, and a change failure rate below 5%.
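As a sketch, checking a team's measurements against that elite bar is a direct comparison — thresholds for the lower buckets vary by report year, so only the elite bar is checked here:

```python
# Sketch: check measurements against the elite DORA thresholds cited
# above (multiple daily deploys, lead time < 1 h, MTTR < 1 h, CFR < 5%).
# Input figures in the example are illustrative.
def is_elite(deploys_per_day, lead_time_hours, mttr_hours, change_failure_rate):
    return (deploys_per_day > 1
            and lead_time_hours < 1
            and mttr_hours < 1
            and change_failure_rate < 0.05)

print(is_elite(deploys_per_day=3, lead_time_hours=0.5,
               mttr_hours=0.4, change_failure_rate=0.03))  # True
```

Feeding this from the pipeline's own data (deploy events, incident timestamps, rollback counts) keeps the classification honest — self-reported DORA numbers drift optimistic.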
A team that reaches Level 4 in cloud maturity almost always reaches high or elite DORA performance simultaneously, because the practices that enable one enable the other. Attempting to improve DORA metrics without improving cloud operational maturity — or vice versa — produces partial results that do not hold over time.
Sources
- Flexera — State of the Cloud Report 2025
- AWS — AWS Cloud Adoption Framework (CAF)
- Microsoft — Microsoft Cloud Adoption Framework for Azure
- Google Cloud — Google Cloud Adoption Framework Whitepaper
- European Commission / Eurostat — DESI 2024: Digital Economy and Society Index
- DORA / Google — Accelerate State of DevOps Report — DORA metrics
- FinOps Foundation — Cloud FinOps Framework
- CMMI Institute — Capability Maturity Model Integration (CMMI)
How abemon Supports Your Progression
Advancing through cloud maturity levels requires two things that rarely coexist in internal teams: the technical capacity to implement the changes and the external perspective to identify blockers that are invisible from the inside.
abemon’s cloud and DevOps services cover current-state assessment, transition plan definition, and the technical implementation of each stage. For organizations that do not want to build a full-time SRE team permanently, managed services allow operating at Level 3–4 without the cost of maintaining that team internally.
abemon’s cloud solutions are designed for organizations at Levels 2 and 3 that need to advance without disrupting current operations. The starting point is always an honest technical diagnosis, without a sales agenda attached.
If you want to discuss where your organization currently sits and which next steps offer the best return on effort, contact the technical team.
Frequently asked questions
- What is a cloud maturity model?
- A cloud maturity model is a reference framework that describes an organization's progression from ad hoc use of cloud services to fully automated, business-value-oriented operations. It establishes objective criteria across five dimensions — infrastructure, operations, security, cost management, and team culture — so teams can identify their current position and define the specific actions needed to advance. The most widely cited frameworks are AWS Cloud Adoption Framework (CAF), Microsoft Cloud Adoption Framework, and Google Cloud Maturity Assessment, though all three converge on the same fundamental stages.
- How do I know what cloud maturity level my organization is at?
- The most reliable diagnosis combines three pieces of evidence: the state of your Infrastructure as Code (IaC) coverage, the depth of observability available to your team, and whether you have a formal cloud cost management process. An organization without IaC and without resource tagging is at Level 1 or 2. An organization with mature CI/CD pipelines, documented on-call, and monthly FinOps reviews is at Level 3 or 4. The self-assessment checklist in this guide lets you make that diagnosis in under 30 minutes.
- How long does it take to go from Level 2 to Level 4?
- According to Gartner (Magic Quadrant for Cloud Management Platforms, 2024), the average progression from Level 2 to Level 4 takes 18 to 24 months with dedicated investment. Organizations with small teams and accumulated technical debt can take up to 36 months. The most significant acceleration happens when two blockers are resolved simultaneously: consistent IaC implementation and an actual cloud cost review process. Without those two foundations, teams advance in one dimension while regressing in others.
- What is the difference between AWS CAF and Microsoft CAF?
- AWS CAF organizes maturity across six perspectives (Business, People, Governance, Platform, Security, Operations) and places organizational transformation at the same level as technical change. Microsoft CAF (Azure CAF) has a similar structure but adds a 'Ready' phase centered on Landing Zones — preconfigured Azure environments with governance, security, and network controls — making it more prescriptive about network architecture and identity management. Google Cloud Maturity Assessment is more diagnostic than prescriptive. In practice, the differences are smaller than the similarities; any team benefits from treating them as complementary references.
- Do organizations have to go through every level?
- Yes, though the pace can vary by dimension. There is no shortcut from Level 1 to Level 4: organizations that attempt advanced-level practices without the underlying foundations accumulate operational debt that eventually surfaces as incidents. What is possible is prioritizing some dimensions ahead of others — for example, a development team can mature its CI/CD practice to Level 3–4 while infrastructure operations remain at Level 2, provided that asymmetry is deliberate and temporary.
- What metrics prove progress in cloud maturity?
- The most reliable metrics are: time to provision new infrastructure (decreases as IaC matures), percentage of resources correctly tagged (proxy for FinOps governance), MTTR for incidents, deployment frequency, and change failure rate (the last two are DORA metrics). At Level 4, a mature team has MTTR below 30 minutes for high-priority incidents and a deployment frequency of at least once per week per service.
- How do FinOps and cloud maturity relate?
- FinOps and cloud maturity are complementary but independent frameworks. A team can have mature FinOps practices (Level 3 per FinOps Foundation) and be at Level 2 in cloud operational maturity, or vice versa. In practice, progress is correlated: without consistent resource tagging (cloud maturity Level 2), cost attribution by team or product (FinOps Level 2) is impossible. The recommendation is to advance both in parallel, prioritizing tag governance as the shared foundation.
- What mistakes block organizations from advancing to the next level?
- The eight most common anti-patterns are: (1) implementing FinOps before establishing tagging governance, (2) deploying Kubernetes without SRE practices in place, (3) using IaC only for new resources while the existing environment remains unmanaged, (4) measuring availability via VM uptime rather than service-level SLOs, (5) treating security as an external audit function rather than integrating it into pipelines, (6) confusing infrastructure dashboards with observability (without distributed traces and log correlation), (7) having no formal blameless postmortem process, and (8) assuming headcount growth solves what are fundamentally process problems.
