GreenOps and sustainable cloud: measuring and reducing digital carbon footprint
The invisible pollution of the datacenter
Every API call, every CI/CD pipeline run, every ML model training job consumes electricity. That electricity produces emissions. The technology sector accounts for roughly 2-4% of global CO2 emissions, a figure comparable to commercial aviation. And it grows every year.
What makes these emissions different from, say, a factory’s is their invisibility. Nobody sees smoke coming out of an EC2 instance. But Amazon’s datacenters consumed 33 TWh in 2023. Google’s consumed 24 TWh. That is real energy with real consequences.
The question is no longer whether this matters. The EU’s Corporate Sustainability Reporting Directive (CSRD) requires large companies to report Scope 3 emissions from January 2025, and mid-size companies from 2026. Your cloud provider is Scope 3. Your digital infrastructure is already part of your corporate carbon footprint, whether you know it or not.
What GreenOps actually means
GreenOps applies the same logic as FinOps (financial optimization of cloud spend) but adds the environmental dimension. Where FinOps asks “how much does this workload cost?”, GreenOps asks “how much carbon does this workload emit, and what can we do about it?”
These are not separate disciplines. In practice, most GreenOps optimizations also reduce cost, because the cheapest resource is the one you do not consume. But there are decisions where cost and carbon diverge, and that is where GreenOps provides a distinct decision framework.
The three pillars of GreenOps:
Measure. You cannot reduce what you do not measure. This requires visibility into the energy consumption and associated emissions of your infrastructure.
Reduce. Optimize workload efficiency, eliminate idle resources, design architectures that consume less.
Shift. Move workloads to times and regions where electricity is cleaner.
Measuring digital carbon footprint
Cloud provider tools
The three major hyperscalers offer native carbon measurement tools:
- AWS: Customer Carbon Footprint Tool. Monthly emissions by service and region. Data available with a 3-month lag. No per-resource breakdown.
- Google Cloud: Carbon Footprint Dashboard. Data by project and region, updated monthly. Includes total energy and renewable mix per region. The most granular of the three.
- Azure: Emissions Impact Dashboard. Total emissions by subscription. Integrates with Power BI for reporting.
All three are useful starting points but have significant limitations. Data arrives weeks or months late, granularity is insufficient for service-level optimization, and calculation methodologies are not comparable across providers. AWS reports Scope 1 and 2 datacenter emissions; Google includes partial Scope 3; Azure falls somewhere in between.
Cloud Carbon Footprint (CCF)
For independent, comparable measurements, Cloud Carbon Footprint (cloudcarbonfootprint.org) is the open-source reference. CCF connects to the billing APIs of AWS, GCP, and Azure, and estimates emissions based on resource type, utilization, and the carbon intensity of the local electrical grid.
The methodology is transparent: it converts usage hours for each instance type into kWh using PUE (Power Usage Effectiveness) factors published by each provider, then applies the local grid’s carbon intensity (gCO2e/kWh) from sources like electricityMap or the IEA.
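The estimate CCF applies can be sketched in a few lines. The coefficients below (wattage per vCPU, PUE, grid intensity) are illustrative placeholders; real values come from CCF's published datasets and each provider's disclosures, not from this example.

```python
# Bottom-up operational-emissions estimate, in the spirit of CCF's
# methodology. All coefficients here are illustrative assumptions.

def estimate_emissions_g(vcpu_hours: float,
                         min_watts_per_vcpu: float,
                         max_watts_per_vcpu: float,
                         avg_utilization: float,
                         pue: float,
                         grid_intensity_g_per_kwh: float) -> float:
    """Return estimated operational emissions in gCO2e."""
    # Interpolate power draw between idle and full load.
    watts = min_watts_per_vcpu + avg_utilization * (
        max_watts_per_vcpu - min_watts_per_vcpu)
    kwh = watts * vcpu_hours / 1000            # energy at the server
    kwh_with_overhead = kwh * pue              # add datacenter overhead (PUE)
    return kwh_with_overhead * grid_intensity_g_per_kwh

# A month of a 4-vCPU instance at 50% utilization in a ~350 g/kWh region:
monthly = estimate_emissions_g(
    vcpu_hours=4 * 730, min_watts_per_vcpu=0.74, max_watts_per_vcpu=3.5,
    avg_utilization=0.5, pue=1.135, grid_intensity_g_per_kwh=350)
print(f"{monthly / 1000:.1f} kgCO2e/month")
```

The same instance in a ~30 g/kWh region would emit roughly a tenth as much, which is why the grid-intensity term dominates most optimization decisions.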
We deployed CCF for a multi-cloud logistics client with 120 EC2 instances. The results were illuminating: 35% of their cloud emissions came from 8 GPU instances running an ML model that executed only 4 hours per day but was provisioned around the clock.
Software Carbon Intensity (SCI)
The Green Software Foundation has defined a standard metric: SCI (Software Carbon Intensity), measured in gCO2e per functional unit. You define the functional unit: per request, per transaction, per active user, per order processed.
SCI = ((E * I) + M) per R, where:
- E = energy consumed by the software (kWh)
- I = carbon intensity of the electricity (gCO2e/kWh)
- M = embodied carbon of the hardware (amortized)
- R = the number of functional units
The advantage of SCI is normalization. A service processing 10 million requests at 0.3 gCO2e/request is more efficient than one processing 1 million at 2.5 gCO2e/request, even though the first emits more in absolute terms (3 tCO2e versus 2.5 tCO2e). This enables comparison across services, temporal tracking, and meaningful target-setting.
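Plugged into code, the comparison looks like this. The energy and embodied-carbon figures are invented for illustration, chosen so that the per-request results reproduce the 0.3 vs 2.5 gCO2e/request example:

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: int) -> float:
    """SCI = ((E * I) + M) / R, in gCO2e per functional unit.
    embodied_g is the hardware's embodied carbon amortized to the period."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Service A: 10M requests; Service B: 1M requests (illustrative figures).
a = sci(energy_kwh=7000, intensity_g_per_kwh=400, embodied_g=200_000,
        functional_units=10_000_000)
b = sci(energy_kwh=5500, intensity_g_per_kwh=400, embodied_g=300_000,
        functional_units=1_000_000)
print(f"A: {a:.2f} g/request, B: {b:.2f} g/request")
```

A emits more in total (3 tCO2e vs 2.5 tCO2e) but is roughly eight times more efficient per request, which is the signal SCI is designed to surface.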
Reduce: efficiency optimization
Right-sizing with a carbon lens
Classic FinOps right-sizing (matching instance size to actual usage) is the first GreenOps lever. But it adds a nuance: not all instance types have the same energy efficiency.
ARM processors (AWS Graviton, Google Axion, Azure Cobalt) consume 30-60% less energy than equivalent x86 for most workloads. This is not theoretical. AWS reports that Graviton3 delivers up to 60% less energy consumption for equivalent workloads compared to C5 instances. In our measurements with real data processing workloads, the reduction was 35-45%.
Migrating to ARM is not trivial (native dependencies need recompilation, Docker image compatibility needs verification), but the combined cost and carbon impact justifies the effort for most cloud-native applications.
Serverless and carbon efficiency
Serverless architectures are inherently more carbon-efficient than provisioned servers because the consumption unit is the invocation, not the server-hour. A server provisioned at 15% average utilization wastes 85% of the resources (and energy) it consumes. A Lambda that runs only when there is work wastes nothing.
Not everything can be serverless. Workloads with strict latency requirements, persistent connections, or long-running processing fit better in containers. But for event-driven processing, variable-traffic APIs, and batch jobs, serverless reduces both cost and emissions by orders of magnitude.
One client migrated an invoice processing pipeline from EC2 (2 m5.xlarge instances, 24/7) to Step Functions plus Lambda. Processing went from roughly 1,460 instance-hours per month (two instances at 730 hours each) to roughly 45 equivalent Lambda-hours. Emissions dropped 92%. (Costs dropped 78%.)
Eliminating obvious waste
Before optimizing architecture, eliminate the obvious waste:
- Orphaned resources: unattached EBS volumes, stale snapshots, unused Elastic IPs, load balancers with no targets. Industry surveys consistently estimate that around 30% of cloud spend is waste.
- Dev/staging environments running 24/7: if your team works 9 to 7, those environments can be off 14 hours a day and all weekend. That is roughly 70% less runtime.
- Excessive logging and metrics retention: every stored and processed byte consumes energy. 90-day log retention for data nobody queries after 7 days is pure waste.
- Over-provisioned databases: the classic multi-AZ RDS instance sitting at 8% average CPU.
There is no glamour in turning off unused resources. But that is where the immediate 30-40% reduction lives, in both cloud spend and carbon.
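The off-hours shutdown above can be reduced to a pure scheduling predicate that a cron job or Lambda wires to the provider's API (for AWS, boto3's `ec2.stop_instances` on instances carrying an environment tag). The working hours below are assumptions; adjust them to your team's day.

```python
from datetime import datetime

WORK_START, WORK_END = 9, 19   # 9:00-19:00 local time (assumed schedule)

def should_be_running(now: datetime) -> bool:
    """True only during working hours on weekdays."""
    is_weekday = now.weekday() < 5          # Mon=0 .. Fri=4
    in_hours = WORK_START <= now.hour < WORK_END
    return is_weekday and in_hours

# Fraction of the week an environment stays on under this policy
# (2024-01-01 was a Monday, so days 1..7 cover one full week):
on_hours = sum(should_be_running(datetime(2024, 1, d, h))
               for d in range(1, 8) for h in range(24))
print(f"on {on_hours}/168 h = {100 * (1 - on_hours / 168):.0f}% less runtime")
```

Keeping the decision logic separate from the cloud API calls makes the policy trivial to unit-test and to reuse across providers.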
Shift: carbon-aware computing
This is where GreenOps fundamentally differs from FinOps. Not all electricity is equal. Electricity in Ireland on a windy afternoon might have a carbon intensity of 50 gCO2e/kWh. The same grid draw in Virginia on a hot evening might hit 400 gCO2e/kWh.
Green regions
Each cloud provider publishes (with varying transparency) the energy mix of its regions. The differences are enormous:
- GCP europe-north1 (Finland): 97% carbon-free energy (CFE). Average intensity: ~30 gCO2e/kWh.
- AWS eu-west-1 (Ireland): strong wind energy connection. Variable, but low annual average.
- AWS us-east-1 (Virginia): the most popular region and one of the dirtiest. Average intensity: ~350 gCO2e/kWh.
- GCP us-central1 (Iowa): 98% CFE thanks to renewable energy agreements.
Moving a workload from us-east-1 to eu-north-1 can reduce its emissions by 85% without changing a single line of code. Latency and data compliance (GDPR, residency requirements) are real constraints. But for batch workloads, ML training, backups, and async processing, geographic location is flexible.
Temporal workload scheduling
Grid carbon intensity varies throughout the day. During sunny hours, solar generation reduces intensity. During evening demand peaks, natural gas and coal plants fill the gap.
Carbon-aware scheduling runs non-urgent workloads when the grid is cleaner. The electricityMap API provides real-time and historical carbon intensity data for most regions worldwide.
Implementation requires two components:
- A scheduler that queries carbon intensity before launching batch jobs. If current intensity exceeds a threshold, the job queues until the grid cleans up. With a 24-hour deadline, there is nearly always a low-intensity window.
- Workload classification: separating loads that need immediate execution from those that tolerate delay. User requests: immediate. Daily report generation: can wait 6 hours. Model retraining: can wait 24 hours.
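The scheduler component can be sketched as a deadline-bounded wait loop. The threshold is an assumed budget to tune per grid, and `get_intensity` stands in for whatever intensity source you use (the electricityMap API would be wrapped behind it):

```python
import time
from typing import Callable

INTENSITY_THRESHOLD = 150   # gCO2e/kWh -- assumed budget, tune per grid

def run_when_clean(job: Callable[[], None],
                   get_intensity: Callable[[], float],
                   deadline_s: float,
                   poll_s: float = 1800) -> bool:
    """Defer `job` until grid intensity drops below the threshold.
    If the deadline expires first, run anyway. Returns True if it ran clean."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if get_intensity() < INTENSITY_THRESHOLD:
            job()
            return True
        time.sleep(poll_s)
    job()   # deadline reached: execute regardless of grid state
    return False
```

The deadline fallback is what makes this safe to adopt: a carbon-aware job can only ever be later than its deadline allows, never dropped.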
The Green Software Foundation publishes the Carbon Aware SDK, an open-source library that encapsulates the logic of querying intensity data and making scheduling decisions. It integrates with Kubernetes, Azure Batch, and AWS Step Functions.
We implemented carbon-aware scheduling for a client running nightly ETL pipelines. By shifting the execution window an average of 3 hours (from 2 AM to 5 AM, when wind contribution peaks in Western Europe), pipeline emissions dropped 22% with zero functional impact.
ESG reporting: from the datacenter to the sustainability report
CSRD and digital emissions
The CSRD requires reporting under European Sustainability Reporting Standards (ESRS). Cloud infrastructure emissions fall under ESRS E1 (climate change), specifically Scope 3, Category 1 (purchased goods and services).
The challenge is that cloud providers do not deliver data in the format ESRS requires. You need to transform measurement tool outputs (CCF, native dashboards) into the corporate reporting framework. This means:
- Defining boundaries: which infrastructure do you include? Production only? Development too? Edge? User devices?
- Consistent methodology: market-based vs. location-based. Cloud providers purchase RECs (Renewable Energy Certificates) and PPAs (Power Purchase Agreements), which reduce their market-based emissions but not necessarily location-based ones.
- Auditability: data must be traceable from the final report back to the original measurement source.
The reporting framework we use
For clients subject to CSRD, we implement a data pipeline that feeds ESG reporting:
- Ingestion: cloud billing data + CCF data + provider emissions data.
- Calculation: conversion to tCO2e using updated emission factors (IEA, AIB residual mix).
- Normalization: emissions per business functional unit (tCO2e per million euros of revenue, per employee, per shipment processed).
- Storage: time series with origin metadata for audit.
- Reporting: export to ESRS E1 format, integration with corporate reporting tools (Workiva, Sphera, or a well-structured spreadsheet).
Normalization is the key. A board of directors learns nothing from hearing that your cloud emitted 42 tCO2e last quarter. They learn a great deal from hearing that your digital emissions per million euros of revenue dropped 18% year-over-year, or that you process 30% more shipments with the same emissions.
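The normalization step is a one-line calculation; what matters is applying it consistently period over period. The figures below are invented for illustration:

```python
def normalized_intensity(tco2e: float, revenue_eur: float) -> float:
    """Emissions intensity in tCO2e per million euros of revenue."""
    return tco2e / (revenue_eur / 1_000_000)

# Illustrative year-over-year comparison (all figures assumed):
last_year = normalized_intensity(tco2e=52, revenue_eur=40_000_000)
this_year = normalized_intensity(tco2e=48, revenue_eur=45_000_000)
change = (this_year - last_year) / last_year * 100
print(f"{last_year:.2f} -> {this_year:.2f} tCO2e/MEUR ({change:+.0f}%)")
```

Note that intensity can fall even while absolute emissions rise, so a credible report shows both figures side by side.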
Practical implementation: a GreenOps roadmap
Phase 1: Visibility (months 1-2)
- Activate native carbon footprint tools from your provider.
- Deploy CCF for multi-cloud environments.
- Define your SCI: choose the functional unit relevant to your business.
- Establish the baseline: current emissions by service, region, and resource type.
- Identify hot spots: the 5 resources that emit the most.
Implementation cost: minimal. These are free or open-source tools. The main effort is configuration and analysis.
Phase 2: Quick wins (months 3-4)
- Eliminate orphaned and over-provisioned resources. Typical savings: 20-30% of emissions.
- Schedule non-production environment shutdowns outside working hours.
- Migrate non-critical workloads to regions with higher renewable percentages.
- Evaluate ARM migration for top-10 services by consumption.
Implementation cost: low. Most are configuration changes. ARM migration requires testing.
Phase 3: Architecture (months 5-8)
- Implement carbon-aware scheduling for batch workloads.
- Migrate serverless candidates.
- Establish carbon budgets per team/service (analogous to FinOps cost budgets).
- Integrate SCI metrics into operations dashboards.
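A carbon budget check can mirror a FinOps cost budget almost exactly. Service names and thresholds here are assumptions for illustration; in practice the usage figures would come from your SCI pipeline or CCF export:

```python
# Monthly carbon budgets per service, in kgCO2e (assumed values).
BUDGETS_KG = {"checkout-api": 120, "etl-nightly": 400}

def over_budget(usage_kg: dict[str, float]) -> list[str]:
    """Return services whose monthly emissions exceed their budget."""
    return sorted(s for s, kg in usage_kg.items()
                  if kg > BUDGETS_KG.get(s, float("inf")))

print(over_budget({"checkout-api": 95, "etl-nightly": 430}))
```

Wired into a dashboard or CI check, this gives teams the same feedback loop for carbon that they already have for spend.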
Implementation cost: medium. Requires architecture changes and development work.
Phase 4: Reporting and governance (months 9-12)
- Implement data pipeline for ESG reporting.
- Define digital sustainability KPIs and review them quarterly.
- Include carbon in cloud procurement decisions (alongside cost and performance).
- Train the team: GreenOps is a practice, not a project.
The business case
The question that always comes up: does this generate ROI or is it a compliance cost?
Our experience across 8 GreenOps engagements shows that cost savings exceed implementation costs by a factor of 3x to 5x in the first year. Not because GreenOps is magic, but because the optimizations that reduce carbon (right-sizing, waste elimination, efficient scheduling) are the same ones that reduce the cloud bill.
A fintech client spending EUR 800K per year on cloud implemented the full roadmap in 10 months. Result: 31% emissions reduction, 27% cost reduction (EUR 216K), and CSRD compliance from the first report. Project cost was EUR 45K in consulting and EUR 20K in tooling.
Not everything is economic. Regulatory pressure is growing. Investors ask. Large-account RFPs include ESG criteria. Having real data and a reduction plan is a competitive differentiator, particularly in sectors like logistics, finance, and retail where ESG pressure is most intense. For companies looking to optimize their multi-cloud infrastructure, GreenOps and FinOps share the same action levers.
The reality of digital sustainability
Let us be honest: GreenOps will not save the planet. The individual contribution of a mid-size company to climate change through its cloud infrastructure is negligible in global terms.
But that does not mean it is irrelevant. CSRD is not voluntary. ESG criteria in procurement are a reality. And waste, whether measured in euros or in carbon, is still waste.
What makes GreenOps pragmatic is that it does not ask you to sacrifice performance or competitiveness. It asks you to measure, eliminate the unnecessary, and make informed decisions. It is, in essence, good engineering applied to a new dimension. And good engineering always ends up being profitable. Our cloud and DevOps team helps companies implement GreenOps from the visibility phase through full ESG reporting.
About the author
abemon engineering
Engineering team
Multidisciplinary engineering, data and AI team headquartered in the Canary Islands. We build, deploy and operate custom software solutions for companies at any scale.