The average company running workloads on AWS wastes between 28% and 35% of its cloud spend every month. That's not speculation — it's the consistent finding from Gartner, Flexera's annual State of the Cloud reports, and our own analysis of hundreds of AWS accounts at Cloud Hero AI. For a team spending $50,000/month on AWS, that's $14,000–$17,500 disappearing into idle resources, oversized instances, and forgotten storage every single month.
The good news: most of these savings are low-hanging fruit. You don't need to re-architect your entire infrastructure. In this guide, we'll walk through each major category of waste, show you exactly what to look for, and give you realistic savings estimates for each lever you can pull.
1. Rightsize Your EC2 Instances (Saves 15–25%)
EC2 compute is typically the largest line item on any AWS bill — often 40–60% of total spend. And a huge portion of that is wasted on instances that are dramatically oversized for their actual workload. The classic pattern: an engineer provisions an m5.2xlarge during a load test, the test passes, and the instance stays that size forever — even though average CPU utilization is 4%.
How to find oversized instances: In the AWS Console, navigate to EC2 → Instances and look at the CloudWatch metrics for CPU utilization over the past 30 days. Any instance consistently running below 20% CPU utilization is a rightsizing candidate. AWS Cost Explorer's rightsizing recommendations (in the Billing and Cost Management console) will also show you specific downsizing opportunities with estimated savings.
As a rule of thumb, an instance running at less than 20% average CPU utilization can typically be downsized by one instance size (e.g., m5.2xlarge → m5.xlarge), cutting that instance's cost by roughly 50%. Even moving from m5 to m6i (same price, better performance) or m7g (Graviton, lower price) at the same size improves price-performance by 10–20%.
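The rule of thumb above can be sketched as a simple filter over 30-day average CPU figures. The fleet data here is hypothetical, and the ~50% saving per size step is the article's heuristic, not an AWS guarantee:

```python
# Flag rightsizing candidates: instances averaging under 20% CPU over
# 30 days are assumed downsizable by one size (~50% cost reduction).
def rightsizing_candidates(fleet, cpu_threshold=20.0):
    candidates = []
    for inst in fleet:
        if inst["avg_cpu_30d"] < cpu_threshold:
            candidates.append({
                "id": inst["id"],
                "monthly_cost": inst["monthly_cost"],
                "est_monthly_savings": round(inst["monthly_cost"] * 0.5, 2),
            })
    return candidates

fleet = [
    {"id": "i-0aaa", "avg_cpu_30d": 4.0,  "monthly_cost": 276.48},  # idle m5.2xlarge
    {"id": "i-0bbb", "avg_cpu_30d": 63.0, "monthly_cost": 138.24},  # busy m5.xlarge
]
print(rightsizing_candidates(fleet))
```

In practice you would pull `avg_cpu_30d` from CloudWatch's CPUUtilization metric; the point is that the decision rule itself is a one-line comparison.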
Quick win: Check for instances in a stopped state. You're not paying for compute, but you ARE paying for the attached EBS volumes. A stopped m5.4xlarge with a 500 GB gp2 EBS volume still costs you $50/month even while off. Terminate stopped instances (after snapshotting if needed) or at minimum detach and delete unused volumes.
Graviton Migration: 20% Savings at Low Risk
AWS's ARM-based Graviton3 processors (the m7g, c7g, r7g families) offer 20% better price-performance than equivalent x86 instances. For containerized workloads or anything running modern Linux, migration is usually a matter of updating your instance type and rebuilding your AMI for ARM. Most Java, Python, Node.js, and Go applications run without any code changes.
If you're running 20 x m5.xlarge instances at $0.192/hr each, that's $2,765/month. Migrating to m7g.xlarge at $0.1632/hr drops that to $2,350/month — saving $415/month or nearly $5,000/year, just from a type change.
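The arithmetic behind that estimate, using 720 billing hours per month and the us-east-1 on-demand rates quoted above:

```python
# Reproduce the Graviton migration estimate: 20 instances,
# 720 billing hours/month, on-demand rates from the text.
HOURS_PER_MONTH = 720
FLEET_SIZE = 20
m5_xlarge_rate, m7g_xlarge_rate = 0.192, 0.1632  # $/hr

before = FLEET_SIZE * m5_xlarge_rate * HOURS_PER_MONTH
after = FLEET_SIZE * m7g_xlarge_rate * HOURS_PER_MONTH
print(round(before, 2), round(after, 2), round((before - after) * 12, 2))
```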
2. Eliminate EBS Orphans and Oversized Volumes (Saves 5–10%)
Amazon EBS volumes persist independently of the instances they're attached to. When you terminate an instance without deleting its volumes, those volumes keep billing you indefinitely. In large AWS environments, unattached EBS volumes can represent 8–12% of total storage spend — all of it pure waste.
The AWS CLI command to find all unattached volumes in a region:
```shell
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].{ID:VolumeId,Size:Size,Type:VolumeType,Created:CreateTime}' \
  --output table
```
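The same filter can be expressed in plain Python. This sketch runs against a sample describe-volumes-shaped response rather than a live API call, so the data below is illustrative:

```python
# Filter a describe-volumes response down to unattached volumes.
def unattached_volumes(response):
    return [
        {"ID": v["VolumeId"], "Size": v["Size"], "Type": v["VolumeType"]}
        for v in response["Volumes"]
        if v["State"] == "available"  # 'available' means not attached
    ]

sample = {"Volumes": [
    {"VolumeId": "vol-0aaa", "Size": 500, "VolumeType": "gp2", "State": "available"},
    {"VolumeId": "vol-0bbb", "Size": 100, "VolumeType": "gp3", "State": "in-use"},
]}
print(unattached_volumes(sample))
```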
Beyond orphaned volumes, many teams are still running gp2 volumes when they should be on gp3. The gp3 volume type is 20% cheaper than gp2 at the same size, and lets you independently provision IOPS (up to 16,000) and throughput (up to 1,000 MB/s) instead of paying for capacity-linked IOPS you don't need. Migrating gp2 volumes to gp3 is a zero-downtime, very low-risk change that delivers immediate savings — just confirm the gp3 throughput you provision covers what the workload actually used on gp2.
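A quick estimate of what a gp2-to-gp3 migration is worth, using the us-east-1 per-GB rates behind the 20% figure (gp2 $0.10/GB-month, gp3 $0.08/GB-month); the volume sizes are hypothetical:

```python
# Monthly saving from migrating a set of gp2 volumes to gp3.
GP2_PER_GB, GP3_PER_GB = 0.10, 0.08  # $/GB-month, us-east-1

def gp3_migration_savings(volume_sizes_gb):
    total_gb = sum(volume_sizes_gb)
    return round(total_gb * (GP2_PER_GB - GP3_PER_GB), 2)

print(gp3_migration_savings([500, 200, 1000, 300]))  # $/month for 2 TB of gp2
```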
Also check for stale EBS snapshots. AWS keeps snapshots around forever unless you configure a Data Lifecycle Manager policy. Snapshots older than 90 days that are not referenced by any AMI can usually be safely deleted — or at minimum, moved to EBS Snapshot Archive (which reduces storage cost by up to 75% for rarely accessed snapshots).
3. Rightsize Your RDS Instances (Saves 10–20%)
RDS databases are frequently the most over-provisioned resources in an AWS environment. Unlike EC2, engineers are often reluctant to downsize databases because "you don't mess with the database." This caution is understandable but leads to consistent over-provisioning.
A common pattern: a production db.r5.2xlarge (8 vCPU, 64 GB RAM) running a PostgreSQL database that uses maybe 15% of its CPU and 8 GB of RAM at peak. Downsizing to db.r5.xlarge (4 vCPU, 32 GB RAM) cuts the cost from ~$1,000/month to ~$500/month — a $6,000/year saving on a single database.
RDS multi-AZ in dev/staging: One of the most common and most expensive mistakes we see. Multi-AZ RDS doubles your instance cost. Your development and staging databases almost certainly do not need multi-AZ. Disabling multi-AZ on a db.r5.xlarge in us-east-1 saves $420/month per database.
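The Multi-AZ saving follows directly from the fact that Multi-AZ bills roughly double the single-AZ rate. The $0.58/hr single-AZ rate below is an approximation for a db.r5.xlarge in us-east-1:

```python
# Approximate saving from disabling Multi-AZ on a dev/staging db.r5.xlarge.
HOURS_PER_MONTH = 720
SINGLE_AZ_RATE = 0.58  # $/hr, approximate on-demand rate (assumption)

multi_az = SINGLE_AZ_RATE * 2 * HOURS_PER_MONTH   # Multi-AZ bills ~2x
single_az = SINGLE_AZ_RATE * HOURS_PER_MONTH
print(round(multi_az - single_az, 2))  # roughly the $420/month cited above
```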
Also review your RDS storage type. Like EC2, many RDS instances were provisioned on gp2 storage. Migrating to gp3 delivers the same 20% storage savings. And check for RDS instances with "automated backups" set to 35 days — the AWS default is 7 days, and most teams don't actually need 35 days of point-in-time recovery, which also inflates snapshot costs.
4. Reserved Instances vs. Savings Plans (Saves 30–72%)
If there's one optimization that delivers the largest absolute dollar savings, it's committing to AWS usage via Reserved Instances (RIs) or Savings Plans. On-demand pricing is the most expensive way to run AWS — and for any workload with predictable, stable usage, there's simply no reason to pay on-demand rates.
| Commitment Type | Max Discount | Flexibility | Best For |
|---|---|---|---|
| 1-Year Compute Savings Plan | up to 40% | Any EC2, Lambda, Fargate | General workloads with predictable spend |
| 3-Year Compute Savings Plan | up to 66% | Any EC2, Lambda, Fargate | Long-running stable workloads |
| 1-Year EC2 Instance RI (No Upfront) | up to 40% | Specific instance family + region | Specific instance families you're locked into |
| 3-Year Standard RI (All Upfront) | up to 72% | Specific instance type + AZ | Stable long-running instances, budget available |
| RDS Reserved Instances | up to 69% | Specific DB engine + instance | Production databases with stable sizing |
The practical recommendation for most teams: Start with a 1-year Compute Savings Plan covering 70–80% of your baseline EC2 spend. This gives you significant discounts without locking you into specific instance types, which is important if you're still rightsizing. Once your fleet is stable, layer in EC2 Instance Savings Plans or Standard RIs for your most predictable instances.
At $15,000/month in EC2 spend, a 1-year Compute Savings Plan covering $12,000/month of that spend at a 33% discount saves $3,960/month — or $47,520/year. That's real money.
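The Savings Plan figures above reduce to two multiplications — coverage fraction times spend, times the discount rate:

```python
# Savings Plan estimate: 80% coverage of $15,000/month EC2 spend
# at a 33% discount, per the example in the text.
def savings_plan_estimate(monthly_spend, coverage, discount):
    covered = monthly_spend * coverage
    monthly_savings = covered * discount
    return round(monthly_savings, 2), round(monthly_savings * 12, 2)

print(savings_plan_estimate(15_000, 0.80, 0.33))
```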
5. S3 Lifecycle Policies and Storage Class Optimization (Saves 3–8%)
S3 is deceptively expensive at scale. S3 Standard costs $0.023/GB/month. That sounds cheap until you realize many companies are storing terabytes of data in S3 Standard that hasn't been accessed in years. S3 Glacier Instant Retrieval costs $0.004/GB/month — an 83% reduction — with retrieval times in milliseconds.
The right S3 storage class ladder:
- S3 Standard — Frequently accessed data (daily access). $0.023/GB/month.
- S3 Intelligent-Tiering — Unknown or changing access patterns. Automatically moves objects between tiers. $0.023/GB/month for the frequent tier, $0.0125/GB/month for the infrequent tier.
- S3 Standard-IA (Infrequent Access) — Accessed less than once a month. $0.0125/GB/month.
- S3 Glacier Instant Retrieval — Archive with millisecond access. $0.004/GB/month.
- S3 Glacier Deep Archive — Compliance archives, accessed less than once a year. $0.00099/GB/month.
Create S3 Lifecycle policies to automatically transition objects based on age. A sensible default policy: transition to Standard-IA after 30 days, to Glacier Instant Retrieval after 90 days, and to Glacier Deep Archive after 365 days. For a bucket with 50 TB that hasn't been structured with lifecycle policies, this can easily save $800–$1,200/month.
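To see where the $800–$1,200/month figure comes from, compare 50 TB held entirely in S3 Standard against the same data spread across the ladder above. The post-lifecycle distribution here is hypothetical; the per-GB-month prices are the ones quoted in the text:

```python
# Monthly cost of 50 TB: all-Standard vs a lifecycle-tiered split.
PRICES = {"standard": 0.023, "ia": 0.0125, "gir": 0.004, "deep": 0.00099}

def monthly_cost(gb_by_class):
    return round(sum(gb * PRICES[cls] for cls, gb in gb_by_class.items()), 2)

all_standard = monthly_cost({"standard": 50 * 1024})
# Hypothetical split: 10% Standard, 20% IA, 40% Glacier IR, 30% Deep Archive
tiered = monthly_cost({"standard": 5120, "ia": 10240, "gir": 20480, "deep": 15360})
print(all_standard, tiered, round(all_standard - tiered, 2))
```

Even this moderate distribution lands the saving inside the range the article quotes.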
Enable S3 Intelligent-Tiering for new buckets: For any bucket where you're unsure about access patterns, enable Intelligent-Tiering by default. The monitoring fee ($0.0025 per 1,000 objects per month) is almost always worth it for buckets with mixed access patterns; objects under 128 KB are not monitored or charged the fee, and stay at the frequent-tier rate. AWS will automatically move eligible objects to cheaper tiers without any manual intervention.
6. Eliminate Unused Networking Resources (Saves 2–5%)
AWS charges for data transfer, unused Elastic IPs, and NAT Gateway traffic. These costs are often invisible because they don't show up as a single line item — they accumulate across dozens of small charges.
Unused Elastic IPs: AWS charges $0.005/hr for every allocated Elastic IP not associated with a running instance (and, since February 2024, for every public IPv4 address, attached or not). That's $3.60/month per idle EIP — small on its own, but it adds up when teams have dozens of them. Run `aws ec2 describe-addresses` and release any EIPs not attached to running instances.
NAT Gateway costs: NAT Gateways charge $0.045/hr plus $0.045/GB of data processed. For teams with high-bandwidth private subnet workloads, NAT Gateway data processing can easily reach $500–$2,000/month. Consider VPC endpoints for AWS services (S3, DynamoDB, ECR, etc.) — these route traffic within the AWS network, bypassing the NAT Gateway entirely and eliminating data processing charges for those services.
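Gateway endpoints for S3 and DynamoDB carry no hourly or per-GB charge, so every GB of S3 traffic moved off the NAT Gateway avoids the full $0.045/GB processing fee. The 10 TB/month traffic volume here is hypothetical:

```python
# NAT Gateway data-processing charge avoided by routing S3 traffic
# through a free S3 gateway endpoint instead.
NAT_PROCESSING_PER_GB = 0.045  # $/GB processed

def nat_processing_cost(gb_per_month):
    return round(gb_per_month * NAT_PROCESSING_PER_GB, 2)

print(nat_processing_cost(10 * 1024))  # monthly charge eliminated, in $
```

Interface endpoints (for ECR and most other services) do carry their own hourly and per-GB fees, so run the comparison per service before switching.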
Cross-AZ data transfer: AWS charges $0.01/GB in each direction for data transferred between Availability Zones in the same region, so each GB exchanged effectively costs $0.02. In a microservices architecture where services call each other across AZs, this can add up to hundreds of dollars per month. Using AWS PrivateLink or co-locating frequently communicating services in the same AZ can eliminate this cost.
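Because the charge applies on both sides of the transfer, a quick estimate for chatty services looks like this (5 TB/month is a hypothetical traffic volume):

```python
# Cross-AZ transfer is billed $0.01/GB on each side of the transfer,
# so inter-service traffic effectively pays $0.02/GB.
CROSS_AZ_EACH_WAY = 0.01  # $/GB, each direction

def cross_az_cost(gb_per_month):
    return round(gb_per_month * CROSS_AZ_EACH_WAY * 2, 2)

print(cross_az_cost(5 * 1024))  # 5 TB/month of inter-service traffic, in $
```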
Putting It All Together: Your 30-Day Action Plan
Here's a prioritized sequence for capturing these savings quickly:
- Week 1: Identify and terminate stopped EC2 instances and their orphaned EBS volumes. Migrate gp2 volumes to gp3. Disable multi-AZ on dev/staging RDS. (Estimated savings: 5–10%)
- Week 2: Run EC2 rightsizing analysis in Cost Explorer. Downsize any instances with <20% CPU utilization. Evaluate Graviton migration for top 5 instance types. (Estimated savings: 10–20%)
- Week 3: Implement S3 lifecycle policies across all buckets. Enable Intelligent-Tiering for new data. Release unused Elastic IPs. Add S3 and DynamoDB VPC endpoints. (Estimated savings: 3–8%)
- Week 4: Purchase a 1-year Compute Savings Plan for 70% of your baseline EC2 spend. Evaluate RDS Reserved Instances for stable production databases. (Estimated savings: 20–40% on committed spend)
Following this plan, most teams see a 25–40% reduction in their AWS bill within 60 days. The exact number depends on how over-provisioned your current environment is — and in our experience, the more chaotic the growth has been, the more waste there is to recover.
See Exactly How Much You Can Save
Cloud Hero AI's free savings audit analyzes your AWS account in minutes — no agents to install, no data exported from your environment. Get a line-by-line breakdown of waste by category, with prioritized recommendations and estimated dollar savings.
Get My Free Savings Audit →