Snowflake is one of the most powerful data platforms on the market — and one of the easiest places to accidentally spend a lot of money. Its consumption-based credit model is transparent in theory, but in practice, many organizations watch their Snowflake bill grow month over month without a clear understanding of why.
According to the FinOps Foundation 2026 State of FinOps report, data platform costs (led by Snowflake) are now the third-largest cloud cost category for companies with analytics-heavy workloads — often representing 15–30% of total cloud spend. Here are 8 concrete ways to bring that number down without sacrificing query performance.
1. Set Auto-Suspend Aggressively
Every Snowflake virtual warehouse that's running but idle is consuming credits. The default auto-suspend timeout is 10 minutes — meaning a warehouse that finishes its last query keeps running for 10 more minutes, burning credits the entire time.
Best practice: Set auto-suspend to 60 seconds for most warehouses. The main exception is high-frequency, latency-sensitive workloads, where frequent suspends discard the warehouse's local disk cache and each resume adds a brief delay. For most analytics workloads, 60-second auto-suspend has no user-visible impact and can reduce warehouse idle costs by 40–70%.
ALTER WAREHOUSE my_warehouse SET AUTO_SUSPEND = 60;
Auto-suspend is the single highest-ROI Snowflake optimization. It takes 30 seconds per warehouse to implement and has zero impact on query results. If you do nothing else from this list, do this.
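To find which warehouses still have long timeouts, SHOW WAREHOUSES exposes each warehouse's auto_suspend setting (in seconds), and RESULT_SCAN lets you filter its output. A sketch — the 60-second threshold is the recommendation above, not a Snowflake default:

```sql
-- List warehouses whose auto-suspend exceeds 60 seconds
SHOW WAREHOUSES;
SELECT "name", "size", "auto_suspend"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "auto_suspend" > 60
ORDER BY "auto_suspend" DESC;
```

Note the quoted lowercase column names: SHOW output columns are case-sensitive when read back through RESULT_SCAN.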
2. Right-Size Your Warehouses
Snowflake warehouses come in sizes from X-Small (1 credit/hour) to 6X-Large (512 credits/hour). Many organizations run X-Large or 2X-Large warehouses for workloads that would run equally fast — or nearly as fast — on Medium or Large.
Doubling warehouse size doesn't double query speed. Larger warehouses help with highly parallel workloads (many concurrent queries) but provide minimal benefit for sequential single-query workloads. Use QUERY_HISTORY to analyze whether your warehouses are actually utilizing their full compute capacity:
SELECT warehouse_name,
       warehouse_size,
       COUNT(*) AS query_count,
       AVG(execution_time) / 1000 AS avg_exec_seconds
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP)
GROUP BY warehouse_name, warehouse_size
ORDER BY query_count DESC;
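Query history shows activity but not spend. Pairing it with metering history shows which warehouses actually consume the credits — a sketch against the standard ACCOUNT_USAGE view:

```sql
-- Credits consumed per warehouse over the last 30 days
SELECT warehouse_name,
       SUM(credits_used) AS credits_30d
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP)
GROUP BY warehouse_name
ORDER BY credits_30d DESC;
```

If a warehouse tops this list but its queries finish in seconds, it is a strong candidate for downsizing.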
3. Use Separate Warehouses for Different Workload Types
Running ETL jobs, BI dashboards, and ad-hoc analyst queries on the same warehouse leads to queuing. On multi-cluster warehouses, that queuing spins up extra clusters, burning additional credits without removing the contention between workload types.
Recommended warehouse segmentation:
- ETL warehouse — sized for your heaviest transformation jobs, auto-suspend at 60s
- BI dashboard warehouse — smaller, tuned for concurrent light queries
- Analyst/exploration warehouse — medium size with strict query timeout settings
- Data loading warehouse — X-Small is usually sufficient; COPY INTO is fast regardless of warehouse size
Warehouse proliferation warning: Don't take warehouse segmentation too far. More than 6–8 warehouses creates management overhead and makes cost attribution complex. The goal is separation of workload types, not one warehouse per user.
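The segmentation above can be sketched as plain DDL. The names, sizes, and cluster counts here are illustrative, not prescriptive, and the multi-cluster settings on the BI warehouse require Enterprise edition:

```sql
-- ETL: sized for heavy transforms, suspends quickly between runs
CREATE WAREHOUSE IF NOT EXISTS etl_wh
  WITH WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
       INITIALLY_SUSPENDED = TRUE;

-- BI dashboards: small, scales out under concurrent light queries
CREATE WAREHOUSE IF NOT EXISTS bi_wh
  WITH WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
       MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 3;

-- Analysts: medium, with a hard cap on runaway queries
CREATE WAREHOUSE IF NOT EXISTS analyst_wh
  WITH WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
       STATEMENT_TIMEOUT_IN_SECONDS = 600;

-- Loading: X-Small is usually enough for COPY INTO
CREATE WAREHOUSE IF NOT EXISTS load_wh
  WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
```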
4. Enable Query Result Caching
Snowflake automatically caches query results for 24 hours. If the same query runs again on unchanged data, it returns the cached result instantly — consuming zero credits. Make sure you're not accidentally bypassing this cache:
- Avoid CURRENT_TIMESTAMP() in queries that run frequently on static data; non-deterministic functions like this force Snowflake to bypass the result cache
- Use the same SQL text for repeated queries; even a whitespace difference causes a cache miss
- Check QUERY_HISTORY for queries with QUERY_TYPE = 'SELECT' and high execution counts; these are cache-hit candidates
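One way to surface those cache-hit candidates is to group query history by its query_hash column, which fingerprints the query text (available in recent Snowflake releases; the 10-run threshold is an arbitrary starting point). A sketch:

```sql
-- Identical queries that ran many times in the last 7 days
SELECT query_hash,
       COUNT(*) AS run_count,
       MAX(query_text) AS sample_text
FROM snowflake.account_usage.query_history
WHERE query_type = 'SELECT'
  AND start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP)
GROUP BY query_hash
HAVING COUNT(*) > 10
ORDER BY run_count DESC;
```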
5. Implement Resource Monitors
Resource monitors let you set credit budgets at the account or warehouse level and automatically suspend warehouses when the budget is exceeded. This is essential for preventing runaway costs from poorly optimized queries or accidental full-table scans.
CREATE RESOURCE MONITOR monthly_limit
WITH CREDIT_QUOTA = 5000
TRIGGERS ON 80 PERCENT DO NOTIFY
ON 100 PERCENT DO SUSPEND_IMMEDIATE;
ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_limit;
Resource monitor best practice: Set resource monitors on every production warehouse. A single poorly-written query that runs a full table scan on a billion-row table can consume hundreds of credits in minutes. Resource monitors are your safety net.
6. Optimize Storage: Clustering Keys and Time Travel
Snowflake storage costs include both the compressed size of your data and Time Travel retention (data versions kept for recovery). Two common waste sources:
- Time Travel set too high: The default is 1 day; Enterprise allows up to 90 days. For most tables, 7 days is sufficient. Running 90-day Time Travel on a 50TB table adds significant storage cost with little practical benefit.
- Fail-safe storage: Snowflake keeps an additional 7 days of data in Fail-safe (non-configurable) after Time Travel expires — factor this into storage cost expectations
- Clustering keys on large tables: Well-chosen clustering keys reduce the amount of micro-partitions scanned per query, directly reducing compute consumption on large analytical tables
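To see where Time Travel bytes actually accumulate, ACCOUNT_USAGE exposes per-table storage metrics. A sketch that ranks tables by Time Travel storage, followed by the 7-day retention from above applied to a hypothetical table:

```sql
-- Tables holding the most Time Travel and Fail-safe storage
SELECT table_catalog, table_schema, table_name,
       active_bytes / POWER(1024, 3) AS active_gb,
       time_travel_bytes / POWER(1024, 3) AS time_travel_gb,
       failsafe_bytes / POWER(1024, 3) AS failsafe_gb
FROM snowflake.account_usage.table_storage_metrics
ORDER BY time_travel_bytes DESC
LIMIT 20;

-- Trim retention on a large table (table name is illustrative)
ALTER TABLE analytics.events SET DATA_RETENTION_TIME_IN_DAYS = 7;
```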
7. Use Snowpark and Materialized Views Strategically
Two areas where credit costs often surprise teams:
- Materialized views automatically refresh when underlying data changes — consuming credits for the maintenance. Only use materialized views for queries that run very frequently and are expensive to compute from scratch. Otherwise, a scheduled task to rebuild a regular table is often cheaper.
- Snowpark (Python/Java/Scala in Snowflake) runs on warehouse compute — poorly-written Snowpark code can be just as expensive as a bad SQL query. Profile your Snowpark jobs with QUERY_HISTORY the same way you would SQL workloads.
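Materialized view maintenance costs are tracked in their own ACCOUNT_USAGE view on editions that support materialized views. A sketch of a 30-day cost check; verify the column names against your account's documentation, as they can differ between the ACCOUNT_USAGE view and the INFORMATION_SCHEMA table function:

```sql
-- Credits spent keeping materialized views fresh, last 30 days
SELECT table_name AS materialized_view,
       SUM(credits_used) AS refresh_credits_30d
FROM snowflake.account_usage.materialized_view_refresh_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP)
GROUP BY table_name
ORDER BY refresh_credits_30d DESC;
```

If a view's refresh credits exceed what its consumers would spend recomputing the query on demand, drop the materialized view.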
8. Purchase a Capacity Commitment
If you have predictable Snowflake usage, Snowflake's Capacity Commitment contracts offer significant discounts over on-demand credit pricing — typically 20–40% off depending on commitment size and term.
| Commitment Type | Typical Discount vs On-Demand | Minimum Commitment |
|---|---|---|
| No commitment (pay-as-you-go) | 0% | None |
| 1-year capacity commitment | 20–30% off | Varies by region/cloud |
| 2-year capacity commitment | 30–40% off | Larger minimum |
| 3-year capacity commitment | 35–45% off | Negotiated enterprise contracts |
Only commit if you have 6+ months of stable usage data. Unused committed credits don't roll over in most standard agreements.
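Before committing, check that monthly consumption really is stable. A sketch that trends total credits by month so you can eyeball variance over the trailing half year:

```sql
-- Monthly credit consumption for the trailing 6 months
SELECT DATE_TRUNC('month', start_time) AS usage_month,
       SUM(credits_used) AS total_credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD(month, -6, CURRENT_TIMESTAMP)
GROUP BY 1
ORDER BY 1;
```

A reasonable rule of thumb is to commit to roughly the minimum of these monthly totals, keeping any growth on on-demand pricing until the trend proves durable.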
Snowflake costs are one of many line items Cloud Hero AI's Hero Savings monitors. Learn how the 15% performance fee model works — including how it handles data platform costs alongside your cloud infrastructure spend.
See Exactly How Much You're Wasting
Cloud Hero AI scans your AWS, GCP, or Azure account and finds waste automatically. We only charge 15% of what we actually save you. Nothing upfront. Nothing if we find nothing.
Run your free audit →