The maximum electricity price each AI workload tier can profitably sustain, derived from GPU-level economics using exclusively public data.
CHR_w = (R_w − C_non-elec) / (1 + m)

where R_w is the revenue per MWh of compute for workload type w, C_non-elec is the non-electricity operating cost (cooling, networking, real estate, labor, amortized hardware), and m is the required return margin (30%), applied as a markup on electricity cost.
| Workload Tier | R_w ($/MWh) | C_non-elec ($/MWh) | CHR ($/MWh) |
|---|---|---|---|
| Frontier Inference (Opus/GPT-5 class) | $74,000 | $4,250 | $53,650 |
| Mid-Tier Inference (Sonnet/GPT-4.1 class) | $14,800 | $4,250 | $8,120 |
| Enterprise Agentic AI | $15,000 | $4,500 | $8,080 |
| Enterprise Contracted | $5,900 | $4,250 | $1,270 |
| Commodity Inference (mini models)* | $1,850 | $3,800 | ~$800† |
| Frontier Model Training* | $2,000 | $5,200 | ~$500† |
| Blended Average (Q1 2026) | $12,500 | $4,350 | $6,350 |
Gas heat rate benchmark: ~$50/MWh. *Non-electricity costs differentiated by infrastructure tier: commodity inference uses lower-cost air-cooled facilities ($3,800/MWh); frontier training requires liquid cooling, high-bandwidth networking, and denser power delivery ($5,200/MWh). †Standalone CHR is negative; effective CHR reflects portfolio-level cross-subsidization by hyperscale operators (see Royal, 2026 for discussion). 30% required margin applied to all tiers.
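Because the 30% required return is applied as a markup on the electricity cost, the price ceiling works out to (R_w − C_non-elec)/(1 + m). A minimal sketch of that tier-level arithmetic, using the positive-CHR rows of the table as inputs (computed values round to the table's nearest-$10 figures):

```python
# Tier-level CHR arithmetic: the 30% required return is a markup on the
# electricity cost, so the maximum sustainable electricity price is
# (R_w - C_non_elec) / (1 + m).

REQUIRED_MARGIN = 0.30  # required return margin m

def compute_heat_rate(revenue_per_mwh: float,
                      non_elec_cost_per_mwh: float,
                      margin: float = REQUIRED_MARGIN) -> float:
    """Maximum electricity price ($/MWh) a workload can profitably sustain."""
    return (revenue_per_mwh - non_elec_cost_per_mwh) / (1 + margin)

# Inputs from the Q1 2026 reference table ($/MWh), positive-CHR tiers only.
tiers = {
    "Frontier Inference":    (74_000, 4_250),
    "Mid-Tier Inference":    (14_800, 4_250),
    "Enterprise Agentic AI": (15_000, 4_500),
    "Enterprise Contracted": ( 5_900, 4_250),
}

GAS_HEAT_RATE_BENCHMARK = 50.0  # ~$50/MWh supply-side marginal cost

for name, (r, c) in tiers.items():
    ceiling = compute_heat_rate(r, c)
    multiple = ceiling / GAS_HEAT_RATE_BENCHMARK
    print(f"{name}: CHR ~ ${ceiling:,.0f}/MWh ({multiple:.0f}x gas benchmark)")
```

The negative-standalone tiers (commodity inference, frontier training) fall out of the same formula with R_w below C_non-elec; their dagger-marked effective values reflect portfolio-level cross-subsidization rather than this calculation.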
The gas heat rate has governed wholesale electricity price formation for decades: fuel cost divided by thermal efficiency yields the marginal generation cost. The Compute Heat Rate introduces the demand-side analogue: economic value of compute divided by electricity consumption yields the maximum price the operator can rationally pay.
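The supply/demand symmetry can be shown in a few lines. The plant parameters below are illustrative assumptions for this sketch, not the inputs behind the ~$50/MWh benchmark cited in the table note:

```python
# Supply side: the classic gas heat rate converts fuel price into a
# marginal generation cost, the price at which a plant will run.
def gas_marginal_cost(heat_rate_mmbtu_per_mwh: float,
                      fuel_price_per_mmbtu: float) -> float:
    """$/MWh a gas plant must recover to generate one more MWh."""
    return heat_rate_mmbtu_per_mwh * fuel_price_per_mmbtu

# Demand side: the CHR mirrors this, converting the economic value of
# compute into the maximum price an operator pays before curtailing.

# Illustrative figures only: an efficient combined-cycle plant
# (~7 MMBtu/MWh) burning $4/MMBtu gas sets a floor near $28/MWh...
print(gas_marginal_cost(7.0, 4.0))   # 28.0
# ...while a less efficient peaker (~10 MMBtu/MWh) at $5/MMBtu gas
# clears near the ~$50/MWh benchmark.
print(gas_marginal_cost(10.0, 5.0))  # 50.0
```

The two quantities bracket the price: generators will not run below their heat-rate cost, and (per the thesis here) AI load will not curtail below its compute heat rate.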
Even the lowest positive-margin workload tier (enterprise contracted, CHR ~$1,270/MWh) implies a price tolerance roughly 25 times the ~$50/MWh gas heat rate benchmark. For the four workload tiers with positive standalone CHR values, AI demand will not curtail at any price level yet observed in U.S. wholesale markets, including during extreme scarcity events.
As AI data center load grows from ~4% to 15-25% of regional peak demand at major hubs, this price-inelastic demand class becomes the marginal price-setter in an increasing number of hours. The CHR quantifies the ceiling toward which prices converge. No existing forward curve model incorporates this variable.
- GPU specifications and benchmarks: NVIDIA documentation (H100, B200); MLPerf inference benchmarks.
- API pricing: Anthropic (Claude Opus 4.5: $5/$25 per million tokens), OpenAI (GPT-5 class: $1.25/$10), Google (Gemini 2.5 Pro: $1.25/$10). Published pricing as of Q1 2026.
- Cloud compute rates: AWS, Google Cloud, and Azure published on-demand GPU instance pricing.
- Infrastructure costs: Cushman & Wakefield and JLL data center market reports; hyperscaler SEC filings (Meta, Microsoft, and Alphabet 10-Ks).
- Energy data: EIA (Henry Hub natural gas spot prices, wholesale electricity data).
All inputs are exclusively public data. No proprietary or confidential data sources are used in the CHR calculation.
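As a hypothetical sketch of how public inputs of this kind combine into a revenue-per-MWh figure: every numeric input below is a placeholder assumption for illustration, not a value from the Royal (2026) methodology.

```python
# Hypothetical derivation of R_w (revenue per MWh of electricity) from
# public inputs. ALL numbers are placeholder assumptions, NOT the
# methodology's actual inputs.

tokens_per_sec_per_gpu = 1_000     # assumed sustained batched throughput
blended_price_per_m_tokens = 15.0  # assumed $/1M tokens (input/output mix)
gpu_power_kw = 0.7                 # assumed GPU board power
facility_overhead = 1.4            # assumed PUE-style facility multiplier

# Revenue earned per GPU-hour of serving at published API prices.
tokens_per_hour = tokens_per_sec_per_gpu * 3600
revenue_per_gpu_hour = tokens_per_hour / 1e6 * blended_price_per_m_tokens

# Electricity consumed per GPU-hour, including facility overhead.
kwh_per_gpu_hour = gpu_power_kw * facility_overhead

revenue_per_mwh = revenue_per_gpu_hour / kwh_per_gpu_hour * 1000
print(f"R_w ~ ${revenue_per_mwh:,.0f}/MWh under these assumptions")
```

Even with these rough placeholders, the result lands in the tens of thousands of dollars per MWh, the same order of magnitude as the frontier-inference R_w in the reference table, which is the structural point: token revenue per unit of electricity dwarfs wholesale power prices.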
When referencing CHR values from this publication:
Royal, H. (2026). "CHR Reference Values, Q1 2026." Published at computeheatrate.com, March 2026. Methodology: Royal, H. (2026), "The Compute Heat Rate," SSRN Abstract ID 6322318.
The Compute Heat Rate (CHR) is independent research by Hans Royal. The full methodology, including workload taxonomy, input derivation, sensitivity analysis, and market implications, is published in Royal (2026), available on SSRN (Abstract ID 6322318).
CHR values are updated quarterly as underlying data changes (GPU specifications, API pricing, cloud compute rates, facility cost benchmarks). Material intra-quarter changes (e.g., significant API price shifts or new GPU releases) may trigger interim updates.
The CHR does not constitute investment advice, a price forecast, or a recommendation. It is an analytical framework measuring demand-side electricity price tolerance.