Price your AI calls in satoshis
I built a cost enrichment Lambda that decorates every Bedrock invocation with an estimated_cost_usd field. The numbers are correct and completely useless. Sonnet 4.6 averages $0.000501 per call. Opus 4.6 averages $0.002490. I stared at those in CloudWatch for a week before admitting I couldn’t feel the difference between them. Five-tenths of a thousandth of a dollar versus two and a half thousandths of a dollar — my brain rounds both to “basically zero.”
That rounding is the whole problem. When every individual call feels free, developers stop thinking about model selection. Opus for a classification task that Haiku could handle. Opus for a summarization step that Sonnet handles fine. Each call is a fraction of a cent, so nobody flinches — and then the monthly bill arrives with a number that has commas in it, and nobody can trace it back to any single decision because no single decision felt expensive.
I needed a unit where the per-call cost is an integer. Something where 5x more expensive looks 5x more expensive without doing mental arithmetic on the fourth decimal place.
Satoshis work, as long as you treat them as a pegged unit rather than a live exchange rate. I hardcode one sat at $0.00001. I don't care what Bitcoin actually trades at; I care about order of magnitude, and at that peg, sats put AI inference costs into a range where humans can do comparison shopping at a glance.
The Lambda now emits both fields:
{
  "estimated_cost_usd": 0.000501,
  "estimated_cost_sats": 50
}
Sonnet is 50 sats. Opus is 250 sats. Haiku is 2 sats. I don’t need to explain the cost structure to anyone anymore — the integers do it. A developer looking at a log line that says 250 sats next to one that says 2 sats understands immediately that they just spent 125x more on that call. In dollars, the same comparison is $0.002490 versus $0.000020, and the brain glazes over before it finishes parsing.
The conversion is trivial — divide USD by 0.00001, round to the nearest integer. I hardcoded the sat-to-dollar ratio because I don’t need real-time BTC prices. I need a stable human-scale unit. If Bitcoin moves 30%, my sats column is off by 30%, and I still don’t care, because the relative costs between models stay the same, and that’s what drives decisions.
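The whole enrichment step fits in a few lines. A minimal sketch, assuming the Lambda has already computed estimated_cost_usd per invocation; the field names match the log output above, but the function name and record shape here are illustrative, not the actual handler:

```python
# Hardcoded peg, not a live BTC rate: relative costs between models
# are what matter, so a stale ratio is fine.
USD_PER_SAT = 0.00001

def add_sats(record: dict) -> dict:
    """Decorate a cost record with an integer sats field."""
    usd = record["estimated_cost_usd"]
    # Divide by the peg and round to the nearest integer sat.
    record["estimated_cost_sats"] = round(usd / USD_PER_SAT)
    return record

print(add_sats({"model": "sonnet", "estimated_cost_usd": 0.000501}))
# → {'model': 'sonnet', 'estimated_cost_usd': 0.000501, 'estimated_cost_sats': 50}
```

Note that round() on a cheap enough call can yield 0 sats; if you want sub-sat calls to stay visible, clamp to a minimum of 1 instead.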
This isn’t a cryptocurrency argument. I don’t hold Bitcoin and I’m not suggesting you settle invoices in it. Satoshis are just the most widely understood unit that happens to sit at the right order of magnitude for per-call AI costs. If someone invented a “millicent” that caught on, I’d use that instead. The insight is that sub-cent costs need a sub-cent unit with a name, and sats already have one.
The moment the per-call cost became a legible integer, the team started making different choices about which model to route to. Not because anyone mandated it — because the cost was finally visible at the granularity where the decision happens.