Token Management Is the Number One Skill You Need to Learn Right Now

The flat-rate era is over.

Anthropic quietly killed bundled enterprise tokens in February. Your seat fee used to cover a token allowance. Now it covers platform access. Tokens are billed separately at API rates, on top of whatever you’re paying per user. Their own help centre confirms it; The Register broke the full story. OpenAI followed in April, officially announcing that Codex was moving from per-message credits to token-based metering. Two of the biggest AI providers, both moving in the same direction, within two months of each other.

This is not a pricing footnote. It is a structural change in what it costs to build with AI.

If you are running AI agents, integrating LLMs into a product, or just using Claude heavily at work, token efficiency is now a direct financial skill. Not a nice-to-have. Every bloated prompt, every unnecessary tool call, every context window you failed to trim hits your bill. The people who learn this now have a compounding cost advantage over everyone who doesn’t.

Here is what I have learned building and running six production AI agents.


First, understand what just changed

Anthropic’s Opus 4.7, released last week, shipped with a new tokenizer. The rate card is unchanged. The real cost is not.

The new tokenizer produces up to 35% more tokens for the same input text. Your prompt can cost up to 35% more to run on Opus 4.7 than it did on Opus 4.6, at the same price per token. If you benchmarked your costs on the old model and assumed they’d carry over, they won’t. Test your actual workloads.

This is the pattern to watch: providers change tokenizers, context pricing brackets, and billing structures without changing headline rates. The number on the pricing page stays the same. Your bill does not.


The tips

1. Prompt caching is the single biggest lever

Both Anthropic and OpenAI offer cache-based pricing. Anthropic’s prompt cache cuts cached input token costs by 90%. If you have a system prompt, reference documents, or long context that stays the same across requests, cache it. One setup. Ninety percent reduction on every subsequent call that hits the cache.

Most people using the API are not using this. It is the highest-ROI change you can make.

The rule: anything that appears in every request should be cached. System prompts, persona instructions, knowledge base chunks, code files you’re asking the model to reason about. One caveat: cache writes currently carry a 25% premium over base input, so the saving comes from repeat hits, not the first call. The cache TTL on Anthropic is five minutes. Build your calls to stay warm.

Structure matters for cache hits. Both OpenAI and Anthropic cache from the start of the prompt forward. Put fixed content first: system instructions, tool schemas, reference documents. Put the changing user-specific content last. A prompt that has dynamic content in the middle breaks the cache for everything that follows it.
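
Here is what that ordering looks like as a minimal sketch with the Anthropic Python SDK. The model ID and content are placeholders; the point is the structure and the cache_control marker:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = "You are a support assistant for my shop. Answer concisely."
KNOWLEDGE = open("kb/returns_policy.md").read()  # fixed reference content

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model ID
    max_tokens=1024,
    system=[
        {"type": "text", "text": SYSTEM_PROMPT},
        {
            "type": "text",
            "text": KNOWLEDGE,
            # Everything up to and including this block is cached; subsequent
            # requests that hit the cache pay ~10% of base input price for it.
            "cache_control": {"type": "ephemeral"},
        },
    ],
    # Dynamic, user-specific content goes last so it never breaks the cache.
    messages=[{"role": "user", "content": "Can I return a personalised item?"}],
)
print(response.content[0].text)
```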


2. Right-size the model for each task

Opus costs five times more than Haiku at input, and five times more at output. Claude Sonnet sits between them.

Haiku is fast and cheap. It is entirely capable of routing, classification, summarisation, simple extraction, and structured output generation. Routing an agent decision through Haiku to determine whether a task needs Opus or can be handled locally is not premature optimisation. It is cost architecture.

The mistake is using the most capable model for everything because it feels safer. A planner that decides whether to fetch a file does not need Opus. A model writing a novel does. Know the difference.
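
To make the routing concrete, a minimal sketch: a cheap model classifies the task before an expensive model ever sees it. The model ID strings and the two categories are placeholders, not a recommendation:

```python
import anthropic

client = anthropic.Anthropic()

ROUTER_PROMPT = (
    "Classify the task below as exactly one word: SIMPLE (extraction, "
    "classification, summarisation) or COMPLEX (multi-step reasoning, design)."
)

def pick_model(task: str) -> str:
    """Spend a few Haiku tokens deciding whether Opus is worth its price."""
    verdict = client.messages.create(
        model="claude-haiku-4-5",  # placeholder cheap model
        max_tokens=5,
        system=ROUTER_PROMPT,
        messages=[{"role": "user", "content": task}],
    ).content[0].text.strip().upper()
    return "claude-opus-4-6" if verdict == "COMPLEX" else "claude-haiku-4-5"
```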

I covered the full case for multi-model workflows in Not All AI Is Equal — Stop Pretending It Is — the benchmarks and the practical routing logic are there if you want the detail.


3. Use the Batch API for anything that isn’t real-time

Anthropic’s Message Batches API runs requests asynchronously and returns results within 24 hours at exactly 50% off standard token prices. OpenAI has an equivalent.

If you are running nightly analytics, weekly report generation, bulk data enrichment, or any processing where a human is not waiting on the response, there is no reason to pay full price. Half-price tokens, same quality, same models. The only cost is latency.

I use this for Paper Ritual’s weekly analytics runs. The agent processes a batch of Etsy performance data overnight. The report lands in Telegram by morning. The tokens cost half what they would in real-time mode.
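
A minimal sketch of the Message Batches API via the Anthropic Python SDK; the model ID and payloads are placeholders:

```python
import time
import anthropic

client = anthropic.Anthropic()
rows = ["etsy_listing_001 stats", "etsy_listing_002 stats"]  # placeholder bulk data

# Queue the whole overnight run as one batch; every request is billed at 50% off.
batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"row-{i}",
            "params": {
                "model": "claude-haiku-4-5",  # placeholder model ID
                "max_tokens": 512,
                "messages": [{"role": "user", "content": f"Summarise this record: {row}"}],
            },
        }
        for i, row in enumerate(rows)
    ]
)

# Poll until the batch finishes (up to 24 hours), then collect results.
while client.messages.batches.retrieve(batch.id).processing_status != "ended":
    time.sleep(60)
for result in client.messages.batches.results(batch.id):
    print(result.custom_id, result.result.type)  # "succeeded", "errored", etc.
```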


4. Know your context breakpoints

GPT-5.4 introduced a short/long context pricing split. Below the threshold, input tokens cost $2.50 per million. Above it, $5.00. Same model, same output quality, double the input price once you cross the line.

Anthropic’s pricing is currently flat across context sizes, but the pattern is worth knowing. Before assuming a long context call costs the same as a short one, check the current pricing page for the model you are using. Tokenizer changes and pricing bracket changes happen without fanfare.


5. Trim your context window actively

The default behaviour of most LLM frameworks is to pass the entire conversation history on every request. That is fine for short conversations. For agents that run for multiple turns, it is a quiet cost multiplier.

Every input token costs money. Context from turn 1 that is no longer relevant to what the agent is doing now should not be in the prompt at turn 20. The fix: summarise and compress. After a defined number of turns, distil earlier context into a summary and drop the raw messages. The model still has the relevant history. You stop paying for redundant tokens.
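
A sketch of that compression step, assuming plain-string message contents and a cheap model for the distillation; the thresholds and model ID are arbitrary:

```python
import anthropic

client = anthropic.Anthropic()

def compact_history(messages: list[dict], keep_last: int = 6) -> list[dict]:
    """Distil everything before the last few turns into one summary message."""
    if len(messages) <= keep_last:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old)
    summary = client.messages.create(
        model="claude-haiku-4-5",  # cheap model: this is compression, not reasoning
        max_tokens=300,
        messages=[{"role": "user",
                   "content": "Summarise this conversation in under 150 words, "
                              "keeping decisions and open tasks:\n" + transcript}],
    ).content[0].text
    # The summary replaces the raw turns; you stop paying for them on every call.
    return [{"role": "user", "content": f"(Summary of earlier turns) {summary}"}] + recent
```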

In ZeroClaw, Anthropic’s agentic runtime, this is handled automatically above a threshold. If you are rolling your own agent loop, build this in from the start.


6. Control output length deliberately

Output tokens are priced higher than input tokens. On Claude Opus 4.6, input costs $5.00 per million tokens and output costs $25.00.

Tell the model how long its response should be. Set max_tokens in your API call. Use stop sequences when you only need a specific field or a yes/no answer. Ask for a two-sentence summary rather than a full analysis when a full analysis is not what you need.

A model that naturally writes long responses will write long responses unless you tell it not to. Every sentence you didn’t need costs five times more than a sentence of input.

Structured outputs take this further. Asking the model to respond in JSON with a fixed schema, or to use a bullet list instead of prose, constrains how much it can say. Open-ended prose invites padding. A schema does not. Use the structured output parameter in your API call where the task allows it.
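
Pulling those levers together in one hedged example (Anthropic SDK; the model ID and schema are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-haiku-4-5",   # placeholder: a triage task does not need Opus
    max_tokens=100,             # hard ceiling on output spend
    stop_sequences=["###"],     # cut generation as soon as the answer is complete
    system='Respond with a JSON object only: {"needs_reply": true|false, '
           '"reason": "<one sentence>"}. Then print ### and stop.',
    messages=[{"role": "user",
               "content": "Does this email need a human reply? "
                          "'Hi, just confirming receipt. Thanks!'"}],
)
print(response.content[0].text)
```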


7. Put verbose instructions in cached system prompts, not per-request

If you are passing “you are an expert assistant, think step by step, respond in JSON with the following schema…” as part of every user message, you are paying full price for those tokens on every call. Put all persistent instructions in the system prompt and cache it. They cost 90% less on every subsequent request.

This also includes any in-context examples you pass to guide output format. One cache. Permanent discount.


8. Turn down reasoning effort on routine tasks

OpenAI’s reasoning models expose a reasoning.effort parameter. Anthropic’s extended thinking has an equivalent effort control. Both let you dial how much internal reasoning the model runs before answering.

High effort is appropriate when the task is genuinely hard: multi-step planning, complex code generation, tasks where quality visibly improves with more thought. It is not appropriate for extraction, classification, rewriting, or summarisation. Those tasks do not benefit from extended reasoning and you are paying for tokens the model spent thinking, not just tokens in the final response.

Set effort to low by default. Raise it selectively when you have evidence the task needs it.

One thing to watch on OpenAI: reasoning tokens are billed as output tokens and consume context window even though they never appear in the final answer. If you are watching output tokens and the numbers seem high, check whether reasoning is running in the background.
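
On the OpenAI side, a minimal sketch with the Responses API; the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="o4-mini",                 # placeholder reasoning model
    reasoning={"effort": "low"},     # low by default; raise it with evidence
    input="Classify this ticket as BUG, FEATURE or QUESTION: "
          "'App crashes when I rotate the screen.'",
)
print(resp.output_text)
# Reasoning tokens are billed as output even though you never see them:
print(resp.usage.output_tokens_details.reasoning_tokens)
```

On Anthropic, at least on current models, the analogous dial is the extended-thinking budget (a budget_tokens setting) rather than a named effort level.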


9. Break complex tasks into stages

One giant prompt that asks the model to extract, reason, transform, and generate all at once is usually more expensive than breaking that work into smaller sequential steps. Each stage operates on only the context it needs. None of them carry the dead weight of the others.

The counterintuitive result: more API calls often means lower total cost. A pipeline that extracts structured data cheaply with Haiku, then passes only that structured result to Sonnet for reasoning, costs less than asking Opus to do everything from raw input in a single call.

Design your pipelines as pipelines. Not as monolithic prompts.
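
A sketch of that two-stage shape; model IDs, schema, and file path are placeholders:

```python
import anthropic

client = anthropic.Anthropic()
raw_ticket = open("ticket_4812.txt").read()  # placeholder raw input

# Stage 1: cheap structured extraction. Haiku sees the raw bulk.
extracted = client.messages.create(
    model="claude-haiku-4-5",   # placeholder
    max_tokens=400,
    system='Extract {"customer": str, "product": str, "issue": str} as JSON only.',
    messages=[{"role": "user", "content": raw_ticket}],
).content[0].text

# Stage 2: the reasoning model sees only the compact structured result.
decision = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder
    max_tokens=400,
    system="Given this structured ticket, decide the escalation path and justify it briefly.",
    messages=[{"role": "user", "content": extracted}],
).content[0].text
print(decision)
```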


10. Combine tool calls where you can

In an agent loop, every tool call costs output tokens (the model writing the tool call), then input tokens when the result is passed back, plus the entire accumulated conversation re-sent as input on each round trip.

Agents that make many small, sequential tool calls can accumulate significant token overhead from the scaffolding alone. Where you can, batch operations into single calls. Fetch and summarise in one step rather than two. Retrieve and filter before passing to the model rather than passing raw and asking the model to filter.
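
As a design sketch in Anthropic’s tool format, one combined tool replacing a fetch-then-filter-then-summarise chain; the tool name and schema are hypothetical:

```python
# The server does the narrowing; the model only ever sees the filtered top rows.
tools = [
    {
        "name": "search_orders",  # hypothetical combined tool
        "description": "Fetch orders matching the query, filtered and truncated "
                       "server-side to the top max_rows results.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_rows": {"type": "integer", "default": 20},
            },
            "required": ["query"],
        },
    }
]
```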

This is harder to retrofit than to design in from the start. Think about it early.


11. Test your prompts against the actual tokenizer

Different models tokenize differently. The Opus 4.7 tokenizer change is the most recent example, but tokenizer differences between models have always existed. A prompt that costs X tokens on one model does not necessarily cost X tokens on another.

OpenAI has an official tokenizer at platform.openai.com/tokenizer — paste your prompt and see exactly how it breaks down. Anthropic doesn’t have a first-party equivalent; their token counting is API-based, but claudetokenizer.com is a third-party tool that uses the official API and gives you accurate counts across Claude models. Before optimising, measure. The gains you think you’re getting from shorter prompts may not be what you expect if you haven’t checked what the tokenizer actually does with your text.
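
Both checks script easily. A sketch, assuming tiktoken is installed and the model IDs are current:

```python
import tiktoken
import anthropic

prompt = "Summarise the attached Etsy performance report in five bullets."

# OpenAI: tiktoken counts locally.
enc = tiktoken.encoding_for_model("gpt-4o")  # placeholder model name
print("openai tokens:", len(enc.encode(prompt)))

# Anthropic: counting is an API call; no completion is generated.
client = anthropic.Anthropic()
count = client.messages.count_tokens(
    model="claude-sonnet-4-5",  # counts differ per model and tokenizer version
    messages=[{"role": "user", "content": prompt}],
)
print("anthropic tokens:", count.input_tokens)
```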


12. Build cost visibility into your stack from day one

You cannot manage what you cannot see.

In my agent stack — which I wrote about in A Day in the Life of an Agent — every agent reports token consumption to Prometheus via Pushgateway. I can see which agent is burning the most tokens, which tasks are expensive, and whether a prompt change actually reduced costs or just shifted them. The observability is not optional: it’s how I know whether an optimisation worked.

At minimum, log input and output token counts per request. Aggregate by agent, by task type, and by model. Surface the top ten most expensive operations. You will find the waste quickly once it is visible.
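
A minimal sketch of that logging with the prometheus_client library; the Pushgateway address, metric name, and label scheme are mine, not a prescription:

```python
from prometheus_client import CollectorRegistry, Counter, push_to_gateway

registry = CollectorRegistry()
tokens_total = Counter(
    "llm_tokens_total",
    "Token consumption by agent, task type, model and direction",
    ["agent", "task", "model", "direction"],
    registry=registry,
)

def record_usage(agent: str, task: str, model: str, usage) -> None:
    """Call after every request; `usage` is the usage block on the API response."""
    tokens_total.labels(agent, task, model, "input").inc(usage.input_tokens)
    tokens_total.labels(agent, task, model, "output").inc(usage.output_tokens)
    push_to_gateway("localhost:9091", job=agent, registry=registry)  # placeholder address
```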


The compounding problem

Agents make this worse than standard API usage.

A user sending a single query to a chatbot makes one API call. An agent completing a complex task might make twenty. Paper Ritual — an autonomous Etsy business running on a Raspberry Pi — makes dozens of API calls per daily run: research, pricing decisions, listing generation, analytics. Each tool call, each planning step, each verification loop is a separate API call with its own token cost. Inefficiency that costs $0.01 per user query costs $0.20 per agent task. At scale, that gap is the difference between a viable product and a product that bleeds money.

Token efficiency matters most in agentic systems. That is exactly where most people are not thinking about it yet.

The other agent-specific failure mode: runaway loops. An agent that retries, re-reads context, or gets stuck in a reasoning loop can burn through token budgets in minutes. Hard-cap your iteration count. Add explicit stopping conditions before the agent starts, not as an afterthought. Log token usage per step so you can see where a task went expensive. Agents don’t fail because they’re unintelligent. They often fail because nobody put a ceiling on how much thinking they were allowed to do.
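
A hedged sketch of those ceilings; agent.step and agent.is_done stand in for whatever your loop actually calls:

```python
MAX_STEPS = 15           # hard cap on iterations, set before the agent starts
TOKEN_BUDGET = 200_000   # hard cap on total tokens for one task

def run_task(agent, task):
    spent = 0
    for step in range(MAX_STEPS):
        response = agent.step(task)  # hypothetical: one plan/tool/verify turn
        spent += response.usage.input_tokens + response.usage.output_tokens
        print(f"step={step} total_tokens={spent}")  # per-step visibility
        if spent > TOKEN_BUDGET:
            raise RuntimeError(f"token budget exhausted after {step + 1} steps")
        if agent.is_done(response):  # explicit stopping condition
            return response
    raise RuntimeError("iteration cap hit without completion")
```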


The shift is permanent

The pricing shift from flat-rate to usage-based is not temporary. Both Anthropic and OpenAI have moved in the same direction. Every AI provider will follow, because subsidised flat-rate AI usage is not sustainable at the token volumes that real production workloads generate.

The developers who learn token management now will build cheaper, faster, and with more headroom than those who learn it later when the bill is already large.

Start with prompt caching. It takes one afternoon and the cost reduction is immediate.


Token prices correct as of April 2026. Check the current pricing pages before optimising for specific numbers. They change.

AI Token Management: You’re Using the Wrong Model, and It’s Costing You More Than You Think

Last week I published a piece about why I use six different AI models and why treating them as interchangeable is a mistake. If you haven’t read it, the short version is: different models are genuinely better at different jobs, and the engineers who’ve figured that out are quietly running rings around everyone else.

What I didn’t cover, what I deliberately parked for a separate article, was the money.

Because that’s where this gets interesting. And urgent.

The bill is coming

Most companies right now are in the honeymoon phase with AI spend. Subscriptions get approved, API keys get shared around, and nobody’s asking hard questions about what the organisation actually got for the investment.

That changes at year-end review. It already is changing. And when someone in finance opens the token usage report and asks “what did we get for this?”, the companies with a good answer will be the ones that treated token spend the way any sensible engineering team treats any other resource: with actual strategy.

The ones without a good answer will be the ones who did what most people do by default.

They used their most expensive model for everything.

Not all tokens are created equal

Here’s the thing most people don’t think about when they reach for Claude Opus or GPT-5 for every task: there’s a 5x pricing gap between the top and bottom tier of models from the same provider.

Current API pricing (April 2026, per million tokens input/output):

Model               Input / Output ($/M)   Best for
Claude Opus 4.6     $5 / $25               Complex design, deep reasoning, multi-file architecture
Claude Sonnet 4.6   $3 / $15               95% of coding and building
Claude Haiku 4.5    $1 / $5                Testing, sub-agents, validation, repetitive tasks
Grok 4.1 Fast       $0.20 / $0.50          Brainstorming, adversarial critique (free tier available)
Gemini Flash        ~$0.10 / $0.40         Large-context triage, quick summarisation

That’s Opus costing more than 60 times as much per output token as Gemini Flash. For a task where both produce the same result, using Opus isn’t being thorough. It’s being negligent.

And across a team running hundreds of tasks a week, that gap compounds fast. We’re talking tens of thousands of pounds a year in pure waste, on work that didn’t need the expensive model and isn’t any better for having used it.

The mental model that actually works

Stop asking “which model is best?” and start asking “which model does this job need?”

Think about it the way you’d think about staffing a software team.

Your principal engineer is brilliant, expensive, and finite. You don’t ask them to write unit tests, review boilerplate, or summarise a Jira ticket. You use them where their judgment is genuinely irreplaceable: the architecture calls, the decisions that stay expensive if you get them wrong. Make sure everything else flows to the right level.

AI model selection is exactly the same problem.

Opus is your principal engineer. Sonnet is your senior developer. Haiku is your capable junior who’s surprisingly good when the task is well-defined. Grok is the brutally honest colleague who’ll tear your idea apart for free, which is exactly what you want before you’ve committed any real resources.

The best AI users I know don’t just prompt better. They assign work better.

The opportunity cost nobody talks about

Here’s the part that really matters, and I never see it discussed.

On consumer plans (Claude Pro, the subscription tiers), you don’t have unlimited tokens. You have a session allocation. Once it’s gone, you wait.

But here’s what makes this worse than most people realise: every message you send doesn’t just cost you the tokens in your new question. It costs you the tokens to re-read your entire conversation history. LLMs are stateless; they have no memory between calls, so every new message includes every previous message as input. By message 30, you might be sending 20,000 tokens of history just to get a 100-token answer. A long Opus chat doesn’t just charge you for your question. It charges you Opus rates to re-read everything you’ve ever said to it.

So if you burn Opus tokens on brainstorming you could have had for free on Grok, those tokens aren’t available when you actually need Opus to do the thing only Opus can do.

There’s a compounding trap on top of this. When Opus gives you a partial answer and you reply with a correction, that failed attempt is now baked into the conversation history, re-read on every future turn. Use the edit button on your original prompt instead. It replaces the branch, removes the mistake from history, and stops you paying the re-reading tax on a dead end.

I’ve caught myself doing this. Starting a planning session with Claude (which is my natural reflex) and realising halfway through: I’m not building anything yet. I’m just thinking out loud. This should be Grok.

The discipline of routing tasks to the right model before you start is what separates people who consistently ship good work from people who hit their usage limits at 3pm wondering where all their tokens went.

Route the work, not the ego

Here’s my actual routing flow. I covered the what in the multi-model piece. This is the why, through the lens of cost.

Brainstorming and adversarial critique → Grok (free tier)

Before I spend a single precious Anthropic token on an idea, I’ll throw it at Grok. Grok is ruthless. It’ll find the holes, tell me what’s wrong, push back without the diplomatic softening you sometimes get from Claude. That’s exactly what I want before committing any real resources. And it costs nothing. Why would I use anything else at this stage?

Research → Perplexity

Every time. It’s hypertuned for research in a way that genuinely surprises me. Citations, synthesis, current information: Perplexity just gets this right. So that’s where the exploratory work goes, not my Claude quota.

Large-context triage → Gemini Flash

When a task involves scanning a large codebase or a massive document set, Gemini Flash at near-zero cost handles the breadth. It identifies what matters, isolates the relevant sections, hands a focused context to the model that actually needs to think about it. You don’t need a principal engineer to read the entire file tree; you need them to look at what the triage found.

Architecture and complex design → Claude Opus

This is where the premium tokens earn their keep. When the reasoning chain matters, when a wrong decision stays expensive for years, when I need a thinking partner who’ll push back correctly rather than just agree: that’s Opus. Not because it’s the most powerful model available, but because this is the class of task where the quality difference is real and the stakes justify the cost.

95% of actual coding → Claude Sonnet

This surprises people. The SWE-Bench gap between Sonnet and Opus is now less than 1.5 points. For standard implementation work (which is most of it), Sonnet is faster, cheaper, and produces the same result. The only time I genuinely need Opus for coding is when a change spans massive context with complex interdependencies. That’s maybe 5% of my build work. Everything else is Sonnet.

Testing and sub-agents → Haiku

The one most people overlook. Test execution doesn’t need frontier intelligence. It needs speed and reliability. Haiku at $1/$5 per million tokens can run a lot of tests. Burning Opus tokens on a test run is like asking your principal engineer to check the CI pipeline. Technically they can; it’s just an appalling use of their time.
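
If it helps, the whole flow compresses into a lookup. A sketch; the identifiers are shorthand for the routing above, not real API model IDs:

```python
# A static routing table as a first pass; refine it with your own task data.
ROUTING = {
    "brainstorm": "grok-4-fast",        # free tier: adversarial critique costs nothing
    "research": "perplexity",           # citations and current information
    "triage": "gemini-flash",           # large-context scanning at near-zero cost
    "architecture": "claude-opus",      # premium tokens where wrong decisions stay expensive
    "implementation": "claude-sonnet",  # 95% of coding
    "tests": "claude-haiku",            # speed and reliability, not frontier intelligence
}

def route(task_type: str) -> str:
    return ROUTING.get(task_type, "claude-sonnet")  # sensible default, not the priciest
```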

If you’re running multi-agent pipelines, the economics here are even more pronounced; every sub-agent call compounds. I wrote about what agentic systems actually look like in practice if you want the concrete version of this.

What this looks like at scale

For a medium coding task – say, 200k input tokens, 50k output – the numbers look like this:

  • Pure Opus workflow: 200k input at $5/M plus 50k output at $25/M comes to $2.25, roughly £1.70 per task
  • Mixed routing (Haiku for tests, Sonnet for implementation, Opus for design): ~£0.20 – £0.35
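
If you want to sanity-check these numbers against your own workloads, the arithmetic scripts easily. A minimal sketch using the rate card above; the mixed split is illustrative, and caching and batch discounts (not modelled here) pull the mixed figure lower still:

```python
PRICES = {  # $ per million tokens (input, output), from the April 2026 table above
    "opus": (5.00, 25.00),
    "sonnet": (3.00, 15.00),
    "haiku": (1.00, 5.00),
}

def task_cost(stages):
    """stages: list of (model, input_tokens, output_tokens) tuples."""
    return sum(
        inp / 1e6 * PRICES[model][0] + out / 1e6 * PRICES[model][1]
        for model, inp, out in stages
    )

# Pure Opus: everything through the most expensive model.
print(task_cost([("opus", 200_000, 50_000)]))  # 2.25 dollars

# Mixed routing: an illustrative split of the same 200k/50k workload.
print(task_cost([
    ("haiku", 80_000, 10_000),    # test runs
    ("sonnet", 100_000, 35_000),  # implementation
    ("opus", 20_000, 5_000),      # design decisions only
]))  # ~1.18 dollars before caching and batch discounts
```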

Scale that to a team running a few hundred tasks a week. The annual difference runs to tens of thousands of pounds, with better results, because each model did what it’s actually good at rather than one expensive model doing everything adequately. For a real picture of what that kind of build actually involves, my journey building an agentic developer gives you the unfiltered version.

The enterprises that will look back on their 2026 AI spend with a clear conscience will have done three things: defined a model routing policy, used batch processing and prompt caching where possible (both Anthropic and OpenAI offer 50% discounts for batch API; prompt caching can cut input costs by up to 90% for repeated context), and treated token spend as an engineering metric, not just a finance line.

Cost per task. Quality per token. Routing efficiency. These are performance indicators. The teams that measure them will outperform the ones that don’t.

Three questions before you open any chat window

I’ve simplified my own decision process to three questions. You can use these starting tomorrow:

  1. Does this require deep reasoning, or is it just execution? Deep reasoning (architecture, ambiguous problems, multi-system tradeoffs) earns the premium model. Execution that follows a clear spec doesn’t.
  2. Could a cheaper model get me 80–95% of the way there? Be honest. Most tasks have a 90% solution available at a tenth of the cost. If 90% is good enough for the task, 90% is the right answer.
  3. Am I using a premium model because I need it, or because it’s convenient? Convenience is the real budget killer. Defaulting to the model you have open is how waste compounds invisibly.

The principle

The people who win with AI in the next two years won’t be the ones using the most powerful models.

They’ll be the ones who worked out that intelligence is a finite resource — and spent it accordingly.


Token ROI is the discipline of using the smallest model that reliably does the job, and reserving your expensive reasoning for the moments where quality actually changes the outcome.

What’s coming next

I’ve been thinking about building something to make this easier: a model selector tool where you describe what you’re trying to do and get a current, task-calibrated recommendation on which model to use. Not a static list (those go stale fast as models shift), but something live. I’m calling it the LLM Council; the best recommendation isn’t one model’s opinion, it’s a consensus view that updates as capabilities evolve.

If that sounds useful, say so in the comments. I’ll build it if there’s appetite.


Miss the companion piece? Not All AI Is Equal — Stop Pretending It Is covers the which model for which task. This one covers why it matters economically.

Your Second Brain Shouldn’t Live in Someone Else’s Database

The average knowledge worker has their thinking scattered across browser tabs, Slack threads, email chains, and notebooks that haven’t been opened since last quarter. Most of it is gone the moment the tab closes. The rest is findable in theory and lost in practice.

A second brain fixes that — a single place where your thinking accumulates, connects, and compounds over time. The idea isn’t new.

What is new is what happens when you give that brain to an AI. Not as a search index. As context. Suddenly the AI you’re working with knows about the decision you made three months ago, the constraint you discovered last week, the small but critical detail you’d long forgotten because it was buried in a note from a Tuesday in February. It doesn’t just retrieve — it reasons. It helps you build projects with context no chat window, no SaaS platform, no fresh conversation can match.

The question isn’t whether to build one. It’s whether to build it in a way that actually works — or hand your thinking to someone else’s platform and hope they’re still around in three years.


A video dropped yesterday. “Claude Code + Karpathy’s Obsidian = New Meta.” 189,000 subscribers. Already circulating in the feeds of everyone who thinks about AI and productivity.

I’ve been running this setup for months.

Not because I saw a video. Because I tried everything else first and this is what survived.


I Did It the “Proper” Way First

When I wanted to build a second brain with AI, I did what any technically-minded person does: I reached for the right tools. Vector embeddings. Pinecone. Ingestion pipelines. I built an HR chatbot with N8N and Pinecone as the backend. I tried wiring Notion up with a Pinecone-backed retrieval layer.

These are legitimate approaches. I’ve shipped them in production. I know what they take.

And for a personal knowledge system, they were completely wrong.

Here’s what nobody tells you about RAG: the pipeline is the product. Before you can search your knowledge, you have to build and maintain the system that turns your knowledge into searchable vectors. Every new note is a workflow step. Every source needs chunking, embedding, syncing. When your source material changes, your embeddings drift. The thing that was supposed to help you think now needs its own maintenance schedule.

I didn’t want to maintain a pipeline. I wanted to think.


What I Actually Run

The setup is embarrassingly simple.

Obsidian for the vault. Every note is a markdown file. Every file lives on my machine, backed by a private Git repository.

Claude Code as the AI layer. It talks directly to the filesystem — reads files, writes files, updates notes, maintains structure. No API middleware. No ingestion step. No embeddings.

A CLAUDE.md file that tells Claude the rules of the system: where things live, what conventions to follow, how to behave in this vault specifically.

Session skills — a /session-start that warm-starts every conversation from vault context, and a /session-end that writes a structured note capturing what we did, what decisions were made, and what to pick up next time.
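
For flavour, a stripped-down CLAUDE.md might look like this. The conventions here are illustrative, not a standard; yours will differ:

```
# Vault rules

- Notes live in /notes, daily logs in /journal, projects in /projects.
- Every new note gets frontmatter: title, date, tags.
- Link related notes with [[wikilinks]]; prefer linking to duplicating.
- On /session-start: read the latest /journal entry and any open project notes.
- On /session-end: write a structured note: decisions made, open threads, next steps.
```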

That’s the minimum viable version. If you have Obsidian and any LLM that can interact with the filesystem — Claude Code, Cursor, Windsurf, take your pick — you can build this today.


Why This Beats RAG for Personal Knowledge

Three reasons. All learned the hard way.

1. No ingestion tax.

With RAG, every piece of knowledge has to pass through a pipeline before it’s usable. With this setup, I write a note and it exists. Claude reads it when it’s relevant. That’s the entire workflow. Half the time, I don’t even run /session-start manually. Claude just does it. The friction is so low it effectively disappears.

2. Markdown is portable. Databases aren’t.

Notion is prettier. I genuinely don’t care. Function over style, every time. My notes are markdown files. They open in any editor, on any machine, without an account or an API key. If I switch from Claude Code to something else tomorrow, my vault doesn’t care. The knowledge stays mine. I’ve watched people lose years of Notion content to export limitations. I’ve seen Roam users scrambling when pricing changed. Your knowledge shouldn’t be held hostage to a product decision you had no part in.

3. Data sovereignty.

This is the one I feel most strongly about. The video recommends Pinecone — a SaaS vector database. NotebookLM — Google’s product. The entire “new meta” stack has your most personal knowledge distributed across third-party platforms, each with their own terms of service, their own pricing models, their own sunset risk.

My knowledge lives on my machine and in my own Git repository. Change IDE — still works. Change LLM provider — still works. Anthropic disappears tomorrow — still works.


The Privacy Question You’re Probably Asking

You might be thinking: aren’t you just sending your notes to Anthropic instead of Pinecone? Fair challenge. The difference is storage versus processing — your notes pass through to generate a response and that’s it. I’m on a consumer plan with model training opted out, which takes about ten seconds in account settings. My notes don’t live on Anthropic’s servers. With Pinecone, your data does — permanently, on their infrastructure, under their terms. That’s the meaningful difference.

If you want zero data leaving your machine at all, swap Claude Code for a local model. Ollama works. The vault doesn’t care which LLM is reading it. That’s exactly the point — the system doesn’t depend on any single vendor being trustworthy. You can swap the LLM layer without touching your knowledge. Try doing that with your Pinecone index.


What It Looks Like at Scale

The minimum viable setup — Obsidian plus a file-aware LLM — is genuinely useful from day one.

But I’ve been running something more elaborate. There’s a second agent in this system: Jarvis, running on a Raspberry Pi 5. Jarvis generates my daily briefing each morning, maintains the vault overnight, handles the housekeeping I don’t want to think about. My own entry points now include voice notes from Meta Ray-Ban smart glasses, Telegram messages, and a custom Jarvis UI with TTS. All of it ends up in Obsidian. That’s a different article. The point is: the foundation is just markdown files and a terminal. Everything else is built on top of that.


What I Haven’t Solved Yet

One honest gap: the hyperlink problem.

Obsidian’s power is in the connections between notes — the [[wikilinks]] that build a graph of your thinking. Right now, those links are created manually or as a side effect of Claude working in the vault. There’s no agent that looks at new notes overnight and says: this connects to that, and that connects to this. It’s a solvable problem. I just haven’t built it yet. I mention it because the “new meta” framing tends to imply a finished system. This one isn’t finished. It’s a living thing, and that’s partly why it works.


The Actual New Meta

The video is good. The instinct is right. Reasoning over your knowledge, not just retrieval of it — yes. Structured notes rather than disconnected chunks — yes.

But the “meta” isn’t Claude Code plus Obsidian. The meta is owning your knowledge stack.

Simple enough to maintain. Portable enough to survive tool changes. Private enough that you control what it knows. You don’t need a vector database. You don’t need an embedding pipeline. You need a folder of markdown files and something that can read them.

Start there.


Next: adding an overnight agent to the system — what Jarvis actually does and why it changes everything.