Cranking out Good Code
The token economy of software work: three roles that will matter next

The token economy of software work

The core argument is that software’s unit of work is shifting from instructions to tokens. In plain terms: from humans writing every step to humans specifying outcomes, constraints, and context while models execute more of the path.

I think that’s directionally right, and it maps to what I’m seeing in real teams.

The three roles that stand out

There are three emerging tracks:

  1. Orchestrators — people who define outcomes, break down problems, and manage model quality/cost.
  2. Systems engineers for AI infrastructure — people who build the routing, eval pipelines, context systems, and reliability layer.
  3. Domain translators — people with deep domain expertise plus enough technical fluency to direct AI at high-value problems.

I see this as labor specialization under a new economic constraint: intelligence is now purchasable, and increasingly abundant.

Why this is an economics story (not just a tooling story)

My undergrad was in economics at BYU, and this shift looks like a classic productivity transition: a key input (intelligence) has suddenly become dramatically cheaper and more abundant.

That means comparative advantage changes. The highest-value work is less about typing syntax and more about allocating intelligence effectively against business constraints.

The hard truth companies don’t want to hear

A lot of teams are still in “AI makes mistakes, therefore we wait” mode.

That sounds prudent, but in many cases it’s organizational inertia wearing a risk-management costume.

The better question is: Have we given AI the environment required to succeed?

If the answer is no, poor results are not evidence that the direction is wrong. They’re evidence that the system design is incomplete.

I felt this personally in my last role. Part of why I moved on was a strategic mismatch: I believed we needed to invest now in context engineering and AI-native developer workflows; leadership did not. Time will tell, but I’m confident this investment becomes table stakes for almost every software company.

What companies should do now

If you’re serious about staying relevant, invest in three things immediately:

1) Context engineering

Treat context as infrastructure. Build and maintain the docs, schemas, examples, and constraints that let AI reason accurately in your domain, and version them like any other production asset.
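One way to make "context as infrastructure" concrete is to treat the docs, schemas, examples, and constraints as a versioned bundle that gets assembled into every AI call. A minimal sketch (the `ContextPack` structure and all the domain content are hypothetical, not from the original post):

```python
from dataclasses import dataclass, field


@dataclass
class ContextPack:
    """A maintained bundle of domain context that AI calls draw on."""
    docs: dict = field(default_factory=dict)        # name -> prose documentation
    schemas: dict = field(default_factory=dict)     # name -> schema description
    examples: list = field(default_factory=list)    # worked examples of correct behavior
    constraints: list = field(default_factory=list) # hard rules the model must respect

    def render(self, task: str) -> str:
        """Assemble one prompt-ready context block for a given task."""
        parts = [f"# Task\n{task}"]
        if self.constraints:
            parts.append("# Constraints\n" + "\n".join(f"- {c}" for c in self.constraints))
        for name, text in self.schemas.items():
            parts.append(f"# Schema: {name}\n{text}")
        for name, text in self.docs.items():
            parts.append(f"# Doc: {name}\n{text}")
        for i, ex in enumerate(self.examples, 1):
            parts.append(f"# Example {i}\n{ex}")
        return "\n\n".join(parts)


# Hypothetical billing-domain pack, for illustration only.
pack = ContextPack(
    docs={"billing": "Invoices are immutable once issued."},
    schemas={"invoice": "{id: str, total_cents: int, status: str}"},
    examples=["Refunds create a credit-note record; they never edit the invoice."],
    constraints=["Never mutate an issued invoice."],
)
prompt = pack.render("Add a refund endpoint.")
```

The point of the structure is that the pack lives in the repo and evolves under review, so every model call starts from the same curated domain knowledge instead of ad hoc prompting.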

2) AI-native developer experience

Build internal tools (especially CLI + automation entry points) so models can act safely and repeatably against real workflows.
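What "safe and repeatable entry points" can look like in practice: a small CLI where every action supports a dry run and emits structured output a model (or a human) can inspect before anything is applied. A sketch with hypothetical names (`devctl`, `run_migration` are illustrative, not a real tool):

```python
import argparse
import json


def run_migration(name: str, dry_run: bool) -> dict:
    """Hypothetical workflow action that returns a machine-readable result."""
    if dry_run:
        return {"action": "migrate", "name": name, "applied": False,
                "plan": [f"would apply {name}"]}
    return {"action": "migrate", "name": name, "applied": True}


def main(argv=None) -> int:
    parser = argparse.ArgumentParser(prog="devctl",
                                     description="AI-safe workflow entry point")
    sub = parser.add_subparsers(dest="cmd", required=True)
    mig = sub.add_parser("migrate", help="apply a named migration")
    mig.add_argument("name")
    mig.add_argument("--dry-run", action="store_true",
                     help="show the plan without applying it")
    args = parser.parse_args(argv)
    result = run_migration(args.name, args.dry_run)
    print(json.dumps(result))  # structured output an agent can parse
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```

Dry-run flags plus JSON output give an agent a tight loop: propose, inspect the plan, then apply, with every step auditable.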

3) Evaluation and operations

Measure quality, latency, cost, and failure modes continuously. If you can’t evaluate it, you can’t scale it.
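A minimal eval harness covering those four dimensions might look like this. Everything here is a sketch under stated assumptions: `fake_model` stands in for a real model call, and the flat per-call cost is a placeholder for real token accounting.

```python
import statistics
import time


def evaluate(model_fn, cases, cost_per_call: float) -> dict:
    """Run model_fn over (input, expected) pairs; report quality, latency,
    cost, and the concrete failure cases for inspection."""
    latencies, failures = [], []
    correct = 0
    for prompt, expected in cases:
        start = time.perf_counter()
        output = model_fn(prompt)
        latencies.append(time.perf_counter() - start)
        if output == expected:
            correct += 1
        else:
            failures.append({"input": prompt, "got": output, "expected": expected})
    return {
        "accuracy": correct / len(cases),
        "p50_latency_s": statistics.median(latencies),
        "total_cost_usd": cost_per_call * len(cases),
        "failures": failures,
    }


# Stand-in for a real model call.
def fake_model(prompt: str) -> str:
    return prompt.upper()


report = evaluate(fake_model, [("ok", "OK"), ("no", "yes")], cost_per_call=0.01)
```

Even a harness this small changes the conversation: instead of "AI makes mistakes," you get a failure list you can fix with better context, and a cost/latency trend you can manage.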

Where this goes next

I expect many teams to converge on a new operating model: orchestrators directing purchasable intelligence, systems engineers keeping the AI infrastructure reliable, and domain translators aiming it at the highest-value problems.

The winners won’t be teams with the most demos. They’ll be teams that redesign work around this stack faster than everyone else.

That’s worth saying out loud now, not in hindsight.

This blog post was inspired by the video "$1,000 a Day in AI Costs. Three Engineers. No Writing Code. No Code Review. But More Output."


If you’re working through this transition in your own org, I’d love to compare notes. I’m especially interested in practical patterns for context engineering, eval design, and AI-native delivery workflows.

