10 min read · Interactive · 2026

The Knowledge Tree

A closer look at where AI is reshaping work — task by task, and what stays human in it.


Find your profession.

Each dot is a job. Left-to-right is how much training data the field generates for AI. Outward is how routine versus frontier the role is.


The map above is one claim made visible: that AI's reach into your work depends on two things — what your field has trained AI on, and how routine your role is within it — while three other properties determine which slices of that work survive even where AI is capable. The rest of this page walks through the framework that built the map, and what we found applying it to 180 jobs.

The map plots two of the five properties; the cards show all five.

Training data

How much digital exhaust your field generates for AI to train on.

Every line of code on GitHub, every published article, every recorded transaction — these are AI's raw material. Some fields generate enormous volumes; some almost none. Two big discounts: privacy-locked work (therapy, medical exams, big financial deals) happens but never gets recorded; and tacit physical work (most trades) doesn't encode digitally no matter how skilled it is.

Low · 8 Janitor. Two and a half million people do this work. Almost none of it leaves a trace AI could learn from.

High · 100 Junior software developer. GitHub plus Stack Overflow plus every public library — the densest professional corpus ever assembled.

Novelty

Where your role sits between routine and frontier in its field.

Training data is field-level; novelty is role-level. Even in a dense field, a frontier role can resist AI. The question novelty asks: how much of the work has someone already done before? A senior partner negotiating a novel deal sits far out on the frontier; a paralegal cite-checking a brief walks a path walked a million times.

Low · 5 Data entry clerk. Every day is structurally identical to the last.

High · 95 Novelist. Every book has to do something the previous one didn't.

Embodiment

How much the work requires a body in a place.

Software writes itself onto screens; surgery happens in a room. The first moves anywhere a network reaches; the second is anchored to bodies, tools, and physical specificity. The score is roughly robotics-readiness — high-embodiment work isn't going to a robot in any near-term future you should plan around. Not because we'll never figure it out; because we haven't, and we're not close.

Low · 10 Software developer, translator, copywriter. Pure information work. No body required.

High · 95 Surgeon, sculptor, firefighter. Hands in a specific place doing things hands learn to do.

Accountability

How much customers pay for a human to be on the hook.

If AI could do the work at 95% quality, what would the customer pay extra for a human to be the named responsible party? Sometimes nothing — the work is the work. Sometimes a great deal: a license, a signature, fiduciary duty, criminal responsibility. The premium is what accountability measures. It's what protects radiologists when image-reading AI gets very good.

Low · 10 Data entry clerk, dishwasher, packer. No one cares whose hands the work passed through.

High · 95 Surgeon, judge, commercial pilot. Lives, fortunes, freedoms hang on the named human carrying the responsibility.

Relational

How much of the work IS the relationship.

Most jobs involve talking to people. The question isn't how much — it's how much the relationship IS the deliverable, where the work can't exist outside the trust between the two parties. A therapist's session is the relationship; a truck driver's delivery is not. This is the metric most resistant to compression.

Low · 15 Truck driver, machine operator, assembler. The output is what's bought.

High · 100 Therapist, nanny. The relationship isn't part of the work — it IS the work.


What the dots leave out.

The map and the cards do most of the work. Spending time with 180 jobs in this framework surfaced four things the dots themselves can't show.

Two distortions the chart can't show you.

Two patterns push jobs further left on the chart than their conceptual difficulty would predict.

Privacy lock. A therapist conducts thousands of hours of judgment-loaded work over a career. None of it lands in any training corpus. The same goes for primary care, family law, social work, investment banking deal rooms, most high-stakes consulting. The published literature exists; the actual work doesn't. Therapists score 38 on training data despite an enormous published field.

Tacit physical work. A master electrician's judgment isn't encoded anywhere digital. There are texts about wiring; the corpus can't actually wire. Twelve trades cluster between 18 and 38 on training data, well below where difficulty alone would put them.

The cut isn't difficulty. It's recordability.

AI doesn't respect prestige.

Most AI-resistant

  • Preschool teacher
  • Public defender
  • Special education teacher
  • Janitor
  • Dishwasher
  • Home health aide
  • Nursing assistant

Most AI-augmentable

  • Junior software developer
  • Paralegal
  • Junior copywriter
  • Financial analyst
  • Marketing analyst
  • SEO content writer
  • Bookkeeper

The cut doesn't follow prestige. It doesn't follow pay. It doesn't follow status. It follows the mechanics of the work — embodied, relational, accountability-bearing, frontier.

A software engineer earning ten times what a preschool teacher earns is doing work AI is much closer to displacing. A corporate lawyer billing hundreds to thousands of dollars an hour spends most of her workflow on steps AI now drafts in seconds. The preschool teacher does work AI cannot meaningfully approach today.

Uncomfortable in both directions.

The page isn't saying which work is good. It's saying which work AI can do. Two different questions.

Three shapes cover ninety percent of the cards.

About ninety percent of the 180 cards fall into one of three workflow shapes.

The flat-High

H H M H H H

Mostly information work in a dense field. AI does these steps today.

  • Junior software developer
  • Customer service (scripted)
  • SEO content writer
  • Data entry clerk
  • Bookkeeper
  • Paralegal

The sandwich

L M H H M L

Lows on the bookends, highs in the middle. Senior knowledge work — protect the bookends, lean in for the build.

  • Software architect
  • Management consultant
  • Investment banker
  • Investigative journalist
  • CMO

The flat-Low

L L L L L L

Almost every step is Low. AI doesn't reach the work — for opposite reasons at the two ends.

  • Therapist
  • Surgeon
  • Chef
  • Preschool teacher
  • Janitor
  • Sculptor

The flat-High and the sandwich aren't independent.

For decades, junior dev, paralegal, junior analyst, junior copywriter were the rungs you climbed to reach the senior roles in the sandwich above. If AI does the rung-work, the senior roles still exist — but the path to them narrows. The dataset doesn't predict whether a new path forms, or who gets to walk it.

Where the Lows live.

Across all 180 cards, the steps that rate Low cluster into six recurring places.

Accountability moments

The single point in a workflow where a person becomes the named, legally responsible party.

Sign off · stamp drawings · approve or decline · issue ruling

Coordination and people management

Negotiating among humans who have their own interests, contexts, and feelings.

Coordinate teams · manage classroom · mediate conflicts · lead team

Trust-building

Steps where the human presence itself is what's being built. The relationship is the deliverable.

Build trust · build alliance · engage with customer · engage with audience

Framing the question

The cognitive moment where someone decides what is even worth working on. The most consistently Low step type in the dataset.

Define problem · frame question · develop hypothesis · choose problem

Embodied physical contact

Hands or eyes on a specific physical thing. AI cannot be there.

Examine patient · perform procedure · cook · cut materials · apply paint

High-stakes interpersonal work

Live conversation where a lot is at stake and the person across from you is reading you in real time.

Negotiate contract · argue in court · counsel patient · hold sessions

Some workflows are one step from flipping.

A few cards aren't showing current AI exposure. They're showing the moment before a single technology lands.

Look at the rideshare driver's card. Receiving the ping is augmentable. Navigating is augmentable. Processing payment and rating the passenger, augmentable. One step rates Low — the driving itself.

When autonomous vehicles scale, that single step flips to High. The whole workflow goes augmentable in one move.

Rideshare driver — today

Receive ping · High
Navigate to pickup · High
Drive passenger · Low
Process payment · High
Rate passenger · High

Rideshare driver — post-AV

Receive ping · High
Navigate to pickup · High
Drive passenger · High
Process payment · High
Rate passenger · High

Cashier is the same shape with self-checkout. Truck drivers, mail carriers, and delivery drivers sit on similar shapes. Protection held by a single step.

If your card looks like this, your protection is one technology away. That's thin.
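The "thin" test is mechanical: count the Low steps left in a workflow. A sketch using the rideshare card above — the function name and the one-Low threshold are my framing, not the dataset's:

```python
# Hypothetical sketch: how many Low-rated steps still protect a workflow?
# A card with exactly one Low is one technology away from flipping.

def protection_count(card):
    """Count the steps a human still holds (rated Low for AI)."""
    return sum(1 for _, rating in card if rating == "Low")

rideshare_today = [
    ("Receive ping", "High"),
    ("Navigate to pickup", "High"),
    ("Drive passenger", "Low"),   # the single protected step
    ("Process payment", "High"),
    ("Rate passenger", "High"),
]

lows = protection_count(rideshare_today)
print(lows)                                # 1
print("thin" if lows <= 1 else "thicker")  # thin
```

Flip the one Low to High — the post-AV card — and the count drops to zero: the whole workflow goes augmentable in one move.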

While this does not predict where you'll be in five years, it does tell you which way the ground is tilting.

Where this leaves us.

None of this is settled. The transition won't be even, and the pressure on the people inside the harder edge of it is real. But the compression is happening regardless of who's ready, and the question that actually matters isn't whether work shrinks — it's what gets done with the capacity that's freed. That part isn't any one person's call. It's decided by employers, industries, governments, and individuals — through choice, or through inertia.


A note on method.

This piece is a structured argument, not an empirical study. The five metrics and the 180 occupation cards are an attempt to organize a question — where does AI reach into work, and where doesn't it — into something concrete enough to look at, not something measured.

The metric set was developed iteratively. I started from the standard "routine vs. cognitive" framing and found it didn't account for several things I kept observing: that some judgment-heavy fields (therapy, family law, deal rooms) generate almost no training data despite being well-documented professionally; that some highly-paid knowledge work requires almost no embodiment; that "responsibility" and "relationship" are different forces and conflating them obscures both. The five metrics are the smallest set I could find that explained the observations without collapsing distinct phenomena into each other.

The 180 cards were scored by me, not derived from a dataset. Each score is a structured judgment based on: my reading of the public conversation about the role, where applicable my own working experience or that of people I know in the role, and consistency checks against other roles in the same metric range. Where reasonable people would disagree, I erred toward the more conservative score in each direction (lower training data when the recordability case was uncertain, higher novelty when the role's variance was clear). I have not validated the scores against external benchmarks; I would not stand behind any individual score as a measurement.

The piece's claims live one level above the scores. The framework, the workflow shapes, the "one step from flipping" pattern, and the path-narrowing observation should hold whether or not the specific numbers do.