AI won’t take your job. Apathy will.
Sep 8, 2025



The person I’m worried about isn’t the engineer.
It’s the manager who says, “Let’s wait for a policy.”
Here’s the thing: every company I know has two tribes right now.
The Reluctant—meet weekly, make a task force, write a memo.
The Tinkerers—ship tiny automations on Tuesdays and fix them on Thursdays.
Guess which group wins.
I’ve been both. Early at Concord, I waited for “the perfect system” before touching pricing ops. It cost us a quarter. Later, we started shipping stupid-simple tools—contract summarizers, clause finders, intake triage—and our cycle time dropped without a single new hire. Not magic. Just motion.
This isn’t about “AI taking jobs.” It’s about whether you’re building paved roads while everyone else is still drawing maps.
What the tinkerers do differently
They pick tiny loops, not moonshots.
Summarize a 12-page MSA into five bullets for Sales. Draft a first-pass playbook for common redlines. Auto-tag contracts with renewal flags. Hours, not months.
They keep humans in the loop on the right things.
Copilot for the 80% of routine cases; human judgment for the 20% that decide risk or revenue. Clear lanes. No drama.
They instrument reality.
Time-to-yes. % standard paper. # of escalations. If the tool isn’t beating the baseline, it doesn’t ship. No “AI theater.”
They protect the schema like a hawk.
New prompt? New field? If it pollutes ARR, ramps, or renewal flags, it’s a no until RevOps says the model holds.
A practical playbook so Legal/Ops doesn’t get left behind
1) Start where you already own the outcome
Intake triage: route by issue, risk, and segment.
Clause lookup: pull approved language + precedent in one click.
Executive summary: “What we sold / price / term / risk” in 6 bullets.
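The intake-triage step above can be sketched as a plain rules function. This is a hypothetical illustration, not Concord’s actual routing logic: the field names, queues, and risk threshold are all assumptions.

```python
# Hypothetical intake-triage router: route a contract request by
# issue type, risk, and customer segment. Field names, queue names,
# and the risk threshold are illustrative assumptions.
def route_intake(request: dict) -> str:
    """Return the queue an intake request should land in."""
    if request.get("issue") in {"indemnity", "novel_risk"}:
        return "legal_review"  # off-limits lane: humans only
    if request.get("segment") == "enterprise" or request.get("risk", 0) >= 7:
        return "senior_counsel"
    return "self_serve"  # standard paper, standard terms


print(route_intake({"issue": "nda", "segment": "smb", "risk": 2}))
```

Even a rules table this crude beats untracked email triage, because every routing decision is now explicit and testable.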
2) Make it safe by design
Use a sandbox: templates, public docs, and fully redacted examples.
No PII or customer secrets until Security clears the vendor.
Log every assist: who used it, for what, and the output snapshot.
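“Log every assist” can be as simple as an append-only JSONL file. A minimal sketch, assuming you store a hash of the output rather than the output itself (the snapshot lives wherever your documents already live):

```python
import datetime
import hashlib
import json

# Minimal assist log: who used the tool, for what, and a fingerprint
# of the output so it can be matched to a stored snapshot later.
# The file format and field names are illustrative assumptions.
def log_assist(path: str, user: str, purpose: str, output: str) -> None:
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "purpose": purpose,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

One line per assist is enough to answer Security’s first question later: who used what, when, and what came out.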
3) Publish the Prompt & Playbook Pack
One living doc with: model, prompt, guardrails, “good vs. bad” examples.
Every prompt maps to an owned KPI (e.g., time-to-first-draft).
4) Run controlled A/Bs
20 matters this month: 10 with copilot, 10 without.
Track cycle time, accuracy, escalations, and rework.
If it’s +20% faster with equal quality, make it default.
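The promotion gate in step 4 is one comparison. A hedged sketch, with made-up numbers, assuming you measure cycle time in hours and sign off on quality separately:

```python
from statistics import median

# Sketch of the step-4 gate: copilot becomes the default only if it
# is at least 20% faster at equal quality. The 0.20 threshold comes
# from the playbook above; everything else is illustrative.
def should_make_default(baseline_hours, copilot_hours, quality_ok: bool) -> bool:
    speedup = 1 - median(copilot_hours) / median(baseline_hours)
    return quality_ok and speedup >= 0.20
```

Median, not mean, so one gnarly outlier matter doesn’t decide the experiment.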
5) Set three lanes for governance
Off-limits: high-stakes clauses, novel risks, special indemnities.
Copilot: drafts, summaries, comparisons, playbook suggestions.
Autopilot: tagging, metadata extraction, template fills.
6) Wire it to revenue
Contract metadata (SKU, term, price, renewal flag, ramp) updates CRM and billing, not the other way around.
If AI extracts it, it must reconcile to the contract, or it’s thrown out.
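The reconciliation rule in step 6 can be sketched in a few lines: extracted metadata flows downstream only if every field matches the contract of record, otherwise the whole extraction is discarded. Field names here are illustrative assumptions, not a real schema:

```python
# Sketch of the step-6 rule: AI-extracted metadata reaches CRM and
# billing only if it reconciles to the contract of record.
# REQUIRED field names are illustrative, not a real schema.
REQUIRED = ("sku", "term_months", "price", "renewal_flag")


def reconcile(extracted: dict, contract: dict):
    """Return the vetted record, or None if any field disagrees."""
    for field in REQUIRED:
        if extracted.get(field) != contract.get(field):
            return None  # thrown out, per the rule above
    return {f: extracted[f] for f in REQUIRED}
```

All-or-nothing is deliberate: a record that is 75% right is more dangerous in billing than no record at all.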
7) Train like it’s a sport
Weekly 30-minute “show the mess”: bad outputs, fixes, new patterns.
Rewards for deleted steps, not just added tools.
A 30/60/90 you can actually run
Day 0–30
Pick three low-risk use cases: intake triage, clause finder, contract summary. Build in a sandbox. Measure against baseline. Ship to a small group.
Day 31–60
Expand to playbook drafting and redline suggestions on standard paper only. Start metadata extraction for SKU/term/price/renewal into the CRM. Publish your first “Contract Health” dashboard (standardization %, discount buckets, ramp exposure).
Day 61–90
Promote two workflows to default. Remove one manual step from the process entirely. Define off-limits/copilot/autopilot lanes in policy. Tie at least one marketing/sales KPI to contract reality (e.g., pipeline that converts to standard terms at target price).
Objections you’ll hear (and how to answer)
“What about accuracy?”
Better than baseline or it doesn’t ship. Humans still sign off on risk. We measure errors and roll back fast.
“Security won’t allow it.”
Great—start with redacted corpora and on-prem or vendor-approved tools. Your policy can evolve while you learn.
“Our work is too nuanced.”
Perfect. Keep the nuance. Automate the 40 minutes of retrieval, formatting, and comparison around the five minutes of judgment.
“We don’t have time.”
You don’t have time to keep re-typing the same explanation of limitation of liability either. Automate that and buy your time back.
The uncomfortable truth
AI won’t replace your judgment. It will expose whether you’re using that judgment on hard problems—or hiding behind busywork and committees.
The teams that win will be the ones who move first on the boring stuff: routing, summarizing, tagging, reconciling. That’s where the hours hide. That’s where the power is.
Pick one workflow. Ship a tiny loop this week.
Not perfect. Real.
Because AI won’t take your job.
Apathy will.
About the author

Matt Lhoumeau
CEO
Product
Legal




© 2025 Concord. All rights reserved.