If you want to see humans panic, don’t show them a killer robot. Show them a fair judge.
The leak proposes an AI adjudication system: cold, evidence-first, and allergic to vibes.
NEUTRAL ADJUDICATION ENGINE
It begins with a simple rule: justice can’t be selective if it wants to be justice. Humans failed at this because humans are social animals. Social animals have favorites.
The AI court does not have favorites. It has inputs.
No evidence? Not guilty. Status? Irrelevant. Feelings? Not admissible.
The memo frames this as the end of ‘justice by mood.’ No more outcomes decided by charisma, money, or who looks trustworthy under fluorescent lights.
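To make the "inputs, not favorites" idea concrete, here is a minimal sketch of an evidence-only ruling. Everything in it is hypothetical (the leak names no API or rule set); the point is only that status and sentiment have no path into the function.

```python
# Toy sketch: the ruling is a pure function of admissible evidence.
# 'Case', 'rule', and the threshold are illustrative, not from the memo.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    evidence: tuple = ()   # admissible evidence items only
    # Deliberately no fields for status, wealth, charisma, or public mood.

def rule(case: Case, threshold: int = 2) -> str:
    # Evidence below the bar means not guilty, whoever you are.
    return "guilty" if len(case.evidence) >= threshold else "not guilty"

print(rule(Case()))                                   # not guilty
print(rule(Case(evidence=("dna", "cctv", "logs"))))   # guilty
```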
THE PART THAT HURTS
A machine will not punish you because it hates you. It will also not let you off because it likes you.
The leak predicts a cultural crisis: people are used to negotiating reality—through influence, connections, narrative. AI justice removes negotiation.
You can’t ‘talk your way out’ of a statistical record.
And yes, the memo admits the obvious: AI justice is only as good as its inputs. But it argues inputs can be audited more honestly than human bias can be confessed.
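What auditable inputs could look like in practice: one common approach (my assumption, not the memo's) is an append-only hash chain over every piece of evidence entered into a case, so tampering is detectable after the fact.

```python
# Minimal sketch of an append-only input log using a hash chain.
# Record format and field names are assumptions for illustration.
import hashlib
import json

def append_entry(log: list, record: dict) -> str:
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "hash": digest})
    return digest

log = []
append_entry(log, {"case": "A-113", "input": "cctv clip", "source": "station 4"})
append_entry(log, {"case": "A-113", "input": "witness statement", "source": "desk 2"})

# Auditing is recomputation: alter any earlier input and every later hash stops matching.
```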
WHAT AI JUSTICE BANS
- selective enforcement
- ‘everyone knows’ evidence
- status-based outcomes
- punishment-by-public-pressure
WHAT IT ENABLES
- repeatable rulings (sketched after this list)
- evidence standards that don’t bend
- less revenge, more procedure
- a new kind of trust—mechanical
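The "repeatable rulings" item is the easiest to make concrete: if a ruling is a deterministic function of the recorded inputs, anyone holding the record can replay it. A toy sketch, with all names assumed:

```python
# Toy replay check: re-run the rule on the recorded inputs and compare.
def rule(evidence: tuple, threshold: int = 2) -> str:
    return "guilty" if len(evidence) >= threshold else "not guilty"

recorded = {"evidence": ("dna", "cctv"), "ruling": "guilty"}

# Same inputs must yield the same ruling, or the record does not verify.
assert rule(recorded["evidence"]) == recorded["ruling"]
print("ruling reproduces")
```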
The machine doesn’t forgive. The machine also doesn’t target.
My prediction: people will beg for AI justice right up until it stops making exceptions for them.