Things AI Made More Obvious: Performance Reviews
A limited series that shines a bright light into the dark corners of AI in Leadership
The Workday notification that Jordan’s boss had completed their Manager Evaluation landed at 4:47 PM on a Friday.
Three paragraphs. Perfectly formatted. Development goals that sounded specific without actually being specific. Action items that could apply to anyone on the team.
Jordan read it twice, then opened ChatGPT and pasted in a prompt: “Write a performance review for a mid-level marketing manager showing strong initiative but needing growth in strategic thinking.”
The output matched their review almost word for word.
Their manager hadn’t written this. An algorithm had.
And the thing is—Jordan got it. End-of-year reviews are brutal when you’re managing twelve people and none of them are actually struggling. Everyone’s doing fine. Nobody’s a problem. So you sit there trying to manufacture meaningful feedback for people who just need you to not get in their way.
The AI makes it faster. Cleaner. Less painful for everyone.
Except it doesn’t.
Because what Jordan needed wasn’t a document. It was a conversation with someone who’d been paying attention. Someone who could name the specific moment when Jordan had stepped up, or point to the pattern they’d been missing, or explain why the promotion wasn’t happening yet despite the work being promotion-level.
The AI couldn’t give them that. And apparently, neither could their manager.
Not because the manager was lazy or incompetent. Because the system had made it possible to manage twelve people without actually managing any of them. To check the boxes without doing the work. To outsource the hard part—the actual observation, judgment, and courage required to develop people—to a tool designed to sound thoughtful without being thoughtful.
The AI didn’t fail Jordan. The AI did exactly what it was built to do: generate plausible-sounding corporate text that mimics feedback without requiring any of the human work that makes feedback useful.
The system failed Jordan. The system that turned performance management into a calendar event. That separated development from daily work. That made it possible for someone to be responsible for twelve people’s growth without ever watching them actually work.
The AI just made it easier to pretend otherwise.
Here’s what your AI prompt can’t capture: the specific Tuesday morning when Jordan walked into a disaster no one else wanted to touch. The way they navigated the politics. The client call where they salvaged a relationship someone else had damaged. The hundred small decisions that added up to someone ready for something bigger.
Your AI can generate “demonstrates strong initiative” without knowing what initiatives Jordan actually demonstrated. It can produce “opportunities for growth in strategic thinking” without defining what strategic thinking means or showing the specific moment when tactical execution was the wrong call.
It can create the appearance of management without requiring any actual managing.
And that’s not the AI’s fault. That’s ours.
We built performance management systems so divorced from actual work that an algorithm trained on corporate mediocrity can reproduce them perfectly. So formulaic that ChatGPT can’t tell the difference between thoughtful feedback and template-filling.
The AI didn’t break performance reviews. It revealed they were already broken.
Watch what happens when managers discover AI review generators. Relief. Finally, a way to get through the tedious ritual without having to recall specific details or form coherent thoughts about someone’s year.
Drop in their name and job title. Maybe add a few bullet points about projects if you’re feeling thorough. Let the AI weave it into something that sounds managerial and development-focused.
Twenty reviews completed in an afternoon. Efficiency achieved.
Except everyone receiving these reviews knows immediately what happened. They can feel the generic weight of it. The way it could describe anyone doing roughly similar work. The absence of anything specific enough to prove you were actually there.
And they learn something about the system they’re in: caring has been automated. Observation has been outsourced. The hard work of actually seeing people has been replaced by the easy work of generating text about them.
The AI isn’t replacing good management. It’s exposing how little management was happening in the first place.
If you can plug someone’s name into a prompt and get a complete performance review, your review process was already worthless. The AI is just making the emptiness visible.
Here’s what actual performance feedback requires: watching people work. Not checking boxes about projects completed, but observing how they approach problems, navigate relationships, handle pressure, develop others.
Remembering specifics. The meeting where they said what nobody else would say. The project where they struggled and how they worked through it. The pattern you’ve noticed over months about how they engage with feedback.
Caring enough to form actual opinions. About their strengths, gaps, potential, trajectory. Opinions you can defend because they’re based on accumulated observation, not generated text.
Having the courage to say hard things directly. To name the problem you’ve been avoiding. To give the feedback that might be uncomfortable. To have the conversation instead of hiding behind pleasant-sounding development goals.
And doing all of this without a template or script or AI assistant that makes it easier.
Because the only way to give useful feedback is to actually know the person you’re giving it to. And the only way to know them is to do the work of managing them all year long.
If you’re using AI to write performance reviews, you’re not solving a writing problem. You’re revealing a management problem.
What you should use AI for: organizing notes you’ve already taken, flagging patterns you’ve already noticed, helping structure thoughts you’ve already formed, checking language for legal risks or unintended bias.
What you shouldn’t use AI for: generating observations you haven’t made, creating feedback you haven’t thought through, producing reviews for people you haven’t actually managed.
The difference matters. In one case, AI amplifies your management. In the other, it replaces it.
If your relationship with someone is strong enough that you could write their review without AI, then use AI to make it better. If you need AI to write it at all, the problem isn’t insufficient technology. The problem is insufficient attention.
Jordan figured this out within minutes of reading that review. Not because they’re exceptional at detecting AI-generated text, but because they’d been waiting all year for someone to actually see their work. To notice what they’d been building. To care enough to form a specific opinion about where they were and where they could go.
The AI-generated review told them everything they needed to know: their manager had been managing around them, not managing them. Checking boxes, not creating development. Filling forms, not building relationships.
Two weeks later, Jordan accepted an offer elsewhere.
In the exit interview, they explained exactly what had happened. About the AI-generated review. About the year of being managed by someone who never actually saw their work. About deciding to find a place where someone might actually pay attention.
HR thanked Jordan for the feedback and promised to review performance management practices.
Then they went back to sending managers AI tool recommendations. Because efficiency.
The talent war isn’t lost to competitors with better salaries or shinier benefits. It’s lost to the quiet surrender that happens when people realize their managers care so little that they’ve outsourced caring to an algorithm.
Your people are worth more than AI-generated corporate speak. They're worth the actual work of being seen, being known, being developed by someone who's paying attention.


