What Performance Ratings Actually Are
The Unqualified Making Qualification Decisions
This is part 3 of a series examining how performance ratings stopped measuring performance:
Part 1: “Bonus Theater” - How ratings inflate when managers need to move money
Part 2: “When High Standards Meet The Bell Curve” - How ratings deflate when organizations force distributions
We’ve established that performance ratings inflate when managers need to move money through bonus channels. We’ve seen how they deflate when organizations force bell curve distributions on high-performing teams.
Same system. Opposite manipulations. Neither has anything to do with actual performance.
So what are performance ratings actually measuring?
The uncomfortable answer: Manager discretion disguised as objective assessment. Made by people who were never trained to make these judgments. Using criteria nobody can define. Producing outcomes nobody trusts.
And we’ve built entire HR infrastructures around pretending this is rigorous performance management.
The Qualification Problem
Here’s the question nobody asks: What qualifies a manager to assess performance?
Not “Do they have authority to do it?” That’s obvious. They’re the manager.
The real question: What expertise do they have in evaluating complex knowledge work? What training have they received in distinguishing excellent performance from merely good performance? What framework do they use to separate someone being difficult from someone raising difficult truths?
For most managers, the answer is: None. Zero. They were promoted because they were good at the work, not because they demonstrated any capability to assess other people doing the work.
They got a title, a team, and a template. Then HR told them to fill out the form by next Friday. Maybe they attended a two-hour training on “effective performance conversations” that taught them how to deliver ratings, not how to determine them.
That’s it. That’s the qualification.
The Inherited Framework
Performance ratings weren’t designed by organizational psychologists or performance experts. They were invented by the military during World War I to quickly sort through millions of enlisted men. The goal was simple: identify who to promote, who to deploy, and who to send home.
Then Jack Welch popularized forced ranking in the 1980s. Rank everyone. Reward the top 20%. Cut the bottom 10%. GE soared. Harvard wrote case studies. Every executive wanted the same system.
But here’s what nobody mentioned: Welch wasn’t an expert in human performance assessment. He was an executive who found a way to avoid difficult decisions about individual capability. The system looked rigorous but was actually just rigid.
And when it finally collapsed under the weight of its own contradictions, we didn’t question the qualifications of the people who’d designed it. We just rebranded it. Softened the language. Called it “continuous feedback” and “development conversations.”
Same unqualified assessors. Same inherited frameworks. Just better PR.
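To see how rigid that inherited framework really is, here's a minimal sketch (hypothetical scores and a simplified 20/70/10 rule, not GE's actual mechanics): forced ranking assigns labels purely by relative position, so even a uniformly excellent team is forced to produce a "bottom 10%."

```python
# Toy illustration of forced 20/70/10 ranking: labels follow rank,
# not absolute performance. Scores and cutoffs are hypothetical.
def forced_rank(scores):
    """Label each score 'top 20%', 'middle 70%', or 'bottom 10%' by rank alone."""
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    top_cut = max(1, round(n * 0.20))      # at least one person must be "top"
    bottom_cut = max(1, round(n * 0.10))   # at least one person must be "bottom"
    labels = {}
    for i, score in enumerate(ranked):
        if i < top_cut:
            labels[score] = "top 20%"
        elif i >= n - bottom_cut:
            labels[score] = "bottom 10%"
        else:
            labels[score] = "middle 70%"
    return labels

# Ten people, all scoring between 92 and 99 -- uniformly excellent.
team = [99, 98, 97, 96, 95, 94, 93, 92.5, 92.2, 92]
print(forced_rank(team))  # the 92 still gets labeled "bottom 10%"
```

The point of the sketch: nothing in the rule ever asks whether the bottom-ranked person is actually performing poorly. The distribution is satisfied either way.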
What Gets Measured vs. What Gets Rated
Let’s be precise about what managers are actually doing when they rate performance:
What they think they’re measuring:
Quality of work output
Impact on business results
Demonstration of company values
Growth and development
Collaboration and teamwork
What they’re actually rating:
How comfortable the person makes them feel
How much the person looks and acts like them
How well the person navigates office politics
How much the person challenges their decisions
How easy the person is to manage
The first list requires expertise in performance assessment. The second list requires nothing but personal preference.
And most managers don’t know the difference. They genuinely believe their comfort level with someone is an accurate proxy for that person’s performance. They’ve never been taught to distinguish between “this person makes me nervous” and “this person is performing poorly.”
So they rate based on comfort and call it performance management.
The Calibration Theater
Organizations know this is a problem. So they invented calibration meetings to create the illusion of objectivity.
Eight managers sit around a table with laptops open, ostensibly ensuring “consistency across ratings.” In reality, they’re negotiating human worth like they’re dividing up a pizza.
Watch what actually happens:
Manager A says Sarah “far exceeded” this year. Manager B questions whether that’s too high. Manager A defends with examples. Manager B backs down because who’s going to argue when someone builds a compelling case?
But here’s what nobody asks: What qualifies this committee to override individual managers who actually work with these people daily? What expertise does this group have in evaluating work they don’t see, in contexts they don’t understand, with people they barely know?
None. But they have authority. And we’ve confused authority with expertise.
The calibration meeting isn’t about creating objective assessments. It’s about distributing accountability so no single person has to own the decision. It’s about making subjective judgments feel rigorous through process.
It’s performance theater. And everyone in the room knows it.
The Three Types of Unqualified Assessors
The system is filled with people making qualification decisions they’re not equipped to make. They fall into three categories:
Type 1: The Survivor
Been at the company for seven years. Survived three reorganizations. Knows how to avoid getting fired. Got promoted because they were safe, reliable, never rocked the boat.
Now they’re managing people whose job it is to rock boats. They’re assessing performance they can’t recognize and capabilities they don’t possess. When someone moves too fast or pushes too hard, the Survivor slows them down. Not because the performance is bad, but because change feels risky.
They rate based on comfort. They call it “collaboration.”
Type 2: The Politician
Knows the game. Understands that ratings are currency. Has learned to write compelling paragraphs that make weak performance sound strategic and strong performance sound concerning.
They inflate ratings for people they want to retain. Deflate ratings for people they want to move out. Use performance language to justify decisions that are really about politics, budget, or personal preference.
They rate based on strategy. They call it “talent management.”
Type 3: The Believer
Actually thinks the system works. Faithfully fills out the forms. Documents conversations. Builds cases. Attends calibration meetings and genuinely tries to be fair and consistent.
But they’ve never questioned what qualified them to make these assessments. They’ve never examined whether the criteria they’re using actually correlate with performance. They’ve never considered that their good intentions don’t compensate for lack of expertise.
They rate based on process. They call it “rigorous performance management.”
None of these people are malicious. They’re all working within a system they inherited. But none of them are qualified to make the decisions they’re being asked to make.
What Qualification Actually Requires
If organizations were serious about performance assessment, here’s what they’d require:
Training in cognitive bias: Understanding how first impressions, similarity bias, recency bias, and halo effects distort judgment. Being able to separate performance from likability, impact from visibility, growth from comfort.
Expertise in the work: Deep understanding of what excellent performance actually looks like in this role, in this context, with these constraints. Not just “I know it when I see it,” but “I can articulate why this is excellent and that is merely good.”
Track record of development: Evidence of having grown people who went on to do great things. Ability to spot potential others miss. Skill in distinguishing between someone struggling and someone challenging.
Separation from outcome bias: Ability to assess performance independent of results. Understanding that excellent decisions can have poor outcomes and poor decisions can have lucky outcomes.
Most importantly: Humility about the limits of assessment. Knowing when someone is qualified to judge and when they’re not. Being able to say “I don’t know” instead of forcing a rating.
How many managers meet these standards? Almost none.
How many organizations require them? Zero.
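The outcome-bias point above can be made concrete with a quick simulation (hypothetical payoffs and probabilities, purely illustrative): a decision that is excellent in expectation can still lose on any single draw, and a poor one can win.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def payoff(p_success, win=10, loss=-10):
    """One realized outcome of a decision with success probability p_success."""
    return win if random.random() < p_success else loss

def expected(p_success, trials=100_000):
    """Average payoff over many trials -- the quality of the decision itself."""
    return sum(payoff(p_success) for _ in range(trials)) / trials

one_good = payoff(0.90)  # an excellent decision; can still be -10 today
one_poor = payoff(0.40)  # a poor decision; can still be +10 today

print(expected(0.90))  # ~ +8: good in expectation
print(expected(0.40))  # ~ -2: bad in expectation
```

A rater who sees only `one_good` and `one_poor` can easily rank the two people backwards. Assessing the decision, not the draw, is exactly the expertise the section argues most managers were never given.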
The Honest Alternative
If performance ratings are really just manager discretion—and they are—then organizations need to be honest about it.
Stop pretending there’s an objective standard. Stop calibrating as if there’s a scientific method. Stop documenting cases as if they prove something.
The alternatives:
Option 1: Separate development from compensation
Development conversations happen continuously. Real-time coaching. Immediate feedback. Honest discussions about growth, gaps, and what’s next.
Compensation decisions happen separately. Based on manager judgment, market rates, budget constraints, retention risk, and business value. No ratings required.
The two serve different purposes. Stop forcing them into the same conversation.
Option 2: Admit it’s discretion and own it
Give managers a compensation pool and let them distribute it. Their decision. Their accountability. Their justification to their team.
Direct conversation about value, contribution, and compensation. No ratings. No paragraphs. No cases.
At least then it’s honest about what the system actually is.
Option 3: Qualify the qualifiers
If we’re going to keep performance ratings, then actually train people to give them. Not two-hour workshops on “difficult conversations.” Real training in performance assessment, cognitive bias, fair evaluation, and human development.
Require evidence that managers can actually develop people before giving them authority to rate people. Make qualification for assessment an actual requirement, not an assumed capability.
And accept that most current managers won’t meet the standard. Because they were never supposed to. They were promoted for doing the work, not for assessing others doing the work.
The Pattern Across All Broken Systems
Here’s what connects all of this—the bonus manipulation, the bell curve constraints, the unqualified assessors:
We keep letting people who’ve never been trained to make these decisions have authority to make these decisions.
The manager who inflates ratings to move money? Never trained in compensation strategy.
The committee that forces bell curves on high-performing teams? Never trained in statistical validity.
The calibration room full of executives overriding individual assessments? Never trained in performance evaluation.
It’s the same pattern everywhere: Authority without expertise. Process without qualification. Judgment without training.
And we call it performance management.
What Dies When We Get This Right
The comfortable lie that ratings are objective. The illusion that authority equals expertise. The pretense that process creates rigor.
The protection of managers who’ve learned to rate based on comfort instead of capability. The systems designed to avoid difficult conversations while claiming to enable them.
Most importantly: The acceptance that this is just how it works. That performance ratings will always be subjective, so we might as well dress them up in objective language and hope nobody notices.
What Lives When We Connect New Dots
Honest conversations about what’s actually happening when performance gets assessed. Separation of development from compensation. Training for assessors who actually need to assess.
Recognition that manager discretion isn’t inherently bad—it’s just bad when organizations pretend it’s something else. Acknowledgment that most managers aren’t qualified to make these judgments because they were never supposed to be.
And most radically: The possibility of building something better. Something that actually develops people instead of just documenting them. Something that separates growth conversations from compensation decisions. Something that admits what it is instead of pretending to be what it isn’t.
The Choice Every Manager Faces
Every time a manager fills out a performance rating, they’re making a choice:
Participate in the fiction that this is objective assessment. Play the game. Write the paragraphs. Build the cases. Pretend comfort level with someone is the same as their performance quality.
Or admit what’s actually happening: Making a subjective judgment based on limited information, personal preference, and inherited frameworks. Using discretion they were never trained to exercise. Producing ratings that everyone knows don’t actually measure performance.
The system wants the first choice. It needs people to keep playing along. To keep filling out the forms. To keep pretending the emperor has clothes.
But teams need the second choice. They need honesty about what these ratings actually are. They need admission that the system is broken. They need permission to stop pretending that numbers next to their names mean anything real.
Performance ratings aren’t performance management. They’re discretion dressed in objectivity. Authority masquerading as expertise. Process pretending to be rigor.
And until organizations are willing to admit that, the cycle continues: ratings inflate for bonuses and deflate for distributions, while everyone pretends something meaningful is happening about human development.
The ratings aren’t the problem. The unqualified people making them are.
And that includes most managers currently in the system.