Why All VA Executives Are Above Average

The Hill
June 24, 2014

The revelation that some 65 percent of executives at the embattled Department of Veterans Affairs (VA) received performance bonuses last year did something rare these days in our nation’s capital: It unified congressional Republicans and Democrats.

In a hearing Friday, it was revealed that all 470 of the VA’s senior executives were rated “Fully Successful” or higher at a time when the agency has been under fire for egregious claims backlogs and other difficulties. Indeed, over the past four years, none received ratings in the two lowest categories, “Minimally Successful” or “Unacceptable,” and nearly 80 percent were rated either “Outstanding” or “Exceed Fully Successful,” making them eligible for performance awards. Some 65 percent in fact received said awards, which averaged around $9,000.

Members of Congress dutifully proclaimed outrage.

House Veterans’ Affairs Committee Chairman Jeff Miller (R-Fla.) declared, “It should not be the practice of any federal agency to issue taxpayer dollars in addition to paying six-figure salaries to failing senior managers just because the current [Office of Personnel Management] statute for members of the [Senior Executive Service] allows that to occur.” He added, “Bonuses are not an entitlement; they are a reward for exceptional work.”

Rep. Phil Roe (R-Tenn.) demanded, “Do you think that’s normal in business, that nearly every executive is successful?”

Rep. Ann McLane Kuster (D-N.H.) made the obvious comparison to Lake Wobegon, where “all of the children are above average.”

While I understand why the average reader would see the ratings and bonuses as some shocking revelation, it’s infuriating that those charged with oversight of federal agencies would greet this with anything but a “Well, duh.” Those doing so are either woefully ignorant of the system or shamelessly posturing.

There are five rating categories for federal civil servants:

  1. Outstanding: Work performance consistently exceeds established Performance Elements and Standards.
  2. Exceed Fully Successful: Work performance usually exceeds established Performance Elements and Standards.
  3. Fully Successful: Work performance consistently meets established Performance Elements and Standards.
  4. Minimally Successful: Work performance meets some, but not all, established Performance Elements and Standards.
  5. Unacceptable: Work performance does not meet any established Performance Elements and Standards.

Even from a casual reading, then, anything less than Fully Successful is a failure. Even for a low-level employee, much less a member of the Senior Executive Service (equivalent to military general and flag officers), a rating of Minimally Successful is highly unusual and requires extensive documentation. The vast majority of employees should expect to get Exceed Fully Successful ratings, with high performers — not superstars, simply those who are above average — getting Outstanding ratings. The superstars will be distinguished from the merely above average by the written comments.

Is this inflation of the ratings rather silly? Sure. But it’s been in place for decades and is far from unique.

It’s simply the nature of bureaucracies to inflate ratings. While most employees, even very high performers, could use improvement in some areas of their performance, bureaucracies naturally pretend otherwise, at least officially. By definition, the “established Performance Elements and Standards” are the very minimum threshold, and failing to meet them is grounds for termination.

The armed forces have a similar system, and it’s even more inflated. The rating systems are replaced every few years to de-skew the process, but the ratings inevitably concentrate at the top again very quickly.

As I was coming on active duty a quarter century ago, the Army introduced a new Officer Evaluation Report (OER) format that tried to avoid rating inflation by rating more senior officers on the distribution of their ratings. The system — long since replaced, at least twice — had six “blocks.” Theoretically, the distribution for each senior rater (in my case, my boss’s boss, the lieutenant colonel commanding the battalion) should have been a bell curve, with very few lieutenants in the 1 and 6 blocks, more in the 2 and 5 blocks, and most in the 3 and 4 blocks. In reality — and this was a system that had only been in place for a year or two — all officers were in the top three blocks. The very top performers (say, three of the 17 lieutenants in the battalion) were in the 1 block. The worst of the worst (probably no more than one or two of the 17) were in the 3 block. The rest were in the 2 block.

With the newest version of the form, implemented last year, there are only four ratings categories. The top category, “Most Qualified,” is limited to 49 percent of rated officers. Yes, that means that the plurality of officers will be in the top category and, presumably, all but those being singled out for elimination from the force will be in the second.

Similarly, if “only 15 senior executives across the entire federal government had received either of the two lowest ratings in the most recent year,” that means those ratings are reserved for the worst of the worst — if not the criminally malfeasant.

Nor is government employment unique in this regard. Companies that have employee rating systems invariably rate their employees too high, at least in terms of the names assigned to the ratings. No one wants to tell employees they intend to keep that they’re anything less than above average. It’s not only bad for morale but socially awkward, and it requires a lot of documentation.

And, of course, we’ve seen “grade inflation” in academia. Whereas a “C” grade is theoretically average, in most universities and in most disciplines the overwhelming majority of students get Bs and As — especially in the upper-level courses taken by people majoring in a subject.

I’ve worked at a lot of different places — military, state government, federal government, private sector, nonprofit sector — over the years, and all of them had similarly inflated rating systems in place. While they may seem bizarre from an outsider’s standpoint, that’s irrelevant if the people reading the ratings understand what they’re looking at.

It ultimately isn’t a problem for 95 percent of the employees to get “walks on water” or above ratings if the people making decisions about retention, promotions, bonuses, raises and so forth understand how the system works. Presumably, the 5 percent who get below “walks on water” are counseled and soon out the door if they don’t improve drastically and the true superstars are in the top category — whatever it’s called — or otherwise distinguishable by the use of specific buzzwords in their review.

In such a system, a manager who rates his average employees “average” is doing both those employees and the company a disservice. Eventually, his superiors will figure out that he’s a dolt. But, in the meantime, good people are being punished for his doltishness. And the true top performers under his management won’t be promoted, hurting both them and the firm.

Similarly, the VA is hardly unique in offering “performance bonuses” to what would seem an inordinate share of its senior managers. There, as in many workplaces — and, again, not just in the government sector — bonuses are part of the standard pay package for management-level employees, and not getting a bonus is the exception, not the rule.

Finally, it’s worth noting that it’s possible for an agency to be problem plagued and for most of its senior management to actually be doing a good job. Many of the VA’s problems are externally driven, with congressionally driven mandates creating a vast number of new claims from Vietnam Era and Desert Storm veterans at a time when the system is being pushed beyond its limits by the influx of claims from the wars in Iraq and Afghanistan.
