Yes, and no.
Yes, you should use a common rating scale; but no, don't delve too deep into rating standards and recruitment matrices!
But first, what do I mean by a common standard?
Some government recruiters will argue that the selection panel should come up with a list of skills, abilities or knowledge that would reflect a specific rating. So, for example, if the applicant mentions A, B, C, D and E in their application or interview, they get a 10/10 score. If they mention only four of the five required points, they get a 7/10; if they mention only three, a 4/10; and so on.
It's great that panels want to be on the same page and make sure they're assessing applicants in the same way. I can see why panels do it, and the argument for this approach can appear very strong on the surface: no panel member should be applying their own standard of judgement; everyone should be using the same one. However, this method can also be deeply flawed.
For starters, government recruitment uses selection panels precisely to get a balanced opinion and different viewpoints; that is why most government departments also insist on independent panel members. If you impose a very rigid common standard, you are dismissing the value of the panel.
Secondly, what if the common standard that the panel decide to use is itself flawed?
For example, if the selection criterion being assessed is 'interpersonal skills', who defines what interpersonal skills are? What if the applicant comes up with a better list of points than the panel ... will they be marked low because they didn't cover every point on the panel's list? Will they get a bonus score for coming up with extra things? What if the panel decide halfway through that their common standard is flawed or incomplete?
If you are going to define your criteria with a common standard and introduce additional competencies, then you should attach them to your selection criteria and publish them that way in the first place - hence the more common use of competency frameworks.
Publishing one set of selection criteria that applicants have to address, and then using additional unpublished criteria to assess them, is getting far too involved and moves towards an unfair recruitment process: you are judging applicants on something they are unaware of and that has not been advertised.
More and more inexperienced selection panels are using this process to try to deliver a more standardised and robust recruitment exercise, but in effect they are making their recruitment process less transparent - and often getting the wrong result in the end, because their assessment criteria are wrong.
Human behaviour cannot be measured entirely by a list of competencies. So, selection panels, I urge you to come back to basics and assess your applicants by having an old-fashioned conversation about them. Give up the pages of rating scales and assessment matrices and get back to your original selection criteria!