Saturday, September 10, 2011
One last time: It's all subjective
While reading news about a recent court decision, I was struck again by how the U.S. court system continues to draw a false and largely unhelpful distinction between "objective" and "subjective" assessment processes, and it is certainly not the only institution to do so. Presumably the distinction is meant to suggest that some processes are based on judgment while others are free of it.
One last time: it's all subjective.
I challenge you to come up with a single aspect of the personnel assessment process that is not based on judgment. Beyond that, "degree of judgment" is not a useful metric for determining the validity of a selection process, for many reasons, including but not limited to whose judgment is being used.
Here is a sample of assessment components that are based on judgment:
- how to conceptualize the job(s) to be filled
- how to study the job(s)
- how to recruit for the job(s)
- which subject matter experts to use
- how to select the KSAs to measure
- how to measure those KSAs
- how items are created or products are selected
- how to set a pass point, if used
- how to administer the assessments
- how to score the assessments
- how to combine assessments
- how to make a final selection decision
- what type of feedback to give the candidates
And this is just the tip of the iceberg. Like any complex decision, the entire process is made up of smaller decisions, each of which can affect the outcome (what software do I use to analyze the data? Is this one KSA or two?).
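To make the point concrete, here is a minimal sketch of what looks like the most "objective" step imaginable: mechanically combining assessment scores against a pass point. All of the names, weights, and cutoffs below are hypothetical, chosen only for illustration; the point is that every number in a formula like this is itself a human judgment.

```python
# A "mechanical" composite-scoring step, sketched to show how many
# judgment calls hide inside it. Every value below is hypothetical.

ASSESSMENTS = {                # judgment: which assessments to combine
    "written_test": 0.50,      # judgment: the weight given to each component
    "structured_interview": 0.30,
    "work_sample": 0.20,
}
PASS_POINT = 70.0              # judgment: where to set the cutoff


def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(weight * scores[name] for name, weight in ASSESSMENTS.items())


def passes(scores: dict[str, float]) -> bool:
    """Apply the pass point to the composite score."""
    return composite_score(scores) >= PASS_POINT


candidate = {"written_test": 80, "structured_interview": 65, "work_sample": 60}
print(composite_score(candidate))  # 71.5
print(passes(candidate))           # True
```

The arithmetic is perfectly reproducible, but reproducibility is not objectivity: shift the weights or the pass point slightly and this candidate fails, and nothing in the math tells you which configuration is "right."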
So what WOULD be helpful in describing how an assessment process is developed and administered? I can think of a few yardsticks:
1. Extent to which processes and decisions are based on evidence. To what extent is the assessment and selection process based on methods that have been shown scientifically to have significant utility?
2. Degree of structure. To what extent are the assessment methods pre-determined? How flexible are the processes during the assessment?
3. Multi-dimensionality. How many KSAs are being measured? Is it sufficient to predict performance?
4. Measurement method. How many assessment methods are being used? Do they make sense given the targeted KSAs?
5. Transparency. Is the entire process understandable and documented?
I'm not inventing the wheel here. I'm not even reinventing it. I'm pointing out how most cars have four of them. It's obvious stuff. But it amazes me that some continue to perpetuate artificial distinctions that obscure the truly important differences between sound selection and junk.