As we say goodbye to 2008, I thought it time to review which topics people seem to be the most interested in--on this blog at least.
So without further ado, here, in descending order, were my top 10 most popular posts of 2008:
1. Adverse impact on personality and work sample tests
2. Can the Dewey Color System be used for hiring?
3. What the new ADA law means for recruitment and assessment
4. What does your selection process say about your culture?
5. A review of situational judgment tests
6. B = f (P, E)
7. One man's journey into Talent Management software
8. Real world value of strategic HRM practices
9. Grading topgrading
10. Staffing.org's 2008 benchmark report
So what do I take from all this? Reader interest varies, but these results, combined with the sidebar poll on the homepage, make it obvious that personality testing is far and away the most popular topic. Hiring managers continue to be very interested in selecting candidates for more than just their ability, and HR professionals continue their quest for the best way to do this. There are very good personality tests out there, but I get the feeling no one's discovered the holy grail here yet.
What I'd like to see in 2009 is much more emphasis on candidate self-selection. Realistic job previews (descriptions and multi-media), assessments used for feedback rather than selection, and much much more. It's a very exciting direction, so let's get to it!
So Happy New Year, keep up the good work, and thanks for reading!
Oh, and please complete Dr. Charles Handler's survey. Thanks!
Wednesday, December 31, 2008
Posted by BryanB at 12/31/2008
Monday, December 22, 2008
One of my favorite games at work is HRspeak. Using a few words and phrases that seem to be overly popular in some HR circles, we create sentences that sound good but are practically meaningless.
"Hey, watcha up to?"
"Not much, just transforming our operational cluster into customer-centric business practices that strategically meet our mission-critical goals using knowledge-based ROI-driven solutions."
This is funny (at least to a very small segment of the population) because an even smaller segment of the population actually finds language like this to be practically useful. Typically, in my experience, it's HR consultants who are trying to convince HR to reinvent itself. The concept may be sound, but the words get in the way.
As usual, Scott Adams picks up on this phenomenon (and makes it much funnier than I can) in a recent Dilbert strip.
At this time of year, when people often reflect and plan, let's all try to communicate a little more clearly.
And here's hoping that you and yours enjoy a joy-filled, family and friend-centric season focused around cuisine-based activities that meet your critical soul-based needs.
Wednesday, December 17, 2008
Last week I presented at the Personnel Testing Council of Northern California's monthly luncheon. The title was "Selection in a changing world: What will we be doing, and who will be doing it?"
The topic was motivated mostly by my own experiences related to what's going on in our field lately. With many of us facing severe budget challenges, I got to thinking about things like:
- What do we need to be doing, what should we be doing, and how can we add value in lean times?
- What impact does automation (e.g., applicant tracking systems) have on the work we do?
- What impact does automation have on the competencies we need?
The field of HR has been talking about transformation for years now, with IMHO only partial success. It's time for assessment professionals to take a look at ourselves and determine if we're where we need to be.
You can see the slides here:
They should also be posted soon on PTC-NC's website.
Tuesday, December 09, 2008
I don't know about you, but one of my least favorite forms of assessment is poring through resumes. They're not standardized, they leave out important details, and they often provide way too many details about things we don't care about. But most importantly, it just doesn't feel like a very valid way of making inferences about candidates.
There are good reasons to dislike this activity. Not only are there rampant self-inflation problems, but the inferences recruiters tend to make about applicant personality are erroneous, according to a recent study. After looking at responses from 244 recruiters, the authors found several important results:
1) Low interrater reliability -- in other words, the recruiters didn't agree with each other very often about what the resume said about an applicant's personality.
2) When recruiters' inferences about personality were correlated with the applicants' actual Big 5 scores, low levels of validity were found (slightly better for conscientiousness and openness to experience).
3) Despite the two findings above, rater perceptions of extraversion, openness to experience, and conscientiousness predicted their assessments of the applicants' employability.
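To put finding #1 in concrete terms: one simple index of interrater reliability is the average correlation between pairs of raters scoring the same applicants (the study itself likely used a more formal statistic). Here's a sketch using invented ratings, just to show what "low agreement" looks like in numbers:

```python
# Illustrative sketch (not the study's actual method): one simple index
# of interrater reliability is the mean pairwise Pearson correlation
# between raters' scores for the same set of applicants.
from itertools import combinations

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def mean_pairwise_r(ratings_by_rater):
    pairs = list(combinations(ratings_by_rater, 2))
    return sum(pearson_r(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical recruiters rating the same five resumes on
# perceived extraversion (1-5); their judgments barely agree at all.
recruiters = [
    [3, 4, 2, 5, 1],
    [2, 2, 4, 3, 5],
    [4, 1, 3, 2, 3],
]
print(round(mean_pairwise_r(recruiters), 3))
```

A reliability near zero (or negative, as here) means the raters' judgments carry little shared signal about the applicants.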
Lesson? Be very careful what you infer from a resume. Think carefully about the facts you're using to infer personality. If you must use resumes, screen out only those who lack the basic qualifications to do the job. Follow up the resume screen with a number of much more valid assessments--work sample tests, structured interviews, in-depth reference checks, etc.
Wednesday, December 03, 2008
One of the biggest areas of focus for personnel psychologists is uncovering which selection mechanisms do the best job of predicting job performance.
Different researchers have focused on various tests, but perhaps no tests have received as much attention as those that measure general mental ability (GMA). GMA has consistently been shown to produce the highest criterion-related validity (CRV) values and has some very strong proponents. (For those of you not up on your statistics, CRV refers to the statistical relationship between test scores and subsequent job or training performance; with a maximum value of 1.0, the bigger, the better.)
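For readers who want to see the mechanics, a CRV coefficient is just a Pearson correlation between test scores and a criterion measure. A minimal sketch with invented numbers (the scores and ratings below are hypothetical, not from any study):

```python
# Illustration only: a criterion-related validity coefficient is the
# Pearson correlation between predictor scores (here, a test) and a
# criterion (here, supervisor performance ratings).

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical test scores and performance ratings for eight hires.
test_scores = [85, 92, 78, 95, 70, 88, 82, 90]
performance = [3.2, 4.1, 2.9, 4.5, 2.5, 3.8, 3.1, 4.0]

print(round(pearson_r(test_scores, performance), 3))
```

Real-world CRV values are nowhere near this high; the made-up data here are unrealistically clean, which is exactly why values like .51 count as excellent.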
One of the most strident advocates of ability testing is Frank Schmidt, who has studied and written extensively on the topic. You may have heard of the widely cited article he co-authored with John Hunter in 1998. In that article, they present a CRV value of .51 for cognitive ability tests, which is considered excellent. Only work samples received a higher score, but this value has been subsequently questioned.
In the latest issue of Personnel Psychology (v61, #4), Schmidt and his colleagues present updated CRV values, and they're even higher. Using what they claim is a more accurate way of correcting for range restriction, the authors present an overall value of .734 for job performance and .760 for training performance. These are the highest values I've seen reported in a major study such as this, and they further solidify GMA as "the construct to beat" when predicting performance.
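To see why correcting for range restriction pushes validities upward, here's a sketch of the classic direct-restriction correction (Thorndike's Case II). Note this is the simpler textbook formula, not the indirect-restriction procedure Schmidt and colleagues use in the article, and the numbers are invented:

```python
# Classic direct range-restriction correction (Thorndike's Case II).
# When we validate a test only on hired incumbents, the test-score
# variance is restricted relative to the full applicant pool, which
# deflates the observed correlation. This formula estimates the
# validity we'd see in the unrestricted applicant population.

def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    u = sd_unrestricted / sd_restricted  # > 1 when range is restricted
    return (r_restricted * u) / (1 + r_restricted**2 * (u**2 - 1)) ** 0.5

# Hypothetical numbers: an observed validity of .30 in a sample whose
# test-score SD is half that of the full applicant pool.
print(round(correct_range_restriction(0.30, 1.0, 0.5), 3))  # 0.532
```

Even this simple correction nearly doubles the observed coefficient, which gives a feel for how an observed value around .5 can become a corrected .734.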
The article also applies this same updated statistical approach to the CRV of two personality variables that have been generally supported--Conscientiousness (Con) and Emotional Stability (ES). Unfortunately, the values presented were not that much larger than previously reported: for Con, .332 for job performance and .367 for training; for ES, -.100 and -.106, respectively.
That all being said, there are some things to note:
1) Use of GMA tests for selection is likely to produce substantial adverse impact with most applicant samples of any substantial size, potentially limiting their usage in many cases.
2) CRV coefficients are just one "type" of validity evidence. The calculation is far from perfect and depends greatly on the criterion being used. The authors admit that they were unable to measure the prediction of contextual performance, which could have resulted in substantially higher values for the personality variables.
3) On a related note, some of the largest CRV values for personality tests I've seen were reported in Hogan & Holland (2003), where they aligned predictor and criterion constructs. This study was excluded from the current study because "the performance criteria they employed were specific dimensions of job performance rather than overall job performance."
4) The lower values reported in this study for personality measures may also reflect the way personality is measured, which the authors acknowledge. They suggest that using outside raters, as well as multiple scales for the same constructs, may yield higher CRV values. Interestingly, they also suggest that personality may not be as important because, with sufficient GMA, individuals can make up for any weaknesses--such as forcing yourself to frequently speak with others even if you're an introvert.
5) CRV values for GMA continued to vary substantially depending on the complexity of the job, yielding values that ranged .20-.30 apart from one another. This is a key point and is related to the fact that the type of job--and job performance--matters when generating these numbers.
Last but not least, there's another great article in this issue, devoted to (coincidentally) conducting CRV studies by Van Iddekinge and Ployhart--check it out. They go into detail about many issues directly relevant to the study above.