Sunday, May 14, 2017

Big research update

It's been a while since I provided a research update, so let's take a look at some recent highlights:

The March 2017 issue of the International Journal of Selection and Assessment (IJSA) (free right now!):

The June IJSA:



Vol 2(1) of Personnel Assessment and Decisions:



April Psychological Bulletin:



March Journal of Applied Psychology:



May Journal of Applied Psychology:



March Journal of Organizational Behavior:



May Journal of Organizational Behavior:



June Journal of Business and Psychology:


That's it for now!

Wednesday, March 29, 2017

Which employment tests work best? An update.

I'm not sure how I missed this one, but three researchers are updating Schmidt & Hunter's famous study of the validity of personnel selection procedures. And unlike much of the research in this area, it's free (webpage) (PDF)! It's also a working paper, so proceed with caution.
Oh, and some of the results as they stand will probably make your head spin.
Let's back up. In 1998, Frank Schmidt and John Hunter published "The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings." Via meta-analysis, the authors reported the criterion-related validity coefficients of 19 selection methods. What does this mean? Basically, how well each type of employment test predicts job (or training) performance. It's what some consider the "gold standard" of validity because it's based on actual performance, not subject matter expert judgment. (I'll leave that debate for another day.)
The bottom line is that when they crunched these numbers, there were several "winners", including cognitive ability, work samples, integrity tests, and structured interviews. It still stands as one of the most frequently-cited research articles in the field of industrial/organizational psychology.
Since then, there have been statistical advancements that researchers can use to improve the accuracy of their estimates, and of course more primary research has been done, which can be included in meta-analyses.
So all that said, what are the updated results (as currently reported)? Here are some highlights, which now include analyses of 31 different types of selection methods:
  1. Cognitive ability or "general mental ability" (GMA) still reigns supreme. Casting aside for now that these tests tend to result in adverse impact, the criterion validity coefficient went up.
  2. Unstructured interviews match structured interviews. "Whaaat?" you say? Take a look, and check the explanation for why this is, statistically. Remember there are other factors in play that should inform your decision about how to structure interviews (e.g., legal defensibility, merit rules).
  3. Validity of interviews (both types) went up. Bottom line: interviews can work.
  4. Validity of work samples went down. The authors use the value found in a 2005 study, and the results may have to do with these tests being used in different sectors than originally envisioned. Whatever the explanation, I scratch my head on this one because I'm a big believer in work sample tests.
  5. Conscientiousness went down. But the authors remind us that studies using work-specific measures show improved results.
  6. The T&E point method still under-performs. Many organizations have become besotted with this training and experience (T&E) approach, which assigns candidates points based on their responses to self-report inventories. These are easy to develop and easy to automate. And they don't predict performance well compared to the others. Best used as an initial hurdle (if at all).
  7. Strictly relying on years of experience or years of education doesn't work. Sorta calls the whole resume review thing into question, doesn't it?
  8. Job knowledge tests still performed quite well. Some things never go out of style, like a well-developed test of concepts applicants must know to do the job.
  9. Emotional intelligence measures did a decent job. They're not close to the top, but they're not useless either.
  10. The best results still come from combining tests. The authors are fans of combining cognitive ability tests with integrity or job knowledge tests, but other combinations work as well, and they out-perform any single measure.
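Why does combining tests help? The standard formula for the multiple correlation of two predictors with a criterion shows the logic: two valid tests that don't correlate much with each other add up to more than either alone. Here's a minimal sketch; the example numbers are purely illustrative and are not taken from the working paper:

```python
from math import sqrt

def multiple_r(r1, r2, r12):
    """Multiple correlation of two predictors with a criterion.

    r1, r2: criterion validity of each predictor
    r12:    correlation between the two predictors
    """
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# Hypothetical values: a cognitive ability test (r = .50) combined
# with an integrity test (r = .40) that correlates only .05 with it.
print(round(multiple_r(0.50, 0.40, 0.05), 3))  # -> 0.625
```

Notice that the combination (.625) beats the better single test (.50) by a wide margin, and the gain is largest when the two predictors are least correlated with each other.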
So what does all this mean? Well, nothing and everything. Nothing, because how you test for a job should be based on that job. You should never just blindly pick an employment test out of a hat. The choice should be based on what competencies are required for that job, day one, along with other factors such as organizational context and operational considerations.
Everything because the only real way we know whether these things work is to look at the data—and this type of research is about as good as we get. For now. Let's see where we're at a few years from now, when we have more information about emerging forms of measurement, such as simulations and VR. Until then, keep using good tests.








Saturday, January 07, 2017

Mini research update

Without further ado, a few research updates:

January 2017 Psychological Bulletin:

- Why is there better gender balance in some STEM fields than others?  Possible reasons these authors cite include masculine cultures, lack of early experience, and lower self-efficacy.

- There appears to be a strong relationship between emotional intelligence and other personality factors, particularly trait EI.


January 2017 Journal of Applied Psychology:

- Sackett & Lievens argue that a modular perspective on selection procedures -- in other words, breaking them down into their design components, such as response format -- allows for insights beyond a holistic view. Read this quite fascinating article here.


Reminder: the Journal of Personnel Assessment and Decisions is free, available here.

Tuesday, November 15, 2016

Research update

Several research updates this time, including fascinating studies of how computers can assist with interview training, and how a brief writing exercise can lower stereotype threat for women:


International Journal of Selection and Assessment, December, 2016:

- International support for the cultural intelligence scale

- Looking to improve applicant interview performance? Maybe a computer can help.

- This study found that time lag and g-loading are important factors impacting re-testing results

- Status-seeking seems to be an important individual difference when looking at self-presentation behaviors, including exaggeration and faking in job search

- Development and validation of a 360-degree measure of leadership personality


Personnel Psychology, Winter 2016:

- Do CEOs significantly impact firm performance?  This study found evidence that they do.

- More evidence that the assumption that performance is normally distributed is questionable

- A more accurate correction for range restriction is presented, and an example analysis indicates the relationship between the Big 5 and job satisfaction may be greater than previously believed
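For context on that last item: range restriction corrections exist because validity studies are usually run on incumbents, whose scores vary less than the full applicant pool, which deflates observed correlations. The classic Thorndike Case II correction (the baseline that newer, more accurate corrections improve upon) looks like this; the example numbers are hypothetical:

```python
from math import sqrt

def correct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction.

    r: observed (restricted) validity coefficient
    u: ratio of unrestricted to restricted predictor SD (u >= 1)
    """
    return (r * u) / sqrt(1 + r**2 * (u**2 - 1))

# Illustrative: an observed r of .30, with the applicant-pool SD
# 1.5 times the incumbent SD.
print(round(correct_range_restriction(0.30, 1.5), 3))  # -> 0.427
```

Even this simple correction shows how much observed coefficients can understate true validity, which is why improvements to the correction can meaningfully shift meta-analytic conclusions like the Big 5/job satisfaction finding above.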



Journal of Organizational Behavior, November, 2016 (which includes several articles devoted to the importance of theory in organizational sciences):

- Are self-focused or other-focused recruiting advertisements more effective?  This study suggests it depends not only on cultural differences but individual regulatory focus



Journal of Applied Psychology, October, 2016:

- A suggestion for improving meta-analytic structural equation modeling



Journal of Applied Psychology, November, 2016:

- A fascinating study of how having women compose a brief written description of their personal values can help ameliorate stereotype threat in competitive environments

Saturday, September 03, 2016

How to create the ultimate hiring system

Unless your organization is primarily composed of robots (which is becoming more of a reality for some*), arguably the most important thing you need to get right is hiring.  To restate the obvious: without the right people, in the right places, at the right time, your organization hampers its ability to innovate, collaborate, deliver, and fulfill its mission.

So how do you ensure that your organization consistently makes great hires?  Before I get into the steps, let's talk a little bit about culture.  None of the steps below will reliably deliver results unless you first get serious about two things: (1) disciplined processes and procedures, and (2) a clear understanding of the roles of HR versus hiring supervisors.

In order to make great hires time and time again, you have to document your practices and put them in place across the organization.  Everyone needs to understand that this is the way things are done here, not simply an initiative.  Supervisors and HR need to be trained--and reinforced--based on how successfully they implement these steps.

Speaking of these players, both hiring supervisors and HR need to be very clear about what their roles are.  Hiring supervisors are expected to know the job they're hiring for and what it takes to succeed in it.  HR is expected to have expertise in job analysis, recruiting, assessment, onboarding, and other aspects of talent management.  This should be part of their job descriptions, and their performance should be in part based on how successful they are at serving in these roles.

Okay, with that out of the way, let's talk about the steps your organization needs to have in place to ensure repeatable success in hiring.


Step 1. Know your organizational reputation.  Before you even think about hiring for a specific job, you need to think about the reputation of your organization.  Is it a destination employer, or an employer of last resort?  What do people say about your workplace?  This is important because it drives the pipeline of talent.  If you're a destination, the pump is primed and "hard to recruit jobs" becomes less of an issue, making the steps below that much easier.  Find out what the word on the street is about your organization--how do your employees feel? Your customers? Prospective applicants?



Step 2. Analyze the job.  Yes, many jobs are becoming more fluid, but even narrowing the job to its occupational category helps.  Think about the most important tasks the person will perform on a day-to-day basis and what competencies or knowledge, skills, and abilities are required to perform them.  Sites like O*NET are a huge resource so you don't reinvent the wheel.  Without knowing the job, hiring is a roll of the dice.



Step 3. Develop a recruitment/assessment strategy.  Ya gotta have a plan.  It doesn't have to be a 20-page missive, but you need to document what your plan is; otherwise you're unlikely to cover all the bases.  Honestly, here's where a lot of hiring processes fall apart--people have the best intentions, but they forget about certain key steps.  Hey, here's an idea: use the same document you used to document the job in Step 2!  That way, the key competencies will be linked to how you plan to recruit and assess for each of them.



Step 4. Use multiple and creative recruitment strategies.  The only time "post and pray" is acceptable (and even then I'd argue against it) is if you've nailed your reputation as described in Step 1.  Recruiting is sales, plain and simple--you're selling the job, the organization, and the people.  Use the web, but also think about physical interactions, including open houses.  Reach out to schools.  Develop realistic job previews.  Hire recruiters who have a marketing and sales background.  Don't be afraid to push the envelope if you need to stand out from the crowd.  Honestly, the sky is the limit.



Step 5. Use multiple high-quality assessments, delivered online and mobile-friendly whenever possible.  In-person interviews aren't going away any time soon (although many of those are migrating online), but they should be only one tool in your belt--not the only tool.  Assessment starts with how you recruit, because you allow applicants to self-select in and out.  It continues with assessments embedded in the application process, whether that's a statement of qualifications, an online survey, or a set of online skills assessments.  And don't forget to make any "minimum qualifications" truly minimum--please don't rely on hard-and-fast "X years of experience" or "Y degree"--those should be suggestions.  The important thing isn't the type of assessment, it's that you're using several and they're tied to those important competencies you identified in Step 2.  In short, using a single assessment is like buying a house based on what it looks like from the outside.



Step 6. Don't forget about them once you make the offer.  Again, this is pretty obvious, but once you've made the offer, don't breathe a sigh of relief and get back to your Inbox--your job isn't over yet, not by a long shot.  Your new employee needs to feel welcomed to the organization, have the tools they need, understand what the expectations are, and get continual feedback--in other words, feel like they made the right choice and have a successful future with you.


Get these simple steps right, make them part of your organizational DNA, and you will ensure that not only do you get hiring right--you'll get performance right.

* Stay tuned for my signature article of 2018, "How to hire the right robot for the job"

p.s. this post marks 10 years for my blog!  Thanks for reading!

Saturday, July 30, 2016

Welcome to Sacramento, IPAC'ers!

This year, IPAC's annual conference is here in my home of Sacramento from July 31 - August 3.  I'll be there Monday afternoon for a session titled "Fits and Starts: The Evolution of Testing for the State of California (Special Invited Session)", where I'll be interviewing my good friend, Adria Jenkins-Jones.  Here's the description:

In this lively discussion, the presenters will discuss the current state of employment testing for the state of California, including significant recent and upcoming changes to our examination software. Using an interview format, the presenters will discuss the massive changes envisioned for statewide testing, and how the California Department of Human Resources is attempting to collaborate with stakeholders to change the traditional paradigm, and significantly improve automation, while maintaining the commitment to merit. The presenters will engage in open dialogue and there will be time for audience members to ask questions about current and future directions of testing for the state.

Feel free to use the comments section below to post the sessions you plan on attending, or anything about your experience!