Saturday, June 11, 2016

One way to reduce interviewer leniency/severity


A persistent challenge in interviews is that certain interviewers tend to be lenient (i.e., score candidates highly) while others are consistently more severe (i.e., score candidates lower).  This, of course, is not ideal: it introduces measurement bias and reduces the defensibility of the process.

One way to reduce these tendencies, discussed by Hartwell and Campion in the June 2016 issue of the Journal of Applied Psychology, is to provide interviewers with what they call "normative feedback interventions."  Basically, this means giving interviewers data on how they have rated candidates over time compared with how other interviewers rated.  It can reveal to interviewers that they tend to rate candidates more harshly, or more leniently, than others.
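
To make that concrete, here is a minimal sketch of what such a feedback report might compute, assuming historical ratings are stored as simple (interviewer, score) pairs on a hypothetical 1-5 scale.  This is my own illustration of the general idea, not the method used in the study:

```python
# Minimal sketch of a normative feedback report for interviewers.
# Assumes ratings are stored as (interviewer_id, score) pairs;
# all names, values, and the 0.25 cutoff are hypothetical.
from collections import defaultdict
from statistics import mean

ratings = [
    ("interviewer_a", 4.5), ("interviewer_a", 4.0), ("interviewer_a", 4.8),
    ("interviewer_b", 2.5), ("interviewer_b", 3.0), ("interviewer_b", 2.8),
    ("interviewer_c", 3.5), ("interviewer_c", 3.7), ("interviewer_c", 3.4),
]

# Group each interviewer's scores together.
by_interviewer = defaultdict(list)
for interviewer, score in ratings:
    by_interviewer[interviewer].append(score)

# Compare each interviewer's average to the overall pool average.
pool_mean = mean(score for _, score in ratings)

for interviewer, scores in by_interviewer.items():
    diff = mean(scores) - pool_mean
    tendency = ("lenient" if diff > 0.25
                else "severe" if diff < -0.25
                else "in line with peers")
    print(f"{interviewer}: mean {mean(scores):.2f} vs. pool {pool_mean:.2f} ({tendency})")
```

The point of a report like this isn't the math (it's trivial); it's that each interviewer sees their own tendency laid out next to everyone else's.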

What Hartwell and Campion found in their study (of over 20,000 interviews conducted by more than 100 interviewers) is that providing this feedback minimized interviewer differences and increased interview reliability--both obviously good things in terms of the quality of the process.  Interestingly, it did not seem to impact the validity of the interviews, but it did impact which particular candidates were hired.

Until now, one of the most often recommended practices for reducing rating errors has been pre-interview instructions and guidance about those errors.  What this study suggests is that we can do even better by providing interviewers with objective data about their ratings over time.  Listening to someone talk about rating bias probably feels a lot different than actually seeing how you compare to your peers!

Wednesday, June 01, 2016

Some easy tests to improve your hiring success

The interview is such a commonly used hiring assessment that it's hardly worth mentioning (although there is always room for improvement).


But what if you're already doing interviews and you want some easy-to-implement add-ons?  No problem.  Here are some ways to improve quality of hire for knowledge worker positions that don't take a long time, an automated solution, or a PhD to develop:


1. Pre-screening questionnaire.  Whether you use something quick and cheap like SurveyMonkey or your own proprietary assessment system, it's easy to create open- and closed-ended items that serve to screen out the uninterested, allow you to get some more detail from candidates, and even help you solve problems you've been struggling with!  Keep it relatively short so you don't dissuade the most in-demand candidates.


2. Targeted cover letter.  Don't just ask for a generic cover letter; ask applicants to describe in their letter how their background syncs with the core competencies you're looking for.  Remember to limit the length; two pages is generally sufficient.



3. Research project.  As part of the application process, ask candidates to look into an issue that's relevant to the job.  How do they think the new overtime regulations will impact the industry?  What new technologies are on the horizon that will change the way this job is done?  Have them briefly write up their results, and/or ask about it during your interview.


4. Writing exercise.  There's no substitute for a live demonstration of writing ability.  Have them correct a document you've deliberately messed up, or ask them to write a quick memo to a customer--just something related to the core duties of the job that you would expect them to be able to do on day one.


5. Rule/procedure application.  Knowledge worker jobs are characterized by frequent application of laws, rules, and procedures to specific situations.  Either provide them with the rules ahead of time or give them the short version, then give them a specific fact pattern and have them come up with a solution or options.



6. Oral presentation.  Have you ever been on a hiring panel that included an oral presentation?  If not, you're missing out.  Presentations are a great way to mix things up and see the different skillsets a candidate brings.  Of course, realize that you're adding a presentation to an interview--and I'm pretty sure both are among the top five most stressful events.


Notice that you can mix and match these approaches: have them do a rule/procedure application and then write a memo.  Have them do an oral presentation on a topic they researched ahead of time.  Of course, the tests you use should be based on the requirements of the job; start with the entry-level KSAs needed and let the assessments flow from that.  Beyond that, be creative!

Sunday, May 22, 2016

First, get the people basics right

Competencies, talent, gamification... there's no doubt about it, we like us some buzzwords.  Like bright shiny objects, these ideas entice--and largely distract.

Sometimes new ideas and ways of thinking can lead to significant improvements in the way organizations manage their people. But here's the truth that no one seems to want to talk about: many organizations fail to get the basics right.  So while leaders may be leaping headlong into the nanofied virtual talent management sunset, the foundation of HR is lacking.

What are these basics of which I speak?

1. Adequately defining jobs--based on subject matter expert data.   Every single job should be defined and documented in terms of key tasks, requirements, and expectations.  The form this takes is less important than the quality of the data. This is the bedrock that helps you recruit, select, reward, and manage effectively.

2. Recruiting like you're selling, not like you're being forced to.  If writing attractive job ads is so easy, why aren't we swimming in them?  For the same reason many organizations fail to accurately describe the job: laziness and lack of discipline.

3. Using valid hiring measures.  Speed of hire is important, but not even remotely as important as quality.   I can make you a sandwich really quickly if it's just bread.   Do you think Google gets millions of resumes each year because candidates are hoping for a quick hire?  Importantly, the higher in the organization, the more time should be spent on valid assessment.

4. Holding leaders accountable for being leaders.  This really should be #1, except I was trying to go chronologically (and will fail miserably).  All too often, it's the line staff who are quickly called on the carpet when they make mistakes.  But holding leaders accountable for their behavior (hint: ask their subordinates) is exponentially more powerful.

5.  Listening to each other.  Many if not most good ideas for improving your organization are in the heads of your line staff.   Do you ask them regularly and implement their ideas?  Is listening skill considered critical for all employees?

6.  Saying thank you.  It's easy, it's cheap.   Do it more, and mean it.  

7.  Dealing firmly with poor performance.  This applies top to bottom, from unhelpful phone service to unproductive meetings.  Again, the higher in the organization, the more important this is.

8.  Growing your people--forever.  Sure,  they may leave, but they'll leave sooner if you don't invest in them.  And like everything else on this list, it grows your reputation. 

9. Treating people with respect and fundamental human decency.   If you have this as a backbone, many other things simply follow.  There's a reason why one of the most popular business books recently is The No Asshole Rule.


None of this is incredibly difficult; it just takes the most precious resource of any organization: time.  And it takes commitment and discipline.  But these aren't initiatives.  They're part of an organization's DNA--or not.  They're how people respond when asked what it's like to work there.  And who is responsible for ensuring they happen?  The people at the top.

So before your organization jumps onto the latest buzzword bandwagon, make sure it's getting these basics right (by, I dunno, measuring them).  And if you pick just one thing on this list, promise me it's this:

Select.  Good.  Leaders.

Wednesday, May 04, 2016

Research update

A few new journal issues have come out lately:

Summer 2016 Personnel Psychology, including:

Transparency of Assessment Centers: Low Criterion-related Validity but Greater Opportunity to Perform?

May 2016 Journal of Applied Psychology, including:

Initial impressions: What they are, what they are not, and how they influence structured interview outcomes.

Racioethnicity, community makeup, and potential employees’ reactions to organizational diversity management approaches.

June 2016 International Journal of Selection and Assessment, including:

Applicant Reactions to Selection Events: Four studies into the role of attributional style and fairness perceptions

Behavioral Cues as Indicators of Deception in Structured Employment Interviews

The Role of Self-focused Attention and Negative Self-thought in Interview Anxiety: A test of two interventions

The Influence of Candidate Social Effectiveness on Assessment Center Performance Ratings: A field study

Discrimination due to Ethnicity and Gender: How susceptible are video-based job interviews?

A Comparison of General and Work-specific Personality Measures as Predictors of Organizational Citizenship Behavior

The Perceived Nature and Incidence of Dysfunctional Assessment Center Features and Processes

Who is Being Judged Promotable: Good actors, high performers, highly committed or birds of a feather?

Sunday, February 28, 2016

New journal issues

Two new journal issues to make you aware of:

International Journal of Selection and Assessment - March 2016

Unintended Consequences of Transparency During Personnel Selection: Benefitting some candidates, but harming others?

Ethnic Differences in Perceptions of Cognitive Ability Tests: The explanatory role of self-serving attributions

Conditional Reasoning Test for Aggression: Further evidence about incremental validity


For Love or for Money: Intrinsic and extrinsic value congruence in recruitment

Social Influences in Recruitment: When is word-of-mouth most effective?

Highlighting Tensions in Recruitment and Selection Research and Practice

Tests of Integrity, HEXACO Personality, and General Mental Ability, as Predictors of Integrity Ratings in the Royal Dutch Military Police

Training Affects Variability in Training Performance Both Within and Across Jobs

Examining Applicant Reactions to Different Media Types in Character-based Simulations for Employee Selection

When Will Interviewers Be Willing to Use High-structured Job Interviews? The role of personality


Journal of Applied Psychology - March 2016


How and why do interviewers try to make impressions on applicants? A qualitative study.

The long road to employment: Incivility experienced by job seekers.

The role of self-determined motivation in job search: A dynamic approach.


Sunday, January 24, 2016

How a mobile game made me re-think the nature of jobs

I have a lot of games on my iPad.  They go on for several pages as you swipe left.

Some are quite good, but most are just good enough that I can't bring myself to delete them, yet not good enough that I play them regularly.

Every once in a while, though, one comes along that I find myself playing whenever I get downtime, usually due to some clever reward scheme that borrows heavily from the concept of variable interval reinforcement.  (Come to think of it, most of them do.)  You Candy Crush players out there know what I'm talking about.

Quite unexpectedly, I came across one recently that's not only fun, but made me re-think my approach to jobs.  Frankly it was rather eye-opening.  I know this sounds odd, but stay with me.



It's a game called Pixel People.  It's a "city building" game, drawing on the rich history of games like SimCity and Civilization.  Many games, like Clash of Clans, have an element of city building, but unlike most of them, Pixel People focuses not on fighting but on building your city.

The graphics aren't anything to write home about, as you can see in the example below.  The word "pixel" isn't in the title by accident.



Fortunately, the game doesn't need high-resolution graphics to pull off addictive gameplay.  And part of what makes it addictive--actually, the biggest part--is how you create people to populate your city.

Every citizen starts off as a clone, a tabula rasa.  You individualize them by giving them professions.  And the way you give them professions is by combining other professions from different categories.

For example, you start off with a Mayor and a Mechanic.  The Mayor belongs to the "Administration" occupational category, the Mechanic to the "Technical" category.  It was right about this point that something tickled at the back of my mind and I was reminded of a typical job classification system (O*NET being a prime example).

The best part of the game is that in order to move forward and create new professions and buildings, you combine two existing professions.  For example, if you combine the profession of Mayor with the profession of Mechanic, you get an Engineer.  By doing so you create not only an Engineer, but also access to the Garage and Mine structures.  You continue combining professions to create new ones, which in turn unlock new buildings that offer new abilities, and so on.

Here are some other profession creation combinations, of which there are currently 400:

Mechanic + Engineer = Mechanical Engineer (duh)
Director + Model = Actor
Doctor + Park Ranger = Vet
Farmer + Farmer = Botanist
Mechanic + Police Officer = Firefighter
Architect + Dreamer = Artist

As you can see, the game designers put some thought into how different professions relate to one another.  They're not perfect, but close enough to make you smile when you create a new profession.
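
Under the hood, the mechanic boils down to a lookup table keyed on unordered pairs of professions.  Here's a toy sketch in Python using the recipes listed above; the data structure and function names are my own illustration, not the game's actual implementation:

```python
# Toy model of the combination mechanic: each recipe maps an unordered
# pair of professions to a new profession.  Recipes come from the list
# in the post; this is an illustration, not the game's code.
from typing import Optional

RECIPES = {
    frozenset({"Mayor", "Mechanic"}): "Engineer",
    frozenset({"Mechanic", "Engineer"}): "Mechanical Engineer",
    frozenset({"Director", "Model"}): "Actor",
    frozenset({"Doctor", "Park Ranger"}): "Vet",
    frozenset({"Farmer"}): "Botanist",  # Farmer + Farmer collapses to one element
    frozenset({"Mechanic", "Police Officer"}): "Firefighter",
    frozenset({"Architect", "Dreamer"}): "Artist",
}

def combine(a: str, b: str) -> Optional[str]:
    """Return the profession produced by combining a and b, if any."""
    return RECIPES.get(frozenset({a, b}))

print(combine("Mayor", "Mechanic"))     # Engineer
print(combine("Farmer", "Farmer"))      # Botanist
print(combine("Mayor", "Park Ranger"))  # None (no recipe exists)
```

Using frozensets as keys captures the fact that order doesn't matter: Mayor + Mechanic and Mechanic + Mayor are the same recipe.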

So what does this have to do with assessment?  Thanks for hanging in there.  Well, the game mechanic got me thinking about how overly logical and rational most of our classification systems are, and how little we acknowledge the overlap and relationships between occupations.

Most of us in HR structure our worlds around the idea that jobs can be categorized and differentiated.  And in some cases, this makes perfect sense.  A doctor is not the same as a computer programmer.  Different educational requirements.  Many core competencies required for the position are different.

But I submit to you that in many cases, occupations have more overlap than we pretend.  They are related to one another in ways that we don't typically acknowledge.  And this has implications for recruiting, assessment, compensation, promotional paths--i.e., the core work involved in talent management.  For example:

Recruiting:  currently, the ideal model of recruiting is to identify, through job analysis, the core KSAs or competencies required, and craft your recruitment campaign to attract those who possess them.  KSAs can get very specific, resulting in recruiting efforts often focused on a pretty narrow desired profile.  If we acknowledge that many jobs in our organizations have more overlap than we normally pretend, it becomes obvious that recruitment campaigns can become broader in two major ways: (1) you start to focus more on recruiting for the organization, not specific jobs, and (2) you start recruiting for broader skillsets or competencies, like analytical skill and conscientiousness.  I don't think it's a coincidence that the last 40+ years of assessment research have repeatedly underlined the predictive power of these qualities.

Assessment:  like recruiting, assessment strategies typically target very specific KSAs--knowledge of a particular programming language, knowledge of a particular area of HR law, etc.  If we acknowledge the overlap and relationship between jobs, it changes our assessment strategy.  Like our recruitment strategy, we focus on broader targets such as communication ability and ability to work as part of a team.  I'm not suggesting we shouldn't focus on those things that are critical to job performance and necessary upon entry to the position, but rather that we not prioritize those above more general qualities of the individual.

Compensation:  most often, particularly in civil service systems, compensation is based on the job category someone belongs to and their tenure.  If we instead acknowledge the somewhat artificial nature of our classification structures, it shifts the focus to compensation being based on contribution to the organization.  I recognize pay-for-performance has had inconsistent success, but I suspect that has as much to do with what's being compensated as it does with the concept.

Thinking about recruitment, assessment, and compensation in this way broadens our horizons when it comes to other aspects of talent management, such as career mobility.  It becomes easier to see how transferable skills benefit the organization, increasing its ability to adjust to new conditions, including unexpected turnover.  Instead of staffing focused on narrow KSAs, we fill our organization with people whose strengths allow them to move relatively fluidly between jobs, which helps the individuals as well in their career development.

Am I suggesting that we ignore specific skillsets when recruiting?  Definitely not.  Obviously sometimes you need people with a very particular ability or knowledge.  What I am suggesting is we shift the balance toward a much more inclusive perspective when it comes to the qualities we seek.

What do you think?  Has your organization already acknowledged the overlap between jobs?  Do you already recruit and select based on a broader mindset than simply those KSAs required for a particular position?  Have the long hours spent in front of a tablet warped my perspective?



Footnote: long-time readers will have noticed that I'm not posting nearly as much as I used to, and for that I apologize.  I took a new job last summer and since then my blogging has suffered.  If you want to follow me, I recommend my Facebook page, which I update more often.  Thanks for hanging in there! This year marks the 10-year anniversary of HR Tests and I hope to do something special in celebration.

Thursday, August 13, 2015

Where there's a will, there's a way: OPM shows how to do UIT the right way


On August 10 and 12, PTC-NC was privileged to have Dr. Patrick Sharpe from the U.S. Office of Personnel Management (OPM) deliver a presentation about USA Hire, part of a suite of online platforms that has allowed OPM to revolutionize the way it delivers assessment services to its diverse customers.

The idea to ask Dr. Sharpe to present occurred to me when I read an April 2 article in the Washington Post about USA Hire titled "For federal-worker hopefuls, the civil service exam is making a comeback."  It provides an overview of what OPM has managed to accomplish with its partner, PDRI, in the area of unproctored internet testing (UIT).  Start there if you want to learn more, because it includes some examples of the items--although notably absent is an example of the excellent video avatar-based assessment used for things like situational judgment.

Dr. Sharpe did an excellent job of painting a picture for the audience of how much work was involved in the project, and how important things like stakeholder communication and contract management were to ensuring its success.  He then showed us a demo version of USA Hire, where he led us through what it looks like from the applicant's perspective as they proceed through a series of competency-based assessments.  The item formats range from the traditional (e.g., reading comprehension multiple-choice) to the modern (avatar-based SJTs) to the groundbreaking--at least for the public sector (forced-choice non-cognitive assessment).

Here are some of the key points I took away:

- The technology is just one part of successfully putting a UIT program together; you have to step back and look first at what you're trying to accomplish.  For example, are you interested in whole-person assessment (as OPM is) or simply focusing on certain KSAs?

- USA Hire is the culmination of years of research and analysis, and traces its history back 20-30 years within the federal government.  Translation: don't jump into UIT without careful planning.

- Start with the basics when delivering UIT: make sure the customer has a solid job analysis foundation before jumping to the assessment platform.

- Getting a larger, more influential customer successfully implemented can prompt others to jump on board.

- Realize that, particularly in a decentralized testing environment, you may still end up with a hybrid of different testing approaches following the roll-out of UIT, and this includes T&Es.  But the best way to move the practice is to show what success looks like.

- Consider carefully whether you want to build, buy, or lease the technology.  There are benefits and drawbacks to each.

- Starting with a pilot can be a great way to test the system (no pun intended), and also demonstrate the potential to stakeholders.

- Collaboration between assessment professionals, HR specialists, and vendors is critical.

- Don't underestimate the importance of change management.  Fears (e.g., about losing control) come easily and have to be addressed head-on.

- Organizational and system readiness is very important.  Part of the reason this effort was successful is that hiring organizations were fed up with the extremely low utility (and poor perception) of point-based T&Es.

For someone passionate about assessment and technology, the presentation was educational and motivational.  I walked away, as did others, with a new-found optimism for what sufficient will, resources, and tenacity can accomplish.  It's seductive to focus on what can't be done in the public sector, so to hear and see what can be done reemphasizes the importance of leadership--both in HR and at the top of the organization.

Unproctored internet testing has been talked about for so long; to see it in action, implemented in a research-based way in the public sector, is truly inspirational.  Where there is a will, there is indeed a way.