Sunday, November 24, 2013

Research update

Well, it's that time of year again.  No, not the holidays.  No, not winter (or summer, depending on where you are!).  Research update time!  And I think you will agree with me that there is a lot of interesting research being reported, on traditional topics as well as emerging ones.

First, the November issue of JOB:

- Do transformational leaders increase creative performance and the display of OCBs?  Well, that may depend on how much trait affectivity they had to begin with.  A reminder not to make blanket statements like "X type of leadership causes Y type of behavior."

- There is seemingly endless debate about the utility of personality inventories.  This study reminds us--again--that in assessment research there are few simple answers.  The authors describe how a particular combination of personality measures correlated with task performance among professional employees but not among non-professionals.  (yes, I said task performance)


Next, the Winter issue of Personnel Psychology (free right now!), much of which is devoted to corporate social responsibility (CSR):

- Do perceptions of CSR drive job pursuit intentions?  It may depend on applicants' previous justice experiences and their moral identity.

- Oh, and it may also depend on the extent to which applicants desire to have an impact through their work.

- There is a debate in the assessment center literature about whether competency dimensions are actually being measured or whether scores are simply a function of the type of assessment used.  This study suggests that previous research has been hamstrung by a methodological artifact and that, measured properly, assessment centers do in fact assess dimensions.


Let's switch to the November issue of the Journal of Applied Social Psychology:

- Engagement is all the rage, having seemingly displaced the age-old concept of job satisfaction (we'll see).  This study reminds us that personality plays an important role in predicting engagement (so by extension our ability to increase engagement may be bounded).

- Here's another good one, related to internal motivation.  The authors developed an instrument that helps organizations measure the "perception of the extant motivational climate."  What does that mean?  As I understand it, it's essentially whether most people judge their performance against their peers or against their own internal standards.  It seems the latter may lead to better outcomes, such as less burnout.

- On to something more closely tied to assessment: letters of recommendation (LORs).  There's surprisingly little research on these, but this study adds to our knowledge by suggesting that gender and racial bias can occur when they are reviewed, but that requiring a more thorough review may reduce this (I don't know how likely that is for the average supervisor).

- Finally, a study looking at the evaluation of job applicants who voluntarily interrupted their college attendance.  Unfortunately this does not appear to have been perceived as a good thing, and the researchers found a gender bias such that women with interrupted attendance had the lowest evaluations.


Next, the November issue of Industrial and Organizational Psychology, where the second focal article focuses on eradicating employment discrimination.  This article looks pretty juicy.  I haven't received this one in the mail yet, so I may have more to say after digesting it.  There are, as always, several commentaries following the focal article, on topics including background checks, childhood differences, and social networks.


Okay, let's tackle the 800-pound gorilla: the December issue of IJSA:

- Are true scores and construct scores the same?  According to this Monte Carlo study, it seems how the scales were constructed makes a difference.  (A toy simulation in this spirit follows this list.)

- Can non-native accents impact the evaluation of job applicants?  Sure seems that way according to this study.  But the effect was mediated by similarity, interpersonal attraction, and understandability.

- Here's a fascinating one.  A study of applicants for border rangers in the Norwegian Armed Forces showed that psychological hardiness--particularly commitment--predicted completion of a rigorous physical activity above and beyond physical fitness, nutrition, and sensation seeking.

- Psst....recruiters...make sure when you're selling your organization you stay positive.

- Spatial ability.  It's a classic KSA that's been studied for a long time, for various reasons including its tie to military assessments and the finding that measures can result in sex differences.  But not so fast: spatial ability is not a unitary concept.

- Another study of assessment centers, this time in Russia and using a consensus scoring model.

- And let's round it out with one that should rock some worlds: the authors present results suggesting that subject matter expert judgments of ability/competency importance bore little relation to test validity!  Okay, I'm really curious about what the authors say about the implications, so if anyone reads this one, let us know!
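Since I teased it above: here's a toy Monte Carlo sketch of how scale construction can drive a wedge between a scale's true score and the construct it targets.  This is entirely my own illustration, not the IJSA authors' design--the loadings, item counts, and the "nuisance factor" contaminating some items are all made-up assumptions.

```python
# Toy Monte Carlo: do "true scores" and "construct scores" line up?
# My own illustrative sketch, NOT the published study's design.
# Idea: a scale's true score (the expected observed score) can drift
# away from the target construct when some items also pick up a
# nuisance factor -- and how many such items you include (i.e., how
# the scale is constructed) changes the size of the gap.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                    # simulated examinees
theta = rng.normal(size=n)     # target construct
nuisance = rng.normal(size=n)  # contaminating factor

def true_score(n_pure, n_cont):
    """Expected observed score for a scale built from two item types.

    Pure items load only on theta; contaminated items also load on
    the nuisance factor.  Random error drops out of the true score.
    """
    return n_pure * 0.7 * theta + n_cont * (0.5 * theta + 0.5 * nuisance)

for n_pure, n_cont in [(10, 0), (7, 3), (4, 6)]:
    ts = true_score(n_pure, n_cont)
    r = np.corrcoef(ts, theta)[0, 1]
    print(f"{n_pure} pure / {n_cont} contaminated items: "
          f"r(true score, construct) = {r:.3f}")
```

With all-pure items the correlation is 1.0; as contaminated items replace pure ones, the true score and the construct score part ways--which is the flavor of result a simulation like this can surface.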


Last but not least, the November issue of the Journal of Applied Psychology:

- Another on personality testing, this one underlining the important distinction between broad and narrow traits.  This is another article I'm very curious about.

- Here's one on leadership: specifically, the impact of differing power distance values between leaders and subordinates on team effectiveness.

- And another on nonnative speakers!  This one found discriminatory judgments made against nonnative speakers applying for middle management positions as well as venture funding.  Interestingly, the effect appears to be fully mediated by perceptions of political skill--a topic that is hot right now.

- Okay, let's leave on a big note.  This meta-analysis found an improvement in performance prediction of 50% when a mechanical combination of assessment data was used rather than a holistic (judgment-based) method.  BOOM!  Think about that the next time a hiring supervisor derides your spreadsheet.
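In case "mechanical combination" sounds exotic: it can be as simple as standardizing each predictor and summing.  Here's a minimal sketch--my own simplification with made-up candidate numbers and unit weights, not the meta-analysis's procedure.

```python
# Minimal sketch of "mechanical" combination of assessment data:
# standardize each predictor, then sum with fixed weights, rather
# than letting a reviewer eyeball the whole file.  Unit weights are
# assumed here for simplicity; regression weights are the other
# common choice.  All numbers below are hypothetical.
import statistics

def z_scores(values):
    """Convert raw scores to z-scores (mean 0, SD 1)."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical data for five candidates.
test      = [82, 74, 91, 65, 88]    # cognitive test
interview = [4.0, 4.5, 3.5, 4.2, 3.8]  # structured interview rating
person    = [3.9, 3.1, 4.4, 3.6, 4.1]  # conscientiousness scale

# Mechanical composite: sum of z-scores across predictors.
composites = [sum(zs) for zs in
              zip(z_scores(test), z_scores(interview), z_scores(person))]
ranking = sorted(range(len(composites)),
                 key=lambda i: composites[i], reverse=True)
print("Candidates, best to worst:", ranking)
```

The point isn't the particular weights--it's that the recipe is explicit, repeatable, and immune to a reviewer's mood on a given afternoon.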

Until next time!


Sunday, November 03, 2013

Will robots replace assessment professionals?


Technology and assessment have had a close relationship for years.  From the earliest days of computers, we were using them to calculate statistics, store items, and put applicants into spreadsheets.

Over time as computers advanced, we used them for more advanced tasks, such as multiple regression, applicant tracking, and computer-based testing.

With the advent of the Internet, a whole new area of opportunity opened for us: web-based recruitment and testing.  People began "showing off for the world" by creating personal webpages, commenting on articles, writing blogs, and living their lives through online social networks.  We developed Internet testing, allowing applicants to complete assessments more conveniently.  And new forms of assessment opened up, such as advanced simulations.

We now find ourselves evolving yet again to take advantage of another significant technology advance: the social web.  As millions--and now billions--of people began living their lives publicly on the web, they began developing web identities and leaving footprints all over the place.  It was only a matter of time before recruiters (historically some of the first in HR to embrace technology) figured out how to harvest this information.

One of the hottest trends now in HR technology is scouring the web to seek out digital footprints and making this information readily available to recruiters.  It's the latest iteration of Big Data applied to HR, and it's a creative way to make Internet recruiting more efficient.  Companies like Identified, TalentBin, Gild, and Entelo offer solutions that purport to lay qualified applicants at your doorstep, without all the hassle of spending hours manually searching the web.  They claim an additional benefit of targeting passive job seekers, who are obviously more challenging to attract.

But just exactly how big of an evolutionary step is this?  How big of a solution will this be?  Will this next evolutionary step result in us working ourselves out of a job?

I don't think so.  And let me explain why.

Fundamentally, assessment is about measuring--in a valid, reliable way--competencies key for successful performance in a field, job, and/or organization.  Assessment can be performed using a number of different methods, the biggest ones being the following (after the list, a quick sketch shows what "valid and reliable" look like in numbers):

- Ability testing.  Measuring things like critical thinking, reading comprehension, and physical agility.  These tests seek the boundaries of what individuals are capable of--the maximum they can demonstrate on a variety of constructs.  When properly developed and used, these tests have been shown to be highly predictive of performance, although some can result in adverse impact.

- Interviews.  One of the oldest forms of assessment and still probably the most popular.  Like ability tests, interview questions can seek "maximum" performance (i.e., knowledge-based), but they can also be used to probe creativity (i.e., situational) as well as gain a better understanding of someone's background and accomplishments (i.e., behavioral).  Interviews have also been shown to be valid predictors of performance, although they rely heavily on potentially unrelated competencies such as memory and verbal skills.

- Knowledge testing.  SAT or GRE anyone?  Multiple-choice tests have been around a long time, and with newer technologies like computer adaptive testing, they don't show any signs of going away any time soon.  While these used to be quite common in employment testing, they have fallen out of favor in many places, which is odd given that they too have been shown to be successful predictors of performance (I suspect it is due to their "unsexy" nature and the fact that they require a significant amount of time to prepare).

- Personality inventories.  While these haven't been used nearly as much as the others above, there is an enormous interest in measuring personality characteristics related to job performance.  While they sometimes suffer from a lack of face validity (although contextualizing them seems to help), they have been shown to be useful, and typically demonstrate low adverse impact.

- Applications.  Also extremely popular, and the most relevant for this topic.  The assumption here is that qualifications and (like behavioral questions) past accomplishments predict future performance.  There is potential truth here, but as we know, relying on applications (and resumes) is fraught with risk, from irrelevant content to outright lies.
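As promised, whatever the method, "valid" and "reliable" aren't just buzzwords--both are quantities you can compute.  Here's a minimal sketch using simulated data (everything below is made up for illustration): Cronbach's alpha for internal-consistency reliability, and a simple correlation with job performance for criterion validity.

```python
# Minimal sketch of the two workhorse statistics behind "reliable
# and valid": Cronbach's alpha for reliability, and a Pearson
# correlation between test scores and performance for criterion
# validity.  All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(7)
n_people, n_items = 500, 10
ability = rng.normal(size=n_people)

# Simulated test: each item = ability + random noise.
items = ability[:, None] + rng.normal(size=(n_people, n_items))
total = items.sum(axis=1)

# Simulated criterion: performance driven partly by ability.
performance = 0.5 * ability + rng.normal(size=n_people)

# Cronbach's alpha:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total))
k = n_items
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / total.var(ddof=1))
validity = np.corrcoef(total, performance)[0, 1]
print(f"Cronbach's alpha:     {alpha:.2f}")  # internal consistency
print(f"Criterion validity r: {validity:.2f}")  # test vs. performance
```

Real validation work is obviously messier (range restriction, criterion quality, sample size), but this is the basic arithmetic underneath the claims.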


An important thing all of these assessment types have in common is that they are employer-generated.  One of the fundamental changes society has seen in the last ten years is an enormous shift to user-generated content at the grass-roots level.  Anyone can have a blog, regardless of qualifications, and many blogs of questionable veracity are read more widely than those written by people who actually know what they're talking about.  Content has become, if it wasn't already, king/queen.

But therein lies the fundamental challenge for aggregating digital footprints/content for use in assessment.  Relying on user-generated content, whether from social networks, blogs, comments, or other sources, is predicated on the assumption that qualified candidates are leaving digital versions of themselves.  In places you have access to.  And that it is accurate.  And predicts performance.  This may work decently in certain industries, like IT, where it may be nearly universal--and expected--that professionals live their lives publicly on the web.  But many people in many different professions have neither the time nor the inclination to reveal their qualifications online.  In contrast, you can always test someone's ability, and a significant advantage of ability testing is that it gives candidates an opportunity to demonstrate what they can do even if they haven't had the chance to do it yet.

I should note that using this information for recruitment is a different--but related--animal.  In this context, concerns about replacing tried-and-true assessment methods are moot.  However, we should carry the same concerns about content generation, both frequency and veracity.

As I've said before, taking technology to its logical endpoint would result in a massive database of everyone on the planet and their competency levels.  This database would empower users to generate and control their content while allowing organizations the widest possible field of qualified candidates.  At this point I'm aware of only one thing that comes close, and honestly I don't see anything approaching this scope anytime soon, particularly with more and more concerns over digital privacy.

Which leaves us...where exactly?  Will robots replace assessment professionals?  Not anytime soon.  At least not if we want hiring to work.  But we should be active observers of these trends, looking both for opportunities as well as pitfalls.  We shouldn't fear technology, but rather the way it's used.  Any important endeavor that requires human analysis should use technology as an assistive tool, not a sexy replacement.

I also want to give props to these companies for taking advantage of user-generated content.  It's a much more efficient way of assessing (i.e., it doesn't require applicants to in some sense double their efforts by completing a separate assessment).  And it's not surprising that these companies have sprouted up, given the trend in HR to automate user-initiated activities that lend themselves to automation, such as leave requests, benefit changes, and training.  But importantly, the science of whether digital footprints predict real-world job performance is in its infancy.  With something as important--operationally as well as legally--as hiring, we have to be careful that our addiction to technology doesn't outstrip our evidence that it works.