Well, it's that time of year again. No, not the holidays. No, not winter (or summer, depending on where you are!). Research update time! And I think you will agree with me that there is a lot of interesting research being reported, on traditional topics as well as emerging ones.
First, the November issue of JOB:
- Do transformational leaders increase creative performance and the display of OCBs? Well, that may depend on how much trait affectivity they had to begin with. A reminder not to make blanket statements like "X type of leadership causes Y type of behavior."
- There is seemingly endless debate about the utility of personality inventories. This study reminds us--again--that in assessment research there are few simple answers. The authors describe how a particular combination of personality measures correlated with task performance among professional employees, but not non-professionals. (yes, I said task performance)
Next, the Winter issue of Personnel Psychology (free right now!), much of which is devoted to corporate social responsibility (CSR):
- Do perceptions of CSR drive job pursuit intentions? It may depend on the applicant's previous justice experiences and their moral identity.
- Oh, and it may also depend on the extent to which applicants desire to have an impact through their work.
- There is a debate in the assessment center literature about whether competency dimensions are being measured or if it's purely a function of the assessment type. This study suggests that previous research has been hamstrung by a methodological artifact and that measured properly, assessment centers do in fact assess dimensions.
Let's switch to the November issue of the Journal of Applied Social Psychology:
- Engagement is all the rage, having seemingly displaced the age-old concept of job satisfaction (we'll see). This study reminds us that personality plays an important role in predicting engagement (so by extension our ability to increase engagement may be bounded).
- Here's another good one and it's related to internal motivations. The authors developed an instrument that helps organizations measure the "perception of the extant motivational climate." What does that mean? As I understand it, it's essentially whether most people are judging their performance against their peers or against their own internal standards. It seems the latter may lead to better outcomes, such as less burnout.
- On to something more closely tied to assessment: letters of recommendation (LORs). There's surprisingly little research on these, but this study adds to our knowledge by suggesting that gender and racial bias can occur in their review, but requiring a more thorough review of them may reduce this (I don't know how likely this is for the average supervisor).
- Finally, a study looking at the evaluation of job applicants who voluntarily interrupted their college attendance. Unfortunately this does not appear to have been perceived as a good thing, and the researchers found a gender bias such that women with interrupted attendance had the lowest evaluations.
Next, the November issue of Industrial and Organizational Psychology, where the second focal article focuses on eradicating employment discrimination. This article looks pretty juicy. I haven't received this one yet in the mail, so I may have more to say after digesting it. There are, as always, several commentaries following the focal article, on topics including background checks, childhood differences, and social networks.
Okay, let's tackle the 800-pound gorilla: the December issue of IJSA:
- Are true scores and construct scores the same? According to this Monte Carlo study, it seems how the scales were constructed makes a difference.
- Can non-native accents impact the evaluation of job applicants? Sure seems that way according to this study. But the effect was mediated by similarity, interpersonal attraction, and understandability.
- Here's a fascinating one. A study of applicants for border rangers in the Norwegian Armed Forces showed that psychological hardiness--particularly commitment--predicted completion of a rigorous physical activity above and beyond physical fitness, nutrition, and sensation seeking.
- Psst....recruiters...make sure when you're selling your organization you stay positive.
- Spatial ability. It's a classic KSA that's been studied for a long time, for various reasons including its tie to military assessments and the finding that measures can result in sex differences. But not so fast: spatial ability is not a unitary concept.
- Another study of assessment centers, this time in Russia and using a consensus scoring model.
- And let's round it out with one that should rock some worlds: the authors present results that suggest that subject matter expert judgment of ability/competency importance bore little relation to test validity! Okay, I'm really curious about what the authors say about the implications, so if anyone reads this one, let us know!
Last but not least, the November issue of the Journal of Applied Psychology:
- Another on personality testing, this one underlining the important distinction between broad and narrow traits. This is another article I'm very curious about.
- Here's one on leadership: specifically, the impact of different power distance values between leaders and subordinates on team effectiveness.
- And another on nonnative speakers! This one found discriminatory judgments made against nonnative speakers applying for middle management positions as well as venture funding. Interestingly, it appears to be fully mediated by perceptions of political skill--a topic that is hot right now.
- Okay, let's leave on a big note. This meta-analysis found an improvement in performance prediction of 50% when a mechanical combination of assessment data was used rather than a holistic (judgment-based) method. BOOM! Think about that the next time a hiring supervisor derides your spreadsheet.
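To make the mechanical-vs-holistic distinction concrete, here's a minimal sketch in Python. The predictors, scores, and weights below are all invented for illustration; in practice the weights would come from a validation study, not thin air. The point is simply that a mechanical combination applies the same fixed formula to every candidate, which is exactly what a spreadsheet does and a gut feeling doesn't.

```python
# A toy illustration of mechanical (formula-based) combination of
# assessment scores, as opposed to a holistic "overall impression."
# All predictors, scores (standardized), and weights are made up.

def mechanical_score(scores, weights):
    """Combine assessment scores using fixed, predetermined weights."""
    return sum(weights[k] * scores[k] for k in weights)

candidates = {
    "A": {"cognitive": 1.2, "structured_interview": 0.3, "conscientiousness": 0.8},
    "B": {"cognitive": 0.4, "structured_interview": 1.1, "conscientiousness": -0.2},
}

# Hypothetical weights (e.g., regression weights from a validation study)
weights = {"cognitive": 0.5, "structured_interview": 0.3, "conscientiousness": 0.2}

# Every candidate is ranked by the same formula, every time
ranked = sorted(candidates, key=lambda c: mechanical_score(candidates[c], weights),
                reverse=True)
print(ranked)  # -> ['A', 'B']
```

Nothing fancy, and that's the point: consistency is the feature, not a bug.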
Until next time!
Sunday, November 24, 2013
Sunday, November 03, 2013
Technology and assessment have had a close relationship for years. From the earliest days of computers, we were using them to calculate statistics, store items, and put applicants into spreadsheets.
Over time as computers advanced, we used them for more advanced tasks, such as multiple regression, applicant tracking, and computer-based testing.
With the advent of the Internet, a whole new area of opportunity opened for us: web-based recruitment and testing. People began "showing off for the world" by creating personal webpages, commenting on articles, writing blogs, and living their lives through online social networks. We developed Internet testing, allowing applicants to complete exams more conveniently. And new forms of assessment opened up, such as advanced simulations.
We now find ourselves evolving yet again to take advantage of another significant technology advance: the social web. As millions and billions of people began living their lives publicly on the web, they began developing a web identity and leaving footprints all over the place. It was only a matter of time before recruiters (historically some of the first in HR to embrace technology) figured out how to harvest this information.
One of the hottest trends now in HR technology is scouring the web to seek out digital footprints and making this information readily available to recruiters. It's the latest iteration of Big Data applied to HR, and it's a creative way to make Internet recruiting more efficient. Companies like Identified, TalentBin, Gild, and Entelo offer solutions that purport to lay qualified applicants at your doorstep, without all the hassle of spending hours manually searching the web. They claim an additional benefit of targeting passive job seekers, who are obviously more challenging to attract.
But just exactly how big of an evolutionary step is this? How big of a solution will this be? Will this next evolutionary step result in us working ourselves out of a job?
I don't think so. And let me explain why.
Fundamentally, assessment is about measuring--in a valid, reliable way--competencies key for successful performance in a field, job and/or organization. Assessment can be performed using a number of different methods, the biggest ones being:
- Ability testing. Measuring things like critical thinking, reading comprehension, and physical agility. These tests seek the boundaries of individuals, the maximum they are capable of demonstrating related to a variety of constructs. When properly developed and used, these tests have been shown to be highly predictive of performance, although some can result in adverse impact.
- Interviews. One of the oldest forms of assessment and still probably the most popular. Like ability tests, interview questions can seek "maximum" performance (i.e., knowledge-based), but they can also be used to probe creativity (i.e., situational) as well as gain a better understanding of someone's background and accomplishments (i.e., behavioral). Interviews have also been shown to be valid predictors of performance, although they rely heavily on potentially unrelated competencies such as memory and verbal skills.
- Knowledge testing. SAT or GRE anyone? Multiple-choice tests have been around a long time and, with newer technologies like computer adaptive testing, don't show any signs of going away any time soon. While these used to be quite common in employment testing, they have fallen out of favor in many places, which is odd given that they too have been shown to be successful predictors of performance (I suspect it is due to their "unsexy" nature and the fact that they require a significant amount of time to prepare).
- Personality inventories. While these haven't been used nearly as much as the others above, there is an enormous interest in measuring personality characteristics related to job performance. While they sometimes suffer from a lack of face validity (although contextualizing them seems to help), they have been shown to be useful, and typically demonstrate low adverse impact.
- Applications. Also extremely popular, and the most relevant for this topic. The assumption here is that qualifications and (like behavioral questions) past accomplishments predict future performance. There is potential truth here, but as we know relying on applications (and resumes) is fraught with risks, from irrelevant content to outright lies.
An important thing that all of these assessment types have in common is that they are employer-generated. One of the fundamental changes society has seen in the last ten years is an enormous shift to user-generated content at the grass roots level. Anyone can have a blog, regardless of qualifications, and many of questionable veracity are read more than those written by people who actually know what they're talking about. Content has become, if it wasn't already, king/queen.
But therein lies the fundamental challenge for aggregating digital footprints/content for use in assessment. Relying on user-generated content, whether from social networks, blogs, comments, or other sources, is predicated on the assumption that qualified candidates are leaving digital versions of themselves. In places that you have access to. And that it is accurate. And predicts performance. This may work decently in certain industries, like IT, where it may be nearly universal--and expected--that professionals live their lives publicly on the web. But for many people in many different professions, they may have neither the time nor the inclination to reveal their qualifications online. In contrast, you can always test someone's ability, and a significant advantage of ability testing is it gives candidates an opportunity to demonstrate what they can do even if they haven't had the chance to do it yet.
I should note that using this information for recruitment is a different--but related--animal. In this context, concerns about replacing tried-and-true assessment methods are moot. However, we should carry the same concerns about content generation, both frequency and veracity.
As I've said before, taking technology to its logical endpoint would result in a massive database of everyone on the planet and their competency levels. This database would empower users to generate and control their content, but allow organizations the widest possible field of qualified candidates. At this point I'm aware of only one thing that comes close, and honestly I don't see anything approaching this scope anytime soon, particularly with more and more concerns over digital privacy.
Which leaves us...where exactly? Will robots replace assessment professionals? Not anytime soon. At least not if we want hiring to work. But we should be active observers of these trends, looking both for opportunities as well as pitfalls. We shouldn't fear technology, but rather the way it's used. Any important endeavor that requires human analysis should use technology as an assistive tool, not a sexy replacement.
I also want to give props to these companies for taking advantage of user-generated content. It's a much more efficient way of assessing (i.e., it doesn't require applicants to in some sense double their efforts by completing a separate assessment). And it's not surprising that these companies have sprouted up, given the trend in HR to automate user-initiated activities that lend themselves to automation, such as leave requests, benefit changes, and training. But importantly, the science of whether digital footprints predict real-world job performance is in its infancy. With something as important--operationally as well as legally--as hiring, we have to be careful that our addiction to technology doesn't outstrip our evidence that it works.
Wednesday, October 09, 2013
I just finished attending the 2013 HR Technology conference in Las Vegas, something I've wanted to do for many years.
It was at a great venue (Mandalay Bay), well organized, and full of industry experts. And vendors. Lots and lots and lots and lots of vendors in an enormous expo hall. I'm pretty sure the city I was born in could have fit in there.
But I'm getting ahead of myself. For anyone interested in recruiting products or HCM suites, this is the place to go to get educated. Or sold to. Ok, a little of both.
What about assessment, you ask? Well, Hogan had a presence...I saw PAN there...but the floor was dominated by the big HCM vendors like SuccessFactors/SAP, Oracle, and Workday (shown at the top, who had a great booth btw).
While sitting in a milieu of thought leaders like Don Tapscott, above on the jumbotron (one of the highlights--check out this video he talked about in the context of ease of collaboration driving social change), industry experts, and product VPs naturally results in some learning through osmosis, I came away feeling, to misquote Flynn Rider, "kinda like the color brown." It took me a while to figure out why I felt that way, but it came from a place of personal disconnect.
See, much like putting lipstick on a pig doesn't change the fact that it's still a pig, you have to put gas in that fancy new car. What the hell am I talking about? Lemme 'splain. Getting the right people into your organization is critically important. You have to have the right ingredients to make that fancy cake. You have to get the right people on the bus. You have to have coal to make diamonds. Pick your fav metaphor.
But that highly skilled, friendly, conscientious person you just hired ain't gonna go very far--certainly not as far as they could--without a focus on what happens after they start. And it's here that I didn't see nearly as much tech innovation as we need.
Creating recruiting and employee tracking tools for HR pros is great. We need those tools. Particularly ones that don't make you go cross-eyed trying to figure out the GUI. But here's the thing: HR is a support function. We exist to catalyze other functions. Our product is a higher performing organization, and we get that through working with our business units. This doesn't mean HR's not important or strategic; that's just reality. So tech products should, at the end of the day, really be about program supervisors and managers. They don't have to be USED by them (although that would be ideal), but the goal should be to make their lives easier.
But you would think attending this conference that attracting the right people and getting their information into your HCM was the be-all and end-all of HR management. Is recruitment and hiring the most important step of HR? Quite possibly. But it sure as heck ain't the only issue on my mind as an HR manager.
But I suspect there's a lot more innovation to be had here. Ways of connecting HR and supervisors. Getting HR jazzed about what they do. Keeping employees engaged, or at least as much as we can. THAT'S what I want to see. That's what I hope to see in the coming years. I fervently hope that we move away from talking about a talent shortage and focus more on making sure our houses are in order.
Am I being unfair? Most likely. Were there products there that focused on things beyond recruitment, CRM, and core HR? Sure, including a cool one that facilitates volunteerism. But they felt like an afterthought. Maybe the market gets what it demands, but I suspect it's just what's sexy. Recruitment is sexy. Progressive discipline? Not so much. But which one takes up more of our time? Just like you don't want to hire too many pigs, you better buy some gas for that car or you won't get very far down the road. That's what I need--more fuel.
By the way, a lot of people at this conference really need to have their hashtags and @ signs taken away, I think they're addicted. At the very least they need to watch the Jimmy Fallon/Justin Timberlake vid.
Sunday, September 22, 2013
Okay, it's mega research update time!
First off, the September IJSA; lots of good stuff, including:
- a constructed response multimedia test for entry-level police resulted in minor ethnic group differences
- panel interviews once again prove their superiority (also: more on interview reliability)
- further analysis of the Hogan Personality Inventory with a Spanish sample
- how do applicants form impressions of person-organization fit? This study suggests contextual factors may be more important than interview content
- circumplex traits (combinations of personality factors) may predict counterproductive work behaviors better than simple FFM scores
- speaking of CWBs, conditional reasoning tests may not be the best predictor of them
- last but not least, what looks to be a good overview of competency modeling
Next up, the September JAP:
- an interesting, large study of the impact of candidate reactions on test scores, organizational perception, and criterion-related validity
- a study of the dynamics of the job search process and the impact of efficacy and focus
- highlighting certain factors during an interview may reduce discrimination toward pregnant applicants
Next, the Autumn 2013 Personnel Psychology:
- first, an important study of self-efficacy that suggests it is a product of past performance and not necessarily a predictor of future performance (free right now!)
- second, a study indirectly on selection that suggests that age diversity in work groups leads to more emotion regulation
Let's move on to the September JASP:
- okay, this may be a bit of a stretch, but if you're considering interviewing for a position as a dentist or a lawyer, make sure you suit up
- knowledge of service encounters predicts service effectiveness (and is related to conscientiousness)
- can use of biodata instruments result in adverse impact? This study suggests so, but also suggests that removal of problematic items has no impact on validity
Starting to wrap up, let's move to the October JOB:
- the perceived fairness of promotion practices is one of those "bubbling beneath the surface" issues in most organizations. This study found that perceptions are impacted by having been promoted in the past, organizational commitment, and ego defensiveness. Good stuff.
- do more creative sales agents produce higher sales? Perhaps only when there is a high quality of leader-member exchange.
- is validity generalization overgeneralized? (say that five times fast) These folks seem to think so.
In the home stretch, from the September Psychological Science:
- older employees may have lower average cognitive performance, but it's more consistent
- spatial ability has a valuable role to play in the development of creativity, and can predict things like patents and publications
Second to last, for you stats geeks out there, a study that suggests that t-tests can be used reliably with small samples, thank you very much
Finally, something that has nothing to do with selection but is a nominee for the 2013 HR Tests Coolest Study Award, and something we all are very familiar with: time bandits (no, not the movie).
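For the stats geeks, here's a quick illustration of what that small-sample t-test looks like in practice, using only the Python standard library. The two groups and their values are entirely made up; the point is just that the pooled t statistic computes the same way whether n is 5 or 500.

```python
# A sketch of a two-sample (Student's) t-test on deliberately small
# samples, standard library only. Groups and values are invented.
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Student's t statistic for two independent samples (equal variances assumed)."""
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

group1 = [4.1, 5.0, 4.8, 5.3, 4.6]   # n = 5
group2 = [3.2, 3.9, 3.5, 4.0, 3.6]   # n = 5

t = pooled_t(group1, group2)
print(round(t, 2))  # -> 4.53
```

Whether the resulting p-value behaves well at these sizes is exactly what the cited study examines; the arithmetic itself has no minimum n.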
Thursday, September 12, 2013
I'll have a research update for you as soon as I get enough content, but today I wanted to give you a preview of a talk I'll be giving in San Francisco on September 23rd titled "Hiring interviews: You're probably doing them wrong. A motivational talk." The topic was inspired somewhat by Google's semi-recent announcement regarding their internal research on the issue.
The sad truth is that even though interviews are one of the most popular forms of personnel assessment, they are often done wrong. Not necessarily through any ill intent, but because of two main factors:
(a) they're harder to develop than many people think
(b) most people think they're great interviewers
On the first point, interviews are deceptively simple. Many people assume that if they can talk to other people, they can interview someone.
Wrong and wrong.
Interviewing isn't talking to people. Well, okay, I suppose literally it is. But interviewing really is about MEASURING people.
If I asked you to use a set of measuring spoons to give me 1 tsp of sugar, you would have no problem, right?
But what if I asked you to use a Halloway P36 spectrometer* to measure photon radiation? You might need some help. It's all about what you're measuring.
On the second point, research has well established that people are generally very bad at accurately reporting their skill levels--across a wide variety of disciplines.
To make matters worse, research has also established that interviewers tend to get addicted to bad interviews: when things turn out poorly, they tend to blame outside factors rather than the interview format.
But there is good news.
Specifically, we know how to do interviews the right way. Namely by structuring them. What does this mean? There are several key features. Here are a few:
1. Use high quality question formats. This means behavioral, hypothetical, knowledge-based, and background questions. Not puzzle questions or gimme questions.
2. Be consistent. Each candidate should be asked the same questions in the same order (with limited variations for follow-up if needed).
3. Use a detailed rating scale. When there are no criteria to compare answers to, scoring tends to be inconsistent, reducing the utility of the whole process.
4. Base the questions on an analysis of the job. I probably should have made this #1. Everything you do in your selection process, including interviews, should be driven by the key knowledge, skills, abilities, and other characteristics required for good job performance.
5. Train the interviewers. Because of the wide variety of biases that plague interviewers, it's critical that they be aware of these tendencies and guard against them.
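To make points 2 and 3 concrete, here is a minimal sketch of what a structured interview guide looks like as a data structure: a fixed question order plus behaviorally anchored rating scales. The questions, competencies, and anchors are invented for illustration only.

```python
# A minimal structured-interview guide: fixed question order plus
# behaviorally anchored rating scales. All content below is invented.

INTERVIEW_GUIDE = [
    {
        "question": "Describe a time you resolved a conflict with a coworker.",
        "competency": "interpersonal skill",
        "anchors": {1: "Blamed others; no resolution described",
                    3: "Resolved the issue with some guidance",
                    5: "Proactively resolved it and repaired the relationship"},
    },
    {
        "question": "A customer demands a refund our policy forbids. What do you do?",
        "competency": "judgment",
        "anchors": {1: "Ignores policy or escalates needlessly",
                    3: "Follows policy but offers no alternatives",
                    5: "Follows policy and offers workable alternatives"},
    },
]

def score_candidate(ratings):
    """Every candidate is rated on the same questions, in the same order."""
    assert len(ratings) == len(INTERVIEW_GUIDE), "rate every question"
    assert all(1 <= r <= 5 for r in ratings), "use the anchored 1-5 scale"
    return sum(ratings) / len(ratings)

print(score_candidate([4, 3]))  # -> 3.5
```

The discipline lives in the structure: interviewers rate against the anchors, not against each other's impressions, which is what makes scores comparable across candidates.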
These are just some of the ways we can avoid bad interviews. Of course another strategy to increase your success is to look beyond interviews at things like tests that more accurately mirror the job (i.e., performance tests). But even following these guidelines can radically improve your results.
In other words, there is hope.
* I made this up. If this is a real instrument I'm better than I thought!
Posted by BryanB at 9/12/2013
Sunday, August 11, 2013
Recently at a certain employer, let's call it the Department of Inertia (DOI), a mandate was passed down from on high to "do more recruiting." The chaos that ensued, from HR to line manager, is worth discussing.
First, some background. DOI had historically used the "post and pray" approach to recruiting. Meaning they put job ads on their careers page and assumed (hoped?) that qualified candidates would come running.
In general there was very little formal establishment of relationships with schools or professional groups. Branding was minimal and uniform. Unique attractants weren't highlighted. Advertisements were about as exciting as reruns of Gilligan's Island. Wait, less exciting than that, Gilligan had a rockin' hat.
If a bad candidate pool resulted, the assumption was bad timing.
One day, the Executive Office passed down a mandate that all recruitments must henceforth include...ya know... recruiting. Specifically a "recruitment plan." (I know, earth-shattering, right?)
Here's what happened after that, not necessarily in this order:
- Program shared services staff freaked out. What's a recruitment plan? What does that mean? Can we see one? Can you make one for us?
- Line supervisors freaked out. What's this about recruiting? Aren't we already advertising? Who can do that for me?
- HR was deluged with calls seeking clarification. Since no clear instruction had been given, they scrambled to provide consultation.
- The one half-time recruiter in HR (budget cuts, ya know) quickly became overwhelmed. Because most of the other analysts were transactional or focused on performance management, recruitment expertise was lacking. (Don't even get me started about resource balancing between selection and discipline)
- Slowly, ever so slowly, a strategy coalesced to provide short- and long-term advice to programs on how to best recruit, particularly for hard-to-fill positions. Discussions, long overdue, about things like LinkedIn, craigslist, and Dice, were had.
- Everyone breathed a sigh of relief and, much to everyone's shock, the world did not end.
Some lessons learned from the whole affair:
- Recruiting never dies*. I don't care if there's a depression and the unemployment rate is 20%. Branding is always important. Attractive but accurate job ads are always important. Your careers page is always important.
- HR consultants should have recruiting as a core competency. If your customer gets you on the phone, you should be able to speak half-intelligently about LinkedIn.
- Times change. One year the focus may be budget cutbacks, but the next things may turn around and suddenly you're hiring again. HR and shared services providers need to be flexible and agile, and not assume the future will look like the present. In fact, absent a time machine, I can pretty much guarantee you it won't.
- Supervisors should have to demonstrate they've given recruiting half a second of thought. I don't care if they think they know who they want. Ingrain high-quality recruiting and assessing into your organizational culture and hold people accountable for ensuring it.
- Recruitment should be a core focus of any executive office. Talent is the one thing standing between you and sustainable success. To the extent that you care about your organization succeeding.
- Negative is stronger than positive. Left to their own devices, hiring supervisors naturally gravitate toward avoiding poor performers rather than attracting great ones. HR plays a key role in sustaining the focus on finding the diamonds in the rough.
- If you're going to mandate "more recruitment", give folks a little direction on what you mean, how success will be measured, and when you expect results.
Those of you in HR leadership positions know that not overreacting and keeping a cool head are key competencies for success as a manager. Sudden changes in the direction of organizational HR practices are no exception. It may hit a little closer to home, but that makes your calm and logical approach all that much more important. Rise to the occasion.
* This may have made a better title for this post. Although I almost went with recruitapocalypse, which now sounds like a dinosaur.
Posted by BryanB at 8/11/2013
Thursday, July 25, 2013
At the time, I remember marveling at the impact that one person can have on others. And how important leadership is, both personally and organizationally.
Leadership is one of the oldest topics in organizational behavior, yet remains as popular today as ever. Why? Many reasons:
- Successful leadership can be hard to measure. Surveys are of course one way to measure, but they have inherent problems such as response rate. Productivity is a good one as well, but that can be defined in many ways and is also influenced by a wide variety of environmental factors.
- Leaders can be hard to identify, because of the aforementioned factors. A simple interview won't cut it when it comes to assessing for a leadership role. Even assessment centers are only so effective when it comes to predicting performance.
So how do you identify good leaders? Look at the footprint they leave behind them. For example:
- Good leaders promote their subordinates. They support their team members, mentor them, and prepare them for promotional opportunities. They make it clear that if you succeed, you have a career path in the organization.
- Or help subordinates promote elsewhere. In situations where a promotional position doesn't exist at the time, good leaders help star employees seek fame and fortune elsewhere--and wish them well. On the flipside, bad leaders watch star performers transfer elsewhere for no increase in pay or responsibility.
- Good leaders get boomerangs. Because they support and mentor their employees--even helping them get promotions elsewhere when necessary--they're more likely to be a return destination for said employees in the future.
- Good leaders are followed when they move. They are so well thought-of that high performers are willing to follow them to new positions, units, or organizations even though it means big changes and possibly no pay increase.
- Good leaders attract internal star performers. When you ask your high performers where they want to work, they indicate units overseen by good leaders.
For external candidates, there are at least a couple ways:
1) Ask them. Yep, you can integrate this into your interview process. Ask questions such as, "Which high performers in your organization have you personally recruited and hired?" "Which have you personally promoted?" "Which have returned to work for you after working somewhere else?"
2) Integrate this into your reference checking process. Find out who the high performers are that worked under your applicant and ask them questions that get at the points above. This has the additional benefit of potentially identifying applicants you weren't even considering!
There is at least one important caveat to all this: it's unlikely good leaders will leave this footprint in a short amount of time. And if a good leader is entering into a bad environment or one that needs to be turned around, it will likely take years for the impact to be felt.
So like all assessment methods, this approach will work best in certain situations, like executive-level selection. But it's something to add to your tool belt as an assessment professional.
And something to think about in your group, organization, and--for those of you that lead--your daily work life.
Posted by BryanB at 7/25/2013