Monday, March 28, 2011

Seeing through the candidates' eyes

"A person-centric work psychology will look at the world through the eyes of the worker. It will take work in the raw, all the sights and sounds and tasks and people that run by the worker everyday. The worker experiences all of it, not demarcated parts of it. She organizes and partitions to be sure, but it is her organization that must interest us." - Weiss & Rupp

"Before you criticize someone, you should walk a mile in their shoes. That way when you criticize them, you are a mile away from them and you have their shoes." - Jack Handy

The two quotes above represent different ways of looking at someone else. The first, from a recent journal article, suggests we benefit from studying the world as others see it. The second, while humorous, suggests an alternative--and more common--worldview, where people are really just a means to our ends. Unfortunately in the world of recruitment and selection, all too often we're stealing shoes rather than practicing (and studying) empathy.

Fortunately there's been quite a bit of talk lately about seeing things from the perspective of the applicant, particularly in the recruiting world. Joe Murphy, among others, has done a lot of writing and thinking about this. And in the latest issue of I/O Psychology, Howard Weiss and Deborah Rupp bring our attention to the fact that I/O research tends to treat people as objects (e.g., as a collection of KSAs, or a test score). They also point out that much of I/O research is done for the "collective purpose," driven explicitly to serve organizational needs rather than beginning with the person. It's one of the best-written, most thought-provoking pieces I've read in a while. And it convinced me that we need to spend more time studying the experience of working--or, for our purposes, the experience of applying for a job.

There is a lot of emphasis these days on reaching out to prospective applicants via social media. I'd like to respectfully submit that for many organizations this is putting the cart before the horse. In fact, it may cause the horse to trample you, because we don't even do a very good job communicating with EXISTING applicants. You could very likely be trying to build a brand or reputation that's already been sullied. Yeah...good luck with that.

And think about how important it is to treat applicants right. Not only can you get easily razzed on Facebook, you're dealing with people at one of their most vulnerable times. Think about when you really NEED an organization and how it impacts your sensitivity. What state of mind are you in during a medical emergency? When your car needs to be fixed? When you REALLY need a plumber? Looking for a job, whether because you're unhappy in your current one or, even worse, because you don't have one, puts people in a vulnerable, and sensitive, state of mind.

Do we see things through the eyes of our applicants? Do we treat them as customers? Because in many cases they literally are, and if they're not, they're connected to people who are.

If we do see them as customers, then why...

Do we not get back to them, even with a simple automated acknowledgement, when they apply?

Do we not give them any feedback about their performance on assessments?

Do we structure our processes and career portals around what makes our lives easier, not theirs?

Do we wring our hands about how hard it is to weed through piles of applications, when most ads do a horrible job of serving their purpose (attracting the RIGHT applicants) and we give people very few tools to use to self screen (e.g., realistic job previews)?

In our defense, there is research in this area (e.g., on perceptions of selection tools and websites, and on organizational attraction), but there are many more opportunities, as the commentaries point out. For example:

- What does it feel like to be recruited? To be passed over?

- What does the experience of taking a particular assessment for a particular opportunity feel like? Exciting? Frustrating? Confusing? Engaging?

- What are applicants attending to while taking a test? (we assume it's just the test)

- Does having fun while being assessed improve perceptions of the organization, or lower test anxiety?

- How does an applicant describe their assessment process to others?

If we want to explicitly tie these to organizational objectives, we could take the results of this research and answer questions like: How do candidate/applicant perceptions translate into socialization, turnover, attitudes, and job performance?

From a broader perspective, this article reinforces the point that most organizations need to adopt much more of a systems perspective on the selection process. And there are other implications. Perhaps if we knew more about the experience of working, it would help us understand job performance better--and that would help explain the large amount of variance that goes unaccounted for by formal testing.

To some extent, none of this is new. Recruitment and selection research is all about understanding applicants. But it does so through the lens of the organization--in other words, it looks at what applicants bring to the table. It's a slight shift to think about studying applicants as individuals and focusing on their experience. But doing so might help both organizations and individuals find a better match.

Sunday, March 20, 2011

Evidence-based I/O: Where are we? More importantly, who?

The March 2011 issue of Industrial and Organizational Psychology contains two excellent focal articles. One, which I plan on writing about in the future, is about how the field needs to spend more time studying the experience of working rather than treating workers as objects. It's one of the best, most thought-provoking articles I've read in this journal.

But today I'll focus on the first article, by Briner and Rousseau (B&R), which is about the evidence-based practice (or lack thereof) of I/O psychology. This topic is obviously near and dear to me, given this blog, so I had a lot of reactions while reading it. I'll share some of those with you today.

B&R argue that the practice of I/O psychology is not strongly evidence-based--at least not in the sense that other professions (e.g., medicine) are becoming. This may surprise you given the history of I/O psychology and its strong grounding in sound research methods. But we're not talking about the quality of research (although, as some of the commentaries point out, that is a relevant question), but rather about how well we're doing at putting research findings into practice.

The authors point out that there are many "snake-oil" peddlers out there who claim to have evidence for what they do, and that the concept of evidence-based practice is neither widely used nor well known in I/O circles. Even more fundamentally, we have no data on what percentage of I/O practices are based on solid scientific evidence. They point out that part of the problem is that the latest research findings and research summaries aren't accessible--a point I obviously agree with, given the purpose of this blog. They even suggest a tool (systematic reviews) that could help us get there.

Yet it's one particular point, referenced by both the focal and commentary authors but not treated in depth, that I'd like to focus (okay, rant) on. I'll use a quote from B&R to get us started: "It is not always obvious to practitioners, certainly not the least experienced or less reflective, how exactly to apply the principles identified in such research." This comes as close as anything to what is, IMHO, the real issue: who uses I/O research in organizations.

Before I go too much further, I'd like to do something that I don't believe any of the authors did: acknowledge my bias in approaching this topic. I'm someone who has an advanced degree in I/O (a Master's, which, paraphrasing the focal authors, puts me slightly above the village idiot), but who works in the trenches assisting supervisors and managers alongside many who have little if any formal education in either I/O or HR. This obviously impacts the issues I see as important and my take on them. 'Nuff said.

Now, back to my point. I believe the authors in this issue fail to recognize something very important: although I/O research is used by I/O consultants and academics, and to a lesser extent by mid-level managers, there's a large group going unnoticed: HR practitioners. These individuals are, I would argue, the primary "users" in most organizations--whether they know it or not. These are the folks whom supervisors count on to provide expertise on a wide variety of HR/IO issues, including recruitment and selection. Yes, many organizations use consultants who have formal training in I/O, but I think we can all agree that in terms of the number of day-to-day HR decisions that get made, we're primarily talking about supervisors and HR practitioners. Only after recognizing this point do answers to the question "how do we make I/O practice more evidence-based?" become clearer.

Let me throw out a couple questions just to stimulate you a bit more:

1) Why is this one of only a handful of "publications" devoted to making research more accessible to HR practitioners? (and heck, I don't even know how many of you are practitioners in the first place!) And why is it left up to "independents" such as myself, rather than professors? (Dennis, you are a blessed exception)

2) Why does SHRM, with its enormous resources and user base, focus on things like HR strategy and leadership while failing to focus on evidence-based practice? (And while I'm ranting: Sir Richard Branson as keynoter? Really?)

3) Why is SIOP seemingly only now beginning to make attempts to make research more accessible (e.g., with its randomly updated blog and thankfully soon-to-be published Science You Can Use series)?

I would argue it's all because we do a horrible job of focusing on the true users of I/O research. We engage in high-level debates about criterion-related validity but fail to gather even basic information like who is responsible for making each type of HR decision.

And unlike evidence-based medicine, we have a particular responsibility to ensure that HR/IO decisions are made using the best science, because the consumers of the research aren't all individuals with advanced training (like doctors). They're supervisors who are under the gun to make an effective--and ass-covering--decision. They're HR analysts who ended up there not because they love the topic but because they were looking for a promotion.

So if the goal is to increase the number of people-related decisions in the workplace that are based on the best evidence, in addition to tackling the issues identified in this issue (such as publication bias and accessibility), we need to do a much better job of understanding our customer base and tailoring our efforts toward them. After all, I/O psychologists generally do not control organizational practices.

So here, in no particular order, are my own recommendations for helping ensure that the "best" people decisions (defined here as based on science, not things like, oh, I dunno, organizational politics) get made:

1. Make basic research more accessible--and by this I mean affordable. And by that I mean free. Or cheap. Fifteen dollars for a research article? Really? How many of those are you selling, exactly, publishers? Somebody--I don't know who--needs to get on this.

2. Start addressing the elephant in the room: that many of those providing guidance to supervisors (i.e., internal HR), not to mention supervisors themselves, are doing so based on absolutely zero research. Who are these people? How are they trained? What do they know? We know very little, because we don't study them (with some exceptions). Imagine if evidence-based medicine tried to progress without understanding anything about doctors.

3. In a similar vein, spend more time understanding those who are ultimately responsible for people decisions and have to live with them: supervisors. Particularly first-line ones. Why--and more importantly how--do they make decisions related to particular HR issues? To what extent is speed the single most important decision criterion?

4. Identify the "high value" people decisions (e.g., selection, harassment prevention) and their current practice. Use this to figure out where the biggest gap is and where we should be channeling resources. Without a baseline it's hard to figure out where we should be focusing.

5. For Pete's sake, let's start putting our "blessing" on certain practices. SIOP, don't be afraid to endorse things. By remaining "objective," you're de facto taking the stance of, "Hey, figure it out yourself, or hire a high-priced consultant." There's a reason products earn an "Energy Star."

6. Let's have an industry-approved training curriculum and certification. Why does SHRM offer the PHR and SPHR...and pretty much nothing else? We need training programs for practitioners that are created and reviewed by people steeped in the research but able to translate it.

7. Let's identify "best of" practices as well as "worst of." Why are most awards for individual researchers? Why not organizations that demonstrate particularly effective practices? Why not have a professional equivalent of the Razzies for just the opposite? ("and now...the award for worst interview question of 2011 goes to...")

Okay, I think that's the end of my rant. Let me be clear: the proposition that people decisions should be based on the best available science is hard to argue with, and I applaud all of the authors for thinking deeply about this topic. I just think we need to get in the trenches a little more. Because if we don't, the trend that's already apparent (toward automation and speed and away from thoughtful, research-based decisions) will accelerate--leaving lots of people trained in I/O but few who understand, or are interested in, their services.

Side note: this article did point out a couple of books I'm considering adding to my library, including Locke's handbook on organizational behavior (probably still too much for the average consumer in this age of Twitter, but on the right road) and Locke et al.'s book on evaluating HR programs.

Sunday, March 13, 2011

Research roundup: Political skill, emotions, and...hobos?


Time to catch up on the research:

First, Blickle et al. show that political skill ("the capacity to understand others in working life effectively, and to apply such knowledge to induce others to act in ways that add to one's personal or organizational goals") can predict job performance beyond GMA and personality factors.

Fast on its heels comes a study from Bing et al. that also shows support for the concept of political skill (what is it, election season?). This time the researchers use meta-analysis to show a positive relationship between political skill and both task and contextual performance, with a stronger link to the latter. The relationship is also stronger as interpersonal and social requirements of the job increase.
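(Side note for the methods-minded: "beyond GMA and personality" is a claim about incremental validity, usually tested with hierarchical regression--enter the baseline predictors first, then add political skill and look at the change in R-squared. Here's a minimal sketch of that logic in Python; the data and effect sizes are simulated and purely hypothetical, not taken from either study.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated, purely hypothetical applicant data (standardized scores)
gma = rng.normal(size=n)                      # general mental ability
consc = rng.normal(size=n)                    # conscientiousness
pol_skill = 0.3 * consc + rng.normal(size=n)  # political skill, correlated with personality
perf = 0.4 * gma + 0.2 * consc + 0.25 * pol_skill + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: GMA + personality; Step 2: add political skill
r2_base = r_squared([gma, consc], perf)
r2_full = r_squared([gma, consc, pol_skill], perf)
print(f"R^2 base = {r2_base:.3f}, full = {r2_full:.3f}, delta = {r2_full - r2_base:.3f}")
```

A positive delta R-squared is what "predicts beyond" means in practice.)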

Next, Giordano et al. provide evidence that can help organizations detect deception in interviews (hint: reviewers outperformed interviewers, and giving a warning helps).

Fukuda et al. show that the factor structure of two emotional intelligence measures remained consistent when translated into Japanese.

Hjemdal et al. continue gathering support for a measure of resilience (essentially the ability to successfully deal with stressors), this time in a French-speaking Belgian sample.

Interested in the Bookmark method of setting cut scores? Then you'll want to take a look at this study by Davis-Becker et al. and its slightly disturbing results.
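(If the Bookmark method is new to you: panelists work through a booklet of items ordered by difficulty and place a bookmark at the last item a minimally qualified candidate should answer correctly at some response probability--often 0.67, hence "RP67." The cut score is the ability level implied by the bookmarked item. Here's a minimal sketch under the Rasch model; the item difficulties and bookmark placement below are made up.)

```python
import math

# Hypothetical Rasch item difficulties for an ordered item booklet (easiest first)
difficulties = [-1.8, -1.1, -0.4, 0.2, 0.7, 1.3, 1.9]

RP = 0.67          # response probability criterion ("RP67")
bookmark_page = 5  # panelist's bookmark: last item a just-qualified candidate
                   # should answer correctly with probability >= RP

# Under the Rasch model, P(correct) = 1 / (1 + exp(-(theta - b))).
# Solving P = RP for theta gives: theta = b + ln(RP / (1 - RP)).
b = difficulties[bookmark_page - 1]
theta_cut = b + math.log(RP / (1 - RP))

print(f"Cut score on the theta (ability) scale: {theta_cut:.2f}")
```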

The measurement of reading comprehension is common, and it's important to understand the construct(s) being measured. This piece by Svetina et al. helps us parse out the concept.

The discussion about the March issue of the Journal of Personality and Social Psychology was dominated by Bem's study of precognition, but there's a hidden gem to be found: Hopwood et al.'s study of how personality changes over the lifespan. It looks like change is largely due to environmental influences, and most of it occurs early on.

Interested in PO fit and its relationship with attitudinal outcomes? Then you'll want to check out Leung & Chaturvedi's study.

Last but not least, Woo analyzes a sample of U.S. workers and finds support for "Ghiselli's hobo syndrome" among a small group: basically these folks frequently change jobs and have positive attitudes about it. Not surprisingly, these hobos reported being less satisfied with their current jobs. I guess there's just no pleasing some people!

Side note: I just ran across the Questionmark blog. A lot of really cool stuff that focuses on education and learning applications, but a lot of the content is directly applicable to personnel assessment. They're a vendor, so there are some sales pitches, but it's worth weeding through.

Final note: I apologize ahead of time if some of these links don't work; I've tried my best to link to non-session based abstracts but it can be hard to tell.

Monday, March 07, 2011

Gild.com: What the future looks like

The concept of a website that matches applicant skills to specific recruitments is...shall we say...not new.

What is new is a website that gets applicants to take actual assessments (of things like mathematical reasoning) so recruiters have a much better chance of finding the person who has the skills they need.

And it's working.

This is the genius that is Gild, a website launched late last year devoted to "serious technologists" and currently being used for recruiting by companies like Oracle, eBay, and Salesforce.com. As of December they had over 100,000 users.

How do they do it? With a dash of good ol' fashioned reinforcement, in the form of competitions for actual prizes that the target audience might like (like an iPad). The focus is on IT jobs, but one can easily envision this being expanded to other occupations (e.g., demonstrate your knowledge of multiple regression and win a free one-year subscription to SIOP!).

There are two main ways Gild gets users to take assessments: through certifications and competitions. Certifications are short multiple-choice tests designed to measure proficiency in things like ASP.NET, SharePoint, and Unix (and some more general competencies like English proficiency). They're easy to take and (at least if my middling knowledge of IT is any indication) difficult to fake. They've even incorporated reinforcement into adding members (invite friends for a chance to win an iPad!).

Competitions are where things get really interesting. They're also short multiple-choice tests; here are some examples of competitions under way as of this writing:

- PHP Elite (prize: Kindle)
- Java Elite (prize: iPad)
- Mathematical Reasoning (prize: AppleTV)

It's the social competition that seems to be the key. The group interaction extends even to their excellent support forum. For example I see that someone suggested practice exams or sample questions, and the site was quick to praise the idea and promise to investigate.

This website demonstrates that it is possible to get internet applicants to complete real assessments as part of an online profile--when properly motivated. Yes, I know, unproctored testing is subject to faking, blah, blah, blah. I just don't buy that argument anymore. Use confirmatory testing; end of story.
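(For anyone unfamiliar with the term: confirmatory, or verification, testing means letting candidates test unproctored, then retesting finalists on a short proctored form and flagging implausible score drops for follow-up. Here's a minimal sketch of that screening logic; the threshold and score scale are hypothetical, not a recommended standard.)

```python
# Minimal sketch of confirmatory (verification) testing. The flag
# threshold is a made-up illustration, not a recommended value.
FLAG_THRESHOLD = 1.0  # score drop, in z-score units, that triggers review

def needs_review(unproctored_z: float, proctored_z: float,
                 threshold: float = FLAG_THRESHOLD) -> bool:
    """True if the proctored verification score falls far enough below
    the unproctored score to warrant investigation."""
    return (unproctored_z - proctored_z) > threshold

print(needs_review(1.5, 0.2))   # True: big drop--investigate
print(needs_review(1.5, 1.3))   # False: scores are consistent
```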

One thing I noticed, which may not be that surprising: the leaderboard is currently dominated by folks (okay, almost all men) from India. I mean...wiping the floor with the other countries. Now they have offices in India (and China, and the U.S.), so maybe it's better known there. Or maybe their focus on technology is showing dividends.

Another interesting...feature...is that the site is supposed to be exclusively for direct employers, not third-party recruiters. No trolling here. Once you create an account, you can post jobs (or "job cards"), create competitions, and manage your company profile. Interestingly, the posting process lets you select three specific skills--and their level--to target. Like a mini-mini job analysis. It will even forecast supply and demand dynamically based on your requirements. A one-month "silver" posting is free, but it's smaller than the $50 USD "gold" posting, which also includes 100 invites and better placement. Still, very affordable (I mean, Dice is $500).

Behind all this is the Professional Aptitude Council (PAC), a company that creates certifications and technologies to deliver them. The website states that their mission is in large part to ensure that talented individuals get opportunities based on their merit.

Gild is a great example of how to use technology to engage applicants, create more legitimate profiles, and offer employers a more accurate method to match individuals to specific recruitments.

Hat tip. You can read a little more about the company's history and purpose here.