Tuesday, May 31, 2011

Recipe for losing a lawsuit

Ingredients

One large, diverse candidate pool
One cognitive ability OR physical agility test
One protective services job (police or fire if in season, otherwise corrections)

Optional: large, aggressive employee union
Optional: history of litigation


Instructions

1. Begin by deciding what type of exam--cognitive ability or physical agility--you feel like giving; don't worry about performing a job analysis first, as they're time-consuming and boring. If you must, select a small sample of current employees (preferably the poor performers) and provide minimal instruction. Don't worry about whether they are "true" job experts, and whatever you do, don't link tasks to KSAs--everyone hates doing it.

2. Select what you will be measuring. Base your decision on what you feel like, or whatever's easiest. Usually this is just doing what you did last time.

3. Have untrained analysts prepare the exams. Because anyone can do hiring, select whoever has time on their hands. Optional: search the Internet for a test that catches your eye. Rule of thumb is one question per content area (if you have more than one question, you're wasting applicant time).

4. Make sure the reading level of the exam is graduate school-level. After all, isn't reading an important part of any job? And don't you want the best?

5. Next, choose the weighting of your exam components either randomly or based on gut feeling. When in doubt, place the largest weight on the test that is most related to cognitive ability.

6. Select a pass point. It should either be: (a) 70 percent; (b) based on administrative convenience; or (c) chosen at random.

7. Administer the exam, preferably with limited advertising. If you must advertise, give applicants a very short amount of time to prepare--after all, this isn't grade school. Do not pre-test the exam--if you've followed these instructions, it should be fine.

8. Score the exam--if you can, avoid "right/wrong" questions and go with ones where you can personally judge the quality of the answer. Don't worry about a boring "benchmark"--you know a good response when you see it.

9. Keep all scoring results and details regarding the process to yourself. Candidates don't need to know (and won't understand).

10. Make final selection decisions. Do not administer yet another test before making your selection, or if you must because of boring rules, make it an unstructured interview. Ask lots of questions like, "If you were a book, what would your title be?" If you have any women or minorities, ALWAYS ask questions about their ability to perform the job.

11. Do not document any of this process. Everyone involved will be with the organization for a long time, and people have really good memories.

Above all: have fun! After all, it's just people's livelihoods.


A good example of how this type of thing plays out (albeit in a less horrific manner) is the recent decision in Easterling v. Connecticut Dept. of Correction.

And for those of you who want to read more about how the law applies to selection, look for an upcoming IPAC monograph written by yours truly!

Saturday, May 21, 2011

Shameless plug: The 2011 IPAC conference

And now for something completely different. A man with a marketing video.

It's getting toward the end of May, so if you haven't made up your mind about attending this year's premier event for practitioners of assessment and selection methods, you might want to do it soon.

Details: July 17-20 at the Dupont Hotel in Washington, D.C. So many great presentations I can't even begin to summarize. Check out the details here.

If that doesn't convince you, I doubt amateurish marketing tactics will work, but since you're still reading and I have you captive...

Sunday, May 15, 2011

IJSA v.19 #2: Personality, personality, personality (and more)


The June 2011 issue of the International Journal of Selection and Assessment (IJSA, volume 19, issue 2) is out. And it's chock-full of articles on personality measurement, but includes other topics as well, so let's jump in! Warning: lots of content ahead.

- O'Brien and LaHuis analyzed applicant and incumbent responses to the 16PF personality inventory and found differential item functioning for over half the items (but of those only 20% were in the hypothesized direction!).

- Reddock, et al. report on an interesting study of personality scores and cognitive ability predicting GPA among students. "At school" frame-of-reference instructions increased validity and, even more interestingly, within-person inconsistency on personality dimensions added incremental validity beyond conscientiousness and ability.

- Fein & Klein introduce a creative approach: using combinations of facets of Five-Factor Model traits to predict outcomes. Specifically, the authors found that a combination (e.g., assertiveness, activity, deliberation) did as well or better in predicting behavioral self-regulation compared to any single facet or trait.

- Think openness to experience is the runt of the FFM? Mussel, et al. would beg to differ. The authors argue that subdimensions and facets of openness (e.g., curiosity, creativity) are highly relevant for the workplace and understudied--and demonstrate differential criterion-related and construct validity.

- So just when you're thinking to yourself, "hey, I'm liking this subdimension/facet approach," along comes van der Linden, et al. with a study of the so-called General Factor of Personality (GFP) that is proposed to occupy a place at the top of the personality structure hierarchy. The authors studied over 20,000 members of the Netherlands armed forces (fun facts: active force of 61,000, 1.65% of GDP) and found evidence that supports a GFP and value in its measurement (i.e., it predicted dropping out of military training). Unsurprisingly, not everyone is on the GFP bus.

- Next, another fascinating study by Robie et al. on the impact of the economy on incumbent leaders' personality scores. In their sample of US bank employees, as unemployment went up, so did personality inventory scores. Faking or environmental impact? Fun coffee break discussion.

- Recruiters, through training and years of experience, are better at judging applicant personality than laypersons, right? Sort of. Mast, et al. found that while recruiters were better at judging the "global personality profile" of videotaped applicants as well as detecting lies, laypeople (students in this case) were better at judging specific personality traits.

- Last one on the personality front: Iliescu, et al. report the results of a study of the Employee Screening Questionnaire (ESQ), a well-known covert, forced-choice integrity measure. Scores showed high criterion-related validity, particularly for counterproductive work behaviors.

- Okay, let's move away from personality testing. Ziegler, et al. present a meta-analysis of predicting training success using g, specific abilities, and interviews. The authors were curious whether the dominant paradigm that g is the single best predictor would hold up in a single sample. Answer? Yep. But specific abilities and structured interviews were valuable additions (unstructured interviews--not so much), and job complexity moderated some of the relationships.

- Given their popularity and long history, it's surprising that there isn't more research on role-players in assessment centers (ACs). Schollaert and Lievens aim to rectify this by investigating the utility of predetermined prompts for role-players during ACs. Turns out there are advantages for measuring certain dimensions (problem solving, interpersonal sensitivity). Sounds promising to me. Fortunately you can read the article here.

- What's the best way to combine assessment scores into an overall profile? Depends who you ask. Diab, et al. gathered information from a sample of adults and found that those in the U.S. preferred holistic over mechanical integration of both interview and other test scores, whereas those outside the U.S. preferred holistic for interview scores only.

- Still with me? Last but not least, re-testing effects are a persistent concern, particularly on knowledge-based tests. Dunlop et al. looked at a sample of firefighter applicants and found the largest practice effects for abstract reasoning and mechanical comprehension (both timed)--although even those were only two-fifths of a standard deviation. Smaller effects were found for a timed test of numerical comprehension ability and an untimed situational judgment test. For all four tests, practice effects diminished to non-significance by the third session.
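
Since standardized effect sizes like "two-fifths of a standard deviation" come up constantly in these summaries, here's a minimal sketch of how a practice effect like Dunlop et al.'s might be computed. The data, variable names, and gain sizes below are entirely my own invention for illustration--not their numbers:

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical scores for 200 applicants re-taking the same test
    session1 = rng.normal(50, 10, 200)
    session2 = session1 + rng.normal(4.0, 6, 200)  # noticeable practice gain
    session3 = session2 + rng.normal(0.5, 6, 200)  # gain largely flattens out

    def practice_effect(before, after):
        """Standardized mean gain: mean score change over the SD of initial scores."""
        return (after.mean() - before.mean()) / before.std(ddof=1)

    print(f"Session 1 -> 2: d = {practice_effect(session1, session2):.2f}")
    print(f"Session 2 -> 3: d = {practice_effect(session2, session3):.2f}")

Run with the made-up numbers above, the first gain works out to roughly d = .4 (two-fifths of a standard deviation) and the second to near zero--the same pattern the study describes.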

Sunday, May 08, 2011

Hiring HR professionals: What are we thinking?

When you hire someone for your Accounting department, what do you look for? Accounting experience, undoubtedly, but presumably you look for someone with some college-level accounting training as well as basic competencies such as facility with numbers, conscientiousness, etc.

What about IT support? Again, in most cases you're probably looking for experience with specific hardware or software or general support experience, but in many cases you're searching that resume for formal education/training in IT-related topics.

Connection? For many organizational "support" functions, we look not only for experience but also for educational experiences that give the individual a grounding in the basics of the field and (hopefully) train their mind to recognize historical developments as well as connections between concepts.

So why is it that when we hire for HR, another support function, our brains fall out our ears and we seem to focus primarily on past experience? This weakness seems common in the public sector, but I'm guessing the private sector is not immune.

Phrased another way: Why don't more organizations place value on formal HR education when hiring?

I'm not suggesting that one needs a degree in HR to be good at it, although I do think lacking one limits people. What I'm concerned about is the apparent lack of importance placed on these degrees and what that says about the profession.

Is it because formal HR educational programs don't exist? Nope. According to the College Board, more than 350 schools offer a major in HRM.

Is it because formal education in HR isn't as important for job performance as experience? I'm not aware of any research that shows this to be true (if you are, please enlighten me).

No, I suspect the following:

1) Many HR leaders themselves do not have formal educational training in HR; therefore, they tend not to think of it as a screening tool (or place much value in it).

2) Similarly, there is a lack of knowledge about HR educational programs--what they offer, the advantage of having gone through one, and how to connect to the school.

3) Relatively few candidates who apply for HR vacancies have a relevant degree (either as a pure function of the number of individuals who hold a degree in HR or because many applicants believe anyone can do HR).

4) HR is still seen as largely transactional and/or not a critical business function; therefore, the qualifications sought have more to do with customer service than with formal training. (I believe this is a large reason why HR outsourcing is so easy for many executives to contemplate.)

5) Many are simply passing through HR: these incumbents do not see it as a "career", but rather as a stopping point on their way to...something else. But much like Lightning McQueen (or Doc Hollywood if you prefer), they find they have a hard time leaving, either because they come to like it or because they find they're not as employable as they thought.

6) The professional HR organizations and HR publications focus on anecdotes, opinion, and news bits rather than formal study and analysis. SHRM is not SIOP.

So why do I care about this topic? Because I see HR stagnating until it truly becomes a profession and not a loose collection of people who vaguely care about things relating to people management. And part of becoming a true profession is placing formal structure around the path from education to employment.

I'm also concerned because of the relationship between I/O and HR. Ultimately much of what is researched in I/O gets practiced through HR, and there is a close relationship in many people's minds--in fact I would wager most managers haven't the foggiest idea what the difference is. So what impacts HR ultimately impacts I/O.

Maybe it's just not there yet. Maybe I need to be patient. HR's a relatively new field and maybe it just needs time to develop, and to figure out questions like its relationship to I/O.

But given what I've seen, I'm not feeling optimistic. I see HR shops being outsourced or automated, resulting in more IT skills being required than knowledge about research on human behavior. Inevitably this will lead many organizations to lose out on important efficiencies they could be gaining (not to mention improvements in the work environment).

What can be done? I don't have all the answers, just some suggestions:

1) A wider promotion of the value of formal HR education. SHRM, I'm looking at you, as well as the other HR professional organizations.

2) More research on the connection between formal HR education and job performance.

3) Effort on the part of HR leaders to at least consider the potential importance of HR education when hiring for their teams.

4) More effort on the part of HR leaders to establish connections to schools that offer HR degrees and begin programs like internships and formal recruiting.

5) More organizational support (e.g., tuition reimbursement) for staff to obtain HR degrees.


To read more about this issue, I highly recommend starting with the 2007 piece by Sara Rynes and her colleagues.

Hat tip to this HR Examiner article, which helped me crystallize something that's been bothering me for a long time.

Sunday, May 01, 2011

Tech tools: Brainshark and Greenshot

Brainshark and Greenshot. Sounds like a kids' cartoon about a pair of superheroes.

But no, this post is all about a pair of simple tools that you can use in a variety of ways to enhance your recruitment and selection efforts and just plain make your life easier.

Brainshark is a remarkably simple, and free, tool you can use to create slideshows or videos with audio in a matter of minutes. Did I mention it's free?

Simply upload your file, add audio (if it's a PowerPoint) using either your computer microphone or phone, name your slides, and you're good to go! You can see a simple example of a promotional spot I whipped up below--remember, my specialty is information filtering, not multimedia! Make sure to check out the "Contents" menu to the right of the fast-forward button.

[Embedded Brainshark example: a short promotional slideshow with audio]

This took maybe thirty minutes to develop and record, and I used my computer's built-in microphone, hence the less-than-stellar audio quality. But I think you get the idea and see some of the possibilities:

- realistic job preview
- job advertising
- instructions for applying

And so on. For $10 or $20 a month, respectively, you can upgrade to Brainshark Pro or Brainshark Pro Trainer, which include options like private sharing, lead capture, testing, and LMS integration. Just pretty darn cool all around, I say.


The next tool is simpler but no less useful on a day-to-day basis. I know I'm not the only one out there who likes SnagIt. It's an easy way to take screenshots of parts of the screen and quickly add borders, arrows, or other accents. And it's very reasonably priced.

But I'm all about the free stuff whenever possible, which is why I was pleased to learn about Greenshot, a scaled-down tool that gives you pretty much what you'd get in SnagIt, if a little less snazzy. Simply load the software and whenever you press Print Screen, instead of taking a picture of the whole screen you can specify the region. I use this all the time for showing others what I'm talking about, presentations, and user guides. You can also capture just the window, or the whole screen.

One feature unique (to my knowledge) to Greenshot is "obfuscate", which allows you to blur parts of the picture (e.g., names, SSNs) you may wish to hide. See the screenshot below for an example where I obfuscated part of the blog post title:

[Screenshot: Greenshot's obfuscate feature blurring part of a blog post title]

The one feature that I'd still use SnagIt for is capturing a scrolling webpage that includes the links. Very handy. But other than that, Greenshot would do ya just fine.

So there you have it, two simple tools that have the potential to add tremendous value to your life. Hope you enjoy.

Hat tip to my colleagues at CODESP for turning me on to Brainshark, and my friends at Biddle for Greenshot.

Tuesday, April 26, 2011

Research update: Political skill, stereotype threat, and NFL players

A few research articles for us...

First up, several articles from the latest issue of Human Performance:

Lee & Dalal demonstrate in a policy-capturing study that performance "troughs" exceed "peaks" in their influence on performance ratings.

Next, a fascinating study by Meurs et al. where they show how political skill (or networking ability) moderates the relationship between the HEXACO factor of sincerity and task performance. In other words, for individuals high on political skill the authors found a positive relationship between sincerity and task performance (and a negative relationship for those low on the skill).

Are you recruiting highly educated graduates? Then you'll want to read Jaidi, et al.'s piece. In it, they describe a study where recruitment advertising and positive word of mouth related positively to job pursuit intention and behavior. Somewhat surprisingly, on-campus presence related negatively to these measures.

If you like football and/or physical ability tests, you'll be interested in the study by Lyons, et al. of NFL players. In it, they demonstrate that collegiate game performance out-predicted physical ability tests administered during the NFL Combine when looking at future NFL performance. And unlike physical ability, past performance remained a consistent predictor across four years of performance, although the validity coefficients deteriorated over time, similar to what we find with cognitive ability scores.

Finally, over in the Journal of Applied Social Psychology, Nadler & Clark report the results of research on stereotype threat. The results of their meta-analysis indicated that attempts to nullify stereotype threat (e.g., by dismissing it or disguising the task) resulted in a moderate improvement in score (d=.52) for both African Americans and Hispanic Americans, and there appeared to be no difference between the groups in terms of the effect.
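
For the curious, here's roughly how a meta-analytic estimate like that d=.52 gets assembled: each study's effect size is weighted by the inverse of its sampling variance, then averaged. A minimal fixed-effect sketch--the per-study values below are invented for illustration, not Nadler & Clark's data:

    import numpy as np

    # Invented per-study effect sizes (d) and group sizes -- illustration only
    d = np.array([0.61, 0.40, 0.55, 0.48])
    n1 = np.array([40, 55, 30, 62])  # threat-nullified groups
    n2 = np.array([42, 50, 33, 60])  # comparison groups

    # Approximate sampling variance of d (Hedges & Olkin)
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

    # Fixed-effect pooled estimate: inverse-variance weighted mean
    w = 1.0 / var_d
    d_pooled = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    print(f"Pooled d = {d_pooled:.2f}, 95% CI [{d_pooled - 1.96*se:.2f}, {d_pooled + 1.96*se:.2f}]")

(A real meta-analysis like theirs would likely use a random-effects model and correct for artifacts, but the weighting logic is the same.)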

Sidenote: those of you with an interest in HR technology and talent management might want to check out the six sessions being streamed live from Bersin & Associates' IMPACT 2011 conference on April 27th and 28th.

Sunday, April 17, 2011

Are executive recruiters discriminating against non-Whites?

You might think so if you just glanced at a recent study by Dreher, et al. in the Journal of Management. But what happens if we actually read the study? Are the results more complicated?

Here's the background: The authors distributed a survey to over 13,000 U.S.-based BlueSteps subscribers. BlueSteps is an "executive career management service"--essentially a place where $100k+ job seekers and executive recruiters can connect, seemingly similar to TheLadders. It is maintained by the Association of Executive Search Consultants (AESC).

For various reasons (including the survey being blocked as spam), the researchers ended up with a final sample size of 572. So keep that in mind when interpreting the results. Also, nearly 90% of the respondents were male, and around the same proportion were White. So the non-White, non-male sample was relatively small.

What the researchers found was that white male respondents were significantly more likely to report being contacted by executive recruiters. And it wasn't because of factors like individual education or work experience--these were controlled for. Interestingly, further analysis revealed that race appeared to be the driving force; there was no difference in contact between female and non-White male respondents.

They also found that switching employers ("pursuing an external labor market strategy") resulted in a compensation premium only for White male respondents. Not surprisingly, those who reported receiving more contacts from search firms had greater compensation. But White males who had received the greatest number of contacts reported the highest level of compensation compared to other demographic groups.

Here's how the authors summarized their results: "...our study suggests that the White male advantage associated with external job change is, to a meaningful degree, sensitive to the processes and practices of executive search firms."

The authors do point out that the small size of the non-White male sample may have something to do with the results, and they caution readers about generalizing the results to other populations. However, they also state: "...the executive search industry would likely benefit from in-depth internal reviews regarding a variety of diversity-oriented issues. It is in the search firm’s likely best interest to present a diverse slate of candidates to its clients; as such, search firms would benefit from pursuing internal studies designed to determine if and why female and minority male managers and executives are underrepresented in their databases (or in databases of the BlueSteps variety)."

Is this an example of blatant racial discrimination? It's suggestive but not conclusive. The authors even write "we are not suggesting that anything sinister is going on or that search firms are intentionally discriminating against women and non-White males." But the fact that a large percentage of top search professionals appear to be White males should factor into the conversation. It may not be intentional, but that doesn't mean it's not discrimination. More research (e.g., controlled laboratory studies) would help us answer this question.

Thursday, April 07, 2011

A little of this, a little of that

I've got a hodgepodge of things for you this time, some research mixed with some other interesting things.

First, the March issue of Journal of Applied Psychology, which has been out for a while and I've been a little slow to get to:

- Becker and Cropanzano on the (non-linear) relationship between job performance and voluntary turnover (people tend to skedaddle when they're on a performance skid).

- Podsakoff, et al. with a fascinating study of the impact of OCBs on interviewee ratings. Turns out they made quite an impact, particularly for the higher-level position, and particularly when candidates demonstrated low levels of OCB (e.g., helping, loyalty). Raters were probably surprised that a situation that so clearly calls for impression management failed to elicit it.

- McDaniel, et al. with a great piece on SJTs. The authors describe two adjustments that can be made to traditional SJTs that improve validity, reduce Black-White mean differences and score elevation due to coaching, and reduce total length. That all sounds pretty good to me, and you can read an in press version here.

- Last but not least, Swider, et al. report the results of a study on job search effort and voluntary turnover. Job embeddedness appears to play an important role, as do job satisfaction and the availability of alternatives.

Speaking of McDaniel, he and his colleagues have written an article for an upcoming issue of Industrial and Organizational Psychology with the provocative title of "The Uniform Guidelines are a detriment to the field of personnel selection." SIOP members should be sure to consider submitting a commentary, and even if you're not a member, you should check out the in press version; it's a good read.

Speaking of adverse impact...I attended a fascinating webinar sponsored by PTC/MC on Wednesday where Kenneth Yusko of Marymount University described the development of the Siena Reasoning Test, which uses a slightly different question type (along with some other techniques) to reduce d but maintain criterion-related validity. Provocative stuff, and one of the holy grails of personnel assessment. Which probably explains why Yusko and his colleagues are being presented with the 2011 M. Scott Myers award at this year's SIOP conference. Interested? Check it out yourself. You can also flip through slides from a similar presentation at the 2008 IPAC conference.

Speaking of IPAC, have you registered for the July conference in D.C.? It's shaping up to be another great one--just check out some of the pre-conference workshops.

Finally, if you're up for a little heated discussion, head on over to ERE, where Wendell Williams laments about the increasing number of people who claim to be "experts" in assessment but who lack the chops. He particularly calls out poorly informed bloggers. Hey...wait a minute...

Monday, March 28, 2011

Seeing through the candidates' eyes

"A person-centric work psychology will look at the world through the eyes of the worker. It will take work in the raw, all the sights and sounds and tasks and people that run by the worker everyday. The worker experiences all of it, not demarcated parts of it. She organizes and partitions to be sure, but it is her organization that must interest us." - Weiss & Rupp

"Before you criticize someone, you should walk a mile in their shoes. That way when you criticize them, you are a mile away from them and you have their shoes." - Jack Handy

The two quotes above represent different ways of looking at someone else. The first, from a recent journal article, suggests we benefit from studying the world as others see it. The second, while humorous, suggests an alternative--and more common--worldview, where people are really just a means to our ends. Unfortunately in the world of recruitment and selection, all too often we're stealing shoes rather than practicing (and studying) empathy.

Fortunately there's been quite a bit of talk lately about seeing things from the perspective of the applicant, particularly in the recruiting world. Joe Murphy, among others, has done a lot of writing and thinking about this. And in the latest issue of I/O Psychology, Howard Weiss and Deborah Rupp bring our attention to the fact that I/O research tends to treat people as objects (e.g., a collection of KSAs, a test score). They also point out that much of I/O research is done for the "collective purpose", driven explicitly to serve organizational needs, rather than beginning with the person. It's one of the best-written, most thought-provoking pieces I've read in a while. And it convinced me that we need to spend more time studying the experience of working; or for our purposes, the experience of applying for a job.

There is a lot of emphasis these days on reaching out to prospective applicants via social media. I'd like to respectfully submit that for many organizations this is putting the cart before the horse. In fact it may cause the horse to trample you. Because we don't even do a very good job communicating with EXISTING applicants. So you could very likely be trying to build a brand or reputation that's already been sullied. Yeah...good luck with that.

And think about how important it is to treat applicants right. Not only can you get easily razzed on Facebook, you're dealing with people at one of their most vulnerable times. Think about when you really NEED an organization and how it impacts your sensitivity. What state of mind are you in during a medical emergency? When your car needs to be fixed? When you REALLY need a plumber? Looking for a job, whether because you're unhappy in your current one or, even worse, because you don't have one, puts people in a vulnerable, and sensitive, state of mind.

Do we see things from the eyes of our applicants? Do we treat them as customers? Because in many cases they literally are, and if they're not, they're connected to people who are.

If we do see them as customers, then why...

Do we not get back to them, even with a simple automated acknowledgement, when they apply?

Do we not give them any feedback about their performance on assessments?

Do we structure our processes and career portals around what makes our lives easier, not theirs?

Do we wring our hands about how hard it is to weed through piles of applications, when most ads do a horrible job of serving their purpose (attracting the RIGHT applicants) and we give people very few tools to self-screen (e.g., realistic job previews)?

In our defense, there is research in this area (e.g., perceptions of selection tools and websites, organizational attraction), but there are many more opportunities, as pointed out in the commentaries. For example:

- What does it feel like to be recruited? To be passed over?

- What does the experience of taking a particular assessment for a particular opportunity feel like? Exciting? Frustrating? Confusing? Engaging?

- What are applicants attending to while taking a test? (we assume it's just the test)

- Does having fun while being assessed improve perceptions of the organization, or lower test anxiety?

- How does an applicant describe their assessment process to others?

If we want to explicitly tie these to organizational objectives, we could take the results of this research and answer questions like: How do candidate/applicant perceptions translate into socialization, turnover, attitudes, and job performance?

From a broader perspective, this article reinforces the point that most organizations need to adopt much more of a systems perspective on the selection process. And there are other implications. Perhaps if we knew more about the experience of working, it would help us understand job performance better. And that would help explain the large amount of variance that goes unaccounted for by formal testing.

To some extent, none of this is new. Recruitment and selection research is all about understanding applicants. But it's done so through the lens of the organization--in other words, it looks at what applicants bring to the table. It's a slight shift to think about studying applicants as individuals and focusing on their experience. But doing so might help both organizations and individuals find a better match.

Sunday, March 20, 2011

Evidence-based I/O: Where are we? More importantly, who?

The March 2011 issue of Industrial and Organizational Psychology contains two excellent focal articles. One, which I plan on writing about in the future, is about how the field needs to spend more time studying the experience of working rather than treating workers as objects. It's one of the best, most thought-provoking articles I've read in this journal.

But today I'll focus on the first article, which is about the evidence-based practice (or lack thereof) of I/O psychology, by Briner and Rousseau (B&R). This topic is obviously near and dear to me given this blog, so I had a lot of reactions while reading it. I'll share some of those with you today.

B&R argue that the practice of I/O psychology is not strongly evidence based--at least not in the sense that other professions (e.g., medicine) are becoming. This may surprise you given the history behind I/O psychology and its strong grounding in sound research methods. But we're not talking about the quality of research (although as some of the commentaries point out, this is a relevant question), rather how well we're doing putting research findings into practice.

The authors point out that there are many "snake-oil" peddlers out there who claim to have evidence for what they do, and that the concept of evidence-based practice is not used or well known in I/O circles. Even more fundamentally, we have no data on what percentage of I/O practices are based on solid scientific evidence. They point out that part of the problem is that the latest research findings and research summaries aren't accessible--a point I obviously agree with given the purpose of this blog. They even provide a tool (systematic reviews) that they suggest could help us get there.

Yet it's one particular point, referenced by both focal and commentary authors but not treated in depth, that I'd like to focus (okay, rant) on. I'll use a quote from B&R to get us started: "It is not always obvious to practitioners, certainly not the least experienced or less reflective, how exactly to apply the principles identified in such research." This comes as close as any to, IMHO, the real issue: who uses I/O research in organizations.

Before I go too much further, I'd like to do something that I don't believe any of the authors did: acknowledge my bias in approaching this topic. I'm someone who has an advanced degree in I/O (a Master's, which, paraphrasing the focal authors, puts me slightly above the village idiot), but who works in the trenches assisting supervisors and managers alongside many who have little if any formal education in either I/O or HR. This obviously impacts the issues I see as important and my take on them. 'Nuff said.

Now, back to my point. I believe the authors in this issue fail to recognize a very important point: that although I/O research is used by I/O consultants and academics, and to a lesser extent by mid-level managers, there's a large group going unnoticed: HR practitioners. It is these individuals who, I would argue, are the primary "users" in most organizations--whether they know it or not. These are the folks who supervisors count on to provide expertise related to a wide variety of HR/IO issues, including recruitment and selection. Yes, many organizations use consultants who have formal training in I/O, but I think we can all agree that in terms of the number of day-to-day decisions that get made related to HR, we're primarily talking about supervisors and HR practitioners. And only after recognizing this point do answers to the question "how do we make I/O practice more evidence-based?" become more clear.

Let me throw out a couple questions just to stimulate you a bit more:

1) Why is this one of only a handful of "publications" devoted to making research more accessible to HR practitioners? (and heck, I don't even know how many of you are practitioners in the first place!) And why is it left up to "independents" such as myself, rather than professors? (Dennis, you are a blessed exception)

2) Why does SHRM, with its enormous resources and user base, focus on things like HR strategy and leadership, while failing to focus on evidence based practice? (and while I'm ranting, Sir Richard Branson as keynoter? really?)

3) Why is SIOP seemingly only now beginning to make attempts to make research more accessible (e.g., with its randomly updated blog and thankfully soon-to-be published Science You Can Use series)?

I would argue it's all because we do a horrible job of focusing on the true users of I/O research. We engage in high-level debates about criterion-related validity but fail to gather even basic information like who is responsible for making each type of HR decision.

And unlike evidence-based medicine, we have a particular responsibility to ensure that HR/IO decisions are made using the best science. Because the consumers of the research aren't all individuals with advanced training (e.g., doctors). They're supervisors who are under the gun to make an effective, and ass-covering, decision. They're HR analysts who ended up there not because they love the topic but because they were looking for a promotion.

So if the goal is to increase the number of people-related decisions in the workplace that are based on the best evidence, in addition to tackling the issues identified in this issue (such as publication bias and accessibility), we need to do a much better job of understanding our customer base and tailoring our efforts toward them. After all, I/O psychologists generally do not control organizational practices.

So here, in no particular order, are my own recommendations for helping ensure the "best" (defined here as based on science, not things like, oh, I dunno, organizational politics) people decisions get made:

1. Make basic research more accessible--and by this I mean affordable. And by this I mean free. Or cheap. Fifteen dollars for a research article? Really? How many of those are you selling, exactly, publishers? Somebody, I don't know who, needs to get on this.

2. Start addressing the elephant in the room: that many of those providing guidance to supervisors (i.e., internal HR), not to mention supervisors themselves, are doing so based on absolutely zero research. Who are these people? How are they trained? What do they know? We know very little, because we don't study them (with some exceptions). Imagine if evidence-based medicine tried to progress without understanding anything about doctors.

3. In a similar vein, spend more time understanding those who are ultimately responsible for people decisions and have to live with them: supervisors. Particularly first-line ones. Why--and more importantly how--do they make decisions related to particular HR issues? To what extent is speed the single most important decision criterion?

4. Identify the "high value" people decisions (e.g., selection, harassment prevention) and their current practice. Use this to figure out where the biggest gap is and where we should be channeling resources. Without a baseline it's hard to figure out where we should be focusing.

5. For Pete's sake, let's start putting our "blessing" on certain practices. SIOP, don't be afraid to endorse things. By remaining "objective", you're de facto taking the stance of, "hey, figure it out yourself, or hire a high-priced consultant." There's a reason products can earn an "Energy Star."

6. Let's have an industry-approved training curriculum and certification. Why does SHRM offer the PHR and SPHR...and that's pretty much it? We need training programs for practitioners that are created and reviewed by people steeped in the research but able to translate.

7. Let's identify "best of" practices as well as "worst of." Why are most awards for individual researchers? Why not organizations that demonstrate particularly effective practices? Why not have a professional equivalent of the Razzies for just the opposite? ("and now...the award for worst interview question of 2011 goes to...")

Okay, I think that's the end of my rant. Let me be clear, the proposition that people decisions should be based on the best available science is hard to argue with. I applaud all of the authors for thinking deeply about this topic. I just think we need to get in the trenches a little more. Because if we don't, the trend that's already apparent (toward automation/speed and away from thoughtful, research-based decisions) will accelerate. Leaving lots of people trained in I/O but few who understand--and are interested in--their services.

Side note: this article did point out a couple books I'm considering adding to my library, including Locke's handbook on organizational behavior (probably still too much for the average consumer in this age of Twitter, but on the right road), and Locke, et al.'s book on evaluating HR programs.

Sunday, March 13, 2011

Research roundup: Political skill, emotions, and...hobos?


Time to catch up on the research:

First, Blickle, et al. show that political skill ("the capacity to understand others in working life effectively, and to apply such knowledge to induce others to act in ways that add to one's personal or organizational goals") can predict job performance beyond GMA and personality factors.

Fast on its heels comes a study from Bing et al. that also shows support for the concept of political skill (what is it, election season?). This time the researchers use meta-analysis to show a positive relationship between political skill and both task and contextual performance, with a stronger link to the latter. The relationship is also stronger as interpersonal and social requirements of the job increase.

Next, Giordano, et al. provide evidence that can help organizations detect deception in interviews (hint: reviewers out-performed interviewers, and giving a warning helps).

Fukuda, et al. show that the factor structure of two emotional intelligence measures remained consistent when translated into Japanese.

Hjemdal et al. continue gathering support for a measure of resilience (essentially the ability to successfully deal with stressors); this time in a French-speaking Belgian sample.

Interested in the Bookmark method of setting cut scores? Then you'll want to take a look at this study by Davis-Becker et al. and its slightly disturbing results.

The measurement of reading comprehension is common, and it's important to understand the construct(s) being measured. This piece by Svetina, et al. helps us parse out the concept.

The discussion about the March issue of the Journal of Personality and Social Psychology was dominated by Bem's study of precognition, but a hidden gem is there to be found: Hopwood et al.'s study of how personality changes over one's lifespan. Looks like change is largely due to environmental influences, and most of it occurs early on.

Interested in PO fit and its relationship with attitudinal outcomes? Then you'll want to check out Leung & Chaturvedi's study.

Last but not least, Woo analyzes a sample of U.S. workers and finds support for "Ghiselli's hobo syndrome" among a small group: basically these folks frequently change jobs and have positive attitudes about it. Not surprisingly, these hobos reported being less satisfied with their current jobs. I guess there's just no pleasing some people!

Side note: I just ran across the Questionmark blog. A lot of really cool stuff that focuses on education and learning applications, but a lot of the content is directly applicable to personnel assessment. They're a vendor, so there are some sales pitches, but it's worth weeding through.

Final note: I apologize ahead of time if some of these links don't work; I've tried my best to link to non-session based abstracts but it can be hard to tell.

Monday, March 07, 2011

Gild.com: What the future looks like

The concept of a website that matches applicant skills to specific recruitments is...shall we say...not new.

What is new is a website that gets applicants to take actual assessments (of things like mathematical reasoning) so recruiters have a much better chance of finding the person who has the skills they need.

And it's working.

This is the genius that is Gild, a website launched late last year devoted to "serious technologists" and currently being used for recruiting by companies like Oracle, eBay, and Salesforce.com. As of December they had over 100,000 users.

How do they do it? With a dash of good old-fashioned reinforcement, in the form of competitions for actual prizes that the target audience might like (like an iPad). The focus is on IT jobs, but one can easily envision this being expanded to other occupations (e.g., demonstrate your knowledge of multiple regression and win a free one-year subscription to SIOP!).

There are two main ways Gild gets users to take assessments: through certifications and competitions. Certifications are short multiple-choice tests designed to measure proficiency in things like ASP.NET, SharePoint, and Unix (and some more general competencies like English proficiency). They're easy to take and (at least if my middling knowledge of IT is any indication) difficult to fake. They've even incorporated reinforcement into adding members (invite friends for a chance to win an iPad!).

Competitions are where things get really interesting. They're also short multiple-choice tests, and here are some examples of competitions under way as of this writing:

- PHP Elite (prize: Kindle)
- Java Elite (prize: iPad)
- Mathematical Reasoning (prize: AppleTV)

It's the social competition that seems to be the key. The group interaction extends even to their excellent support forum. For example I see that someone suggested practice exams or sample questions, and the site was quick to praise the idea and promise to investigate.

This website demonstrates that it is possible to get internet applicants to complete real assessments as part of an online profile--when properly motivated. Yes, I know, unproctored testing is subject to faking, blah, blah, blah. I just don't buy that argument anymore. Use confirmatory testing; end of story.

One thing I noticed, which may not be that surprising: the leaderboard is currently dominated by folks (okay, almost all men) from India. I mean...wiping the floor with the other countries. Now they have offices in India (and China, and the U.S.), so maybe it's better known there. Or maybe their focus on technology is showing dividends.

Another interesting...feature...is that the site is supposed to be exclusively for direct employers, not third-party recruiters. No trolling here. Once you create an account, you have the ability to post jobs (or "job cards"), create competitions, and manage your company profile. Interestingly, the posting process allows you to specifically select three skills--and their level--to target. Like a mini-mini job analysis. It will even forecast supply and demand dynamically based on your requirements. A one-month "silver" posting is free but smaller than the $50 (USD) "gold" posting, which also includes 100 invites and better placement. Still, very affordable (I mean, Dice is $500).

Behind all this is the Professional Aptitude Council (PAC), a company that creates certifications and technologies to deliver them. The website states that their mission is in large part to ensure that talented individuals get opportunities based on their merit.

Gild is a great example of how to use technology to engage applicants, create more legitimate profiles, and offer employers a more accurate method to match individuals to specific recruitments.

Hat tip. You can read a little more about the company's history and purpose here.

Sunday, February 27, 2011

Spring '11 P-Psych: Leadership, interviews, competency modeling

It's journal season--this time let's take a look at the Spring 2011 issue of Personnel Psychology:

First up, DeRue, et al. with an important meta-analysis on leadership effectiveness. After looking at 59 studies, they found that leader traits and behaviors explained a minimum of 31% of the variance in leadership effectiveness. Interestingly, group performance (which I would argue is the most important criterion) was the most difficult to predict. Behaviors tended to explain more than traits, but the authors suggest a model where behavior mediates the relationship between traits and effectiveness is warranted. Not surprisingly, the best trait predictor depended on the criterion: conscientiousness predicted leader effectiveness, group performance, and follower job satisfaction the best, while satisfaction with the leader was best predicted by leader agreeableness (reminds me of a recent IJSA study). The same was true with leader behaviors, although consideration was a good predictor across the criteria. A must for anyone interested in leadership research, and you can read an in-press version here.

Next, a piece by Melchers, et al. on whether more interview structure really leads to better rating quality (I'll ruin it for you: yes). Specifically, using a sample of primarily undergraduates, the authors found that providing subjects with frame-of-reference (FOR) training and descriptively anchored rating scales led to substantial increases in rater accuracy and interrater reliability. You can read an in-press version here.

Those of you interested in the concept of core self-evaluations will want to read the study by Ferris, et al.

One area that we don't see enough research in is newcomer adaptation. The longitudinal study by Wang, et al. of a group of Chinese subjects helps fill that hole by exploring the relationship between adaptability, person-environment fit, and work-related outcomes.

Last but most certainly not least, Campion, et al. provide us with a review of best practices in competency modeling. Specifically, 20 of them. Of particular note to you may be that they distinguish competency modeling from job analysis. Want to read the whole thing? Me too! Wish I could have found an in-press version but no such luck.

Sidenote: while not specifically related to recruitment or selection, Christian, et al.'s piece on engagement may be of interest to several readers.

Tuesday, February 22, 2011

"Grit": An example of will do


Personnel psychologists often make a distinction between factors that indicate a person can do a particular task and those that indicate they will do. Usually "can do" facets include things like cognitive and physical abilities--baseline traits that a person must possess to even be able to perform the task. "Will do" facets are related to motivation and interest and get at whether the person is likely to perform the task, regardless of ability.

In the March 2011 issue of Fast Company, Dan and Chip Heath (Switch) write about the concept of "true grit" and its importance for successful performance. They point to the recent movie remake of True Grit. While many might assume the title refers to the crusty gunslinger (Rooster Cogburn), it actually refers to Mattie, a teenage girl who hires Cogburn to avenge her father's death.

The Heaths describe several examples where organizational leaders and innovators refused to give up in the face of failure or long odds and went on to impressive success. They even cite research conducted several years ago by Angela Duckworth and her colleagues, who found that scores on a measure of grit predicted retention at West Point (a prestigious U.S. military academy).

What they don't point out is that the retention finding was specific to a summer training program and scores on the measure of grit were not superior to other predictors when the criterion was first-year cadet GPA or performance ratings. In addition, the percentage of variance accounted for across the studies was around 4% and the measure correlated highly with a measure of conscientiousness. However, grit demonstrated incremental validity beyond IQ and conscientiousness and it's still a fascinating study that you can read here.
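
For readers newer to the term, "incremental validity" just means grit explained variance in the criterion beyond what IQ and conscientiousness already captured, which is typically checked with a hierarchical regression. Here's a minimal sketch on simulated data; the correlations and effect sizes below are my own assumptions for illustration, not Duckworth et al.'s:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 300

    # Simulated standardized predictors; grit overlaps heavily with conscientiousness
    iq = rng.normal(0, 1, n)
    conscientiousness = rng.normal(0, 1, n)
    grit = 0.7 * conscientiousness + 0.3 * rng.normal(0, 1, n)

    # Simulated criterion (e.g., a retention or performance composite)
    outcome = 0.20 * iq + 0.25 * conscientiousness + 0.20 * grit + rng.normal(0, 1, n)

    def r_squared(predictors, y):
        """R^2 from an ordinary least squares fit with an intercept."""
        X = np.column_stack([np.ones(len(y))] + predictors)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ beta).var() / y.var()

    r2_step1 = r_squared([iq, conscientiousness], outcome)
    r2_step2 = r_squared([iq, conscientiousness, grit], outcome)
    print(f"Step 1 R^2 (IQ + conscientiousness): {r2_step1:.3f}")
    print(f"Delta R^2 after adding grit:         {r2_step2 - r2_step1:.3f}")

The delta R^2 in the second line is the incremental validity: small in absolute terms (a few percent, much like the roughly 4% in the grit studies), but nonzero even with the heavy overlap between grit and conscientiousness built into the simulation.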

To be sure, "will do" factors are often overlooked when talking about selection. Often we focus exclusively on ability factors, either because we're unsure how to measure motivational factors or we're afraid to. But that doesn't mean they're not important. We know in some situations (e.g., jobs with low entry and ability requirements) noncognitive measures can out-predict ability measures.

This article also raised two other issues for me. First, laypeople often conceptualize KSAPs as dichotomous: you're either smart or you're not, you either have integrity or you don't. The reality is that practically anything you can think of measuring lies on a continuum--so we talk of degrees of personality characteristics, or levels of ability. With respect to the topic of this post the situation is the same: there are shades of grit.

The second issue has to do with having too much of a good thing. One can be too smart for a job; it's not that you can't do it, it's that you'll likely get bored after a short period of time. Similarly, one can have "too much" (or be too far on either end) of a personality continuum. Take grit. Imagine someone who was so determined that not only do they persist in the face of obstacles, they refuse to give up, even when presented with overwhelming odds. Now they're bordering on obsessional and/or delusional.

So what does this all mean? Back to basics:

1) Know the job and its requirements
2) Pick critical, necessary-at-entry KSAPs to measure
3) Select and/or develop high quality measures
4) Know your applicant pool and the likely range of scores you will obtain
5) Recognize that the relationship between tests and job performance is probably not linear (particularly when your concept of job performance is multifaceted)
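
On point 5, one simple way to probe for nonlinearity is to add a squared term and see whether it explains meaningfully more variance than a straight line. A minimal sketch with invented data--the shape of the curve and the noise level are my assumptions, not results from any study:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented data: performance rises with test score but flattens, then dips, near the top
    score = rng.uniform(0, 100, 400)
    performance = 2.0 * score - 0.012 * score**2 + rng.normal(0, 8, 400)

    # Linear R^2 is just the squared correlation
    r2_linear = np.corrcoef(score, performance)[0, 1] ** 2

    # Quadratic fit: include a squared term and recompute explained variance
    coeffs = np.polyfit(score, performance, deg=2)
    residuals = performance - np.polyval(coeffs, score)
    r2_quadratic = 1 - residuals.var() / performance.var()

    print(f"Linear R^2:    {r2_linear:.3f}")
    print(f"Quadratic R^2: {r2_quadratic:.3f}")  # a clear jump flags a nonlinear relationship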

And finally, back to true grit. The best thing we can do as assessment professionals is demonstrate it ourselves by not taking the easy way out and not folding in front of obstacles such as the desire for speed over quality, or ignorance. No guns required.

Thursday, February 17, 2011

March IJSA: So much good stuff

The March 2011 issue of the International Journal of Selection and Assessment is out, and it's a doozie. Check it out:

- Beaty, et al. present evidence that unproctored Internet tests (noncognitive: personality and biodata) generally had similar criterion-related validities across a spectrum of job performance criteria compared to the same administered in a proctored setting. Mmmm....UIT...

- Generally when people re-take a cognitive ability test, they do better. But do they do better on other tests of cognitive ability? Matton, et al. describe a study that looked at just that and found the answer to be "no."

- One advantage that is frequently claimed about personality inventories is that usage results in less adverse impact compared to, say, ability or knowledge tests. But might the AI depend on how hires are made (e.g., top-down, compensatory)? Turns out the answer is yes--at least according to some results by Risavy and Hausdorf.

- Hey, government agencies, still debating whether to put more resources into your career web portal? Maybe this will convince you. Selden and Orenstein show that governments with more usable portals as well as better available content not only attract more applicants per opening, but have less voluntary turnover of new hires.

- With advances in technology and changes to the work environment, clerical jobs have changed a lot over the last 30 years and the old ways of selecting for these jobs (namely g-loaded tests such as perceptual speed and verbal ability) likely need to be re-thought...right? Well, not so much, at least according to a meta-analysis by Whetzel and her colleagues. In fact, the criterion-related validity values met or exceeded those found 30+ years ago. The more things change...

- De Goede, et al. present the results of a study that explores the relationship between P-O fit and organizational websites but include the concept of person-industry fit. One implication: if you're trying to attract a more diverse group of candidates, work on making your portal more attractive.

- We spend a lot of time trying to make sure interviews are loaded with job-relevant content. But how much attention do we pay to the impact of impression management tactics on the part of applicants? Huffcutt's results make a compelling argument that we ignore the latter to our detriment as it may have more to do with interview ratings than the job-relevant content.

- How does one determine managerial potential? Well, it depends who you ask. Thomason, et al. present results that indicate when supervisors are asked, they focus on task-based personality traits (e.g., conscientiousness), whereas peers focus on contextual traits such as agreeableness. Given that leadership is ultimately about achieving things through subordinates, I wonder what we should be paying attention to....hmmm...how about both?

- Thinking about using self-ratings of political skill as part of the application process? I can certainly see situations where this skill may be helpful, but might this method be susceptible to inflation? Not so much, at least according to results from Blickle, et al.

- Last but definitely not least, Carless & Hetherington with some data on the impact of recruitment timeliness on applicant attraction. The longer we make applicants wait, the less attracted to the organization they will be, right? Not so fast. According to this research, it is perceived timeliness that matters, not actual timeliness (hence the importance of communication). In addition, this relationship is partially mediated by job and organizational characteristics.

Saturday, February 12, 2011

Research update: From item context to signaling theory and more

Here are some research articles from the last couple months:

Grand et al.'s study showed that adding a job-relevant context to test items--even under explicit stereotype threat--had either beneficial or no effects on test performance and test perceptions among female test takers. More evidence of the benefit of tailoring items to the particular position being tested for.

Impression management during assessments is often considered to be a negative thing--i.e., a source of error. But as Kleinmann and Klehe point out in their study of interviewee behavior, it may be an additional source of validity, and can be related to performance. At the very least it indicates that the person knows enough to alter their behavior to fit the job!

Celani and Singh provide a literature review of the role of signaling theory in applicant attraction (making inferences about important aspects of the job/organization from characteristics of the recruitment process) and how social identity interacts with those signals to impact attraction outcomes.

Last but not least, Soto et al. with a fascinating study of age and personality characteristics. Over a million individuals participated over the web, and the authors highlight several key results, such as late childhood/early adolescence being key periods for age trends, strong maturity and adjustment trends over adulthood, and the importance of looking at facet-level results.

Sunday, February 06, 2011

We can make assessments "fun"...should we?


Remember when tests were fun?

Neither do I. Tests and assessments have a long history of being about as popular as the dentist. Starting in grade school, many come to dread them as lifeless--and often inaccurate--judges of worth. (Of course, doing well on them tends to improve your view.)

Tests don't have to be boring. We write structured interview questions and multiple-choice items because that's what we've always done--and because we know how to do them right.

But there are plenty of ways of making them more interesting, from the way they're written (try: "You are in a maze of twisty passages, all alike"), to their presentation (e.g., animation, video), to the way people progress (e.g., adaptive testing), to the way results are given to you ("You've got the high score!"). Today more than ever before we have the flexibility to take those dry, monochromatic presentations and turn them into something eye-catching and even...dare I say...fun?

(10 bonus points to those of you who caught the Adventure reference in the preceding paragraph)
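
(Bonus aside: since adaptive testing came up a paragraph ago, here's a minimal sketch of one common flavor--a two-parameter logistic IRT model with maximum-information item selection. The item bank, parameters, and ten-item length are all made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical item bank: discrimination (a) and difficulty (b)
# parameters for a two-parameter logistic (2PL) IRT model.
a = rng.uniform(0.8, 2.0, 50)
b = rng.normal(0.0, 1.0, 50)

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of each item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

theta_grid = np.linspace(-4, 4, 161)
true_theta = 0.7                       # simulated examinee
asked, responses = [], []
theta_hat = 0.0

for _ in range(10):                    # 10-item adaptive test
    # Pick the unused item most informative at the current estimate.
    info = item_info(theta_hat, a, b)
    info[asked] = -np.inf
    item = int(np.argmax(info))
    asked.append(item)
    # Simulate the examinee's response.
    responses.append(rng.random() < p_correct(true_theta, a[item], b[item]))
    # Re-estimate ability by maximum likelihood over a grid.
    ll = np.zeros_like(theta_grid)
    for i, r in zip(asked, responses):
        p = p_correct(theta_grid, a[i], b[i])
        ll += np.log(p if r else 1 - p)
    theta_hat = theta_grid[np.argmax(ll)]

print("Estimated ability:", round(theta_hat, 2))
```

Each question is chosen to be maximally informative about the candidate's current estimated ability--which is exactly why adaptive tests can be shorter without sacrificing precision.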

The question is: Should we?

There's been quite a bit written lately about the "gamification" of assessments. Heck, I've been on that bandwagon for years. America's Army, first published in 2002 as the U.S. Army's foray into first-person shooters, was an early example of the potential marriage between staffing and entertainment--no, it's not technically a personnel assessment, but the recruiting mission is obvious, as is the potential use of the results. Since then we've seen a steady stream of innovation, from the use of branching video to realistic job preview-type assessments presented online.

There are several reasons why we might want to make assessments a wee bit more entertaining:

A) Because we can. Please don't use this reason.

B) As a recruiting tool to help you distinguish yourself from competitors ("Look, we're fun and cool! Join us!")

C) To encourage candidates to complete the assessment ("Yes it's a little long, but the time will fly by!")

D) As part of a realistic job preview ("Not sure if you want the job? Find out virtually!"). Nothing wrong with that. Self-selection out is good.

E) Because it helps us measure more accurately. Ah-ha. Now we're getting somewhere. To the extent that entertainment/interactivity helps us overcome candidate fatigue or other error, or otherwise helps us measure relevant KSAPs more accurately, we've won the game.

(10 bonus points to those of you who noticed the irony of me using a traditional multiple-choice presentation in that last part)

(50 bonus points to those of you who have noticed the use of bonus points in this post)

So we have a number of reasons we might want to make assessments more fun. But there are some reasons why we might not want to. Or at least that should make us pause:

1) Tests are serious. No, really. They have an enormous impact on people's lives. We don't want to water down their nature so much that we disrespect our applicants.

2) The "tests" out there right now that are the most "fun" are also not ones you'd want to use to select people (e.g., which animal are you?), although there are some surprising hybrids (find your Star Wars twin).*

3) Those assessments that are kinda-pitched-as-actual-assessments-but-not-really-don't-hold-us-to-that, have already started blurring the line (e.g., True Colors). We want to draw a clear distinction between assessments purely for fun and "okay, you need to take this seriously, it's for a job."

4) We can easily mess this up. All it takes is a high-level manager getting it into his/her head that we need to "make these test things more fun" and suddenly we're pressured into creating an expensive mess that doesn't deliver (like Daikatana).

(10 bonus points to those of you who noticed I switched response options from letters to numbers)

(50 bonus points to those of you that completed all three of the example assessments)

(100 bonus points if you know what Daikatana is)


So is there room for us to be a little more creative and investigate alternate--more immersive, interactive--ways of assessing candidate qualifications? Absolutely. But should we use caution to make sure we don't have a big-budget flop on our hands? You bet.


Now count up your points from this blog post. How did you do?

0 points: Wait...you did READ this, right?

10-20 points: Okay, maybe you're tired.

20-130 points: The force is strong with you...but you are not a Jedi, yet.

More than 130 points: Call me.


* I'm an owl. Or maybe a penguin. Oh, and kinda like Darth Vader. But also Princess Leia. And Mon Mothma. Now I'm confused.

Wednesday, February 02, 2011

New blog: Select Perspectives

There's a new blog on the block, and this time it's the folks over at Select International; their blog is called Select Perspectives. Three posts in their first week and a half (they started at the end of January) is a promising pace. I particularly enjoyed the post about talking to fifth graders about I/O psychology.

Here's to hopin' that we're witnessing the birth of a valuable addition to our blog roll! Welcome.

Oh, and the RSS feed is riiiggghhht here.

Sunday, January 30, 2011

Job ads: Choose your words wisely


Back in 2007 I began a project through this website to collect survey data from passers-by on words found in job ads. You may have seen a link to the survey on this blog's home page--or even taken the survey itself.

I was motivated to undertake this project because at the time (and this may still be the case) there was very little research on how attractive/effective certain words are in job advertisements. Seems like a simple question, and I was curious.

After 3.5 years I've decided to share what the data shows. My hope was to get a large sample, and I'm settling for just under 150, so take that into account. And of course the generalizability of the results is questionable, although the fact that it was gathered over such a long period of time and the sample group is fairly diverse may help us feel a little more comfortable.

Method

SurveyMonkey. Four questions. Began collecting data in June of 2007; the last data point was November of 2010. Each question was followed by the same series of fifteen words or phrases, generated pretty much off the top of my head, plus an "Other" option. Unfortunately I didn't think to randomize the presentation of options, so keep that in mind. I had to use two different collectors in SurveyMonkey since I have a free account and they max out at 100 responses. Where graphics are presented below, they are for the first collector only (I couldn't download and combine the data sets because of the free account); that collector has more responses but older data. You can see/take the survey here.
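
For the curious: if you can export the collectors as CSVs (a paid feature, apparently), combining them is a few lines of pandas. A minimal sketch--the file names and column layout below are pure assumptions, since SurveyMonkey's actual export format varies:

```python
import pandas as pd

# Hypothetical exports: file names and columns are assumptions,
# not SurveyMonkey's actual export layout.
c1 = pd.read_csv("collector1.csv")
c2 = pd.read_csv("collector2.csv")
combined = pd.concat([c1, c2], ignore_index=True)

# Q1 ("words seen most often"): assume one 0/1 column per word,
# e.g. 'q1_motivated'. The mean of a 0/1 column is its frequency.
q1_cols = [c for c in combined.columns if c.startswith("q1_")]
frequency = combined[q1_cols].mean().sort_values(ascending=False)

# Q2 (emotional response): assume one 1-7 rating column per word,
# e.g. 'q2_motivated'. Compare mean ratings across words.
q2_cols = [c for c in combined.columns if c.startswith("q2_")]
mean_rating = combined[q2_cols].mean().sort_values(ascending=False)

print(frequency.head(3))    # most frequently seen words
print(mean_rating.head(3))  # best-liked words
```

Tagging each frame with its collector before concatenating would also make the over-time comparisons below easy to reproduce.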

Participants

One hundred and forty-seven blog visitors. I expected most to identify primarily as HR professionals or academics, but the largest group (43%) chose job seeker. The remaining respondents fell fairly evenly into the other categories, including simply being interested in the subject matter.

Results

Word Frequency


The first question asked participants what words they see most often in job ads.

The #1 answer? By far: "Motivated", selected by 70-80% depending on the collector. (Caveat: it was also the first choice presented, and remember, options weren't randomized.)

Other frequent answers (in the 50% vicinity):

- Professional
- Organized
- Works Well Under Pressure

Interestingly, "Motivated" seems to have become more frequent over time, as has "Works Well Under Pressure" and "Flexible". "Independent", "High Energy", and "Friendly" were among the words becoming less frequent.

How about least frequent?

- Conscientious
- Smart
- Friendly

Both "Conscientious" and "Smart" became less frequent over time.

Emotional Response


The second question asked participants to rate their emotional response to the same words or phrases presented in the first question, on a seven-point rating scale from "Very Negative" to "Very Positive". Generally, most words/phrases received positive responses, and the difference in means between the best-liked and least-liked was less than one point.

So which received the most positive ratings?

- Motivated
- Reliable
- Professional
- Independent

How about least positive?

- Works Well Under Pressure
- High Energy
- Detail Oriented

My guess: to the extent there's a difference here, the best-liked words describe jobs that would allow applicants a fair amount of flexibility over their work and a stimulating work environment. The least positive words likely evoke fast-paced (hey, that would have been a good option) jobs, such as customer service, that may not be particularly stimulating.

Application Intentions


The last question asked respondents to select how likely they would be to apply for a job that contained the same list of words/phrases (knowing no other details). As with emotional response, most responses were associated with high intentions to apply, and the difference between most likely and least likely means was less than one point.

So which led to the highest application intentions? "Professional" and "Reliable" were consistent across the two collectors. "Motivated" became more positive over time, as did "Independent". This is consistent with emotional response.

How about the ones least likely to lead to application intentions? Matching the emotional responses, "High Energy" and "Detail Oriented" were at the bottom of the list. "Smart" became less associated with application intentions over time, as did "Works Well Under Pressure".

Discussion

Judging by this data, it appears that organizations wishing to distinguish themselves using job advertisements should feel comfortable using words that directly speak to applicant personality, such as "conscientious" and "friendly"--these were less frequently found and not associated with particularly negative responses. Organizations should try to use words that imply an environment that allows applicants to use their own judgment when making decisions and stay away from words that imply a hectic, always-on work environment.

Of course all this depends on the particular job being advertised. As we know, presenting candidates with a realistic job preview is immensely helpful (for them as well as the organization), and if the job is heavy customer service, well, it just is. In addition, this data says nothing about the quality of applicants--it may be that higher performing employees prefer different words than lower-performing ones.


I hope at the very least you found these results interesting. It's something to think about when crafting your job ads, and of course one could run a much more sophisticated study by including things like occupation and demographics.

Thursday, January 27, 2011

Jan TIP gems: HRO, UIT, SIOP, and VII


Yes, my goal was to create a blog post title using words no longer than four letters.

Anyway, for those non-SIOP'ers out there, or SIOP'ers that may have missed 'em, there were some gems in the latest issue of TIP:

How I/O can shape the practice of strategic human resources outsourcing (HRO)

A great little study on perceptions of various ways of mitigating cheating on unproctored Internet testing (UIT)

The difference between academics and practitioners in terms of what topics are valued at the SIOP conference (e.g., the latter were more interested in job analysis, staffing, and strategic HR)

Last but not least, a point-counterpoint on whether the addition of sex as a protected category under Title VII was a joke