Monday, June 25, 2007

June 2007 IJSA

Yep, it's journal time again...the June 2007 issue of the International Journal of Selection and Assessment is out...

This time the articles fall into two main camps: those focused on applicant reactions to selection procedures, and those focused on personality testing.


Applicant reactions

First up, Landon and Arvey looked at ratings of test fairness by 57 individuals with graduate education in HRM. The test characteristics included validity coefficient, mean test score difference between minority and majority candidates, and test score adjustment for the minority group. Results? Fairness ratings depended almost as much on who was doing the rating as on the test characteristics themselves! Implications? "Extensively cross validated with a global random sample of over 3 million with criterion-related validities exceeding .90!" may not be worth a hill of beans to some folk.
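Here's a minimal Python sketch of that variance-attribution idea, using simulated (entirely hypothetical) ratings rather than the study's data: if the spread attributable to raters rivals the spread attributable to the test profiles being rated, then who is rating matters about as much as what is rated.

```python
# A crude illustration with simulated data -- not the study's actual analysis.
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_profiles = 57, 12   # 57 raters, 12 hypothetical test profiles

profile_effect = rng.normal(0, 1.0, n_profiles)   # test-characteristic signal
rater_effect = rng.normal(0, 0.9, n_raters)       # rater leniency/severity
ratings = (profile_effect[None, :] + rater_effect[:, None]
           + rng.normal(0, 0.5, (n_raters, n_profiles)))

# Crude attribution: variance of rater means vs. variance of profile means
var_raters = ratings.mean(axis=1).var(ddof=1)
var_profiles = ratings.mean(axis=0).var(ddof=1)
print(f"variance due to raters:   {var_raters:.2f}")
print(f"variance due to profiles: {var_profiles:.2f}")
```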

Next, Bertolino and Steiner conducted a similar study of test fairness using responses from 137 university students (ahh, where would our research be without students?). Students rated 10 selection methods on 8 procedural justice dimensions. Results? Work sample tests were rated highest (as they usually are), followed by resumes, written ability tests, interviews, and personal references--all things that are expected by most job seekers. What wasn't perceived as well? Graphology--thankfully. The most important predictors of procedural justice were opportunity to perform and perceived face validity. Nothing earth-shatteringly new, but some international confirmation of previous, mostly U.S., findings.

Speaking of international support for the importance of opportunity to perform and face validity among an international sample judging fairness of selection methods (say that five times fast), Nikolaou and Judge analyzed responses from 158 employees and 181 students in Greece. Methods rated highest (drumroll...): interviews, resumes, and work samples, across both groups. However, students reported more positive reactions to "psychometric tests" (i.e., ability, personality, and honesty tests) than did employees--an important distinction. Also, although there do appear to be individual differences in ratings (see previous study), core self-evaluations didn't appear to explain much.

In summary: Work samples and interviews--high fairness ratings and (potentially) high validity. A great combination.


Personality testing

First up in the personality section is Kuncel and Borneman's study of a new method for detecting "faking" during personality tests by looking at the particular way "fakers" respond. The authors found support for this method (the sample was not described), with 20-37% of fakers identified. That may not seem like a lot, but the false positive rate (incorrectly labeling someone a faker when they're not) was only 1%, given a base rate of 56% honest responders. Not too shabby. Interestingly, the "faker" pattern did not correlate with personality or cognitive ability test results.
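To see why those numbers add up to a usable method, here's a back-of-envelope calculation in Python using the figures reported above (taking the upper 37% detection rate):

```python
# Base-rate arithmetic using the summary figures above.
honest_rate = 0.56            # base rate of honest responders
faker_rate = 1 - honest_rate
hit_rate = 0.37               # share of fakers correctly flagged (upper bound)
false_positive_rate = 0.01    # share of honest responders incorrectly flagged

flagged_fakers = faker_rate * hit_rate
flagged_honest = honest_rate * false_positive_rate

# Of everyone flagged, what fraction is actually faking? (positive predictive value)
ppv = flagged_fakers / (flagged_fakers + flagged_honest)
print(f"P(faker | flagged) = {ppv:.2%}")   # roughly 97%
```

In other words, even though most fakers slip through, almost everyone who does get flagged really is faking--exactly the trade-off you want when the cost of a false accusation is high.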

Second, a very interesting study of the "dark side" of personality by Benson and Campbell. Using two independent samples of managers/leaders (N=1306 and 290), the authors found support for a non-linear (inverted U) relationship between "dark side" scores and assessment center ratings as well as performance ratings. So, for example, having a moderate amount of skepticism or caution is good--but too little or too much creates a problem. The instruments used were the Global Personality Inventory and the Hogan Development Survey.
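For those wanting to poke at the inverted-U idea with their own data, here's a minimal sketch (simulated data, not the study's analysis): regress performance on the trait score and its square. A reliably negative quadratic term is the statistical signature of "moderate is best."

```python
# Simulated example of testing for an inverted-U (curvilinear) relationship.
import numpy as np

rng = np.random.default_rng(1)
n = 300
trait = rng.normal(0, 1, n)   # e.g., a standardized "skepticism" score
perf = 0.2 * trait - 0.3 * trait**2 + rng.normal(0, 0.5, n)

# OLS fit of performance on trait and trait-squared (plus intercept)
X = np.column_stack([np.ones(n), trait, trait**2])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(f"linear term: {beta[1]:+.2f}, quadratic term: {beta[2]:+.2f}")
# A negative quadratic term implies performance peaks at a moderate trait
# level and drops off at both extremes.
```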


Okay, there's a third category

Okay, one more study. de Meijer et al. analyzed results from over 5,000 applicants to Dutch police officer positions. The researchers were interested in rating differences when comparing ethnic minority and non-minority applicants. Results? A similar amount of (if not more) assessment information was used to judge the two groups, but a large number of "irrelevant cues" were used in judging minority applicants. One other difference: when rating minority candidates, assessors relied more on the ratings of others. Another argument for a standardized rating process!

Saturday, June 23, 2007

JobAdWords Update

Update on the JobAdWords Survey

In a previous post I introduced a survey I'm using to answer some questions about the words used in job advertisements. The survey can be taken here, or if that's full, here or here.

Results so far? Based on responses from a mixture of HR professionals and job seekers (I'll break them out when I have a large enough sample):

The words most often seen in job advertisements:

- Motivated
- Flexible
- Customer-focused

The words least often seen in job advertisements:

- Independent
- Smart
- Friendly

The words with the most positive emotional response:

- Conscientious
- Reliable
- Strong work ethic

The words with the least positive emotional response:

- Flexible
- High-energy
- Creative

The words associated with highest probability of applying:

- Professional
- Detail-oriented
- Reliable
- Friendly
- Conscientious

The words associated with the lowest probability of applying:

- High-energy
- Flexible

So here's an interesting question (worthy of further research): do the words in job advertisements cause a reaction because they say something about what the job would be like, or because they cause people to self-assess? Or both? For example, "strong work ethic" received a high emotional response, but was not one of the highest-rated words when it came to applying. Hmmm...

Friday, June 22, 2007

June '07 JOOP

The June 2007 Journal of Occupational and Organizational Psychology is out, and while it has several articles of interest, there's really only one directly related to recruitment/assessment.

In the study, Piasentin and Chapman looked at how perceptions of person-organization (P-O) fit come about--whether they stem from feeling like the organization is similar to you, complements you, or some combination of both.

Using a sample of data from 209 employees "of various occupational and organizational backgrounds", the authors found support for both similarity and complementarity effects. In addition, perceptions of fit were found to correlate with (and mediate the relationship to) several other important outcomes (a rough sketch of the mediation logic follows the list), including:

- job satisfaction
- organizational commitment
- turnover intention
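Here's what that mediation logic looks like in miniature, with simulated (hypothetical) data standing in for the study's measures: if the effect of, say, perceived value similarity on satisfaction shrinks substantially once perceived fit enters the model, fit is carrying (mediating) the relationship.

```python
# Baron & Kenny-style mediation check on simulated data -- illustrative only.
import numpy as np

def ols_coefs(y, predictors):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
n = 209                                          # mirrors the study's sample size
similarity = rng.normal(0, 1, n)                 # perceived value similarity
fit = 0.6 * similarity + rng.normal(0, 0.8, n)   # mediator: perceived P-O fit
satisfaction = 0.5 * fit + 0.1 * similarity + rng.normal(0, 0.8, n)

total = ols_coefs(satisfaction, [similarity])[1]        # total effect
direct = ols_coefs(satisfaction, [similarity, fit])[1]  # direct effect, fit controlled
print(f"total effect {total:+.2f} shrinks to {direct:+.2f} once fit enters the model")
```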

So what are the implications? How people perceive the match between their own skills, values, and goals and those of the organization matters--and not just to current employees, but to applicants as well. Organizations have to make sure they give applicants enough information to make this judgment, however. Too often job seekers are provided with minimal, or irrelevant, information about the position and the organization, such as long lists of tasks. Yes, people want to know what the salary is and where the job is located, but they also want to know who they'll be working with, what their career growth opportunities will be, and what the organization's take on work-life balance is.

This is low-hanging fruit from a staffing perspective, and organizations that get it are providing job seekers with this rich form of information.

...speaking of fit...there's a new book out on the topic, called Perspectives on Organizational Fit edited by Ostroff and Judge. It includes recruitment and selection as topics, but also covers others, such as leadership and teamwork.

Thursday, June 21, 2007

Police depts relax hiring standards

In response to serious recruiting challenges, many U.S. police departments are "lowering" their standards for hiring.

The reasons behind the shortage are many, including a strong job market, the Iraq war, and a high number of retirements.

Departments are using whatever means they have at their disposal, including upping their advertising. Case in point: while driving down 880 the other day in Oakland, CA, I noticed a sign promoting the $69,000 starting salary for Oakland Police Officers (and people wonder why it's hard to hire in the Bay Area).

The article cited above describes many steps departments are taking, some of which may initially seem like cause for concern. Let's take a look at them:

1. Forgiving minor criminal convictions, particularly old ones. If someone got busted 10 years ago for doing Ecstasy in college, and hasn't been in trouble since, is that still relevant?

2. Relaxing the 2-year college degree requirement, or allowing experience substitutions. I'm familiar with some research indicating a relationship between college education and officer performance, but if an officer has relevant experience (and performed well), this seems like a wash.

3. Raising the age limit. The relationship between age and job performance has been a hot topic in I/O psychology for a long time. While there are some declines with age (e.g., working memory), my reading is that they aren't practically significant in most situations. And we're talking about raising the limit to 40 or 44, not 85.

4. Relaxing fitness requirements. To me this comes back to plain ol' validation. Granted, it's not always easy to determine where a pass point should be set (do they have to run 300 meters in 55 seconds or 56 seconds?), but do the study. Find out where a reasonable point would be. Run the numbers. See if it makes sense.
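What might "running the numbers" look like? Here's a minimal sketch with hypothetical incumbent data: band the run times, look at on-the-job success rates per band, and set the pass point where success falls below whatever the department decides is acceptable.

```python
# Hypothetical expectancy-style analysis for setting a fitness cut score.
import numpy as np

rng = np.random.default_rng(3)
n = 500
run_secs = rng.normal(55, 3, n)   # 300m run times for hypothetical incumbents
# Assume slower times mean somewhat lower odds of adequate job performance
success = (rng.random(n) < 1 / (1 + np.exp(0.4 * (run_secs - 58)))).astype(int)

# Success rate by run-time band
for lo, hi in [(45, 50), (50, 55), (55, 60), (60, 65)]:
    band = (run_secs >= lo) & (run_secs < hi)
    if band.any():
        print(f"{lo}-{hi}s: {success[band].mean():.0%} successful (n={band.sum()})")
# A defensible pass point sits where the success rate drops below the level
# the organization deems acceptable -- a judgment call, but an informed one.
```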

A lot of the concerns that go along with these changes--hiring people with low integrity, hiring people physically or mentally unable to perform the job--can be mitigated with good assessment, such as memory tests, physical ability testing, integrity testing, and reference and background checks.

Overall, I think this is a good thing--minimum qualifications (MQs) are often barriers to employment for certain ethnicities, women, and individuals with disabilities. And the situation is even worse when they aren't based on any rigorous study of the necessity of the MQs to begin with.

On the other hand, I have heard anecdotally that similar changes in standards for U.S. Army recruits have resulted in more challenges for training.

What do you think--big deal or not?

Monday, June 18, 2007

2007 IPMAAC Conference Presentations

The 2007 IPMAAC Conference in St. Louis was great this year, with tons of informative and entertaining presentations, including great talks by Wayne Cascio, Bob Hogan, and Nancy Tippins.

The presentations are starting to roll in. Right now there's only one up--Michael Harris' great presentation titled Disparate Impact and Employment Testing: A Legal Update--but I expect more to show up this week and the next, so keep checking.

Next year I am co-chairing the program committee along with Carl Swander of Ergometrics. We're going to attempt to match the quality of this year's conference and plan on putting together an outstanding program. It's never too early to start thinking about presenting at next year's conference, to be held on June 8-11 in beautiful Oakland, California. Mark it on your calendars!

Saturday, June 09, 2007

Project JobAdWords

I'll be in St. Louis at the IPMAAC conference for most of the week, then taking a little time off...In the meantime I thought I would gather a little data...

I know there's research out there that looks at applicant reactions to various aspects of a job advertisement, such as information on selection procedures, pictures, and clear descriptions of work climate. But I'm not aware of any that specifically looks at the effect of WORDS commonly contained in job ads (although there is a recent study about word frequency).

You know the ones I'm talking about:
- "motivated"
- "creative"
- "works well under pressure"
etc.

But what exactly do these words mean to a reader? What reaction do they cause?

Today I'm starting Project JobAdWords, an attempt to try to answer these questions.

I've created a very brief survey designed to shed a little light on what these words mean. I would greatly appreciate your participation in this project. Simply go to one of the following websites--if survey #1 is full, go to survey #2, if survey #2 is full, please try survey #3:

Survey 1
Survey 2
Survey 3

As soon as I have enough respondents--judged completely arbitrarily and in no way using statistical sophistication--I will post the results. If I get enough results, I may continue this project by looking at other similar issues.

Thank you! Please pass the word.

Thursday, June 07, 2007

March '07 issue of Journal of Business and Psychology

The March issue of the Journal of Business and Psychology has two articles we should take a look at...

1) A study by Topor, Colarelli, and Han of 277 HR professionals found that evaluations of hypothetical job applicants were influenced by the traits used to describe the applicant as well as the assessment method used to measure those traits. Specifically, candidates described as conscientious were rated highest, as were those assessed using an interview--and the combination of the two was rated highest of all. (The other traits studied were intelligence and agreeableness; the other assessment methods were a paper-and-pencil test and an assessment center.)

How does this stack up against known evidence? Could be better. In general (and there are variations depending on the job), intelligence has been shown to be the best predictor of job performance. The value of a particular assessment method, on the other hand, depends not only on the job, but on the nature of the instrument as well as the construct being measured. But the research on measuring personality using interviews is relatively new, and certainly not as established as evidence supporting paper-and-pencil measures of cognitive ability (or personality, for that matter). Another example of HR folk not knowing the evidence as well as they should.

2) A meta-analysis by Williams, McDaniel, and Ford of compensation satisfaction research. The authors looked at 213 samples from 182 studies and came up with some interesting results.

- First, measures of satisfaction with direct pay (e.g., pay level, pay raises) were highly related to one another, while satisfaction with benefits showed only modest correlations with direct pay (suggesting people see direct pay and benefits as distinct compensation categories).

- Second, both the perception of the pay raise and the objective amount of the raise were important in predicting pay raise satisfaction. This fits well with what we know about the importance of procedural justice.

- Third, a modest, negative relationship was found between employee costs and benefit satisfaction. This suggests benefit costs may not be a particularly important factor when trying to lure candidates. It also suggests that asking employees to pick up more of their benefit costs may be a viable strategy as long as attention is paid to other forms of compensation (which apparently have a larger impact on overall satisfaction).

Wednesday, June 06, 2007

Recruiting metrics

One of my readers asked me to address some recruiting metrics, which I think is a good topic to visit.

Although there are some strong feelings out there about the usefulness of metrics, the fact is that traditional measures of recruiting activity are still out there being used, both as measures of success and for planning purposes.

The two benchmarks that were specifically asked about were:

- Number of fills per recruiter
- Number of hours per recruitment

Unfortunately a lot of the benchmark recruiting metrics out there are not free. Several organizations offer detailed benchmarking data, but ya gotta pay for it. Examples include:

- Staffing.org's Recruiting Metrics and Performance Benchmark Report ($425)

- Society for Human Resource Management (SHRM)'s Human Capital Benchmarking Service ($425 for non-members)

- The Saratoga Institute publishes a Human Capital Effectiveness Report

Even so, I've gathered what data I could get my hands on, so let's take them one at a time.


Fills per recruiter

How many fills, or placements, does a typical recruiter make? Of course it depends on the time frame and what type of placement we're talking about, which makes this statistic challenging to nail down. But here's what I found:

- IPMA-HR's 2006 Recruiting and Selection Benchmarking Report: The median number of recruitments per HR staffer was in the 4-6 category, closely bordering on the 7-10 category, so six is probably a good overall estimate given this data.

The other sources I found were more anecdotal. For example:

- JBN Consulting notes a typical recruiter makes .75 placements per month

- Cluff & Associates state: "A dedicated recruiter in a corporate environment can average 8 fills per month if activity levels warrant. Agency recruiters, however, are likely to generate 2 or 3 placements per month."

- A Google search on the topic brings up quite a few recruiter job advertisements, which gives us some idea of the expected workload. In general, 1-3 fills per month seems a standard expectation.

Conclusion? The typical number of fills per recruiter seems to vary quite a bit depending upon the type of job, job market, etc. For example, Dave Lefkow noted a few years back that 100 fills a month for customer service jobs was considered successful, while 5-10 analyst/developer fills a month might be acceptable. If I had to pick an "average" number, I'd go with the IPMA-HR figure of six (although keep in mind that figure comes from public sector organizations).


Hours per recruitment

Fortunately there's a lot more information about benchmarks for how long it takes to conduct a recruitment (here used synonymously with "time to fill"), although here too the figure varies with the type of recruitment:

- Surveys conducted by the Employment Management Association (affiliated with SHRM) consistently show an average time to fill of 42-45 days. Source

- 2004 Workforce Diagnostic System Benchmark by the Saratoga Institute indicated 48 days.

- 2006 SHRM metric for same-industry hires of 37 days (from opening of requisition to offer acceptance). Source. SHRM also offers customized reports (for a fee), available here. Samples indicate median times to fill of 35 and 40 days, depending on industry.

- 2006 Corporate Executive Board survey indicated 51 days. Source: The Economist

- 2006 IPMA-HR benchmark report: average of 49 days between vacancy announcement and first day on the job, with a low of 44 for labor/maintenance and a high of 57 for public safety

- Staffing.org and HR Metrics Consortium's 2003 Recruiting Metrics and Performance Benchmark: 70 days (this includes time between offer acceptance and showing up to work). Source

Conclusion? Again, depends on the job (and how you define the metric), but we can estimate around 40 days between vacancy announcement and offer acceptance, with another 10-20 days between offer acceptance and first day on the job.
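Since "how you define the metric" does so much work here, a quick Python sketch (with made-up requisition dates) shows how the same records yield different time-to-fill numbers depending on the end point you pick:

```python
# Two definitions of "time to fill" computed from the same hypothetical records.
from datetime import date

requisitions = [  # (posted, offer accepted, first day on the job)
    (date(2007, 1, 8), date(2007, 2, 14), date(2007, 3, 1)),
    (date(2007, 2, 1), date(2007, 3, 12), date(2007, 3, 26)),
    (date(2007, 3, 5), date(2007, 4, 16), date(2007, 5, 7)),
]

to_accept = [(a - p).days for p, a, _ in requisitions]
to_start = [(s - p).days for p, _, s in requisitions]
print(f"posting to offer acceptance: avg {sum(to_accept) / len(to_accept):.0f} days")
print(f"posting to first day:        avg {sum(to_start) / len(to_start):.0f} days")
# Benchmarks quoting ~40 days vs. ~50-70 days may simply be measuring different spans.
```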

If you have additional data, please share! Inquiring minds want to know...

Tuesday, June 05, 2007

Feds host virtual career fair

The U.S. Office of Personnel Management (OPM), in conjunction with the Partnership for Public Service, will host a "virtual career conference" to highlight opportunities with the federal government today, Wednesday, and Thursday. You can view the live webcast here.

The event will kick off at 11 a.m. on June 5th with an overview of federal employment by the Director of OPM, Linda Springer. Subsequent panel sessions will focus on finding and applying for jobs, insight from new federal employees, student programs, IT jobs, opportunities in medicine and public health, how to host a career fair, and several other topics.

According to the news release, videos of all twelve panels will be available through both OPM and the Partnership for Public Service for the remainder of the year. The event is specifically targeted at the 600 colleges and universities around the U.S. that participate in the Call to Serve initiative.

Kudos to these organizations for being creative about highlighting opportunities in the public sector and taking advantage of technology to get the word out.

Monday, June 04, 2007

Summer 2007 Personnel Psychology + free content

The Summer 2007 issue of Personnel Psychology (v. 60, #2) is here and it's got some good stuff, so let's jump right in!

First off is the aptly titled "A review of recent developments in integrity test research" by Berry, Sackett, and Wiemann, the fifth in a series of articles on the topic. This is an extensive review of research on integrity tests since the last review, which was done in 1996. There's a lot here, so I'll just hit some of the many highlights:

- It appears that integrity tests can vary in their cognitive load depending on which facets are emphasized in the overall score.

- It is likely that aspects of the situation impact test scores (in addition to individual differences); more research is needed in this area.

- Although there have been no significant legal developments in this area since the last review, concerns have been raised over integrity tests being used to identify mental disorders. The authors do not seem concerned, as these tests (e.g., Reid Report, Employee Reliability Index) were not designed for that purpose and thus likely do not violate EEOC Guidelines.

- Research on subgroup scores (e.g., Ones & Viswesvaran, 1998) indicates no substantial differences on overt integrity tests; no research has addressed personality-based tests.

- Test-takers do not seem to have particularly positive reactions to integrity tests, although this appears to depend upon the type of test, items on the test, and response format.

Next, Raymond, Neustel, and Anderson investigate certification exams and whether re-taking the same exam or a parallel form results in different score increases. Using a sample of examinees taking ARRT certification exams in computed tomography (N=79) and radiography (N=765), the authors found no significant difference in score gains between the two types of tests, suggesting exam administrators may wish to re-think the importance of alternate forms for certification, particularly given the cost of development (estimated by the authors at between $50K and $150K). The authors do point out that the generalizability of these results is likely limited by test type and examinee characteristics.

Third, Henderson, Berry, and Matic investigate the usefulness of strength and endurance measures for predicting firefighter performance on physically demanding suppression and rescue tasks. Using a sample of 287 male and 19 female fire recruits hired by the city of Milwaukee, the authors found that both measures (particularly strength measures such as lat pull-down and bench press) predicted a variety of criteria, including a roof ladder placement exercise, axe chopping, and "combat" test. The authors suggest continued gathering of data to support the use of these types of tests (while acknowledging the ever-present gender differences), and discuss several problems with simulated suppression and rescue tasks, now used by many municipalities in light of previous legal challenges to pure strength and endurance measures.

Lastly, LeBreton et al. discuss an alternate way of demonstrating the value of variables in I/O research. Traditionally researchers have focused on incremental validity, essentially the amount of "usefulness" that a variable adds to other variables already in the equation. (This allows you to do things like determine whether a personality test would help you predict job performance above and beyond the test(s) you already use.) Instead, the authors present the idea of relative importance, which shifts the focus to the importance of each variable in the equation. Fascinating stuff (and far more than I can describe here), and something I'd like to see more of. I believe the authors are correct in stating it would be much easier to talk to managers about how useful each test in a battery is than about the fact that overall the battery predicts 35% of performance. The article includes a fascinating re-analysis of Mount, Witt, and Barrick's 2000 study of the use of biodata with clerical staff.
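To make the contrast concrete, here's a minimal Python sketch (simulated data) of the traditional incremental-validity framing--delta R-squared when a predictor joins the battery--alongside each predictor's standalone R-squared. Relative-importance methods, which the article covers, go further by splitting the full R-squared among correlated predictors; that's beyond this sketch.

```python
# Incremental validity (delta R-squared) on simulated data -- illustrative only.
import numpy as np

def r_squared(y, *predictors):
    """R-squared from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(4)
n = 400
cognitive = rng.normal(0, 1, n)
personality = 0.3 * cognitive + rng.normal(0, 1, n)   # modestly correlated predictors
perf = 0.5 * cognitive + 0.3 * personality + rng.normal(0, 1, n)

r2_cog = r_squared(perf, cognitive)
r2_full = r_squared(perf, cognitive, personality)
print(f"R2, cognitive alone:              {r2_cog:.2f}")
print(f"R2, personality alone:            {r_squared(perf, personality):.2f}")
print(f"delta R2 from adding personality: {r2_full - r2_cog:.2f}")
# Note how a predictor that is useful on its own can add only a sliver once
# a correlated predictor is already in the equation.
```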

----

This issue also includes reviews of several books, including the third edition of Ployhart, Schneider and Schmitt's Staffing Organizations (conclusion: good but not great), Weekley and Ployhart's Situational Judgment Tests (conclusion: good as long as you already know what you're doing), and Griffith and Peterson's A Closer Examination of Applicant Faking Behavior (conclusion: good for researchers, not so good for managers).

---

But wait, there's more...the Spring 2007 issue, which had some interesting stuff as well, is free right now! So get those articles while you can. Hey, it's worth surfing over there just for McDaniel et al.'s meta-analysis of situational judgment tests!

Saturday, June 02, 2007

Google buys FeedBurner; introduces Street View

A couple news items today about Google.

First, Google continued its buying spree and purchased FeedBurner, a popular feed syndication service (and the one I use). This will allow Google to advertise in new ways, and also gives it somewhat of an end-to-end blog platform now that it owns both Blogger and FeedBurner. More information about the purchase is available on FeedBurner's blog, and they have this FAQ. Presumably this means a marriage of their analytic tools, which hopefully is good news for anyone who publishes content.

Implication for us? If you already publish a blog (or other type of feed) or are thinking about it, you'll want to keep close tabs on what this purchase will mean for publishers (e.g., ad distribution, fees, etc.).

Second, Google has introduced Street View, an add-on to Google Maps that allows for 360-degree viewing at the street level. You can even follow arrows that will lead you down the street. Right now it's available only in San Francisco, Las Vegas, Denver, New York, and Miami.

Why do we care? Seems to me a good way to preview the local area for potential applicants. I'm all about the realistic job previews. Also, probably a good opportunity for us to see how our area is being presented--what do candidates see when plopped down in front of our buildings? Would you want to be there?

Friday, June 01, 2007

Becoming passive employers

Let's take a moment and think about what job search could be.

Right now, job search is static. Someone searches for a job, and either a vacancy exists or it doesn't. But what if we were a little more creative?

What if instead of getting "zero results for your search", the candidate received something like:

There are no current openings that match your search. However, the following positions exist that may have openings in the future.

What followed would be a detailed description of current positions in the organization that matched the search criteria--jobs people actually had. And you would allow people to submit a job interest request so they would be notified when that job (or a similar job) became open. Yes, some systems already have job interest requests, but too often they're based on broad job titles and fail to provide the rich information a job seeker needs (e.g., who they will work with, learning opportunities).

What else could we do with this feature? We could profile the individuals that are in the current job. Okay, maybe not everyone, but a sample. At the very least we could provide a basic job description (and not a boring one).

This idea fits with a concept I think we all need to focus more on. In addition to seeking passive candidates, we should be passive employers. Passive job seekers aren't looking for a job, but they could be. Passive employers don't have that particular opening--but they could. But unless you tell candidates that, how will they know? How do they know that a perfect match exists in your organization, and if they just had waited another week to search, they would have seen it?

Why do we make applicants the servants of the ATS, not the other way around?

Let's take this a step further.
Let's say I'm an attorney in Seattle looking to relocate to Boston. I know I'd like to work for a smallish firm with decent billable hours and co-workers who know their stuff and are good at their jobs, but who value work-life balance.

How the HECK am I supposed to find that firm? Sure, I can look for current vacancies on job boards. Or maybe I just happen to know someone who works for such a firm and they have an opening. Or I might be able to find some information through a Google search or services such as Vault or Hoover's (although that information is very limited, you still have to know the company name, and information on public sector agencies is anemic). But that'll only get me so far. Then what?

There is no general database of employer qualities to search through (sites like Jobfox are trying a similar idea but it's still based on vacancies). No easy way to punch in the above criteria and have a system spit out, "Here are all the firms that meet your criteria. Here are the ones that currently have openings, and here are the ones that don't currently, but may in the future."

People search is getting more and more sophisticated. What about employer search? If we expect applicants to take an active role in managing their career, we should give them the information they need to do it. We can, and should, do better.

Wednesday, May 30, 2007

Evidence-based recruitment and assessment

Recently I've been reading Pfeffer and Sutton's Hard facts, dangerous half-truths and total nonsense: Profiting from evidence-based management.

It's one of those business books that's both entertaining and enlightening. As you know, there is a LOT of "fluff" out there--this is not one of those books. The authors are all about using good data and experimentation to discover what really works, not just what sounds good or what someone else recommended.

Case in point: the "War for Talent." This phrase gained popularity in the late 90's through several employees of McKinsey Consulting and their book of the same name. Those authors argued that the best performing companies had a deep commitment (obsession?) with finding and promoting talented individuals and offered data that claimed to support a link between this mindset and firm performance. But as Pfeffer and Sutton point out, a closer look at the data raises some eyebrows. Specifically, talent management practices were measured AFTER performance measures, resulting in a classic case of correlation-causation confusion.

Certainly Pfeffer and Sutton aren't the only ones to raise concerns about a talent obsession, but they do so in a very accessible and thorough manner. They highlight three poor decision-making practices that apply to talent management (as well as many other issues):

- Casual benchmarking (for example, the failure of "Shuttle by United" to copy Southwest Airlines' success, or U.S. automotive companies attempting to copy Toyota's success). We see this in our field when folks want to know "how other people are recruiting" or "what test everyone else is using." Good information to know, but look before you leap.

- Doing what (seems to have) worked in the past (for example, using incentive pay in your new organization because it seems to have worked at another one). The best example of this is interview questions by managers who just know their questions about favorite books and who they'd want on a deserted island work--even though they don't have any data to support their view. In my experience about 20% of managers are good interviewers (and I place a lot of the blame on HR).

- Following deeply held yet unexamined ideologies (for example, equity incentives, the so-called "first-mover advantage", and merit pay for teachers). In our area this includes things like believing applicant tracking systems always result in improvement, or that integrity tests are more discriminatory than other types of tests.

So how do we apply these lessons to recruitment and assessment? Here are just a few ways:

1. Be a critical thinker. We know we're supposed to eye HR metrics with some skepticism, but do we? Do we adopt "best practices" without thinking about how our organization might differ in important ways? Are we lured by shiny new pieces of technology without asking ourselves whether we might be better off without it? On the flip side, do we resist new ways of doing things without even considering the possibilities?

2. Know the evidence. HR is not guesswork--we know a lot about what works and what doesn't. Every HR practitioner and manager should read Rynes, Brown, and Colbert's "Seven common misconceptions about human resource practices", with a more detailed analysis here.

3. Push back when you hear something that sounds too simple or too good to be true--it probably is. Two examples: behavioral interviewing does not solve all of our assessment problems, and social networking sites will not solve all our recruiting problems.

4. Model evidence-based decision making. Make it clear that you are making decisions based on the best data you could find/gather and that this is an expectation for everyone. Rather than rushing into a decision, take the extra time to gather whatever information you can get your hands on--as long as it doesn't lead to paralysis by analysis.

5. Do experiments whenever possible. Include an assessment instrument as a research-only tool and see if it predicts performance. Try out different advertising methods and mediums and track applicant numbers and quality. Did you know Yahoo! typically runs about 20 experiments at any time, changing things like colors and the location of text? We can't all be Yahoo!, but we can all be experimenters.
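Here's the kind of experiment that last point suggests, sketched in Python with made-up counts: run two versions of a job ad and check whether the difference in application rates is bigger than chance (a standard two-proportion z-test).

```python
# Two-proportion z-test for an ad experiment, using hypothetical counts.
from math import sqrt, erf

views_a, applies_a = 2000, 120    # ad variant A: 6.0% apply rate
views_b, applies_b = 2000, 156    # ad variant B: 7.8% apply rate

p_a, p_b = applies_a / views_a, applies_b / views_b
pooled = (applies_a + applies_b) / (views_a + views_b)
se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-tailed normal approx.
print(f"apply rates: {p_a:.1%} vs {p_b:.1%}, z = {z:.2f}, p = {p_value:.3f}")
# A small p-value suggests variant B's wording/medium genuinely pulls more applicants.
```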

Some of my favorite quotes from the book...

"If doctors practiced medicine the way many companies practice management, there would be far more sick and dead patients, and many more doctors would be in jail."

"The fundamental problem is that few companies, in their urge to copy--an urge often stimulated by consultants who, much as bees spread pollen across flowers, take ideas from one place to the next--ever ask the basic question of why something might enhance performance."

"Instead of being interested in what is new, we ought to be interested in what is true."

"There is really only one way around this reluctance to confront the hard facts, and that is to consciously and systematically understand the psychological propensity to want to both deliver and hear good news and to actively work against it."

Tuesday, May 29, 2007

Must-reads on OFCCP compliance

John Sumser has written two excellent posts about OFCCP compliance, job posting in light of the sunset of America's Job Bank, and the final rules for the Jobs for Veterans Act.

First post here.

Second post here.

(Bolded date may indicate 2006 but these are 2007 posts)

Highly recommended for employers covered by OFCCP laws/regulations.

Monday, May 28, 2007

LinkedIn's JobsInsider

Yes, I'm a bit late to the game on this one (okay, a LOT late to the game). But I'm guessing a significant number of my readers aren't familiar with a really useful piece of technology offered by LinkedIn, one of the most popular business social networking sites (11+ million members), called JobsInsider.

What is it? It's a free browser plug-in/toolbar (for IE or Firefox) that serves several purposes. One is simply to provide a quicker way to access LinkedIn content. But the much cooler feature is that when you're looking at jobs on Monster, CareerBuilder, HotJobs, Craigslist, SimplyHired, Dice, or Vault, a separate window comes up that notifies you if anyone in your LinkedIn network works for the organization and allows you to contact them to help with making the right connections.

Let's look at an example. I went to SimplyHired and looked up jobs working for Apple in Sacramento, CA. When I click on any of the jobs that come up, the JobsInsider window pops up and tells me 207 people in my LinkedIn network work for Apple, and two are friends of my connections. I can click on the link and it takes me to a description of those people. Click on any one person and it tells you how you're linked to that person. Here's what it looks like:

[Screenshot: the JobsInsider window listing LinkedIn connections at Apple]
Not only that, but (at least with SimplyHired), when job search results come up, you can click on "who do I know?" for each position to have LinkedIn search your network.

Pretty nifty, huh? So why do we care, other than it being a nifty little piece of technology?

For one, it's another reason to be a member of LinkedIn--at least if you're interested in being contacted by applicants. Given the choice between pursuing a job somewhere where I don't know anyone and a place where I can make a contact, I'll take the latter.

Second, it's a good way to double- (or triple-) check credentials of applicants. Most of these networking sites strongly encourage you to put in your educational background and job experience. If what's listed there doesn't match the resume or application they submitted to you, that's something to follow up on. Could be a simple explanation, could not be.

Finally, another reason to care about this is it's likely a sign of things to come. With meta-people search sites like ZoomInfo out there, and ones like Spock coming on board, we need to be very comfortable with our on-line identities and understand how they link to other people.

One last cool feature of the LinkedIn toolbar. When you open an e-mail in Gmail, Yahoo! Mail, MSN Hotmail, or AOL, you automatically have the option to get someone's LinkedIn information or invite them to your network. In fact there's even a tool that will do the same thing for your Outlook mail.

Happy Memorial Day!

Friday, May 25, 2007

The recruitosphere vs. the assessosphere

One thing that's vexed me lately is why there are so many blogs devoted to recruiting and so few devoted to assessment.

There are so many recruiting blogs that there are lists devoted to keeping track of them, and new ones are announced practically every day.

There are so few blogs focused on assessment that I have a hard time thinking of them. The only 100% "true" blog (besides this one) is Jamie Madigan's SelectionMatters--and he just started back up again (thank goodness) after a long hiatus. Okay, Dr. Michael Mercer has one but it doesn't allow comments. Yes, Michael Harris does a great job with EASI-HR Blog, but assessment is just one of the topics he covers. Ditto for Alice Snell's Taleo Blog. Charles Handler has the awesome Rocket-Hire newsletter, but that's periodic and doesn't allow for (as far as I know) reader comments.

Then there's PIOP, a message board that's been around for a while, but it's focused primarily on students. Perhaps the recently established iocareers.com will go somewhere, but it too seems focused on students--which isn't a bad thing, but it limits the reach.

So what's up? Why the difference? I have some theories:

1. Recruiters like to hear/read themselves. Yes, that's undoubtedly true. But do you know any assessment professionals that DON'T?

2. Recruiters use them to market themselves. I think this is a pretty compelling possibility. Still, if true, why aren't more assessment professionals doing the same? There are plenty of assessment consulting outfits.

3. Recruiters are earlier adopters of technology. Assessment professionals are, IMHO, a conservative bunch when it comes to technology adoption. They've seen things come and go, and don't want to jump wholeheartedly into a possible flash-in-the-pan. Prime example: SIOP's excellent news articles are not syndicated.

4. Recruiters are more focused on the business side of things, assessment folk more on the research side. This seems likely for I/O Ph.D.'ers, but what about the rest of the assessment community? Are we still stuck trying to apply law and rules rather than creating networks and sharing best practices?

5. Assessment folks don't know about the blogosphere. I think there is some truth to this, as well. As a group, assessment professionals just don't seem to have taken to electronic forms of information sharing (the IPMAAC listserv being a notable exception), so I don't see why blogs would be any different. They seem to prefer conferences.

6. Assessment folks don't see the value of blogs. And some in the recruiting field don't either.

7. Recruiters are more extroverted than assessment types. I have absolutely no evidence to support this claim, which hasn't stopped me in the past and won't stop me this time.

Is it just me or is something (not) going on here?

Thursday, May 24, 2007

Technorati Gets a Fresh Coat

Technorati, one of the most popular blog search engines, has undergone a face lift.

According to this post, changes include:

1 - Simplified search. It's less technical now and easier to find what you're looking for.

2 - Enhanced results--searches now include posts, blogs, photos, videos, podcasts, etc.

3 - Improved user interface (and it's still being tweaked), including a ticker at the top that tracks popular searches.

Why do we care about blog search engines?

- They're a great way to keep up on the latest news, research, articles, etc. in your field of interest. Blogs are oftentimes updated more frequently than other forms of media and are very targeted.

- They're a good way to identify potential applicants--people who are passionate about their interests and keep up to date on developments.

- For those of you considering starting a blog, or looking to differentiate your blog from others out there, they can be used to canvass the existing landscape and make comparisons.

- They can be used to see what's popular--blogs and searches.

- They can, sometimes, be used to gather information about applicants you're considering for hire. You may be able to find work samples or opinions/thoughts expressed by people you're considering--and you can use that information to follow up/verify in person.

- They're a great way to make connections and become part of the larger community.

Oh, and don't forget about other blog search engines (some people have a preference), such as Google Blog Search.

Wednesday, May 23, 2007

2007 SIOP Conference: Highlights, Part 3

This is the fourth in a series of posts about the 2007 SIOP Conference. In Part 1 I talked about some of the new products out there and in Part 2 and Part 3 I reviewed some of the research that was presented. This post continues that review...


Employment interview structure and discrimination litigation verdicts: A quantitative review

Pool, McEntee, and Gomez analyzed 31 federal court cases from 1990 to 2005 (27 claims of disparate treatment, 7 of adverse impact) to see if there was a relationship between the amount of interview structure and verdicts in employment discrimination cases. Most cases (73%) were brought under Title VII and involved promotional decisions (65%). Race discrimination was the most common allegation (47%) and the vast majority of cases (84%) involved a single plaintiff. For both types of claims, the strongest factors associated with a victory for the defendant (the employer side) were having interviewers who were familiar with job requirements and having a guide for conducting the interview. In disparate treatment claims, defendants were more likely to prevail if they also had standardized questions and identical interviews for each applicant. In disparate impact cases, defendants fared better when they had evidence of validity (which makes sense given the burden shifting in these cases). Similar results to Williamson et al.'s 1997 study, but good data to have--see, we're not just saying standardize those interviews because we're sadistic HR folks.


Recruiting through the stages: Which recruiting practices predict when?

This meta-analysis by Uggerslev and Fassina of 101 studies looked at the impact that various "recruitment predictors" (e.g., job-person fit, job/organizational attraction) had on various outcome criteria (e.g., job pursuit intention, acceptance intentions). Results depended somewhat on the criterion, but perceived fit between the individual and the job/organization was across-the-board the strongest predictor. The only predictor that matched perceived fit was job characteristics, which tied for predicting acceptance intentions. The strength of the correlations varied, from a low of .15 between perceived fit and job choice to .47 between perceived fit and recommendation intentions. So how do we use this? The authors suggest efforts to increase the appearance of a good fit between the values and goals of applicants and those of the organization may pay off (I'm thinking, say, by focusing on aesthetics and message customization or clearly indicating what you're looking for).


Meta-analysis on the relationship between Big Five and academic success

Okay, so it's not directly about recruitment or assessment, but it's still interesting. The title pretty much says it all--the presenters (Trapmann, Hell, Hirn, and Schuler) were looking here at the relationship between Big Five personality traits and academic success. Results? As you might expect, it depends what you mean by "success." Neuroticism was related to academic satisfaction (hey, that's why they're neurotic, right?) while Conscientiousness correlated with grades and retention. The other three factors (Extraversion, Openness, and Agreeableness) were not related to success.

That's probably the end of my review of 2007 SIOP presentations, unless I manage to obtain more presentations. Stay tuned for reviews from the upcoming IPMAAC conference!

Monday, May 21, 2007

Red--it's not just for bulls anymore

Don't like red pen? Turns out you're not the only one, according to a new study by Andrew Elliott and colleagues titled, "Color and Psychological Functioning: The Effect of Red on Performance Attainment" in the February edition of the Journal of Experimental Psychology: General.

Using several different experiments, the researchers found that even a brief glimpse of the color red can lower scores on achievement tasks. For example, one of the experiments involving nearly 300 U.S. and German high school and undergraduate students found that simply looking at a red participant number (versus black or green) prior to completing an IQ test resulted in a performance decrease.

The authors hypothesize that the color red evokes an anxiety response which in turn interferes with the ability to complete the task. Where does the anxiety come from? Some possibilities, according to the authors, include:

- Evolution: we may be hardwired to respond to red (think of the association between red and aggression in nature)

- Daily life: red is often associated with warnings or commands (e.g., stop lights, stop signs, dash lights)

- School: who didn't cringe a little when they saw red marks on their essays or tests in school? Maybe you even have a supervisor who does this?

Lesson: be careful with using red in testing material. There's enough error out there being introduced in testing situations without worrying about color.

So...did this article make you nervous?

Hat tip: SIOP.org

Thursday, May 17, 2007

EEOC Meeting Focuses on Employment Testing and Screening

On Wednesday, May 16th, the U.S. Equal Employment Opportunity Commission (EEOC) held a meeting to discuss issues relating to employment testing and screening, including the relevant laws enforced by the EEOC (e.g., Title VII, ADA, ADEA).

Several issues were discussed, including potential problems with specific screening methods (e.g., cognitive ability tests, credit checks), how the EEOC can better serve employers, and steps employers need to take in order to meet professional and legal guidelines (e.g., gathering validity evidence, investigating alternative methods with less adverse impact). Not for the first time, speakers emphasized that the Uniform Guidelines on Employee Selection Procedures need to be updated.

Speakers included EEOC staff members, plaintiffs in two of the more discussed recent cases (EEOC v. Dial Corp. and EEOC v. Ford Motor Co.), attorneys, and professionals in the field of assessment, including James Outtz and Kathleen Lundquist, who have frequently been retained as expert witnesses in employment discrimination cases.

Said Richard Tonowski from the EEOC:

"A mature technology of testing promises readily-available methods that serve as a check against both traditional forms of discrimination as well as the workings of unconscious bias. If that is the promise, then the threat comes from institutionalizing technical problems not yet fully addressed, the undermining of equal employment opportunity under the guise of sound selection practice, and the unintended introduction of new problems that will require resolution to safeguard test-takers and test-users."

Personality testing was mentioned prominently as an increasingly common practice among employers, but it appears (contrary to my earlier fears) that the focus was on those tests that could be considered "medical tests" under the ADA (such as the original MMPI), which leaves out many products, including the HPI, 16PF, and PCI.

Hopefully I'll have the slides from the presentation to post soon. In the meantime, check out this excellent summary from an attendee, and you can view the EEOC press release here. Statements of the speakers, along with their bios, can be found here, and it looks like the meeting transcript will be available there as well.

Wednesday, May 16, 2007

Targeted job board: I/O Careers

Do you have a job opening in the field of Industrial/Organizational (I/O) Psychology, which ranges from individual assessment to organizational analysis? Then you might want to look at posting on I/O Careers, which now has over 650 members from 40 schools.

Membership is limited to "individuals who are serious about the field of I/O Psychology" but during the beta of the site, posting jobs is FREE. (After that it's $250 a pop).

Check it out.

Tuesday, May 15, 2007

Court approves Googling employee...sort of

On May 4, 2007, the U.S. Court of Appeals for the Federal Circuit upheld a ruling by the U.S. Merit Systems Protection Board (MSPB) against an employee of the National Oceanic and Atmospheric Administration. The individual was terminated for a laundry list of reasons, including misuse of a government vehicle, misuse of official time, and falsification of travel documents.

As part of the appeal of MSPB's decision, the individual claimed that his "guaranteed right to fundamental fairness" was violated when the deciding official Googled his name and came across information regarding his work history (essentially termination) with a previous employer. The individual claimed this unduly influenced the decision to remove him.

The court disagreed. The three-judge panel found that because information regarding the previous job loss did not influence the official's decision to remove the appellant, it did not show prejudice. Additionally, they found no due process violation.

The unanswered question here is what would have happened if the prior work history information HAD influenced the termination decision. And what if it was a hiring or promotion situation rather than a termination where voluminous information already existed regarding bad behavior? It will be interesting to see how this area of law evolves.

The appeals court decision (nonprecedential) is here.

A good string of articles about using the Internet to gather background check information can be found here.

Saturday, May 12, 2007

Recruiting Trends Survey from DirectEmployers Association

The results from the 2007 Recruiting Trends survey sponsored by the DirectEmployers Association are out.

Data gathered from 47 companies indicated:

- 55% of hires were made from online sources (+8% from last year).

- Employee referrals were the largest single source (21% of hires), followed closely by the organization's website and general job boards.

- Employee referrals also generated the highest quality candidates (82% rated favorable), but niche job boards and search firms tied for second, with campus recruiting a very close third. General job boards were rated favorable by only 22% of respondents.

- The largest percentage, by far, of recruitment/advertising budget went to general job boards (34%). Referrals, the source of the highest quality candidates, received 6% of the budget.

- Putting these numbers together, the source value was best--that is, the lowest cost per hire--by a large margin for referrals, followed by the organization's web site and, perhaps surprisingly, social networking technology. (A back-of-envelope illustration follows.)
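Here's that arithmetic in Python. The budget shares come from the survey; the total spend, total hires, and the job-board hire share are hypothetical, just to show why referrals win so decisively on cost per hire:

```python
# Cost-per-hire by source: survey budget shares, hypothetical totals.
total_budget = 1_000_000   # hypothetical annual recruiting/advertising spend
total_hires = 500          # hypothetical hires for the year

sources = {  # (share of budget, share of hires)
    "employee referrals": (0.06, 0.21),  # both shares reported by the survey
    "general job boards": (0.34, 0.17),  # 34% of budget per survey; hire share assumed
}
for name, (budget_share, hire_share) in sources.items():
    cost_per_hire = (budget_share * total_budget) / (hire_share * total_hires)
    print(f"{name}: ~${cost_per_hire:,.0f} per hire")
# Referrals: little spend, many hires -> dramatically lower cost per hire.
```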

Comments and follow-up conversations indicated a growing frustration with general job boards (especially for IT jobs) as well as a growing reliance on sources of passive candidates, such as social networks, blogs, and search engine optimization.

Read the full report for a much more detailed analysis and insights. Thanks to Rocket-Hire for making this available.

Thursday, May 10, 2007

The Daily Show does Video Resumes

You know something's hit it big when it's on The Daily Show.

So it was with great amusement that I saw a piece yesterday on video resumes...check it out...


Wednesday, May 09, 2007

Job ads of the future?

Curious about the direction job postings are going?

Looking for ways to snazz up your postings?

Then read this post over at jobs2web. Check out the graphic.

How close are your postings and/or career portals to this? Are they even in the same ballpark?

How hard would it be to add things like:

- links to a webinar/job preview video

- RSS feed

- subscribe to similar jobs

Answer: not hard. Let's hurry up and get there!

Tuesday, May 08, 2007

2007 SIOP Conference: Highlights, Part 2

This is the third in a series of posts about the 2007 SIOP Conference. In Part 1 I talked about some of the new products out there and in Part 2 I went over some of the research that was presented. In this post I'll point out some more research that you may find interesting...

Legal risks and defensibility factors for employee selection procedures

Posthuma, Roehling, and Campion analyzed nearly 600 federal district court cases and came up with some very interesting results:

- Employers are most likely to win (by far) when defending tests of math or mechanical ability. Employers also fare well when defending assessments of employment history and interviews.

- Employers did worst when defending physical ability tests and medical examinations. Tests of verbal ability and job knowledge were also more likely to result in a plaintiff win.

Predicting Internet job search behavior and turnover

Using longitudinal survey data from a sample of 110 nurses in Texas, Posthuma et al. found (among other things) that Internet job search behavior was related to turnover--folks weren't just surfing for fun. This suggests that organizations need to pay close attention to job search behavior among employees; not necessarily to curtail it, but instead to figure out why high performers want to leave.

Gender differences in career choice influences

After analyzing survey data from nearly 1,400 fourth-year medical students from two U.S. schools, Behrend et al. found a gender difference in preferred career: specifically, female medical students valued "opportunities to provide comprehensive care" when choosing a specialty much more than men did. This is consistent with other work that has shown women to be more "relationship-oriented" than men when it comes to choosing a career.

Portraying an organization's culture through properties of a recruitment website

In this study of 278 undergraduate students, Kroustalis and Meade found that inclusion of pictures on a website that were intended to portray a certain organizational culture did so--but only for certain cultural characteristics. Specifically, pictures that implied a culture of either innovation or diversity had the intended effect--but pictures representing a team orientation did not. Interestingly, "employee testimonials" designed to emphasize these cultural aspects failed to do so for any of the three aspects studied. Finally, individuals who perceived a greater fit between themselves and the organization (in terms of the three cultural aspects) reported being more attracted to the organization.

Recruiting solutions for adverse impact: Race differences in organizational attraction

Last but definitely not least, Lyon and Newman gathered data from nearly 600 university students on their reactions to 40 hypothetical job postings...and came away with some very interesting results. For example:

- Conscientious individuals were more likely to apply to postings that explicitly stated a preference for conscientious applicants.

- Conscientious individuals were more likely to apply to postings that described the company as results-oriented.

- Black applicants with higher cognitive ability were more likely to respond to ads seeking conscientious individuals, while White applicants with higher cognitive ability were less likely to do so.

- When a company was described as innovative, Black applicants high on conscientiousness were more likely to apply; this was not the case for White applicants.

Monday, May 07, 2007

Men's Wearhouse passes on background checks

A recent Business 2.0 cover story, titled "Ripping up the Rules of Management", mentions that George Zimmer, the founder of clothier Men's Wearhouse, has a policy that no employee or interviewee will ever undergo a criminal background check.

Seem risky? Well, it doesn't appear to be hurting them. In fact, the company loses only 0.4% of revenue to theft, much less than is typical for big retailers (1.5%). Says Zimmer:

"I don't trust the U.S. justice system to get it right...I'd rather make my own decisions, and I believe in giving people a second chance."

This policy is particularly interesting given efforts by various jurisdictions to limit criminal history checks in employment screening as well as the EEOC's renewed focus on criminal history checks as part of its new E-RACE Initiative.

Friday, May 04, 2007

EEOC targets...personality tests?

So I'm looking at some information about the EEOC's new E-RACE Initiative, which they describe as a systemic effort designed to ensure workplaces are free from race and color discrimination, and I come across this quote:

"Studies reveal that some employers make selection decisions based on names, arrest and conviction records, employment and personality tests, and credit scores, all of which may disparately impact people of color."

The citations for this sentence include a study on criminal records and one on names.

This triggers a couple questions for me...

1) What is the difference between an "employment" test and a personality test? Is this just redundancy or was it intentional?

2) More importantly, where is the evidence that personality tests have disparate impact? The research I'm familiar with indicates that differences between subgroups are relatively small.

As far as I know, cognitive ability tests are still the biggest 'offender' when it comes to racial differences in test scores (although this can be reduced by focusing on aspects of cognitive ability, such as short-term memory). Seems like this is where the EEOC would want to focus, along with background checks?

Wednesday, May 02, 2007

Protuo takes candidate matching to new level

A few posts back I talked about Jobfox (formerly Market10), a great website that does much better than most job boards at matching up applicants with job opportunities. This type of technology is definitely a dominant trend right now, and we're seeing many products out there that really try to narrow down the applicant funnel in a speedy but valid way (Trovix is another example). Another trend is coupling these 'smart funneling' systems with simulation tests, such as the Virtual Job Tryout or Virtual Job Audition.

In an article in this month's Inc. Magazine, another similar type of service is described--Protuo.

Protuo's job matching system is probably the most detailed I've seen, which is good or bad depending upon how you look at it. Good in that the matches that are generated are hypothetically much more likely to be on target. Bad because given most people's attention span, I question how many people will actually fill out the entire profile.

The candidate/job profile has three broad categories:

* Personal skills and knowledge
* Social skills and training
* Business and analytical skills

Within each category is another set of fillable tabs. For example, under "Personal skills and knowledge" there are separate categories for Personal Skills (e.g., creativity, initiative) and Arts, Language, and Science (e.g., math, biology). Individuals can create their own web pages to highlight their qualifications (example here)--with full HTML editing capability and blog included, which are nice touches.

Aside from time considerations, there's another potential problem with drilling so deep: people aren't known for describing themselves particularly accurately. It doesn't help that applicants rate their skill level using generic terms (Basic, Intermediate, Advanced, Expert) that aren't tailored to the category and come with no instructions. My guess is the match percentage will suffer for it.
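
To make that concern concrete, here's a toy sketch of how a match percentage might be computed from those four coarse levels. This is my own illustration--Protuo hasn't published its scoring--but any scheme built on these ratings inherits the same weakness: everything hinges on self-ratings nobody defined.

# Toy match-percentage sketch (my illustration, NOT Protuo's actual algorithm).
LEVELS = {"Basic": 1, "Intermediate": 2, "Advanced": 3, "Expert": 4}

def match_percent(candidate, job):
    """Average how close each self-rating comes to the job's requirement."""
    scores = []
    for skill, required in job.items():
        have = LEVELS.get(candidate.get(skill, ""), 0)  # unrated skill counts as 0
        need = LEVELS[required]
        scores.append(min(have, need) / need)  # full credit at/above requirement
    return round(100 * sum(scores) / len(scores))

job = {"creativity": "Advanced", "initiative": "Intermediate", "math": "Basic"}
candidate = {"creativity": "Intermediate", "initiative": "Expert", "math": "Basic"}
print(match_percent(candidate, job))  # 89

Swap that honest "Intermediate" on creativity for an optimistic "Advanced" and the match jumps to 100--exactly the problem with untethered self-report.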

On the other hand, these types of systems are still developing. Credit goes to Protuo for combining job search with personal pages. Expect much more in the way of tailored application and assessment processes that try to maximize speed and quality.

Tuesday, May 01, 2007

Joe Murphy: Buyer Beware

Today's guest post comes from Joe Murphy of Shaker Consulting Group. Joe graciously responded to my occasional Q&A series in an earlier post and has some good thoughts about the state of assessment...

--

Buyer Beware

Last year, while roaming the halls of the 2006 EMA and SHRM conferences, I was surprised by how little basic knowledge test and assessment providers were prepared to offer attendees. They were at it again this year. While attending the SHRM Staffing Management Conference in New Orleans (April 23-25, 2007), I was a bit shocked by what I heard from one assessment provider. Here are a few amazing assertions:

  • Our test is 97% accurate at identifying the right person.
  • Our test was validated by the EEOC. It only takes five minutes; you are going to love this.

I was so surprised by this comment that I was not quick enough to ask about their selection project at the EEOC. I regained my composure and asked how they establish job relevance. Their answer was, “Over a million people have taken this.”

When I asked how they help users adhere to the Uniform Guidelines on Employee Selection Procedures, they were clueless. When I asked about their various norm groups and which types of job criteria were predicted, they were clueless. When I asked if they would share the results of their validation analysis, they had no response.

Again thrusting a questionnaire at me, they implored me to "Take it, you'll love it." Explaining that I was not interested, I wandered away to find another booth offering assessments.

Once again, this may be a case of an ill-prepared service provider expecting to meet uneducated consumers in the marketplace. It is actually pretty scary. I asked myself: "What type of staffing professional would actually believe that a five-minute adjective checklist would be 97% accurate at identifying the right candidate?" Perhaps someone who was more hopeful than thoughtful.
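
To put some numbers behind the skepticism (my own illustrative figures, not the vendor's): even granting a test that is "97% accurate" in both directions, what matters is the base rate of "right" candidates in the pool.

# Illustrative base-rate arithmetic (made-up numbers, not the vendor's data).
base_rate = 0.10    # suppose 10% of applicants are truly "the right person"
sensitivity = 0.97  # the test flags 97% of the truly right candidates
specificity = 0.97  # the test clears 97% of everyone else

true_pos = base_rate * sensitivity               # 0.097
false_pos = (1 - base_rate) * (1 - specificity)  # 0.027
ppv = true_pos / (true_pos + false_pos)
print(f"Chance a flagged candidate is actually right: {ppv:.0%}")  # ~78%

At a 5% base rate the figure drops to about 63%. "Accuracy" quoted without a base rate is a sales word, not a statistic.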

After settling in with some small talk at my next target, I began my investigation. "How do you help users adhere to the Uniform Guidelines on Employee Selection Procedures?"

I got the most honest and candid answer. “No one has ever asked me that before. Let me tell you about our approach.” After a thoughtful exchange I left and drew a conclusion:

Prepared, but not proactive.

Why are testing and assessment providers NOT proactively taking the initiative to educate their consumers? I drew another conclusion:

Possibly because an educated consumer might not buy their product.

This thought took me back a few years to a time when I walked the SHRM Conference Exhibit Hall with a professional colleague, Bob Eichinger. He is an I/O psychologist, co-founder of Lominger and formerly on staff at PepsiCo and Pillsbury. We stopped to look at a booth with a long line of conference attendees waiting to take a five minute “wonder test.” Bob looked at me with a smile of disbelief, shook his head and said: “Shit sells.”

People and their behavioral tendencies are pretty complex. Claiming to predict job fit in five minutes holds more than a bit of hyperbole. Caveat emptor - Buyer beware.

The trend in compliance auditing is shifting from examining the content of historical candidate searches to a review of the job relevance of assessments. Auditors want to know about the job analysis process used to establish the content validity of the evaluation method, the date of the last validation analysis and to what degree the job may have changed since the last validation.

Organizations using an assessment that was “Validated by the EEOC” may be in for a big surprise if they are faced with an audit.

Take time to become an educated consumer of assessment products and services. You and your company will be glad you did.

Joseph P. Murphy
Shaker Consulting Group, Inc.