Friday, March 30, 2007

Help with Sadist...er, Statistics

Just plowed through a new book that I think will be immensely helpful for folks in any of the following categories:

a) Don't understand statistics
b) Have never taken a statistics course
c) Think ANOVA is a new type of star
d) Find reading journal articles to be a unique brand of torture
e) Would like to go to an I/O or HR conference but don't speak "research"

The book is "Understanding statistics: A guide for I/O psychologists and human resource professionals" by Michael Aamodt, Michael Surrette, and David Cohen.

The purpose of the book, in the words of the authors, is "to provide students and human resource professionals with a brief guide to understanding the statistics they encounter in journal articles, technical reports, and conference presentations."

On the whole, the authors do an amazing job. Considering they use under 120 pages (and we're not talking dictionary font here), they cover a lot of ground, including:

- The basics, such as data types, measures of central tendency, and variability
- Sample size
- Standard scores
- Testing for differences between groups (e.g., t-tests, ANOVA, Chi-Square)
- Correlation and regression
- Meta-analysis
- Factor analysis

The language throughout is mostly very easy to understand, the examples are relevant, and the authors make good use of humor when they can. The book focuses on how to interpret statistics rather than how to calculate them. Is there room for improvement? I think so, particularly in the ANOVA and regression sections, which drift a bit into "stats speak." But overall this is a great primer for people who would like to (or need to) understand I/O and HR research.
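(To give you a flavor of the kind of thing the book demystifies, here's a quick illustration of a standard score. The example and the numbers are mine, not the authors'.)

```python
# A standard (z) score expresses a raw score in standard-deviation units,
# which is what makes scores from different tests comparable.
# All numbers here are made up for illustration.

mean, sd = 50.0, 10.0   # hypothetical test mean and standard deviation
raw = 63.0              # one candidate's raw score

z = (raw - mean) / sd   # 1.3: the candidate scored 1.3 SDs above the mean
t_score = 50 + 10 * z   # the same score expressed on the common T-score scale

print(f"z = {z:.2f}, T = {t_score:.0f}")
```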

One of the authors, Dr. Mike Aamodt, was kind enough to answer a few questions I had about the book:

BB: What inspired you to write this book? How do you hope it gets used?

MA: At IPMAAC meetings I noticed that many of the attendees were frustrated because they could not follow some of the statistics used in the presentations. So the inspiration for the book was to help HR professionals who don't have advanced degrees or lots of stat courses feel comfortable listening to presentations or reading journal articles and technical reports. Our thinking was that, once you remove all the formulas and just concentrate on the meaning and use of statistics, it is not that scary.

We hope that the text will be used by HR professionals as well as by students. We think it would be a useful start to a stat class in which students would read our short text prior to reading a more in-depth text.


BB: How did you decide what topics to include? What got left out?

MA: Deciding what to include and omit was tough. We included the topics that seemed to be the most commonly used statistics in presentations and technical reports. Because the intended audience is "stat novices," we tried not to go too much in depth on any topic. Following such a plan was easier said than done, and we really had to resist the temptation to discuss everything about a topic. The book is only 120 pages and we thought that if it were much longer, we would lose the interest of the stat novice.

BB: Are there other texts you're planning on writing, or that you think should be written?

MA: I have several other texts I plan to write. I want to do a follow-up to the stat primer that gets into more detail but still omits formulas. I also plan to write a primer on conducting systemic compensation analyses, one on police selection, and one on employee selection in general. A book I think needs to be written is a practical, applied text on how to conduct a job analysis.

Wednesday, March 28, 2007

Journal of Applied Psychology, v.92, #2

I wrote an earlier post about a particularly good article in the most recent issue of the Journal of Applied Psychology. But what about the other articles? Let's take a look because there is plenty here whether you're interested in cognitive ability tests, personality tests, or meta-analysis.

First, a study for all you meta-analysts by Schmidt and Raju about the best way to combine new research with existing meta-analysis results. Results indicate that the traditional "medical model" of adding new studies to the database and re-calculating worked well, as did an alternative Bayesian model the authors describe.
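(If you're curious what the "medical model" amounts to, it's essentially treating the new study as one more row in the database and re-computing the sample-size-weighted mean. Here's a bare-bones sketch with invented numbers--my illustration, not the authors' Bayesian procedure.)

```python
# Sample-size-weighted mean correlation: the core of the simple
# "add the new study and re-calculate" approach. Numbers are invented.

studies = [(0.30, 120), (0.25, 340), (0.40, 85)]  # (r, N) pairs already in the meta-analysis
new_study = (0.33, 210)                           # the freshly published result

def weighted_mean_r(rs_and_ns):
    total_n = sum(n for _, n in rs_and_ns)
    return sum(r * n for r, n in rs_and_ns) / total_n

print(f"Before adding: {weighted_mean_r(studies):.3f}")
print(f"After adding:  {weighted_mean_r(studies + [new_study]):.3f}")
```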

Next up, a look at what happens to people's scores on cognitive ability tests when they've taken the test before (known as a "practice effect"), by Hausknecht and colleagues. Meta-analyzing the results of 50 studies yielded an adjusted overall effect size of .26. The effects were larger when individuals received coaching and when they re-took the identical test.

Third, a study by Ellingson, Sackett, and Connelly of response distortion on personality tests. Specifically, the authors looked at data from 713 people who had taken the California Psychological Inventory (CPI) twice--once in a selection context (presumably high motivation to "cheat") and once in a development context (presumably low motivation to "cheat"). Results? A limited amount of response distortion going on. Good news for personality tests, although certainly not the last word on this topic.

Fourth, a look at when birds of a feather flock together--and when they don't. Specifically, Umphress and colleagues looked at whether demographic similarity attracted prospective employees. What they found was that it depended on people's preference for group-based social hierarchies (i.e., their level of social dominance orientation). Among those with a strong preference, members of "high status groups" were attracted to demographic similarity in an organization while those in "low status groups" were repelled by it. Bottom line? Trying to attract applicants by pointing out similarities with current incumbents may or may not be a good idea...

Next, for you stats folk, a look by Sackett, Lievens, Berry, and Landers at the effect of range restriction on correlations between predictors (e.g., between a personality test and a cognitive ability test). Conclusion? That these correlations can be quite distorted when the predictors are used as a composite in an actual selection setting. Why do we care? Because it may mess up our conclusions about things like incremental validity of one test over another. (A draft of the article goes into more detail)
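(If you want to see the distortion for yourself, here's a toy simulation--mine, not the authors'. Two predictors that are uncorrelated in the applicant pool can look substantially negatively correlated among the people actually hired on a composite of the two.)

```python
# Toy demonstration of how selecting on a composite restricts range and
# distorts the correlation between two predictors. Numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
cognitive = rng.standard_normal(n)    # say, a cognitive ability test
personality = rng.standard_normal(n)  # say, a personality test

composite = cognitive + personality
hired = composite > np.quantile(composite, 0.90)  # hire the top 10%

r_pool = np.corrcoef(cognitive, personality)[0, 1]
r_hired = np.corrcoef(cognitive[hired], personality[hired])[0, 1]
print(f"Applicant pool: r = {r_pool:+.2f}")   # essentially zero
print(f"Among hires:    r = {r_hired:+.2f}")  # clearly negative
```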

Last but not least, Kuncel and Klieger look at the very important issue of how knowledge of test score results impacts behavior. Their research revealed a 23% score difference between all individuals who had taken the LSAT and those who took the LSAT and applied to law school. This has implications for range restriction corrections (there's that term again).

There you have it! Quite the smorgasbord this issue.

Monday, March 26, 2007

New MSPB Report Released

The U.S. Merit Systems Protection Board has released a new report entitled, "Accomplishing Our Mission: Results of the Merit Principles Survey 2005."

The report results are based on responses of nearly 37,000 federal employees who responded to an online survey. There's a lot of goodness in here, not just for federal employers. Like what, you ask? Let's take a look...

Recruiting/Selection

- 76% of respondents recommended the Government as a place to work. This seems like a pretty good number, and it's a huge leap from 2000 when it was 52%.

- Specific organization recommendations varied depending on where folks were in the chain of command--higher-ups were much more likely to recommend working for their agency. For example, around 84% at the Executive level recommended it, while about 64% of nonsupervisors did.

- The recommendation rate also varied widely by agency. NASA was at the top with an 83% recommendation rate while Education was at the bottom with only 50%.

- By a wide margin, the #1 obstacle faced by supervisors and managers when hiring was a shortage of qualified applicants (38%). The report authors wisely point out that this could be an actual talent shortage but could also point to problems with the speed of the hiring process and/or the quality of selection tools used.

- Of the six areas asked about (including appraisals, discipline, etc.) "advancement" received the lowest marks for "extent to which you believe you have been treated fairly" (37%). This dubious distinction is identical to previous surveys and does not bode well for either recruitment or retention.

Retention

- About 25% of respondents indicated they were likely to leave their agency in the next year. This isn't surprising given the feds are anticipating 60% of their workforce will be eligible to retire over the next 10 years.

- Why are people leaving? The top reason was a virtual tie between "Opportunity to earn more money" and "Increased opportunities for advancement." This is the same result as previous years (although a bit more pronounced this time around).

- In terms of getting people to stay, there were big differences between retirement eligible and non-retirement eligible respondents. The latter were much more likely to stay for "Opportunity to better use skills and abilities", "Opportunity to earn more money", "Desire to make more of a difference", and "Increased opportunities for advancement."

Final thoughts

There's a lot more in this report, including some great data on training methods, discrimination, recognition, and motivation, the last of which you have to see (check out page 48; hint: money ain't everything).

On a side note, I was saddened to see that OPM, which does such good work, again had some of the lowest ratings on questions like "My agency produces high quality products and services" (granted, there were a lot of low scores on that one), "The workforce has job-relevant knowledge and skills necessary to accomplish organizational goals," and "I would recommend my agency as a place to work." Job security at OPM was rated second-lowest among all agencies surveyed, and the percentage likely to leave the agency in the next 12 months was tied for second-highest.

Anybody from OPM care to comment? I bet I could increase your applicant pool if you'd open a West coast office...

Friday, March 23, 2007

Journal of Applied Psychology, Vol. 92, #2: Web Recruitment

March journal madness ends(?) with the latest issue of the Journal of Applied Psychology.

I'll post separately about all the recruitment/assessment goodness in this issue, but today's post is specific to one article: "Aesthetic properties and message customization: Navigating the dark side of Web recruitment" by Dineen, Ling, Ash, and DelVecchio.

Why treat this one special? For starters, it's a great little piece of research. And, it's something that I think many of my readers will be interested in--a way to increase the quality of candidates responding to Web-based recruitment methods and simultaneously cut down on the flood of unqualified applicants.

How do we do such a thing? By paying a lot of attention to two things, say the authors:

1) The aesthetics of your web page/job advertisement content. This refers to things like the fonts you use, graphics, colors, and Web page design. Seems like a no-brainer, but how many specific job postings had you thinking, "Now that's an attractive ad"?

2) The customization of your content. This refers to the extent that information presented to job seekers is tailored to their particular needs, interests, and competencies.

This particular study analyzed responses from 240 upper level undergraduate students enrolled in business courses (93% were business majors). The researchers first had the students fill out a Web-based questionnaire to gather information about their needs, abilities and values. The students then came back about 4 weeks later and viewed an actual Monster.com job advertisement tweaked for the study (the article includes a great example of the actual "ad" that was presented to study participants). Each participant was in one of four conditions:

Condition one: Good aesthetics (job posting with color, pictures, multiple fonts, patterned background) and customized feedback regarding the fit between their needs, abilities, and values, and aspects of the position/organization (e.g., "It appears that your preferences for a company culture are INCONSISTENT with what you would find at [this company]")

Condition two: Good aesthetics and no feedback.

Condition three: Poor aesthetics (black-and-white, no pictures, backgrounds, or varying fonts), and customized feedback.

Condition four: Poor aesthetics and no customized feedback.

The researchers then measured the amount of time spent viewing the posting, information recall, and attraction.

Results? Depends what yer lookin' at:

Viewing time: Highest in condition one (mean of 202 seconds). Including customized information had a big impact on viewing time, but aesthetics mattered only when customized information was present--when there was no customized information, aesthetics mattered much less and overall viewing time was much lower.

Information recall: Pretty much the same thing, except providing customized information helped only if there were good aesthetics.

Attraction: Aesthetics didn't seem to make a difference, but providing customized information resulted in the highest attraction levels.

Why is this happening? The authors suggest it's a result of the amount of cognitive elaboration--the more customized and appealing the information, the "deeper" the processing, meaning better memory of the ad, etc.

But here's probably the best part...when looking at the job-person fit and the above factors, only under condition one did the "low fit" applicants report being less attracted to the job and the "high fit" applicants report being more attracted to the job. What does this mean? That paying attention to the attractiveness of your career website and job opportunities AND helping people understand if they fit with the job and your organization helps folks "select in" and "select out."

The authors say it best: "These strong effects suggest that the combination of good aesthetics and customized information allows job seekers to better recognize when they are a low fit, leading to far less attraction among the lowest fitting individuals."

Bottom line

Allowing potential applicants to self-screen out based on realistic job information has HUGE advantages, both for the individual and the organization. The potential applicant doesn't waste time going through the (sometimes laborious) process of applying only to find out later that the job wasn't what they wanted. The organization doesn't have to spend time and money selecting these people out. A little invested upfront--whether it's working on the attractiveness of the web page, providing customized results, or adding some type of job preview video--pays huge dividends, and is in effect the most effective and efficient form of selection.

By the way, if this type of research is something you're interested in, take a gander at this (thank you, Dr. Lievens), this, and this.

Thursday, March 22, 2007

BBB Warns of Online Job Scams

Yesterday the Better Business Bureau serving Alaska, Oregon, and Western Washington issued a warning to job seekers to watch out for misleading online job postings designed to steal identity information or money.

The BBB has received dozens of complaints about a variety of job boards, including Monster, CareerBuilder, and Yahoo! HotJobs.

Tip-offs that the posting may be fraudulent include an unwillingness on the part of the employer to meet prospective applicants, the BBB reports.

In addition: "To further guard against identity theft, the BBB advises job hunters to refrain from including their Social Security Numbers, birth dates, or college graduation dates in resumes that are posted online. Consider posting your resume anonymously and providing an e-mail address as your primary contact rather than your home address or phone number."

They advise applicants to check out businesses on BBB's webpage to make sure they are legitimate.

If this becomes a big problem, it will have serious implications for gathering information from applicants and the ability to contact them. Another sad development in the evolving world of identity theft and online scams.

Tuesday, March 20, 2007

Human Performance, Vol. 20, #1: The EQ-i:S

March journal madness continues with the latest issue of the journal Human Performance.

There's really only one article in here related to selection, but it's an interesting one, so let's take a look.

In this study of 229 students from a southeastern U.S. university, Grubb and McDaniel looked at the constructs measured by, and the fakability of, the Emotional Quotient Inventory Short Form (EQ-i:S), a popular proposed measure of emotional intelligence.

The bottom line results:

- Participants were able to "fake" their scores, raising them substantially (by .83 standard deviations). Not particularly surprising as it's fairly well established that non-cognitive measures can be "faked." (What's not clear is whether it matters...)

- The two "screens" built into the EQ-i:S to try to identify fakers correctly identified only 31% of the fakers.

- EQ-i:S scores were predicted by the Big Five measure with a multiple correlation of .79. This result, say the authors, "casts doubt on the construct of emotional intelligence as operationalized in the EQ-i:S."
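(In case "multiple correlation" is a fuzzy term: it's simply the correlation between a criterion and the best linear combination of a set of predictors. Here's a generic sketch with synthetic data--my illustration, not the study's data or code.)

```python
# Multiple correlation R: correlate the criterion with its best linear
# prediction from several predictors. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
big5 = rng.standard_normal((n, 5))                   # five predictor scores
weights = np.array([0.5, 0.3, 0.2, 0.1, 0.4])
criterion = big5 @ weights + rng.standard_normal(n)  # criterion with noise

X = np.column_stack([np.ones(n), big5])              # add an intercept
beta, *_ = np.linalg.lstsq(X, criterion, rcond=None) # ordinary least squares
predicted = X @ beta

R = np.corrcoef(criterion, predicted)[0, 1]
print(f"Multiple R = {R:.2f}")  # an R of .79 would mean a LOT of shared variance
```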

But there's some additional goodness in this article, largely because the authors also had the participants take the Wonderlic Personnel Test (WPT) and a measure of the Big Five personality factors (IPIP):

- WPT scores were correlated strongly with only one Big Five factor--Openness.

- The correlation between WPT scores and gender was small.

- Black-White score differences on the Big Five factors were small.

- Gender was correlated with Big Five scales, but the nature varied depending on the condition (honest or faking).

- The ability to fake on the EQ-i:S was a function of cognitive ability and personality (mostly agreeableness).

There were other articles, one of which I'll discuss over at HR Coal.

Monday, March 19, 2007

DDI/Monster Selection Forecast 2006-2007

DDI and Monster recently released the results of their 2006-2007 "Selection Forecast", which consists of survey results from staffing directors (628), hiring managers (1,250) and job seekers (3,725) in five global regions. They also conducted 31 one-on-one interviews with job seekers to "flesh out the results."

This report has been discussed elsewhere, but what interests me isn't so much the forecast for "competition for talent", which we seem to get conflicting reports about on a daily basis, but some other nuggets in the report...

#1: Recruiting methods: Staffing directors reported relying heavily on the organization's career website and large online job boards. Yes, lots of folks get jobs this way. But is this the most effective source? And how's that career website, anyway?

#2: What job seekers want:
There were some serious mismatches between what job seekers reported wanting (beyond salary/benefits) and what hiring managers and staffing directors think they want:

A good manager/boss: 75% of job seekers want this, and 69% of hiring managers thought they did--but only 57% of staffing directors did. Could this hint at HR placing insufficient importance on selection for line supervisors?

An organization you can be proud to work for: 74% of job seekers want this, but only 58% of hiring managers and 55% of staffing directors thought they did.

A creative/fun workplace culture: 67% of job seekers want this, but only 50% of hiring managers and 43% of staffing directors thought they did.

A compatible work group/team: Desired by 67% of job seekers but identified by only 50% of hiring managers and 37% of staffing directors.

Looking at these responses, it appears (at least in this sample) that management and HR are seriously underestimating the importance of organizational factors to job seekers. This of course will vary by organization, by job type, and by other factors such as demographics. Speaking of demographics...

#3: Age and job search: Younger job seekers placed increased importance on a fun culture and work friends, whereas older job seekers placed more value on advancement and developing organizational pride. This is consistent with other surveys.

#4: Satisfaction with selection practices: Fewer than 50% of staffing directors and hiring managers were highly satisfied with their hiring practices. "Efficiency" was rated particularly low. I'm guessing these are pretty typical results, but good benchmarks nonetheless.

#5: HR is lacking creativity--or resources: Fewer than half of all staffing directors had used each form of assessment listed in the survey. Resumes? Check. Interviews? Check. Other types? Not so much.

#6: Think carefully about interviewers: Two-thirds of job seekers said the interviewer influences their acceptance intentions. Are you just pulling in whoever is available?

#7: Do it again: Candidates pick up on interviewer behavior--intentional or not. A particularly offensive interviewer behavior was "Acting like they have no time to talk to me" (cited by 70% of respondents). Don't forget how important perceptions of the selection process are.

#8: The grass is always greener: Nearly a third of job seekers had been in their jobs less than six months but were prowling for a new one. So no, your organization isn't alone. But what are you doing to retain people? Which brings us to...

#9: The reason they left may not be what you think: Honest exit interview feedback can be hard to get, but chew on this:

Did not feel efforts were appreciated: Ranked #3 in reason for turnover by job seekers--but #7 and #6 by staffing directors and hiring managers, respectively.

Treated unfairly:
Ranked #4 by job seekers--unfortunately ranked #14 and #10.5 by staffing directors and hiring managers, respectively.

What about errors the other way? Managers and HR thought "external factors" (a spouse moving, going back to school, etc.) were pretty darn important--ranking them #1 and #2, respectively. But job seekers didn't, ranking it #10.5.

This is low hanging fruit. Recognition doesn't have to cost a thing; neither does taking steps to treat people fairly. A few minutes of your time, maybe, but how does that compare to the time, money, and effort involved in replacing someone?

Friday, March 16, 2007

New issue of IJSA--vol. 15, #1

The March journal madness continues with the latest issue of the International Journal of Selection and Assessment.

There's quite a lot in this issue (and plenty for you personality testing junkies), so let's take a look at some of the articles...

The main section

1. Hulsheger and colleagues report a meta-analysis of the operational validity of cognitive ability (general mental ability, g, whatever ya wanna call it) in Germany. After analyzing data from 54 articles and reports, the authors report a population correlation of .467 with training success and .534 with job performance. Interestingly (and counter to other research I've seen), they found the relationship was higher for low-complexity jobs.
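(A quick aside on "operational validity" for the uninitiated: it's typically the observed correlation corrected for criterion unreliability and range restriction. Below is a sketch of the standard textbook corrections with invented numbers--not necessarily the exact procedure these authors used.)

```python
# The standard corrections behind an "operational validity" estimate:
# correct the observed r for direct range restriction, then for
# criterion unreliability. All numbers are invented.
from math import sqrt

def correct_range_restriction(r, u):
    """Correction for direct range restriction; u = restricted SD / unrestricted SD."""
    return (r / u) / sqrt(1 + r**2 * (1 / u**2 - 1))

def correct_criterion_unreliability(r, ryy):
    """Disattenuate for imperfect criterion reliability ryy."""
    return r / sqrt(ryy)

r_observed = 0.30  # validity in the range-restricted (hired) sample
u = 0.70           # degree of range restriction
ryy = 0.52         # reliability of supervisor performance ratings

rho = correct_criterion_unreliability(correct_range_restriction(r_observed, u), ryy)
print(f"Estimated operational validity: {rho:.2f}")
```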

2. Next, a study of situational judgment tests (SJTs) by O'Connell and colleagues. Using data from seven U.S. manufacturing companies (total N of around 1,000), the authors had findings in three areas. First, they found a mean Black-White difference on the SJT of .38 and a gender difference of -.27 (favoring females). Second, for task performance they found that the SJT added incremental validity to a cognitive ability test or a personality test, but not if both were already being used. Last, for contextual performance, they found the SJT added validity only if a cognitive ability test was being used as the sole instrument. Good stuff, and kudos as always to Dr. Michael McDaniel for making this (and other articles) available through his website.

3. Next up, a very interesting concurrent study of 154 customer service employees from DeGroot and Kluemper. Why very interesting? Because they looked at the impact of "vocal attractiveness." Results? Vocal attractiveness correlated with both situational interview scores and job performance. AND, two Big 5 factors (agreeableness and conscientiousness) predicted performance more strongly for people with more attractive voices. Interesting! I knew all those vocal lessons would pay off.

Let's see...what else looks good...

4. How about another concurrent study of person-organization (P-O) fit? McCulloch and Turban looked at the incremental validity of a measure of P-O fit, defined as the match between managers' descriptions of the work culture and incumbents' work preferences. Results? The measure added significant incremental variance over a cognitive ability measure in terms of predicting turnover (important given the historically high turnover in these jobs) but was not related to job performance. A great illustration (along with #2 above) of the importance of the criteria being studied.

5. A study of Hartman's Color Code Personality Profile by Ault and Barney. If you don't know much about this test (I didn't), it classifies people into four colors--Red (motive: power), Blue (intimacy), White (peace), and Yellow (fun). Anyhoo, the authors state this is the first psychometric research on the highly popular instrument (don't even get me started), and found "high" test-retest reliability and support for the instrument measuring "some" personality traits. They note, however, that the instrument has high error variance and suggest caution when using it at the individual level. This is one of those studies you'll want to have in your back pocket when someone comes up to you and says, "Hey, I read about this cool personality test on my flight. Let's use that from now on."

Speaking of personality testing...

This issue has a special section on personality. Let's see what we've got:

6. A study that looks into response distortion on personality tests. Berry, Page, and Sackett found support among a sample of 261 managers for enhanced prediction of job performance when self-deceptive enhancement (SDE) scores were accounted for, but not impression management (IM) scores. Why is this important? Because most research on "faking" personality tests has looked at IM--intentionally inflating your scores to do better on the test. SDE, on the other hand, happens when you honestly believe you are better than you are. I think I need this article.

7. The brain stimulation continues with a meta-analysis by Connolly, Kavanagh, and Viswesvaran. The authors were looking at the relationship between self-ratings and observer-ratings of Big 5 personality dimensions. Results? Correlations in the .5-.6 range, suggesting each contributes substantial unique variance. Oh yes, and duration of acquaintance had a large moderating effect, but source of observer ratings did not. I may need this one too.

8. Last but not least (yeah, I skipped a couple, but I know your e-mail Inbox is filling up as we speak), Smithikrai found, in a sample of 2,518 Thais employed in seven different occupations, that the Big 5 factor of neuroticism was significantly negatively correlated with job success across the board, while extraversion and conscientiousness showed significant positive correlations. Conscientiousness was the only trait to predict success across occupations (sounds familiar!).

That's all folks! Enjoy your weekends!

Thursday, March 15, 2007

Q&A #4: Joe Murphy

This is the fourth in a series of Q&As I'm doing with thought leaders in the area of recruitment and selection.

This edition features Joe Murphy, VP and Co-Founder of Shaker Consulting Group. Brief bio: Joe is a former principal at SHL USA, Inc. and co-founder of Olsen, Stern, Murphy & Hogan, Inc. He has published articles in a variety of professional publications and presented seminars on staffing metrics at a number of conferences. Joe is very active in the recruiting space (e.g., ERE) as well as an active member of SHRM, having conducted the 2004 survey of assessment practices and a national web cast on Quality of Hire.

I've been meaning to post about Shaker's Virtual Job Tryouts for a while because I like what I've seen, so this gives me a chance to highlight it. Joe's thought-provoking responses are longer than those of previous Q&A participants, but well worth the read.

(Note: as always, some links are provided by yours truly)

BB: What do you think are the primary recruitment/assessment issues that employers are struggling with today?

JM: Education. Both for consumers of assessment products and providers of assessment resources. The retailer SYMS uses the phrase: “an educated consumer is our best customer.” This is also true in the field of assessment.

As recently as last year, a VP of HR asked me if using assessment was legal in the US. I was a bit shocked.

I walked the exhibit hall at the EMA conference in San Diego and the SHRM conference in DC last year and stopped into many booths that offered some form of testing or assessment. I asked a few simple questions that I thought might trigger a good dialogue and allow me to gauge the quality of the resource. I varied my probing based upon the responses I received.

  1. What do you do to establish job relevance of your assessment?
  2. What steps do you take to conform to the Uniform Guidelines on Employee Selection Procedures?
  3. Describe your approach to job analysis.
  4. What criteria have been used in your validation work?
  5. Do you have a sample validation technical report I could examine?
  6. What norm groups do you have?

With the exception of a few providers, those questions brought blank stares or “What do you mean?”-type responses. One of my favorite responses was: “Well we didn’t use prison inmates like one firm did.”

My takeaways were:

  • Many assessment providers have no meaningful results or best practices to share
  • Training personnel working in an exhibitor’s booth on knowledge of assessment fundamentals is not valued
  • And last, but most telling: assessment providers do not expect a well-educated assessment consumer

I have been involved in objective candidate evaluation practices as a user or provider for over 25 years. In my experience, HR practitioners have limited knowledge regarding the use of objective candidate evaluation methods. This is most evident when exploring one word: Valid.

Validity is not an easy construct to wrap your arms around. However, many assessment consumers fall prey to the generic offering “Our test is valid.” It never dawns on them to ask: “Valid for what?” The answer to that question is far more important than just being valid.

I strongly recommend that anyone considering the use of assessment go to a nearby university and take two courses: one on Tests and Measures and another on Personality Theory or Measurement. These courses will raise the caliber of users significantly.

BB: What is an example of an innovative or creative recruitment/assessment practice that you've seen recently?

JM: A focus on the nature of the candidate experience has been leading companies to be innovative. There are a number of companies that have taken the resume out of the front end screening and candidate evaluation process. The resume is then only used as context for the structured interview. By using scorable applications or biodata questionnaires, companies have become more objective at putting candidates into “Yes” and “No” piles. This also sends a nice message: “No resume needed to apply here.”

At TalUncon [Ed: Joe's link] in Redwood City, CA, Gerry Crispin was talking about competency-based interviews captured on video. A candidate could receive a set of questions, record their responses, and post them to a YouTube-like location for the recruiter. This raises questions about what is really being evaluated, but it certainly can provide side-by-side comparisons.

We like to think our Virtual Job Tryout™ is out on the leading edge. This approach to objective candidate evaluation transforms the candidate experience into an interactive, informative two-way exchange. The candidate learns about the job by performing work samples and making choices about how they might handle specific interactions. Recruiters learn about work style and job-related competencies, all from one experience.

BB: What is an area of research that you think deserves increased attention?

JM: Three things have been gnawing at me recently:

  1. Attitudes and beliefs around un-proctored, on-line assessment.

I believe industry has to accept that there are sound and effective methods to administer and interpret un-proctored assessments.

  2. Why do organizations place such low expectations on HR to be measurement oriented?

I believe we hide behind the term “soft costs” as an alternative to doing the rigorous work of developing the infrastructure and discipline of measurement required for the task at hand.

  3. How can the misguided opinion of one executive prevent an organization from realizing the gain available from objective candidate evaluation methods?

I believe we need to find better methods of documenting and presenting the benefits of assessments for executive audiences.

Here are three examples that illustrate my points.

One of our clients has the enviable, yet challenging, scale of a 500-to-1 applicant-to-hire ratio. Their current screening methods have been deemed both unfair and un-scalable. Unfair because the recruiter wrestles with looking at 50 candidates and making a hiring decision, wondering what talent sits among the other 450 with only a resume in the database. Un-scalable because their growth plans make this semi-manual process, and the number of recruiters it would require, unreasonable. The marketplace demands a faster and more objective data-gathering process from the candidate. Relying on resume searches or forcing the candidate into an on-site experience adds steps and negates cycle-time reduction initiatives.

I heard a participant at a conference on HR metrics openly state she went into HR because she was “bad with numbers.” The unfortunate part is that being light on that competency was OK with the company’s executives. The term “soft costs” shows up a great deal in HR discussions. Some might say that term is used because you can’t measure it. If that is the case, will your boss mind if you double your soft costs next year? There are sound methods to make the intangible more tangible. At the other end of the measurement continuum, one recruiter I met at an EMA conference stated her company captured and reported on 27 variables that made up their cost-per-hire figure. It can be done.

The now-retired CEO of one company had a bad experience with an assessment center for executive development early in his career. As he rose through the ranks, he was able to forbid the further use of assessment in the organization. This edict lasted several decades. Last year, objective candidate evaluation was implemented at a new plant start-up. The financial implications were described by the VP of HR as one of the highest-ROI projects in the company last year. Objective candidate evaluation has economic value.

BB: As someone who has their own consulting business, have you noticed any changes or patterns in the types of requests you're getting from clients?

JM: I see two camps. The novice user is still seeking the silver bullet. They want the omnibus assessment that can be used for many positions and predict any job behavior. This is exacerbated by providers who lead under-educated HR practitioners to believe such an assessment exists.

Practitioners with a passion for process improvement get excited about taking measurement more seriously and seek more job-relevant approaches to objective candidate evaluation. Project teams with six sigma training see the application and value of assessment clearly. The experienced user has achieved some degree of success and has learned that the more job relevant the assessment, the more accurate and valuable the results. Sound analytics help a user see both the limitations and the value of assessments.

BB: Have you read any books or articles lately that you would recommend to the professional community?

JM: I am in the middle of reading Susan Conway’s The Think Factory (Wiley 2007) [Ed: Joe's link]. Her mission is to bring Lean and Six Sigma practices to Information Worker (I-Worker) process models. Reading it from a recruiting perspective (one of my major frames of reference), I continually see how formal and objective data gathering methods such as job analysis, assessment, and predictor-criteria correlation analysis fit right into her framework. The hiring process is a perfect example of her definition of I-Work: gathering and/or transforming data and making decisions.

I have been pushing forth the concept of measuring the financial impact of waste and re-work in staffing processes for years. Conway addresses these issues, re-work in particular, as being significant drains on profitability. The high turnover in many entry level jobs causes re-work to the tune of millions of dollars. Yet company presidents have looked me in the eye and said, “That’s just the way it is in our business.” Responses like that change quickly when someone owns the budget for staffing waste.

Susan also addresses issues of efficiency and effectiveness in a manner that reinforces that the real value driver is the latter. While cost-per-hire and cycle-time factors are important to manage, the high gains come from increases in effectiveness. This translates into making hiring decisions based on candidate data that correlates with performance outcomes.

I can’t wait to finish the book and apply a range of her assertions and models in communicating what we do. The alignment is fascinating.

BB: Is there anything else you think recruiters/assessment professionals should be focused on right now?

JM: I conducted a small survey (N = 558) with SHRM in February of 2004. 85% of respondents stated they had not conducted an ROI analysis of their selection practices. This forced me down two paths:

  1. Why do organizations allow so many resources to be invested in a business process without demanding some measure of return or contribution?
  2. Why do staffing and HR professionals jeopardize their credibility by not holding themselves accountable with sound economic measures of performance?

The most common metrics in staffing are based upon cost and time. When you report cost, you get pressure to make it smaller. In essence budget constraints on staffing functions might be directly attributed to the nature of reporting.

I suggest, if not compel, recruiters to partner with other in-house disciplines such as finance and process improvement. Use these combined resources to focus on value metrics, and invest in the rigor and discipline of capturing business process data that can be used to document the contribution of recruiting to the top line and the bottom line.

If a tree falls in the woods and nobody hears it, does it make a sound?

If a recruiter adds value but nobody knows it, does it make a difference?


Indeed. Thank you Joe!

Tuesday, March 13, 2007

Podcasting and jobs

Podcasts are one of the technology tools du jour. Whether they will stick around for the long haul is up for debate, but right now there are a number of ways they can be used effectively.

Advertise your opportunities

One way is to use podcasts as another way to get the word out about your organization and career opportunities. One firm that facilitates this is jobsinpods. Check out this example of how this content is already becoming popular in Google searches.

Gather job information

Another way to use them is to gather information about positions, say as part of a job analysis. Take a listen to this example, "Life as an Escalation Engineer" with Microsoft. I think you'll find it's a rich source of information about typical tasks and job requirements.

Can you think of other ways to use podcasts?

Friday, March 09, 2007

Recruiting 2007 Conference


Not only is it journal time these days, it's conference time.

Kennedy Information is holding its Recruiting 2007 Conference and Expo at the Las Vegas Hilton on May 9-11.

Price: $1095 for the whole shebang, cheaper for solo days. $495 for pre- and post-conference workshops.

Focus: Internet recruiting, using the Internet for recruiting, and recruiting over the Internet. Oh yeah, with a special focus on health care recruiters. On the Internet.

Who's presenting: Practically everyone you can think of, including:

Steven Pemberton from Monster about its diversity survey;

Steven Rothberg (CollegeRecruiting) about social networking sites;

Jason Goldberg (Jobster) about how to use Web 2.0;

Shally Steckerl and Dave Mendoza (JobMachine) about using LinkedIn;

Carol Mahoney from Yahoo! about building a talent acquisition organization;

Peter Weddle about best practices in online recruitment;

Those two guys from CareerXroads about building relationships;

John and Bridget Sumser on generational recruiting;

Paul Forster (Indeed) on Web 2.0 (again...hey, I smell a trend);

...and that's just the beginning. There's a whole mess 'o content being presented here.

Wait...where's Joel Cheesman? Oh there he is...at the very end of the regularly scheduled conference. At 4:45. In May. In Vegas. Heck, it's worth going just for that!

Thursday, March 08, 2007

New Issue of JOOP

I'd like to officially nominate the Journal of Occupational and Organizational Psychology as the journal with the best acronym (JOOP). Sure, there's Journal of Applied Psychology (JAP), Human Performance (HP), Personnel Psychology (PP), but...c'mon....JOOP! Just say it.

Anyway, I mentioned a couple days ago that the March journals are starting to pour in, and it just so happens that the new JOOP is out.

So what's in it?

I gotta tell ya, there's some really interesting stuff in this issue. I know, I know, I always say that. I'll talk about the two that relate to assessment here and the rest on HR Coal.

The first is a study by Silvester & Dykes that tries to answer a question I've had for a long time: What predicts success as a politician? The authors were looking at whether individual differences predicted electoral success. What did they find? After studying 106 candidates in the UK they found strong support for critical thinking ability (CTA) and performance on a structured interview predicting "percentage swing" (r=.45 and r=.31, respectively) and CTA also predicted "percentage votes" (r=.26). Of course this doesn't speak to their actual job performance...

The second is a study by Vasilopoulos, Cucina, and Hunter of personality and training performance among a sample of U.S. law enforcement personnel. I'll bet money that there are graphs included in this article that would make explaining their results a lot easier, but basically what they found was that training performance was best predicted by facet personality scales--essentially smaller parts of Big 5 traits of conscientiousness and emotional stability, such as dependability and stress resistance. In addition, they found it's important to consider non-linear relationships between personality and performance. In other words, don't assume there's a straight line relationship between the two--it might be U-shaped, inverted U, etc.
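(Checking for that kind of curvilinearity is simpler than it sounds: add a squared term and see if it buys you anything. A minimal sketch with simulated data--mine, not the study's:)

```python
# Detecting a curvilinear predictor-performance relationship by adding a
# quadratic term. The simulated data follow an inverted U: performance
# peaks at moderate levels of the trait.
import numpy as np

rng = np.random.default_rng(1)
trait = rng.standard_normal(400)
performance = -0.5 * trait**2 + 0.5 * rng.standard_normal(400)

linear_fit = np.polyfit(trait, performance, 1)
quadratic_fit = np.polyfit(trait, performance, 2)

def r_squared(coeffs):
    residuals = performance - np.polyval(coeffs, trait)
    return 1 - residuals.var() / performance.var()

print(f"Linear R^2:    {r_squared(linear_fit):.2f}")     # near zero
print(f"Quadratic R^2: {r_squared(quadratic_fit):.2f}")  # substantial
```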

You could say this about a lot of different predictors, actually. Might there be an optimum amount of cognitive ability for a job (not simply the highest you can attract)? An optimum level of interviewing skill? An optimal amount of training or experience?

Wednesday, March 07, 2007

What do personality test results look like?

You may have taken a personality test before, but do you know what the results look like? Do you know what decision-makers are looking at when trying to determine if a candidate would be a good fit?

I decided to show you what the results could look like by taking a personality inventory myself. The instrument in question, the IPIP-NEO, is related, but not identical, to the well-known NEO PI-R instrument that is used to assess the Big 5 personality factors.

The big thing about the IPIP-NEO is...it's free. And in the public domain. That's right, you can take it whenever you want, just to satisfy your curiosity. More details about the IPIP (International Personality Item Pool) can be found here.

That's what I did, and I'm sharing the results with you.

Some notes on the test and the results: I tried to be as accurate as possible in my responses, but struggled with how to interpret some items. Again, this is not a commercial test; it's primarily a way of educating folks about personality. If you were to purchase a commercial test, the questions would likely be different, and the results would no doubt look very different, with lots of pretty colored bar charts and comparisons to the traits you're looking for based on an analysis of the position and high performers.

What do I think?

Well, the results are what they are. They feel pretty accurate in many ways, not so much in others. I come off sounding pretty Machiavellian. "Morality" is low, which looks bad until you read the description (not included here), which clarifies that it has more to do with how strictly you adhere to rules.

I'm also not so sure about the "Openness" description. I don't consider myself to be particularly conservative. I also resent having zero imagination, which no doubt has to do with the way it's defined (I don't daydream a lot, but I have a lot of creative ideas!).

And I'd take issue with me being unwilling to sacrifice myself for others (under Agreeableness). I'm more than willing to sacrifice for someone else...as long as it gets me something in return (kidding!).

If you have time, have a go at it. The shorter version has 120 items and took me probably 10-15 minutes.

Tuesday, March 06, 2007

2007 SHRM Staffing Management Conference in New Orleans


From April 23-25, SHRM will hold its annual conference devoted to employment and staffing in New Orleans, LA. (Note: this used to be the Employment Management Association Conference & Exposition.)

So why would you go?

1) You're a SHRM member and you have $875 to spare, OR
2) You're a non-SHRM member and you have $1120 to spare

AND

3) You're really into conferences devoted to recruitment and assessment
4) You like New Orleans

Unfortunately I meet the third and fourth criteria but neither the first nor the second. But, I can give you a flavor of what's in store in addition to some great keynote speakers, including Malcolm Gladwell, whose books are often mentioned on this blog.

What are the topics?

Rather than attempt to summarize the entire conference, I'm going to list the most relevant presentation titles, which are heavy on sourcing techniques and avoiding legal pitfalls. Several of these look really interesting; the full menu can be found here.

- Recruiting the best and the brightest: How to develop a market- and customer-focused mindset
- Interviewing: Identifying the liars, avoiding illegal questions
- Interactive recruitment marketing: Navigating the Internet to attract A-level talent
- Extreme caution advised: Dealing with federal and state laws regulating preemployment screening and safe hiring
- Recruiting an agile workforce that adds value to customers and shareholders
- Superstar selling techniques for non-sellers
- Impacting recruitment, retention and employee engagement with culture
- Daniel Boone and the tracking of applicants [new OFCCP regs]
- Extreme makeover: Renovating recruiting at Great-West
- Personality assessment in employee selection and promotion
- Avoid negligent hiring--best practices and legal compliance
- America's new regional demography
- "DOT" jobs: What have we learned about Internet recruiting in the past five years?
- Talent hunting: Sales skill development for the corporate recruiter
- Recent changes in immigration law that affect HR decision-making
- Rebuilding a world-class staffing function--from administration to profit center
- I'm not just a recruiter...I'm an expert consultant! Key consulting skills for recruitment professionals
- Effectively interviewing global talent
- Best practice techniques for finding and selling professional-level candidates
- Winning the talent war--meeting the recruitment challenges of the next decade
- Your candidate's experience: Black hole or North star?
- Background checks and the law
- The circle of recruitment success
- Get GenderSmart! Communicating with and managing women for recruiting and retention results

Monday, March 05, 2007

New Personnel Psychology--vol. 60, no. 1


March brings the Spring issue of Personnel Psychology and some great material to sink our teeth into.

Let's take a look at the articles relevant for recruitment and assessment (I'll summarize some of the other articles on my general HR blog).

The articles

First up, a study of--wait for it--personality tests. Well, integrity tests, really. Using a sample of employees and students from Canada and Germany (thankfully moving beyond the U.S.), Marcus, Lee, and Ashton found that for the overt integrity tests investigated, honesty-humility (as described in the HEXACO model) accounted for more of the criterion-related validity than did the Big 5 dimensions of personality. For the personality-based integrity tests, the results were the opposite. The results held regardless of study group or instrument. Specific findings included correlations of -.51 and -.67 between a shortened version of the CPI and self-reported counterproductive work behavior and academic behavior, respectively.

Next, an excellent meta-analysis of situational judgment tests (SJTs) by McDaniel, Hartman, Whetzel, and Grubb. The authors continue the exploration of the difference between SJTs with knowledge instructions (e.g., "What would be the best...") and those with behavioral tendency instructions (e.g., "What would you do..."). The analysis found (as have previous studies) that SJTs with knowledge instructions had higher correlations with cognitive ability, while those with behavioral tendency instructions correlated more highly with personality constructs (Big 5). Criterion-related validity was unaffected by response instruction and was reported at .26 for both types--lower than previously reported, due to the analysis of new data. Results also showed SJTs have (modest) incremental validity over cognitive ability, the Big 5, and a composite of the two. Given that previous studies have found larger racial differences with knowledge-based instructions, I'm tempted to think this is an overall win for behavioral tendency instructions, but the situation is more complicated than that and most likely depends on the job (e.g., its complexity).

Last but not least, an article on test validity, bias, selection errors, and adverse impact by Aguinis and Smith (full text available here, many other of Aguinis' publications available here). The authors present a framework that integrates the four concepts and describe a computer program (available here) that allows individuals to input test and criterion distributions and analyze the impact on and relationship with selection ratios, adverse impact, and performance levels. Useful but definitely not a casual read.
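(For a taste of what an adverse impact analysis involves--my simplified illustration, not the authors' program--the usual first screen is the four-fifths rule:)

```python
# The four-fifths (80%) rule of thumb for adverse impact: compare each
# group's selection rate to the highest group's rate. Hypothetical numbers.

applicants = {"group_a": 200, "group_b": 120}
hired = {"group_a": 40, "group_b": 14}

rates = {g: hired[g] / applicants[g] for g in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} ({flag})")
```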

The books

In addition to the studies listed above (and several others), this issue reviews several books that are worth looking into:

1) A critique of emotional intelligence: What are the problems and how can they be fixed? edited by Kevin Murphy. Intended to be a balanced analysis of this popular concept, the reviewer sees the book as more one-sided (providing fuel for critics). The book features chapters by prominent researchers and practitioners including Landy, Van Rooy, Hogan, and Spector.

2) Assessment centers in human resource management: Strategies for prediction, diagnosis, and development by George Thornton and Deborah Rupp. This one looked good enough that I ordered it--especially since I'm lacking a good book devoted to assessment centers. The reviewer states, "we have a new comprehensive description and expert review of the literature on the practice and evaluation of the assessment center method." I have Thornton's book on simulations which I find very useful.

3) Essentials of personnel assessment and selection by Robert Guion and Scott Highhouse. If I didn't already have Guion's larger tome, I'd probably get this one. The authors state the book is intended for undergraduate and master's level students, and the reviewer says "I believe that the book is just right for first or second year graduate students. It is a handy and compact compendium of fact and best practice."

That's it! As I said, there are other articles in here that I'll summarize in my other blog.

Friday, March 02, 2007

Assessment in the Mainstream Press

Boy, folks sure have personality tests on the brain these days.

This article on Forbes.com, titled "Surviving the personality test" discusses how organizations are using personality tests these days and also touches on the issue of "cheating."

Tip of the hat for...

- Talking to real experts in personnel selection, such as Robert Hogan and Murray Barrick. Too often journalists seem like they just pick the first name in the yellow pages.

- Pointing out that using a good personality inventory in conjunction with an interview can increase predictive power.

- Emphasizing the importance of personality inventories being standardized (and the fact that interviews often aren't).

- Reminding everyone that characteristics required of one job aren't required of all jobs. It's a scary moment when a hiring manager wants to use the same assessment, in the same way, for every job.

- Encouraging people to be honest, as the point here is to match what the person can do (and wants to do) with what the organization needs. Not that this will necessarily work.


Wag of the finger for...

- Saying that personality tests are "not trying to discern whether you're an extrovert or an introvert." They sure as heck ARE if you're hiring for a job where those traits are associated with success (e.g., long haul trucker, salesperson).

- Claiming that personality tests can't be cheated on. Sure they can. We know they can. It's not always easy to do so, but it doesn't take a rocket scientist to realize that questions like "do you like to go to parties?" are probably getting at extroversion and that's probably important for jobs like a restaurant host/hostess. But the more important point is it doesn't seem to matter all that much.


All things considered, not a bad article. I give it a B+.

Thursday, March 01, 2007

Introducing the HR Tests Wiki


I've been toying around for a while with the idea of creating a wiki for this blog, and finally went ahead and did it. I didn't want to create one just to create one but strongly believe in their usefulness and came up with (I hope) a good idea.

For those of you that aren't familiar with wikis, they are web pages that can be modified--often by anyone, but they can also be password protected. The most famous collection of wikis is Wikipedia, now one of the top 15 websites visited. For an example of a wiki page, check this one out.

Wikis are constantly evolving documents that grow in power the more they are used. They serve as knowledge management devices and enormously useful resources: the more they are added to and reviewed, the more accurate (and thorough) they become. In fact, studies have found that Wikipedia entries are about as accurate as those in the Encyclopaedia Britannica.

So why do we need one? Here's the way I see it.

I'm guessing every one of us, at some point or another, has had to answer a question like: "What does the research say about X?" with X being anything from the value of job analysis to structured interviewing to personality testing. Maybe it's because you were writing a thesis or an article. Maybe it's because a manager wanted to know. Or maybe you were just curious.

Whatever the reason, I'm thinking there's a lot of duplication of effort out there. I know I've created at least 20 separate files summarizing various research (just don't ask me to find them!).

With that in mind, I present the HR Tests Wiki, available at hrtests.pbwiki.com. The purpose of this wiki will be for us to gather together answers to those common questions like, "What does the research say about structuring an interview?" You'll see that it's organized according to these topics.

Viewing the wiki is available to everyone. Editing it, however, requires a password. And that password is very creative: test123

Feel free to play around with it--it's everyone's resource. I will be adding to it slowly over time but I hope you will add to it, correct any mistakes, and make use of it. I really do think this could be a helpful resource for all of us. And this is just one idea for a wiki--I'm sure you can think of others.

Note: Editing can be a bit squirrely. I've bolded a sentence only to find entire paragraphs bolded. But no worries, it's not meant to be perfect.

p.s. I've also started another blog, HR Coal, which is devoted to more general HR news. It can be found at hrcoal.blogspot.com