Tuesday, August 21, 2007

August ACN

The August 2007 issue of Assessment Council News is out, and Dr. Mike Aamodt provides his usual great writing, this time in an article titled "A Test! A Test! My Kingdom for a Valid Test!" in which he goes over what you need to look for when selecting a commercially available test...in two easy steps!

Some of my favorite quotes:

"Previously, [the] clients had their supervisors create their own tests, and we advised them that this was not a good idea." (I just like the idea of saying that to clients, aside from the fact that it's true 99% of the time)

"Creating a reliable, valid, and fair measure of a competency is difficult, time consuming, frustrating, costly, and just about any other negative adjective you can conjure up. Think of the frustration that accompanies building or remodeling a home and you will have the appropriate picture." (So it ISN'T a coincidence that I enjoy testing and home remodeling. Whew.)

"...it is essential to remember that no test is valid across all jobs and that criterion validity is established by occupation, and depending on who you talk (argue) with, perhaps by individual location." (Just don't tell this to Schmidt and Hunter.)

More info about ACN, including links to past issues, here.

And by the way...major kudos to Dr. Aamodt for offering so much of his work online. This is rare and to be commended.

Monday, August 20, 2007

OPM Has New Assessment Website

The U.S. Office of Personnel Management (OPM) continues to show what a professional assessment shop should be doing with its new personnel assessment page.

There's some great stuff here, including:

- A very detailed decision guide, including a great overview of pretty much all the major topics

- Reference documents

- Assessment resources

There's even a survey built in to gather feedback on the guide, as well as a technical support form.

Major tip 'o the hat.

Saturday, August 18, 2007

September 2007 issue of IJSA

The September 2007 issue (vol. 15, #3) of the International Journal of Selection and Assessment is out, with the usual cornucopia of good reading for us, particularly if you're into rating formats and personality assessment. Let's skim the highlights...

First, Dave Bartram presents a study of forced choice v. rating scales in performance ratings. No, not as predictors--as the criterion of interest. Using a meta-analytic database he found that prediction of supervisor ratings of competencies increased 50% when using forced choice--from a correlation of .25 to .38. That's nothing to sneeze at. Round one for forced choice scales--but see Roch et al.'s study below...
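Bartram's jump can be put in perspective with a quick back-of-the-envelope calculation. The two correlations are the ones quoted above; the arithmetic (and the variance-explained framing) is mine:

```python
# Back-of-the-envelope look at the reported gain. The correlations come
# from the study as quoted above; everything else is simple arithmetic.
r_rating, r_forced = 0.25, 0.38

pct_gain = (r_forced - r_rating) / r_rating * 100   # ~52% improvement in r
var_rating = r_rating ** 2                          # variance explained: ~6%
var_forced = r_forced ** 2                          # variance explained: ~14%

print(f"gain in r: {pct_gain:.0f}%")
print(f"variance explained: {var_rating:.1%} -> {var_forced:.1%}")
```

In variance-explained terms, the forced-choice criterion more than doubles what the predictors account for, which is why a .13 bump in r is a bigger deal than it may look.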

Next up, Gamliel and Cahan take a look at group differences with cognitive ability measures v. performance measures (e.g., supervisory ratings). Using recent meta-analytic findings, the authors find group differences to be much higher on cognitive ability measures than on ratings of performance. The authors suggest this may be due to the test being more objective and standardized, which I'm not sure I buy (not that they asked me). Not super surprising findings here, but it does reinforce the idea that we need to pay attention to group differences for both the test we're using and how we're measuring job performance.
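For readers who want the mechanics: group differences like these are usually expressed as a standardized mean difference (Cohen's d). Here's a minimal sketch; all of the numbers below are invented for illustration, not taken from Gamliel and Cahan:

```python
import math

# Cohen's d: the difference between two group means in pooled
# standard deviation units. All numbers below are hypothetical.
def cohens_d(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical: a large gap on a cognitive test...
d_test = cohens_d(105, 15, 500, 90, 15, 500)
# ...versus a smaller gap on supervisory ratings of performance
d_ratings = cohens_d(3.6, 0.8, 500, 3.4, 0.8, 500)
print(f"d (test) = {d_test:.2f}, d (ratings) = {d_ratings:.2f}")
```

The point of the article, in this notation, is that d for the predictor and d for the criterion can be very different sizes, and you should be looking at both.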

Third, Konig et al. set out to learn more about whether candidates can identify what they are being tested on. Using data from 95 participants who took both an assessment center and a structured interview, the authors found results consistent with previous research--namely, someone's ability to determine what they're being tested on contributes to their performance on the test. Moreover, it's not just someone's cognitive ability (which they controlled for). So what is going on? Perhaps it's job knowledge?

Roch et al. analyzed data from 601 participants and found that absolute performance rating scales were perceived as fairer than relative formats. Not only that, but fairness perceptions varied within each of the two format categories. In addition, rating format influenced ratings of procedural justice. The researchers focus on implications for performance appraisals, but we know how important procedural justice is for applicants too.

Okay, now on to the section on personality testing. First up, a study by Carless et al. of criterion-related validity of PDI's employment inventory (EI), a popular measure of reliability/conscientiousness. Participants included over 300 blue-collar workers in Australia. Results? A mixed bag. EI performance scores were "reasonable" predictors of some supervisory ratings but turnover scores were "weakly related" to turnover intentions and actual turnover. (Side note: I'm not sure, but I think the EI is now purchased through "getting bigger all the time" PreVisor. I'm a little fuzzy on that point. What I do know is you can get a great, if a few years old, review of it for $15 here).

Next, Byrne et al. present a study of the Emotional Competence Inventory (ECI), an instrument designed to measure emotional intelligence. Data from over 300 students from three universities showed no relationship between ECI scores and academic performance or general mental ability. ECI scores did have small but significant correlations (generally in the low .20s) with a variety of criteria. However, relationships with all but one of the criteria (coworkers' ratings of managerial skill) disappeared after controlling for age and personality (as measured by the NEO-FFI). On the plus side, the factor structure of the ECI appeared distinct from the personality measure. More details on the study here.

Last but not least, Viswesvaran, Deller, and Ones summarize some of the major issues presented in this special section on personality and offer some ideas for future research.

Whew!

Wednesday, August 15, 2007

De-motivators

Humor break.

I've posted before about Despair.com's "de-motivational" posters. They're a (funny) version of the ubiquitous "motivational" posters you see all over the place that mostly make you roll your eyes.

Well, Despair.com now has Do It Yourself posters. Here are the three that I've done so far:

[The three poster images appeared here.]
The only thing I don't get is why they don't offer printing of these. Seems like a natural money maker.

Anyhoo, hope you enjoy!

Tuesday, August 14, 2007

Great July 2007 Issues of Merit

The U.S. Merit Systems Protection Board (MSPB) puts out a great newsletter focused on staffing called Issues of Merit.

The July 2007 edition has some great stuff in it, including:

- Risks inherent with using self-assessment for high-stakes decisions, such as hiring (hint: people are horrible at it)

- Tips for workforce planning

- How to write good questions

- Analyzing entry hires into the federal workforce

- An introduction to work sample tests

Good stuff!

Saturday, August 11, 2007

Class certified in Novartis gender discrimination suit

Bad news for Novartis Pharmaceuticals.

On July 31, 2007 Judge Gerald Lynch of the Southern District of New York granted class certification status to "[a]ll women who are currently holding, or have held, a sales-related job position with [Novartis] during the time period July 15, 2002 through the present."

The plaintiffs are seeking $200 million in compensatory, nominal, and punitive damages, claiming that Novartis discriminates against women in a variety of ways, including compensation, promotions, performance appraisals, and adverse treatment of women who take pregnancy leave.

The case is instructive for us because of how the judge viewed expert opinion. One of the plaintiffs' experts noted that Novartis' performance evaluation system was flawed because ratings were subject to modification by higher-level supervisors and because ratings had to fit into a forced distribution. In addition, appeals by employees went either to the manager who made the original rating or to an HR person with no real authority to change ratings.

Another plaintiffs' expert noted that male sales employees are 4.9 times more likely to get promoted to first-line manager than female sales employees. In addition, 15.2% of male employees were selected to be in the management development program compared to only 9.1% of eligible female employees--a difference of 6.0 standard deviations.
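For the curious, a "difference of X standard deviations" claim like that typically comes from a two-proportion z-test. Here's a minimal sketch: the 15.2% and 9.1% selection rates are from the case as reported, but the group sizes are invented for illustration (the real z depends on the actual ns):

```python
import math

# Two-proportion z-test: how many standard errors apart are two
# selection rates? Group sizes below are hypothetical.
def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                       # pooled rate
    se = math.sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
    return (p1 - p2) / se

# e.g., 380 of 2,500 men (15.2%) vs. 137 of 1,500 women (~9.1%)
z = two_prop_z(380, 2500, 137, 1500)
print(f"z = {z:.1f}")  # around 5.5 with these invented group sizes
```

Anything beyond about two or three standard deviations is generally treated by courts as unlikely to be chance, which is why a figure like 6.0 gets plaintiffs' attorneys excited.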

What these statistics really signify and whether the plaintiffs end up ultimately winning the suit is anyone's guess. The important thing here is to keep in mind that what you may think is a logical way to make promotion decisions may look "subjective" to others and riddled with potential for bias to enter the equation.

Bias (and risk) can be reduced by implementing practices such as:

1 - Having raters undergo intensive training, including a discussion of potential biases and several "dry runs" of the process.

2 - Having a standardized rating form with clear benchmarks based on an analysis of job requirements.

3 - Considering carefully the use of a "forced distribution" system. If you do use one, make sure raters and ratees alike understand why--and how--this is being done.

4 - Making performance in the current job only part of the promotional criteria--give applicants a chance to show their stuff through ability tests, work sample tests, personality tests, and the like.

5 - Taking complaints seriously. If someone believes there is an opportunity for abuse of the system, investigate.

6 - Track, track, track those applicant flow statistics, including selection into training programs. Uncover discrepancies before they uncover you.

7 - Get HR involved--not just as gatekeepers but as partners. Hold HR accountable for providing best practices.

8 - If you have something like a management academy, make the criteria for entry transparent and have a point person for questions.
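To make tip #6 concrete, here's a minimal sketch of the kind of applicant-flow check involved, using the four-fifths rule of thumb from the federal Uniform Guidelines. All group names and counts below are hypothetical:

```python
# Four-fifths rule sketch: flag any group whose selection rate falls
# below 80% of the highest group's rate. Numbers are hypothetical.
def selection_rates(applicants, hires):
    return {g: hires[g] / applicants[g] for g in applicants}

def four_fifths_flag(applicants, hires):
    rates = selection_rates(applicants, hires)
    top = max(rates.values())
    return {g: rate / top < 0.8 for g, rate in rates.items()}

flow = four_fifths_flag({"men": 200, "women": 180},
                        {"men": 40, "women": 21})
# women flagged: their rate (~11.7%) is under 80% of men's (20.0%)
print(flow)
```

A flag isn't proof of discrimination, of course, but it tells you exactly where to start asking questions before someone else does.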

You can read the order here, and read more analysis of the case here.

Thursday, August 09, 2007

Spock launches

I've posted a couple times about Spock, a new people search engine. I'll be honest, I'm pretty excited about it.

I won't go into (again) why I'm excited, but suffice to say a search engine that gives us rich data about folks that we can use for recruitment and (potentially) assessment is pretty promising.

Yesterday they had their official public beta launch and you can now check it out, although it's so popular that it looks like their servers are struggling.

And no, they're not the only game in town. They compete directly with other sites like Wink and PeekYou, and indirectly with sites including LinkedIn, ZoomInfo, and Xing. Oh yeah, and WikiYou (although that's user-generated).

As I said, I'm pretty excited about it. Maybe it's just the name. And keep in mind I bought Webvan stock, so take my opinions with a grain of salt.

Tuesday, August 07, 2007

New feed: IPMAAC website updates

Can't get enough news about assessment?

Wish there were more feeds you could track?

Well, your wish has been granted. Now you can keep track of major changes to the IPMAAC website via the new RSS feed. This includes:

- Job openings

- New conference presentations available

- New items added to the library

- Announcements of new issues of the Assessment Council News (ACN)

Not familiar with feeds? Check out Google Reader or Feedreader. There are a ton of applications out there you can use to track feeds (including most web browsers), but these are two I've found to be darn easy to use.

Maybe this will encourage SIOP and SHRM to do the same...

Monday, August 06, 2007

2007 Academy of Management Conference

There have been some news stories about one of the presentations at this year's Academy of Management (AOM) conference--about an online survey where a majority of respondents said that bad bosses either get promoted or have nothing happen to them. But there's a heck of a LOT of other good stuff at this year's conference. So take a deep breath and let's take a look...

First up, a whole set of presentations devoted to selection, including:

- Hiring for Retention and Performance
- Work Sample Test Ethnic Group Differences in Personnel Selection: A Meta-analysis
- Stigmatizing Effects of Race-Based Preferential Selection
- Longitudinal Changes in Testing Applicants and Labor Productivity Growth



Next, a session devoted to recruitment and selection, including:

- The Role of Sociolinguistic Cues in the Evaluation of Job Candidates
- Recruitment as Information Search: The Role of Need for Cognition in Employee Application Decisions
- A House Divided: Cooperative and Competitive Recruitment in Vital Industries
- The Practice of Sense-Making and Repair during Recruitment Interviews
- Overqualified Employees: Too Good to Hire or Too Good to Be True?



Next up, a session devoted to recruitment. Included topics:

- Customizing Web-Based Recruiting: Theoretical Development and Empirical Examination
- Network-based Recruiting and Applicant Attraction: Perspective from Employer and Applicants
- Fancy Job Titles: Effects on Applicants' Job Perceptions and Intentions to Apply
- Recruitment and National Culture: A Value-Based Model of Recruitment



Next, a set devoted to person-organization (P-O) fit, including:

- Going Beyond Current Conceptualizations of P-E Fit and Presenting a Status Report on the Literature
- Outcomes of Multidimensional Misfit: An Empirical Test of a Theoretical Model
- FIT: Scale Development and Initial Validation of a New Measure
- Considering the Contextualized Person: A Person-In-Context Approach to Goal Commitment


Next, a set on predictors of individual performance, including:

- An Examination of Ability-based Emotional Intelligence and Job Performance
- Predicting NFL Performance: The Role of Can-do and Will-do Factors
- A Fresh Perspective on Extraversion and Automobile Sales Success
- Auditor Effectiveness and Efficiency in Workpaper Review: The Impact of Regulatory Focus



Last but not least, one of my favorite topics, how organizations and individuals perceive selection. Topics include:

- Understanding Job Applicant Reactions: Test of Applicant Attribution Reaction Theory
- Effects of Ingratiation and Similarity on Judgments of P-O Fit, Hiring Recommendations and Job Offer
- The Effects of Resume Contents on Hiring Recommendations: The Roles of Recruiter Fit Perceptions
- Organization Personality Perceptions and Attraction: The Role of PO Fit and Recruitment Information



This is just a sample of what the conference has to offer; if you went, or otherwise know of other presentations we should know about, please share with us.

And no, most of the presentations aren't available on-line but the presenters' e-mail addresses are provided and most folks are willing to share.

Thursday, August 02, 2007

Is USAJobs enough?

Check out this article that came out recently on GovernmentExecutive.com. It's about how the federal government may need to branch out and start using other advertising venues besides USAJobs.gov, which it relies on heavily.

Some individuals quoted in the article (one of whom is a manager at CareerBuilder) point out that:

- Opportunities are not automatically posted on other career sites, like CareerBuilder, Monster, and HotJobs.

- Job openings are not "typically" searchable through search engines like Google. (Although look what happens when I search for an engineering job with the federal government).

- You can't expect people to automatically look for jobs on USAjobs.

The Office of Personnel Management (OPM), the fed's HR shop, fires back:

- USAJobs gets 8 million hits a month. This compares to CareerBuilder's 1.2 million searches a month for government jobs.

- USAJobs is well known and marketing efforts have been ramped up (e.g., last year's television commercials, which unfortunately didn't work with my version of Firefox).

So who wins the argument? I don't think the feds need to panic just yet. But it can't hurt them to investigate other posting opportunities, particularly given how much traffic heavy hitters like Monster and CareerBuilder get compared to USAJobs.

By the way, don't overlook the comments on that page; in some ways they are more telling than the article. Readers point out that the application process is overly complicated--to the point that one of the readers makes his/her living guiding people through the process (reminds me of a guy that does the same thing for the State of California). My bet is the application process is equally, if not more, important than how the feds are marketing their opportunities.

I would also be willing to bet that it isn't just the feds that have this issue. As more organizations implement automated application and screening programs, they risk falling in love with the technology at the expense of the user experience. I may love the look of your job, but if it takes me 2 hours to apply, well...I may just look elsewhere.

Tuesday, July 31, 2007

B-school requires PPT slides for admission

So apparently Chicago's Graduate School of Business is going to require four pages of PowerPoint-like slides as part of its admission process this fall.

According to school reps, this will allow students to "show off a creative side that might not reveal itself in test scores, recommendations and even essays." Another rationale given by the school is that students will have to master this type of software before entering the business world.

One problem I see here is the vast majority of applicants will already know PowerPoint--if you get through high school and college without using it, I'm betting you're the rare applicant.

The larger problem here is the same problem employers face with supplemental questionnaires and work samples--namely, who did it? In high-stakes situations like school admissions and job applications, people are known to take, shall we say, less than ethical routes to increase their chances.

The benefit of something like GPA or the GMAT is identity verification--you can be virtually assured (as long as you can validate the numbers) that the person who's applying is the one who took the test.

With things like previous work samples, resumes, and now this PowerPoint idea, you have no idea who actually created the product. So you make an admissions or hiring decision based on an assumption. Do you validate that they actually created these documents? Probably not. Even if you wanted to, how would you do it?

It might not even matter, since this may be more of a way to add excitement to application reviews and to simply get more applicants, which the school acknowledges. There seems to be a trend among organizations to implement projects that aren't so much concerned with valid predictions of performance but with simply attracting attention. This will likely get even more blatant as organizations struggle to keep their staffing levels up in the coming years.

But we should keep this in mind: gimmicks may attract some applicants, but do they turn others away? What about highly qualified individuals who think, "Well that's silly." That's why the best solutions will pique interest while being as close as possible to the actual job (or school) environment. How about asking applicants to give a presentation as part of their interview--now that's a skill they'll need. Plus, absent any Mission Impossible-style disguises, you can be pretty sure the person in front of you is who they claim to be.

Monday, July 30, 2007

Webinar on Assessment Best Practices

On July 25, 2007, Dr. Charles Handler of rocket-hire.com gave a very good overview of best practices in a webinar titled Screening and Assessment - Best Practices for Ensuring a Productive Future.

Some of the topics he covered included:

- Different types of assessment
- Planning for assessment
- Validity and prediction
- Screening out vs. screening in

You can view a pdf of the slides here, and listen to an mp3 of his presentation here.

Tuesday, July 24, 2007

IPMAAC Presentations + Cheaper Membership

I posted earlier about presentations from the 2007 IPMAAC conference going up online. Well now there's a whole gaggle of 'em and there's some really great content. Check out these sample titles (PDF):

- Applicant Reactions to Online Assessments
- Succession Planning and Talent Management
- 2007 Legal Update
- Potholes on the Road to Internet Applicant Compliance
- Measuring Complex Reasoning
- Tips on Writing an Expert Witness Report

And that's just the beginning. For all the goodies, check out the full list.

But wait, there's more...

In addition, IPMAAC recently enacted a change to its membership categories & fees. You can now become an IPMAAC member for only $75! Talk about cheap. $75 pays for the difference in conference fees between a member and a non-member! And you get all this to boot. Plus, you're "affiliated" with IPMA-HR, which means you get the awesome weekly HR newsletter and discounts on all sorts of IPMA-HR stuff (that's a technical term). And you DON'T have to work in the public sector to join.

There really aren't that many professional organizations associated with assessment. There's SIOP, but they're about a lot more than just staffing. There are local groups. But when it comes to national or international groups, IPMAAC is it. Which is a good thing, because it's a great group of people (not that I'm biased or anything).

Saturday, July 21, 2007

Legos: They're not just for Google anymore

So apparently Legos (or "Lego bricks") are enjoying quite the popularity among corporate recruiters these days.

Not only did Google use them at Google Games (and apparently employees enjoy them as well), PricewaterhouseCoopers (PwC) asks candidates to create a tower with Legos, according to this Economist article.

So what exactly are candidates doing? PwC asks candidates to create the tallest, sturdiest structure they can using the fewest number of "bricks." Google asked candidates to build the strongest bridges they could.

Is this a valid form of assessment? A "professional Lego consultant" in Buenos Aires stated that "Lego workshops are effective because child-like play is a form of instinctive behaviour not regulated by conscious thought." There's even a website devoted to Lego's efforts in this area--Serious Play.

So my question is: Do most of us do work that is "not regulated by conscious thought"? Perhaps sometimes, say in emergencies. But the vast majority of time we're pretty darn deliberate in our actions. The only situation I can see where this might be predictive of actual job performance would be for jobs like bridge engineer or architect. But...computer programmer? If I wanted to know how creative a programmer is, I'd ask him/her to solve a difficult coding problem.

Does this even matter? Perhaps not (unless they're sued). As one of the candidates states, correctly I think, "It was as much advertising as a way of trying to get recruits." So in this day and age of "talent wars", this may be just another branding technique.

Will it be successful? Probably depends on how much the candidate likes to play with blocks.

This post is dedicated to my Grandpa Ben, who had a great sense of humor. And probably would have thought using Legos in this way is a bit silly :)

Thursday, July 19, 2007

New issue of Journal of Applied Psychology (v.92, #4)

Guess how many articles are in the most recent Journal of Applied Psychology. Go ahead, take a gander.

10? 15?

Try 23. I mean....that's just showing off.

So what's in there about recruitment & assessment? Believe it or not, only two articles. Let's take a look at 'em.

First up, a study by Klehe and Anderson looked at typical versus maximum performance (the subject of the most recent issue of Human Performance) during an Internet search task. Data from 138 participants indicated that motivation to perform well (measured by direction, level, and persistence of effort) rose when people were trying to do their best (maximum performance). But the correlation between motivation and performance diminished under this condition, while the relationship between ability (measured by declarative knowledge and procedural skills) and performance increased.

What the heck does this mean? If you're trying to predict the MAXIMUM someone can do, you're better off using knowledge-based and procedure-based tests. If, on the other hand, you want to know how well they'll perform ON AVERAGE, check out tests that target things like personality, interests, etc.

Second, Lievens and Sackett investigated various aspects of situational judgment tests (SJTs). The authors were looking at factors that could increase reliability when you're creating alternate forms of the same SJT. Using a fairly large sample (3,361) in a "high-stakes context", they found that even small changes in the context of the question resulted in lower consistency between versions. On the other hand, being more stringent in developing alternate forms proved to be of value.

What the heck does this mean? If you're developing alternate forms of SJTs (say, because you give the test a lot and you don't want people seeing the same items over and over) this study suggests you don't get too creative in changing the situations you're asking about.
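If you're wondering what "consistency between versions" looks like in practice, the basic check is just correlating the same candidates' scores on the two forms. A toy sketch with invented scores (not Lievens and Sackett's data):

```python
from math import sqrt

# Alternate-form consistency sketch: Pearson correlation between the
# same test takers' scores on Form A and Form B. Scores are invented.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

form_a = [12, 15, 9, 18, 14, 11, 16, 13]
form_b = [11, 16, 10, 17, 13, 12, 15, 14]
print(f"alternate-form r = {pearson(form_a, form_b):.2f}")  # prints 0.93
```

The study's warning, in these terms: rewrite the situations too freely and that correlation between forms drops, even though each form looks fine on its own.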

As usual, the very generous Dr. Lievens has made this article available here. Just make sure to follow fair use standards, folks.

Monday, July 16, 2007

Finally, an assessment gameshow

Okay, so it's not "guess the criterion-related validity", but it's about as close as we're going to get to a game show focused on assessment (The Apprentice notwithstanding).

The show is called "Without Prejudice" and it premieres July 17th on the Game Show Network (GSN). The concept is that a diverse panel made up of "ordinary members of the public" will judge a similarly diverse group of people and determine who should be given the $25,000 prize.

So how is this like assessment, you say? Well the judges have to decide who they like the most or hate the least and use their judgment to determine who to award the prize to, based on seeing video clips of the contestants and information about their background. What does this sound like? Your average job interview!

What does it look like? In the premiere, the panel's first task is to decide which of the five contestants should be denied the money, based on a very quick (about 5 seconds) introduction by each person. The panel focuses heavily on appearance rather than what was said, including making judgments about how wealthy the person is, their age, and their "vibe."

In an interesting twist, the host talks to the people that were "eliminated" about how they felt. (Ever asked a denied job applicant how they were feeling? Could be informative.)

Is it overly dramatic? Absolutely. Will it last? Probably not. Does it give us a vivid example of how quickly impressions are made, and on what basis? Yep.

There's a sneak peek of the premiere available on the website. There are also, to their credit, links to information about prejudice, including two questionnaires you can take to probe your own beliefs.