Tuesday, July 31, 2007

B-school requires PPT slides for admission

So apparently Chicago's Graduate School of Business is going to require four pages of PowerPoint-like slides as part of its admission process this fall.

According to school reps, this will allow students to "show off a creative side that might not reveal itself in test scores, recommendations and even essays." Another rationale given by the school is that students will have to master this type of software before entering the business world.

One problem I see here is the vast majority of applicants will already know PowerPoint--if you get through high school and college without using it, I'm betting you're the rare applicant.

The larger problem here is the same problem employers face with supplemental questionnaires and work samples--namely, who did it? In high-stakes situations like school admissions and job applications, people are known to take, shall we say, less than ethical routes to increase their chances.

The benefit of something like GPA or the GMAT is identity verification--you can be virtually assured (as long as you can validate the numbers) that the person who's applying is the one who took the test.

With things like previous work samples, resumes, and now this PowerPoint idea, you have no idea who actually created the product. So you make an admissions or hiring decision based on an assumption. Do you validate that they actually created these documents? Probably not. Even if you wanted to, how would you do it?

It might not even matter, since this may be more of a way to add excitement to application reviews and to simply get more applicants, which the school acknowledges. There seems to be a trend among organizations to implement projects that aren't so much concerned with valid predictions of performance but with simply attracting attention. This will likely get even more blatant as organizations struggle to keep their staffing levels up in the coming years.

But we should keep this in mind: gimmicks may attract some applicants, but do they turn others away? What about highly qualified individuals who think, "Well, that's silly"? That's why the best solutions will pique interest while staying as close as possible to the actual job (or school) environment. How about asking applicants to give a presentation as part of their interview--now that's a skill they'll need. Plus, absent any Mission Impossible-style disguises, you can be pretty sure the person in front of you is who they claim to be.

Monday, July 30, 2007

Webinar on Assessment Best Practices

On July 25, 2007, Dr. Charles Handler of rocket-hire.com gave a very good overview of best practices in a webinar titled Screening and Assessment - Best Practices for Ensuring a Productive Future.

Some of the topics he covered included:

- Different types of assessment
- Planning for assessment
- Validity and prediction
- Screening out vs. screening in

You can view a PDF of the slides here, and listen to an MP3 of his presentation here.

Tuesday, July 24, 2007

IPMAAC Presentations + Cheaper Membership

I posted earlier about presentations from the 2007 IPMAAC conference going up online. Well now there's a whole gaggle of 'em and there's some really great content. Check out these sample titles (PDF):

- Applicant Reactions to Online Assessments
- Succession Planning and Talent Management
- 2007 Legal Update
- Potholes on the Road to Internet Applicant Compliance
- Measuring Complex Reasoning
- Tips on Writing an Expert Witness Report

And that's just the beginning. For all the goodies, check out the full list.

But wait, there's more...

In addition, IPMAAC recently enacted a change to its membership categories & fees. You can now become an IPMAAC member for only $75! Talk about cheap. $75 pays for the difference in conference fees between a member and a non-member! And you get all this to boot. Plus, you're "affiliated" with IPMA-HR, which means you get the awesome weekly HR newsletter and discounts on all sorts of IPMA-HR stuff (that's a technical term). And you DON'T have to work in the public sector to join.

There really aren't that many professional organizations associated with assessment. There's SIOP, but they're about a lot more than just staffing. There are local groups. But when it comes to national or international groups, IPMAAC is it. Which is a good thing, because it's a great group of people (not that I'm biased or anything).

Saturday, July 21, 2007

Legos: They're not just for Google anymore

So apparently Legos (or "Lego bricks") are enjoying quite the popularity among corporate recruiters these days.

Not only did Google use them at Google Games (and apparently employees enjoy them as well), but PricewaterhouseCoopers (PwC) also asks candidates to build a tower out of Legos, according to this Economist article.

So what exactly are candidates doing? PwC asks candidates to create the tallest, sturdiest structure they can using the fewest "bricks." Google asked candidates to build the strongest bridges they could.

Is this a valid form of assessment? A "professional Lego consultant" in Buenos Aires stated that "Lego workshops are effective because child-like play is a form of instinctive behaviour not regulated by conscious thought." There's even a website devoted to Lego's efforts in this area--Serious Play.

So my question is: Do most of us do work that is "not regulated by conscious thought"? Perhaps sometimes, say in emergencies. But the vast majority of time we're pretty darn deliberate in our actions. The only situation I can see where this might be predictive of actual job performance would be for jobs like bridge engineer or architect. But...computer programmer? If I wanted to know how creative a programmer is, I'd ask him/her to solve a difficult coding problem.

Does this even matter? Perhaps not (unless they're sued). As one of the candidates states, correctly I think, "It was as much advertising as a way of trying to get recruits." So in this day and age of "talent wars", this may be just another branding technique.

Will it be successful? Probably depends on how much the candidate likes to play with blocks.

This post is dedicated to my Grandpa Ben, who had a great sense of humor. And probably would have thought using Legos in this way is a bit silly :)

Thursday, July 19, 2007

New issue of Journal of Applied Psychology (v.92, #4)

Guess how many articles are in the most recent Journal of Applied Psychology. Go ahead, take a gander.

10? 15?

Try 23. I mean....that's just showing off.

So what's in there about recruitment & assessment? Believe it or not, only two articles. Let's take a look at 'em.

First up, a study by Klehe and Anderson looked at typical versus maximum performance (the subject of the most recent issue of Human Performance) during an Internet search task. Data from 138 participants indicated that motivation to perform well (measured by direction, level, and persistence of effort) rose when people were trying to do their best (maximum performance). But the correlation between motivation and performance diminished under this condition, while the relationship between ability (measured by declarative knowledge and procedural skills) and performance increased.

What the heck does this mean? If you're trying to predict the MAXIMUM someone can do, you're better off using knowledge-based and procedure-based tests. If, on the other hand, you want to know how well they'll perform ON AVERAGE, check out tests that target things like personality, interests, etc.

Second, Lievens and Sackett investigated various aspects of situational judgment tests (SJTs). The authors were looking at factors that could increase reliability when you're creating alternate forms of the same SJT. Using a fairly large sample (3,361) in a "high-stakes context", they found that even small changes in the context of the question resulted in lower consistency between versions. On the other hand, being more stringent in developing alternate forms proved to be of value.

What the heck does this mean? If you're developing alternate forms of SJTs (say, because you give the test a lot and you don't want people seeing the same items over and over) this study suggests you don't get too creative in changing the situations you're asking about.

As usual, the very generous Dr. Lievens has made this article available here. Just make sure to follow fair use standards, folks.

Monday, July 16, 2007

Finally, an assessment gameshow

Okay, so it's not "guess the criterion-related validity", but it's about as close as we're going to get to a game show focused on assessment (The Apprentice notwithstanding).

The show is called "Without Prejudice" and it premieres July 17th on the Game Show Network (GSN). The concept is that a diverse panel made up of "ordinary members of the public" will judge a similarly diverse group of people and determine who should be given the $25,000 prize.

So how is this like assessment, you say? Well the judges have to decide who they like the most or hate the least and use their judgment to determine who to award the prize to, based on seeing video clips of the contestants and information about their background. What does this sound like? Your average job interview!

What does it look like? In the premiere, the panel's first task is to decide which of the five contestants should be denied the money, based on a very quick (about 5 seconds) introduction by the person. The panel focuses heavily on appearance rather than what was said, including making judgments about how wealthy the person is, their age, and their "vibe."

In an interesting twist, the host talks to the people that were "eliminated" about how they felt. (Ever asked a denied job applicant how they were feeling? Could be informative.)

Is it overly dramatic? Absolutely. Will it last? Probably not. Does it give us a vivid example of how quickly impressions are made, and on what basis? Yep.

There's a sneak peek of the premiere available on the website. There are also, to their credit, links to information about prejudice, including two questionnaires you can take to probe your own beliefs.

Friday, July 13, 2007

New issue of Human Performance: Typical v. maximum performance

There's a new issue of Human Performance out (Volume 20, Issue 3) and it's devoted to a very worthy topic--typical versus maximum performance.

What is the distinction, you say? Well it's pretty much what it sounds like. From Smith-Jentsch's article in this issue:

"Typical performance in its purest sense reflects what a person "will do" on the job over a sustained period of time, when they are unaware that their performance is being evaluated. By contrast, maximum performance reflects what a person "can do" when they are explicitly aware that they are being evaluated, accept instructions to maximize effort, and are required to perform for a short-enough period of time that their attention remains focused on the task."

The recent interest in this area stems largely from Sackett, Zedeck, & Fogli's 1988 article in the Journal of Applied Psychology. Previous research suggested that measures of ability (e.g., cognitive ability tests) more accurately predict maximum performance, whereas non-ability measures (e.g., personality tests) correlate more with typical performance. This of course has implications for who we recruit and how we assess: Are we trying to predict what people can do or what they will do? The answer, I think, depends on the job--for an aircraft pilot or police officer, you want to know what people can do when they're exerting maximum effort. For customer service representatives, you may be more interested in their day-to-day performance.

This topic is mentioned often in I/O textbooks but (as the authors point out) hasn't been researched nearly enough. The authors of this volume attempt to remedy that in some part. Let's look at the articles in the recruitment/assessment area:

First, Kimberly Smith-Jentsch opens with a study of transparency of instructions in a simulation. Results from two samples of undergraduates validate previous findings: Making assessment dimensions transparent (i.e., telling candidates what they're being measured on) allows for better measurement of the abilities necessary for maximum performance, while withholding this information appears to result in better measurement of the traits that motivate typical performance. So if the question is, "Do we tell people what we're measuring?" the answer is: It depends on what your goal is!

Next, Marcus, Goffin, Johnston, and Rothstein tackle the personality-cognitive ability test issue with a sample of candidates for managerial positions in a large Canadian forestry products organization. The results underline how important it is to recognize that "personality" (measured here by the 16PF and PRF) has several components. While cognitive ability scores (measured by the EAS) consistently outperformed personality scores in predicting maximum performance, measures of extraversion, conscientiousness, dominance, and rule-consciousness substantially outperformed cognitive ability when predicting typical performance.

Third, Ones and Viswesvaran investigate whether integrity tests can predict maximal, in addition to typical, performance. The answer? Yes--at least with this sample of 110 applicants to skilled manufacturing jobs. Integrity test scores (measured using the Personnel Reaction Blank) correlated .27 with maximal performance (measured, as is typical, with a work sample test). The caveat here, IMHO, is that job knowledge scores correlated .36 with maximal performance. So yes, integrity test scores (a "non-ability" test) can predict maximal performance, but perhaps still not as well as cognitively loaded tests.

Last but not least, Witt & Spitzmuller look at how cognitive ability and perceived organizational support (POS) relate to typical and maximum performance. Results from two samples (programmers and cash vault employees) reinforce the other results we've seen: Cognitive ability (measured by the Wonderlic Personnel Test) was correlated with maximum performance but not typical performance, while POS was related to two out of three measures of typical performance but not to maximum performance.

Overall, the results reported here support previous findings: maximum performance is predicted well by ability tests, while typical performance correlates more strongly with non-ability tests. But Ones & Viswesvaran are correct when they state (about their own study): "Future research is needed with larger samples, different jobs, and different organizations to test the generalizability of the findings." Let's hope these articles motivate others to follow their lead.

Wednesday, July 11, 2007

FBI settles sex discrimination case for $11M

The Federal Bureau of Investigation (FBI) has settled a class action sex discrimination lawsuit brought on behalf of current and former female employees who were prohibited from applying for numerous GS-14 and GS-15 promotions. The case was titled Boord v. Gonzales.

These higher-paying jobs were restricted to individuals who had experience as a Special Agent and included a range of position titles, including EAP counselor, laboratory analyst, and lawyer. For many of these positions, according to the plaintiffs, requiring Special Agent experience was neither job-related nor consistent with business necessity, as required under Title VII of the Civil Rights Act of 1964.

The total cost: approximately $11 million.

Dr. Kathleen Lundquist, well known in I/O circles, served as a neutral expert to review positions that required Special Agent experience. The request for position review form can be found here.

Lesson? Make sure your minimum qualifications are valid. They are open to scrutiny--just like every other part of the screening process.

More details can be found in the settlement agreement.

Monday, July 09, 2007

TalentSpring takes on peer ratings

A while back I mentioned RecruitmentRevolution, a UK site focused on temporary employment that allows previous employers to input reference scores for use by future employers. Creative idea, if ya ask me. I've often wondered if someday there will be a general database of verified work history that employers could easily check.

Now along comes TalentSpring with a similar idea. This time it's not previous employers, it's peers. TalentSpring uses something it calls a "Merit Score." From the website:

"TalentSpring creates accurate merit scores by using the votes from candidates. Advanced mathematics are used to detect inaccurate votes and remove them while still accurately ranking candidates. The top resume in an industry receives a merit score of 2,000 and the most entry level candidate receives a merit score of 1,000."

How does it work?

"The voting process used to generate the Merit Score rankings is very simple. Voters are shown a series of resume pairs. With each pair the voter is asked which of the two candidates is most likely to be brought into an interview for the typical job opening in this job category. It is that simple - is Candidate A or Candidate B better in this job category. There is no worrying about previous pairs or what resumes are going to show up next. Each pair is considered in isolation."

And who's voting?

"Your resume is voted on by other people seeking to be ranked in the same category you are. Just as you are voting on other candidates in the same job category you are in. Since TalentSpring offers quite a few job categories to choose from, on occasion you may be voting on (and be voted on by) candidates in related job categories. For example, a C++ programming candidate might end up voting on Java programmers."

What about accuracy?

"We know when people are voting outside the "normal" range and remove these votes from the ranking calculations. We think that the ability to accurately vote is a skill that recruiters are interested in because it reflects both your understanding of the position you are interested in and your attention to detail. That is why we calculate and post your voting score as part of your Candidate Overview."


So what do you think? I was disappointed at who's doing the ranking--I assumed by "peers" they meant one's actual co-workers. Now that would be interesting, given that that type of peer rating is at least partially trustworthy. I wonder how accurate ratings of competing job hunters will be. With no control over subject matter expertise, the system relies solely on (what I assume is) the detection of statistical abnormalities to keep scores honest. Not particularly encouraging. In addition, selecting one person over another for a general job category may prove to be an impossible task, as even jobs in a single category can vary substantially in terms of the competencies/KSAOs required.
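
For the curious, here's a rough sketch of what the math behind this kind of pairwise-vote ranking could look like. This is purely a guess on my part at the general approach (an Elo-style rating update); TalentSpring doesn't disclose its actual algorithm, I've left out the outlier-vote screening entirely, and the names and numbers below are invented.

    # A minimal, hypothetical sketch of a pairwise-vote ranking (Elo-style).
    # Not TalentSpring's actual method--just one common way to turn
    # "A vs. B" votes into a score on a 1,000-2,000 style scale.

    def expected_win(score_a, score_b, spread=400.0):
        """Probability that resume A beats resume B, given current scores."""
        return 1.0 / (1.0 + 10 ** ((score_b - score_a) / spread))

    def update(scores, winner, loser, k=32.0):
        """Adjust both resumes' scores after a single vote."""
        surprise = 1.0 - expected_win(scores[winner], scores[loser])
        scores[winner] += k * surprise   # favored winners gain little
        scores[loser] -= k * surprise    # upset losers drop more

    # Everyone starts in the middle of the (hypothetical) 1,000-2,000 range.
    scores = {"candidate_a": 1500.0, "candidate_b": 1500.0, "candidate_c": 1500.0}

    # Each vote: (resume the voter preferred, resume they passed over).
    votes = [("candidate_a", "candidate_b"),
             ("candidate_a", "candidate_c"),
             ("candidate_b", "candidate_c")]

    for winner, loser in votes:
        update(scores, winner, loser)

    print(sorted(scores.items(), key=lambda kv: -kv[1]))

The real system would also need the vote-screening piece--deciding which voters to trust--which is exactly the part I'm most skeptical about.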

BUT, that said, it is encouraging to see steps taken in a different direction. If we could just combine some of these approaches, we may be making our way slowly toward a database that employers could feel good about. Of course that means a whole other discussion on rating formats...

Hat tip.

Friday, July 06, 2007

EEOC Releases Updated ADEA Regulations

Today the U.S. Equal Employment Opportunity Commission (EEOC) issued final revised regulations on the Age Discrimination in Employment Act (ADEA) that take into account the Supreme Court's 2004 decision in General Dynamics v. Cline. In Cline, the court determined that employers could lawfully give preference to an older worker over a younger worker in situations where both are covered by the Act (i.e., both are over 40).

So what's changed? Changes have been made to three sections in Title 29 of the Code of Federal Regulations:

- Section 1625.2 has been re-worded to clarify that employers may give preference to the older applicant in the scenario described above but states there is no requirement to do so and that this does not impact applicable state, municipal, or local laws that prohibit such preferences.

- Section 1625.4 has been similarly re-worded to clarify that employers may use language in job advertisements such as "over age 60", "retirees", or "supplement your pension"--this is exactly the opposite of what the rule stated previously.

- Section 1625.5 has been re-worded but no significant changes to content were made.

The revised regulations are available in either PDF or text format (PDF is much easier to read). Anyone with an interest in this area should read this Federal Register entry because it goes (briefly) into more detail about what the revision does and doesn't do.

EEOC press release is here.

An idea for checking false credentials

We all know how important it is to validate the education and experience claimed by candidates. I've seen numbers as high as 50% for the frequency of, shall we say, embellishments on resumes and applications.

Reference and background checks are the typical route for this check on applicant honesty, but they're time-consuming and it can be challenging to get high-quality information. Here's an idea to consider: how about using the interview as part of the background check process?

Have you considered asking questions like:

"I see you went to Texas A&M. Tell us a little about the types of courses you took and projects you worked on."

Job-related, specific, and forces the candidate to do a little more digging.

Or if you wanted to be more blatant:

"I see you went to UCLA. Tell us a little about the campus--where were the majority of your classes? What did you enjoy about the school?"

Not so job-related, perhaps, but certainly defensible as a check on their truthfulness.

Yes, deceivers could still prepare pat answers for these types of questions, but my guess is many won't and you'll save yourself a lot of time and headaches.

Thursday, July 05, 2007

A solution to discrimination lawsuits?

Dr. Anne Marie Knott, an assistant professor at Washington University's Olin School of Business, has come up with an idea she says will reduce employment discrimination claims: an "anti-discrimination bond." That's bond as in financial instrument, not bond as in promise.

The "bond" is purchased by applicants and acts similarly to a 401k (payroll contributions are put into individual accounts) but with a hitch: the contributions are forfeited if the employee files a discrimination claim.

The idea seems to be that "litigious people" will find this distasteful and won't apply to the organization. In fact, according to Knott, experiments suggest the bond may reduce litigation by 96%. And she's serious about this solution--she's even filed for a patent on it.

I must admit this is pretty creative--the organization isn't requiring people to sign away their right to sue (a practice that has been upheld, though the EEOC can still go after you); instead, it ties the decision to sue to a financial benefit. I'd be curious what the legal minds out there have to say about this one.

Looking at it from a different angle, I wonder what sort of message this sends to applicants. I know if I were asked to sign something like this, I might wonder why they're even bothering--do they assume I'm a litigious type of person? Are they planning on discriminating against me?

Wednesday, July 04, 2007

Where does my traffic come from?

Given that it's a holiday (here in the U.S.), I thought I would post something on the lighter side...

Because I use Feedburner and Google Analytics, I'm able to see how my readers reach my home page. I thought it might be informative to show you how people get (t)here...

Here are, in descending order, the most popular ways people find HR Tests:

1. Through Wikipedia

2. By searching for Jobfox (a very promising job matching service that I wrote a post about)

3. From my colleague Michael Harris' ex-blog, EASI-HR blog (now he's over at HRMplus)

4. Through IPMAAC (a great professional organization devoted to public sector assessment)

5. Through recruiting.com (clearinghouse for recruitment-related matters)

6. From selectionmatters.com written by fellow blogger Jamie Madigan (who also writes for TIP)

7. Through searches for Hogan assessments

8. Through recruitingblogs.com

If you own a blog, you might consider becoming more visible through these avenues. Yahoo! Site Explorer will give you similar, albeit non-rolled-up, information.

Alternatively, you could now point out to me that I'm missing some obvious source of traffic :)

Other random information:

Visitor location: The majority of my visitors are from the U.S., with Pittsburgh being the most common source. I also get a fair amount of traffic from Bombay (Mumbai), India. Other visitor locations include Bangkok, Canada, Dubai, London, and Singapore (to name a few).

Search engine: Overwhelmingly Google. The other search engines don't even come close.

Web browser: Internet Explorer has the lion's share (83%) with Firefox at 12%.

That's all for now--thanks for reading!

Monday, July 02, 2007

JPSP, Vol. 92, Issue 6

There's a new issue of the Journal of Personality and Social Psychology out (volume 92, #6), with some juicy research for us...


First up, a fascinating study by Kawakami et al. that may assist with efforts to eliminate or minimize discriminatory behavior. Participants in the study were trained to either pull a joystick toward themselves or push it away when shown pictures of Black, Asian, or White individuals. They then took the Implicit Association Test (a measure of how connected things in your memory are, used in this context to measure bias) or were observed for nonverbal behavior in an interracial context. Results suggested that simply engaging in approach behavior reduced "implicit racial prejudice" (as measured by the IAT) and increased "immediacy" in the nonverbal situation. Could this be incorporated into some type of training to reduce recruitment and selection bias? We'll see. (Mere exposure may be the more likely training route.)


Second, an article that directly relates to the current focus in assessment circles on measures of training and experience (dovetailing with the increase in ATS use). Moore & Small note that people generally believe they are better than others on easy tasks and worse than others on difficult tasks. The authors propose that these differences occur because people have much more information about themselves than about others. The result is even stronger when people have accurate information about themselves (!). The solution, it would seem, is to provide people with accurate information about how others perform.

What might this look like? A simple example: instead of having people just select a category such as Expert, Journey-level, Learning, or Beginner, provide some data on how many folks tend to fall into each category. Unfortunately, I doubt this would be enough to overcome our built-in inaccuracy when it comes to self-rating--but everything helps.
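
Here's a toy illustration of what that might look like on a training and experience questionnaire--the categories come from the example above, and the item and percentages are entirely made up:

    # Toy illustration: show applicants how past applicants actually
    # distributed themselves across the self-rating categories.
    # The item and percentages below are invented for the example.
    item = "Preparing written job analysis reports"
    past_distribution = {
        "Expert": 5,          # percent of past applicants choosing each level
        "Journey-level": 25,
        "Learning": 45,
        "Beginner": 25,
    }

    print(f"Rate your level of experience: {item}")
    for level, pct in past_distribution.items():
        print(f"  {level:<15}(about {pct}% of past applicants chose this level)")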


Last but not least, a study of a non-cognitive trait--and it's not one of the Big Five! No, this time it's grit, defined by Duckworth et al. as "perseverance and passion for long-term goals." Using various measures, the authors show that measures of grit added incremental variance to the prediction of a variety of criteria, including:

- Educational attainment among two samples of adults
- GPA among Ivy League undergrads
- Retention in two classes of West Point cadets
- Ranking in the National Spelling Bee

Grit was not correlated with IQ but was highly correlated with conscientiousness. It accounted for only about 4% of the variance in the outcomes above, but that increment was over and above both IQ and conscientiousness. Is this practically meaningful? Depends on your point of view. If you're dealing with a large candidate group, or a particularly sensitive one (e.g., peace officers), it could be worth a second look. Methinks more research is needed, particularly research on any subgroup differences.
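
If "incremental variance" sounds fuzzy, here's a toy demonstration of the calculation using simulated data--these are not Duckworth et al.'s numbers or variables, just an illustration of what it means for a predictor to add variance beyond IQ and conscientiousness:

    # Toy demonstration of incremental validity (delta R-squared) with
    # simulated data. The numbers are invented; this is NOT the actual study.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    iq = rng.normal(size=n)
    conscientiousness = rng.normal(size=n)
    grit = 0.6 * conscientiousness + 0.8 * rng.normal(size=n)  # correlated, as the study found
    gpa = 0.3 * iq + 0.2 * conscientiousness + 0.2 * grit + rng.normal(size=n)

    def r_squared(predictors, y):
        """R-squared from an ordinary least-squares fit with an intercept."""
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    base = r_squared([iq, conscientiousness], gpa)        # step 1: IQ + conscientiousness
    full = r_squared([iq, conscientiousness, grit], gpa)  # step 2: add grit
    print(f"Incremental R-squared for grit: {full - base:.3f}")

Whether a few extra points of R-squared are worth the trouble is, as I said, a judgment call--but that's the arithmetic behind the claim.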