Monday, September 17, 2007

September '07 Issue of JOOP

The September, 2007 issue of the Journal of Occupational and Organizational Psychology is out and has several articles worth a look. Let's go through some of them:

The first article, by Ng et al., presents an overview of the different theories of job mobility. Specifically, they look at the impact of "structural" factors (e.g., the economy), individual differences, and decisional factors (e.g., readiness for change, desirability of the move). Good stuff to keep in mind when thinking about why people get and change jobs.

Next, Kenny and Briner provide an overview of 54 years' worth of British research on ethnicity and behavior. A very broad article that includes discussion of research on recruitment/assessment (draft here).

Third, a fascinating study of the impact of job insecurity on behavior by Probst et al. Using data gathered from both students and employees, the authors found that perceptions of job insecurity tended to have a negative impact on creativity (I'm thinkin' because your brain's busy thinking about the upcoming unemployment) but a moderately positive impact on productivity ("maybe if I work hard enough they won't fire me"?).

Next up, Hattrup, Mueller, and Aguirre analyzed data from the International Social Survey Programme on work value importance across 25 different nations. The authors found that conclusions about cross-cultural differences in work values will vary depending on how "work values" are operationalized. Why is this important? Because oftentimes sweeping statements are made about how people in certain countries view work-life balance, the importance of job security, interesting work, etc. This research reminds us to pause before adopting those conclusions.

Last but not least, Lapierre and Hackett present findings from a meta-analytic structural equation modeling study of conscientiousness, organizational citizenship behaviors (OCBs), job satisfaction, and leader-member exchange. If this makes you say, "Huh?" then here's the bottom line: (with this data at least) conscientious employees demonstrated more OCBs, which enhanced the supervisor-subordinate relationship, leading to greater job satisfaction. Job satisfaction also seemed to result in more demonstration of OCBs. More evidence to support the value of assessing for conscientiousness, methinks. Also more support for expanding the measure of recruitment/assessment success beyond simply "productivity."

Friday, September 07, 2007

A hiatus and government blogging

I'll be taking a brief hiatus from blogging as I move from the Pacific Northwest to California. There's plenty more blogging to come, it just may be a few weeks as I get settled in.

In the meantime, for those of you interested in learning more about blogs--how to make them and how to use them--you should check out an IBM study that came out recently titled The Blogging Revolution: Government in the Age of Web 2.0 by David Wyld. It's chock full of info, and not just for those of you in the public sector. Topics include:

- How do I blog?

- Touring the blogosphere

- Blogging policy

If this is a topic that interests you, don't forget to check out Scoble & Israel's Naked Conversations: How Blogs are Changing the Way Businesses Talk with Customers.

Oh, and if you look at the bottom of my homepage you might just see a link to an article that a certain someone (okay, me) wrote recently about how to use blogs for recruitment, assessment, and retention.

Thanks for reading & I'll be back soon!

Tuesday, September 04, 2007

The Corporate Leavers Survey

This just in from the Level Playing Field Institute: a new study, sponsored by Korn/Ferry, that finds that corporate unfairness, in the form of "every-day inappropriate behaviors such as stereotyping, public humiliation and promoting based upon personal characteristics" costs U.S. employers $64 billion annually.

This sum, based on survey responses from 1,700 professionals and managers, is an estimate of "the cost of losing and replacing professionals and managers who leave their employers solely due to workplace unfairness. By adding in those for whom unfairness was a major contributor to their decision to leave, the figure is substantially greater."

Examples of the type of behavior they're talking about:

- the Arab telecommunications professional who, upon returning from visiting family in Iraq, is asked by a manager if he participated in any terrorism

- the African-American lawyer who is mistaken THREE TIMES for a different black lawyer by a partner at that firm

- the lesbian professional who is told that the organization offers pet insurance for rats, pigs, and snakes, but does not offer domestic partner benefits

What does this have to do with recruiting? Aside from the obvious (turnover-->need to backfill), check this out:

One of the top four behaviors most likely to prompt someone to quit: being asked to attend extra recruiting or community-related events because of one's race, gender, religion or sexual orientation.

Not only that, but 27% of respondents who experienced unfairness at work in the last year said this experience "strongly discouraged them" from recommending their employer to other potential applicants.

What can employers do to prevent this? Aside from the tried and true methods (good and regular training for all supervisors, prompt and thorough investigations), the report offers other suggestions, which vary depending on the group (e.g., more/better benefits for gay and lesbian respondents, better managers for people of color).

Definitely some things to ponder.

Summary here

Friday, August 31, 2007

More games

I've posted before (here and here) about how Google and other companies are literally using board games as part of their applicant screening process, and how I'm not a big fan of this technique.

The September, 2007 issue of Business 2.0 has an article titled "Job Interview Brainteasers" that highlights another type of game employers play--this time, it's asking "creative" questions during the interview.

Let's take a look at some interview questions from the article and who's asked them:

How much does a 747 weigh? (Microsoft)

Why are manhole covers round and not, say, square? (Microsoft)

How many gas stations are there in the United States? (Amazon.com)

How much would you charge for washing all the windows in Seattle? (Amazon.com)

You have 5 pirates, ranked from 5 to 1 in descending order. The top pirate has the right to propose how 100 gold coins should be divided among them. But the others get to vote on his plan, and if fewer than half agree with him, he gets killed. How should he allocate the gold in order to maximize his share but live to enjoy it? (eBay, and, similarly, Pirate Master; a solver sketch follows this list)

You are shrunk to the height of a nickel and your mass is proportionally reduced so as to maintain your original density. You are then thrown into an empty glass blender. The blades will start moving in 60 seconds. What do you do? (Google)
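
Of the bunch, the pirate puzzle at least has a determinate answer you can compute. Here's a minimal backward-induction sketch in Python--assuming the usual riddle conventions (perfectly rational pirates who maximize gold and, when the gold is equal, prefer to see a rival die) and the voting rule exactly as stated above. Small changes to those assumptions change the answer, which is part of why I'm skeptical of these questions as assessments.

```python
DEATH = -1  # worse than any amount of gold

def solve(n, coins=100):
    """Payoffs (most senior pirate first) when n pirates remain.

    Assumptions: rational, gold-maximizing pirates who prefer a kill
    when gold is equal; the proposer needs yes votes from at least
    half of the OTHER pirates, per the question's phrasing.
    """
    if n == 1:
        return [coins]                      # lone pirate takes everything
    fallback = solve(n - 1, coins)          # outcome if the proposer is killed
    votes_needed = n // 2                   # ceil((n - 1) / 2)
    # Price of each junior pirate's vote: one coin more than the fallback,
    # except that a pirate facing death in the fallback votes yes for free.
    prices = sorted((0 if fallback[i] == DEATH else fallback[i] + 1, i)
                    for i in range(n - 1))
    cost = sum(p for p, _ in prices[:votes_needed])
    if cost > coins:
        return [DEATH] + fallback           # no passable proposal: proposer dies
    alloc = [0] * n
    alloc[0] = coins - cost                 # proposer keeps the rest
    for p, i in prices[:votes_needed]:
        alloc[i + 1] = p                    # buy the cheapest votes
    return alloc

print(solve(5))  # -> [97, 0, 1, 2, 0] under these assumptions
```

(Under the better-known variant where the proposer also gets a vote, the answer is the famous 98, 0, 1, 0, 1.)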

These questions have been around for quite a while and are used to measure things like creativity and estimation ability. The question is: Are they any better than board games? Probably. But they're still a bad idea.

Why do I say that? Well, first of all, a lot of people find these questions plain silly. And this says something about your organization. Sure, some people think they're fun or different. But many more will scratch their head and wonder what you're thinking. And then they'll wonder if they really want to work with you. Particularly folks with a lot of experience who aren't into playing games--they want to have a serious conversation.

Second, there are simply better ways of assessing people. If you want to know how creative someone is, ask them a question that actually mirrors the job they're applying for.

Want to know how they would tackle a programming question? Ask them. In fact, you can combine assessment with recruitment, as Spock recently did.

Want them to estimate something? Think about what they'll actually be estimating on the job and ask them that question. And so on...

Another advantage of these types of questions? The answers give you information you can actually use. (Hey, you've got them in front of you, why not use their brains?)

If you don't really care about the assessment side of things, and in reality are just using these questions as a way to communicate "we're cool and different" (as I suspect many of these companies are doing), there are better ways of doing this. Like communicating in interesting and personal ways (e.g., having the CEO/Director call the person). Like talking about exciting projects on the horizon. Like asking candidates what THEY think of the recruitment and assessment process (gasp!).

My advice? Treat candidates with respect and try your darnedest to make the entire recruitment and assessment process easy, informative, and as painless as possible. Now THAT'S cool and different.

Wednesday, August 29, 2007

Georgia-Pacific fined by OFCCP for using literacy test

In a display of "See? It's not just the EEOC you need to worry about", the U.S. Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) has fined the Georgia-Pacific Corp. nearly $750,000.

Why? During a "routine audit of the company's hiring practices", the OFCCP discovered that one of Georgia-Pacific's paper mills was giving job applicants a literacy test that resulted in adverse impact against African-American applicants (saw that one coming a mile away). The $749,076 will be distributed to the 399 applicants who applied for a job while the mill was using the test.

The test required applicants to read "bus schedules, product labels, and other 'real-life' stimuli." The OFCCP determined that the test was not backed by sufficient evidence of validation for the particular jobs it was being used for.

The company defended itself by saying it promotes heavily from within and wanted workers to be able to move around easily.

A sensible policy, but completely irrelevant in terms of defending the legality of a test. In fact, it works against an employer, since (as one of the attorneys points out) you're in effect testing people for higher-level positions, which is a no-no.

Several attorneys are quoted in the article, and they mention the importance of the Uniform Guidelines, which really only apply when a test has adverse impact, as in this case. It does make me wonder what sort of validation evidence G-P collected (if any)...
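
For reference, the Uniform Guidelines' usual first screen for adverse impact is the four-fifths rule: if a group's selection rate falls below 80% of the highest group's rate, that's generally treated as evidence of adverse impact. A quick sketch--the counts below are made up, since the article doesn't report G-P's actual numbers:

```python
def impact_ratios(selected, applicants):
    """Four-fifths rule screen (Uniform Guidelines, 29 CFR 1607.4(D)).

    `selected` and `applicants` map group -> count.  Returns each
    group's selection rate divided by the highest group's rate;
    anything under 0.80 is the usual red flag.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()}

# Hypothetical mill numbers, purely for illustration:
print(impact_ratios(selected={"white": 120, "african_american": 30},
                    applicants={"white": 300, "african_american": 150}))
# {'white': 1.0, 'african_american': 0.5} -> well under 0.80
```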

Note: the article states incorrectly that "all federal contractors" are subject to OFCCP rules. Actually only certain ones are, and details can be found here.

Hat tip.

Tuesday, August 28, 2007

A funny employment lawyer

Of course they exist. If you don't know one, you do now.

Mark Toth is the Chief Legal Officer at Manpower and he's just started a blog on employment law that so far is highly amusing.

For example, he sings a song about employment law.

A song.

About employment law.

I mean, you gotta be into this stuff to go that far.

He's also got a REALLY BAD hiring interview up for you to watch, along with his top 10 "employment law greatest hits."

My personal favorite? #6: "Communicate, communicate, communicate (unless you communicate stupidly)"

One of the more creative blogs I've seen. Here's to hoping it lasts.

And no, I won't be singing a song about assessment. Unless you really want me to (and trust me, you don't want me to).

Hat tip.

Monday, August 27, 2007

National Work Readiness Credential

Have you heard about the National Work Readiness Credential?

It's a 3-hour pass-or-fail assessment delivered over the web that is designed to measure competencies critical for entry-level workers, and consists of four modules:

1. Oral language (oral language comprehension and speech)
2. Situational judgment (cooperating, solving problems, etc.)
3. Reading
4. Math

I love the idea of a transferable skills test; kinda like the SAT of the work world. I think this approach and assess-and-select-yourself notions are two of the truly creative directions we're going in.

Downsides?
(1) Right now it's not available in all areas of the country.
(2) A searchable database (either as a recruiting tool or for verification) would be great.
(3) Last but not least, employers have to be cautious that the position they're hiring for truly requires the competencies measured by this exam.

But all that aside, a promising idea. It will be interesting to see where this goes.

Here are links to some of the many resources available:

Brochure
Training guide
Candidate handbook
Assessment sites
Appropriate uses
FAQs

Thursday, August 23, 2007

Big Disability Discrimination Decision for California Employers

On August 23, 2007, the California Supreme Court published an important decision in the case of Green v. State of California. The decision should be reviewed by any employer covered by California's Fair Employment and Housing Act (FEHA), which, like the Americans with Disabilities Act (ADA), prohibits discrimination against individuals with disabilities.

What'd they say? Rather than muddy the waters, I'll quote directly from the case:

"The FEHA prohibits discrimination against any person with a disability but, like the ADA, provides that the law allows the employer to discharge an employee with a physical disability when that employee is unable to perform the essential duties of the job even with reasonable accommodation. (§ 12940, subd. (a)(1); 42 U.S.C. § 12112(a).) After reviewing the statute's language, legislative intent, and well-settled law, we conclude the FEHA requires employees to prove that they are qualified individuals under the statute just as the federal ADA requires." (pp. 1-2)

"...we conclude that the Legislature has placed the burden on a plaintiff to show that he or she is a qualified individual under the FEHA (i.e., that he or she can perform the essential functions of the job with or without reasonable accommodation)." (p. 5)

What does this mean? It means employers covered by FEHA can breathe a little easier, and employees bringing suit under FEHA for a disability claim may have a slightly more uphill battle. The court has now made clear that in these cases it is the plaintiff/employee who has the burden of showing they are "qualified" under FEHA, not the defendant/employer. And if the plaintiff can't satisfy this "prong" of their case, they won't win.

...unless this case is appealed to the U.S. Supreme Court...

Stop playing games

First, Google and PricewaterhouseCoopers have prospective candidates playing with Lego blocks.

Now, another company has candidates playing Monopoly (see minute 1:50) to judge multi-tasking ability.

C'mon people. You don't need to play games. Spend just a little time putting together a good assessment. Just follow these simple steps:

1. Study the job. Figure out the key KSAs/competencies needed on day one. And spend more than 5 minutes doing it.

2. Think about what JOB TASK you could re-create in a simulation that would measure the required competencies.

3. Spend some time putting together the exercise and how you will rate it. Spend some more time on it. Practice it. Then spend some more time preparing.

4. Give it. Rate it. Treat candidates with respect throughout the process.

5. Gather performance data once people are on the job and see if your exercise predicts job performance (a toy example of this check follows the list).

6. Hire a professional to fix your mistakes. No, I'm kidding. If you've done the other steps right, you should be golden.
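
On step 5, the basic check is a criterion-related validity coefficient: correlate exercise scores with later performance ratings. A toy sketch (invented numbers and a tiny sample--in practice you'd want far more hires before drawing conclusions):

```python
import statistics  # statistics.correlation requires Python 3.10+

def validity(exercise_scores, performance_ratings):
    """Criterion-related validity: Pearson r between simulation
    scores at hire and later job-performance ratings."""
    return statistics.correlation(exercise_scores, performance_ratings)

# Hypothetical scores for eight hires:
scores = [72, 85, 64, 90, 78, 55, 82, 69]
ratings = [3.1, 4.2, 2.8, 4.5, 3.9, 2.5, 4.0, 3.0]
print(round(validity(scores, ratings), 2))  # closer to 1.0 = better prediction
```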

Stop playing games and stop making candidates play them. If you want to know how well an Office Manager candidate multi-tasks, put them in a scenario that matches what they would really face on the job. Phones ringing, Inbox filling up, managers at your door. Not playing with phony money.

Tuesday, August 21, 2007

August ACN

The August, 2007 issue of Assessment Council News is out and Dr. Mike Aamodt provides his usual great writing, this time in an article titled "A Test! A Test! My Kingdom for a Valid Test!" where he goes over what you need to look for when selecting a commercially available test...in two easy steps!

Some of my favorite quotes:

"Previously, [the] clients had their supervisors create their own tests, and we advised them that this was not a good idea." (I just like the idea of saying that to clients, aside from the fact that it's true 99% of the time)

"Creating a reliable, valid, and fair measure of a competency is difficult, time consuming, frustrating, costly, and just about any other negative adjective you can conjure up. Think of the frustration that accompanies building or remodeling a home and you will have the appropriate picture." (So it ISN'T a coincidence that I enjoy testing and home remodeling. Whew.)

"...it is essential to remember that no test is valid across all jobs and that criterion validity is established by occupation, and depending on who you talk (argue) with, perhaps by individual location." (Just don't tell this to Schmidt and Hunter.)

More info about ACN, including links to past issues, here.

And by the way...major kudos to Dr. Aamodt for offering so much of his work online. This is rare and to be commended.

Monday, August 20, 2007

OPM Has New Assessment Website

The U.S. Office of Personnel Management (OPM) continues to show what a professional assessment shop should be doing with its new personnel assessment page.

There's some great stuff here, including:

- A very detailed decision guide, including a great overview of pretty much all the major topics

- Reference documents

- Assessment resources

There's even a survey built in to gather feedback on the guide, as well as a technical support form.

Major tip o' the hat.

Saturday, August 18, 2007

September 2007 issue of IJSA

The September, 2007 issue (vol. 15, #3) of the International Journal of Selection and Assessment is out, with the usual cornucopia of good reading for us, particularly if you're into rating formats and personality assessment. Let's skim the highlights...

First, Dave Bartram presents a study of forced choice v. rating scales in performance ratings. No, not as predictors--as the criterion of interest. Using a meta-analytic database he found that prediction of supervisor ratings of competencies increased 50% when using forced choice--from a correlation of .25 to .38. That's nothing to sneeze at. Round one for forced choice scales--but see Roch et al.'s study below...

Next up, Gamliel and Cahan take a look at group differences with cognitive ability measures v. performance measures (e.g., supervisory ratings). Using recent meta-analytic findings, the authors find group differences to be much higher on cognitive ability measures than on ratings of performance. The authors suggest this may be due to the test being more objective and standardized, which I'm not sure I buy (not that they asked me). Not super surprising findings here, but it does reinforce the idea that we need to pay attention to group differences for both the test we're using and how we're measuring job performance.

Third, Konig et al. set out to learn more about whether candidates can identify what they are being tested on. Using data from 95 participants who took both an assessment center and a structured interview, the authors found results consistent with previous research--namely, someone's ability to determine what they're being tested on contributes to their performance on the test. Moreover, it's not just someone's cognitive ability (which they controlled for). So what is going on? Perhaps it's job knowledge?

Roch et al. analyzed data from 601 participants and found that absolute performance rating scales were perceived as fairer than relative formats. Not only that, but fairness perceptions varied among formats within each of the two types. In addition, rating format influenced ratings of procedural justice. The researchers focus on implications for performance appraisals, but we know how important procedural justice is for applicants too.

Okay, now on to the section on personality testing. First up, a study by Carless et al. of criterion-related validity of PDI's employment inventory (EI), a popular measure of reliability/conscientiousness. Participants included over 300 blue-collar workers in Australia. Results? A mixed bag. EI performance scores were "reasonable" predictors of some supervisory ratings but turnover scores were "weakly related" to turnover intentions and actual turnover. (Side note: I'm not sure, but I think the EI is now purchased through "getting bigger all the time" PreVisor. I'm a little fuzzy on that point. What I do know is you can get a great, if a few years old, review of it for $15 here).

Next, Byrne et al. present a study of the Emotional Competence Inventory (ECI), an instrument designed to measure emotional intelligence. Data from over 300 students from three universities showed no relationship between ECI scores and academic performance or general mental ability. ECI scores did have small but significant correlations (generally in the low .20s) with a variety of criteria. However, relationships with all but one of the criteria (coworkers' ratings of managerial skill) disappeared after controlling for age and personality (as measured by the NEO-FFI). On the plus side, the factor structure of the ECI appeared distinct from the personality measure. More details on the study here.

Last but not least, Viswesvaran, Deller, and Ones summarize some of the major issues presented in this special section on personality and offer some ideas for future research.

Whew!

Wednesday, August 15, 2007

De-motivators

Humor break.

I've posted before about Despair.com's "de-motivational" posters. They're a (funny) version of the ubiquitous "motivational" posters you see all over the place that mostly make you roll your eyes.

Well, Despair.com now has Do It Yourself posters. Here are the three that I've done so far:

[Three poster images appeared here.]

The only thing I don't get is why they don't offer printing of these. Seems like a natural money maker.

Anyhoo, hope you enjoy!

Tuesday, August 14, 2007

Great July 2007 Issues of Merit

The U.S. Merit Systems Protection Board (MSPB) puts out a great newsletter focused on staffing called Issues of Merit.

The July 2007 edition has some great stuff in it, including:

- Risks inherent with using self-assessment for high-stakes decisions, such as hiring (hint: people are horrible at it)

- Tips for workforce planning

- How to write good questions

- Analyzing entry hires into the federal workforce

- An introduction to work sample tests

Good stuff!

Saturday, August 11, 2007

Class certified in Novartis gender discrimination suit

Bad news for Novartis Pharmaceuticals.

On July 31, 2007 Judge Gerald Lynch of the Southern District of New York granted class certification status to "[a]ll women who are currently holding, or have held, a sales-related job position with [Novartis] during the time period July 15, 2002 through the present."

The plaintiffs are seeking $200 million in compensatory, nominal, and punitive damages, claiming that Novartis discriminates against women in a variety of ways, including compensation, promotions, performance appraisals, and adverse treatment of women who take pregnancy leave.

The case is instructive for us because of how the judge viewed expert opinion. One of the plaintiffs' experts noted that Novartis' performance evaluation system was flawed because ratings were subject to modification by higher-level supervisors and because ratings had to fit into a forced distribution. In addition, appeals by employees went to either the manager who made the original rating or an HR person with no real authority to change ratings.

Another plaintiffs' expert noted that male sales employees are 4.9 times more likely to get promoted to first-line manager than female sales employees. In addition, 15.2% of male employees were selected to be in the management development program compared to only 9.1% of eligible female employees--a difference of 6.0 standard deviations.
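
If you're wondering where figures like "6.0 standard deviations" come from, they're conventionally computed as a two-proportion z statistic. A sketch--the expert's actual method and sample sizes aren't in the post, so the counts below are invented to roughly match the reported rates:

```python
from math import sqrt

def selection_z(sel_a, n_a, sel_b, n_b):
    """Two-proportion z statistic: how many standard errors apart
    two selection rates are under a common-rate null hypothesis."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts giving roughly 15.2% vs. 9.1% selection rates:
print(round(selection_z(sel_a=304, n_a=2000, sel_b=182, n_b=2000), 1))  # ~5.9
```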

What these statistics really signify and whether the plaintiffs end up ultimately winning the suit is anyone's guess. The important thing here is to keep in mind that what you may think is a logical way to make promotion decisions may look "subjective" to others and riddled with potential for bias to enter the equation.

Bias (and risk) can be reduced by implementing practices such as:

1 - Having raters undergo intensive training, including a discussion of potential biases and several "dry runs" of the process.

2 - Having a standardized rating form with clear benchmarks based on an analysis of job requirements.

3 - Considering carefully the use of a "forced distribution" system. If you do use one, make sure raters and ratees alike understand why--and how--this is being done.

4 - Making performance in the current job only part of the promotional criteria--give applicants a chance to show their stuff through ability tests, work sample tests, personality tests, and the like.

5 - Taking complaints seriously. If someone believes there is an opportunity for abuse of the system, investigate.

6 - Track, track, track those applicant flow statistics, including selection into training programs. Uncover discrepancies before they uncover you.

7 - Get HR involved--not just as gatekeepers but as partners. Hold HR accountable for providing best practices.

8 - If you have something like a management academy, make the criteria for entry transparent and have a point person for questions.

You can read the order here, and read more analysis of the case here.

Thursday, August 09, 2007

Spock launches

I've posted a couple times about Spock, a new people search engine. I'll be honest, I'm pretty excited about it.

I won't go into (again) why I'm excited, but suffice to say a search engine that gives us rich data about folks that we can use for recruitment and (potentially) assessment is pretty promising.

Yesterday they had their official public beta launch and you can now check it out, although it's so popular that it looks like their servers are struggling.

And no, they're not the only game in town. They compete directly with other sites like Wink and PeekYou, and indirectly with sites including LinkedIn, ZoomInfo, and Xing. Oh yeah, and WikiYou (although that's user-generated).

As I said, I'm pretty excited about it. Maybe it's just the name. And keep in mind I bought Webvan stock, so take my opinions with a grain of salt.

Tuesday, August 07, 2007

New feed: IPMAAC website updates

Can't get enough news about assessment?

Wish there were more feeds you could track?

Well, your wish has been granted. Now you can keep track of major changes to the IPMAAC website via the new RSS feed. This includes:

- Job openings

- New conference presentations available

- New items added to the library

- Announcements of new issues of the Assessment Council News (ACN)

Not familiar with feeds? Check out Google Reader or Feedreader. There are a ton of applications out there you can use to track feeds (including most web browsers), but these are two I've found to be darn easy to use.
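
And if you'd rather script the tracking than use a reader, Python's feedparser library will poll a feed in a few lines. A sketch--the URL below is a stand-in, so grab the real feed address from the IPMAAC site:

```python
import feedparser  # pip install feedparser

# Stand-in address; substitute the actual IPMAAC feed URL.
feed = feedparser.parse("http://www.ipmaac.org/updates.rss")
for entry in feed.entries[:5]:          # five most recent updates
    print(entry.title, "->", entry.link)
```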

Maybe this will encourage SIOP and SHRM to do the same...

Monday, August 06, 2007

2007 Academy of Management Conference

There have been some news stories about one of the presentations at this year's Academy of Management (AOM) conference--about an online survey where a majority of respondents said that bad bosses either get promoted or have nothing happen to them. But there's a heck of a LOT of other good stuff at this year's conference. So take a deep breath and let's take a look...

First up, a whole set of presentations devoted to selection, including:

- Hiring for Retention and Performance
- Work Sample Test Ethnic Group Differences in Personnel Selection: A Meta-analysis
- Stigmatizing Effects of Race-Based Preferential Selection
- Longitudinal Changes in Testing Applicants and Labor Productivity Growth



Next, a session devoted to recruitment and selection, including:

- The Role of Sociolinguistic Cues in the Evaluation of Job Candidates
- Recruitment as Information Search: The Role of Need for Cognition in Employee Application Decisions
- A House Divided: Cooperative and Competitive Recruitment in Vital Industries
- The Practice of Sense-Making and Repair during Recruitment Interviews
- Overqualified Employees: Too Good to Hire or Too Good to Be True?



Next up, a session devoted to recruitment. Included topics:

- Customizing Web-Based Recruiting: Theoretical Development and Empirical Examination
- Network-based Recruiting and Applicant Attraction: Perspective from Employer and Applicants
- Fancy Job Titles: Effects on Applicants' Job Perceptions and Intentions to Apply
- Recruitment and National Culture: A Value-Based Model of Recruitment



Next, a set devoted to person-organization (P-O) fit, including:

- Going Beyond Current Conceptualizations of P-E Fit and Presenting a Status Report on the Literature
- Outcomes of Multidimensional Misfit: An Empirical Test of a Theoretical Model
- FIT: Scale Development and Initial Validation of a New Measure
- Considering the Contextualized Person: A Person-In-Content Approach to Goal Commitment


Next, a set on predictors of individual performance, including:

- An Examination of Ability-based Emotional Intelligence and Job Performance
- Predicting NFL Performance: The Role of Can-do and Will-do Factors
- A Fresh Perspective on Extraversion and Automobile Sales Success
- Auditor Effectiveness and Efficiency in Workpaper Review: The Impact of Regulatory Focus



Last but not least, one of my favorite topics, how organizations and individuals perceive selection. Topics include:

- Understanding Job Applicant Reactions: Test of Applicant Attribution Reaction Theory
- Effects of Ingratiation and Similarity on Judgments of P-O Fit, Hiring Recommendations and Job Offer
- The Effects of Resume Contents on Hiring Recommendations: The Roles of Recruiter Fit Perceptions
- Organization Personality Perceptions and Attraction: The Role of PO Fit and Recruitment Information



This is just a sample of what the conference has to offer; if you went, or otherwise know of other presentations we should know about, please share with us.

And no, most of the presentations aren't available on-line but the presenters' e-mail addresses are provided and most folks are willing to share.

Thursday, August 02, 2007

Is USAJobs enough?

Check out this article that came out recently on GovernmentExecutive.com. It's about how the federal government may need to branch out and start using other advertising venues besides USAJobs.gov, which it relies on heavily.

Some of the individuals quoted in the article (who happen to include a manager at CareerBuilder) point out that:

- Opportunities are not automatically posted on other career sites, like CareerBuilder, Monster, and HotJobs.

- Job openings are not "typically" searchable through search engines like Google. (Although look what happens when I search for an engineering job with the federal government).

- You can't expect people to automatically look for jobs on USAjobs.

The Office of Personnel Management (OPM), the fed's HR shop, fires back:

- USAJobs gets 8 million hits a month. This compares to CareerBuilder's 1.2 million searches a month for government jobs.

- USAJobs is well known and marketing efforts have been ramped up (e.g., last year's television commercials, which unfortunately didn't work with my version of Firefox).

So who wins the argument? I don't think the feds need to panic just yet. But it can't hurt them to investigate other posting opportunities, particularly given how much traffic the heavy hitters like Monster and CareerBuilder get compared to USAJobs.

By the way, don't overlook the comments on that page; in some ways they are more telling than the article. Readers point out that the application process is overly complicated--to the point that one of the readers makes his/her living guiding people through the process (reminds me of a guy that does the same thing for the State of California). My bet is the application process is at least as important as how the feds are marketing their opportunities.

I would also be willing to bet that it isn't just the feds that have this issue. As more organizations implement automated application and screening programs, they risk falling in love with the technology at the expense of the user experience. I may love the look of your job, but if it takes me 2 hours to apply, well...I may just look elsewhere.

Tuesday, July 31, 2007

B-school requires PPT slides for admission

So apparently Chicago's Graduate School of Business is going to require four pages of PowerPoint-like slides as part of its admission process this fall.

According to school reps, this will allow students to "show off a creative side that might not reveal itself in test scores, recommendations and even essays." Another rationale given by the school is that students will have to master this type of software before entering the business world.

One problem I see here is that the vast majority of applicants will already know PowerPoint--if you get through high school and college without using it, I'm betting you're the rare applicant.

The larger problem here is the same problem employers face with supplemental questionnaires and work samples--namely, who did it? In high-stakes situations like school admissions and job applications, people are known to take, shall we say, less than ethical routes to increase their chances.

The benefit of something like GPA or the GMAT is identity verification--you can be virtually assured (as long as you can validate the numbers) that the person who's applying is the one that took that test.

With things like previous work samples, resumes, and now this PowerPoint idea, you have no idea who actually created the product. So you make an admissions or hiring decision based on an assumption. Do you validate that they actually created these documents? Probably not. Even if you wanted to, how would you do it?

It might not even matter, since this may be more of a way to add excitement to application reviews and to simply get more applicants, which the school acknowledges. There seems to be a trend among organizations to implement projects that aren't so much concerned with valid predictions of performance but with simply attracting attention. This will likely get even more blatant as organizations struggle to keep their staffing levels up in the coming years.

But we should keep this in mind: gimmicks may attract some applicants, but do they turn others away? What about highly qualified individuals who think, "Well that's silly." That's why the best solutions will pique interest while being as close as possible to the actual job (or school) environment. How about asking applicants to give a presentation as part of their interview--now that's a skill they'll need. Plus, absent any Mission Impossible-style disguises, you can be pretty sure the person in front of you is who they claim to be.

Monday, July 30, 2007

Webinar on Assessment Best Practices

On July 25, 2007, Dr. Charles Handler of rocket-hire.com gave a very good overview of best practices in a webinar titled Screening and Assessment - Best Practices for Ensuring a Productive Future.

Some of the topics he covered included:

- Different types of assessment
- Planning for assessment
- Validity and prediction
- Screening out vs. screening in

You can view a pdf of the slides here, and listen to an mp3 of his presentation here.

Tuesday, July 24, 2007

IPMAAC Presentations + Cheaper Membership

I posted earlier about presentations from the 2007 IPMAAC conference going up online. Well now there's a whole gaggle of 'em and there's some really great content. Check out these sample titles (PDF):

Applicant Reactions to Online Assessments

Succession Planning and Talent Management

2007 Legal Update

Potholes on the Road to Internet Applicant Compliance

Measuring Complex Reasoning

Tips on Writing an Expert Witness Report

And that's just the beginning. For all the goodies, check out the full list.

But wait, there's more...

In addition, IPMAAC recently enacted a change to its membership categories & fees. You can now become an IPMAAC member for only $75! Talk about cheap. $75 pays for the difference in conference fees between a member and a non-member! And you get all this to boot. Plus, you're "affiliated" with IPMA-HR, which means you get the awesome weekly HR newsletter and discounts on all sorts of IPMA-HR stuff (that's a technical term). And you DON'T have to work in the public sector to join.

There really aren't that many professional organizations associated with assessment. There's SIOP, but they're about a lot more than just staffing. There are local groups. But when it comes to national or international groups, IPMAAC is it. Which is a good thing, because it's a great group of people (not that I'm biased or anything).

Saturday, July 21, 2007

Legos: They're not just for Google anymore

So apparently Legos (or "Lego bricks") are enjoying quite the popularity among corporate recruiters these days.

Not only did Google use them at Google Games (and apparently employees enjoy them as well), but PricewaterhouseCoopers (PwC) also asks candidates to create a tower with Legos, according to this Economist article.

So what exactly are candidates doing? PwC asks candidates to create the tallest, sturdiest structure they can using the fewest "bricks." Google asked candidates to build the strongest bridges they could.

Is this a valid form of assessment? A "professional Lego consultant" in Buenos Aires stated that, "Lego workshops are effective because child-like play is a form of instinctive behaviour not regulated by conscious thought." There's even a website devoted to Lego's efforts in this area--Serious Play.

So my question is: Do most of us do work that is "not regulated by conscious thought"? Perhaps sometimes, say in emergencies. But the vast majority of time we're pretty darn deliberate in our actions. The only situation I can see where this might be predictive of actual job performance would be for jobs like bridge engineer or architect. But...computer programmer? If I wanted to know how creative a programmer is, I'd ask him/her to solve a difficult coding problem.

Does this even matter? Perhaps not (unless they're sued). As one of the candidates states, correctly I think, "It was as much advertising as a way of trying to get recruits." So in this day and age of "talent wars", this may be just another branding technique.

Will it be successful? Probably depends on how much the candidate likes to play with blocks.

This post is dedicated to my Grandpa Ben, who had a great sense of humor. And probably would have thought using Legos in this way is a bit silly :)

Thursday, July 19, 2007

New issue of Journal of Applied Psychology (v.92, #4)

Guess how many articles are in the most recent Journal of Applied Psychology. Go ahead, take a guess.

10? 15?

Try 23. I mean....that's just showing off.

So what's in there about recruitment & assessment? Believe it or not, only two articles. Let's take a look at 'em.

First up, a study by Klehe and Anderson looked at typical versus maximum performance (the subject of the most recent issue of Human Performance) during an Internet search task. Data from 138 participants indicated that motivation to perform well (measured by direction, level, and persistence of effort) rose when people were trying to do their best (maximum performance). But the correlation between motivation and performance diminished under this condition, while the relationship between ability (measured by declarative knowledge and procedural skills) and performance increased.

What the heck does this mean? If you're trying to predict the MAXIMUM someone can do, you're better off using knowledge-based and procedure-based tests. If, on the other hand, you want to know how well they'll perform ON AVERAGE, check out tests that target things like personality, interests, etc.

Second, Lievens and Sackett investigated various aspects of situational judgment tests (SJTs). The authors were looking at factors that could increase reliability when you're creating alternate forms of the same SJT. Using a fairly large sample (3,361) in a "high-stakes context", they found that even small changes in the context of the question resulted in lower consistency between versions. On the other hand, being more stringent in developing alternate forms proved to be of value.

What the heck does this mean? If you're developing alternate forms of SJTs (say, because you give the test a lot and you don't want people seeing the same items over and over) this study suggests you don't get too creative in changing the situations you're asking about.

As usual, the very generous Dr. Lievens has made this article available here. Just make sure to follow fair use standards, folks.

Monday, July 16, 2007

Finally, an assessment gameshow

Okay, so it's not "guess the criterion-related validity", but it's about as close as we're going to get to a game show focused on assessment (The Apprentice notwithstanding).

The show is called "Without Prejudice" and it premieres July 17th on the Game Show Network (GSN). The concept is that a diverse panel made up of "ordinary members of the public" will be judging a similarly diverse group of people and determining who should be given the $25,000 prize.

So how is this like assessment, you say? Well the judges have to decide who they like the most or hate the least and use their judgment to determine who to award the prize to, based on seeing video clips of the contestants and information about their background. What does this sound like? Your average job interview!

What does it look like? In the premiere, the panel's first task is to decide who among the five contestants should be denied the money based on a very quick (about 5 seconds) introduction by each person. The panel focuses heavily on appearance rather than what was said, including making judgments about how wealthy the person is, their age, and their "vibe."

In an interesting twist, the host talks to the people that were "eliminated" about how they felt. (Ever asked a denied job applicant how they were feeling? Could be informative.)

Is it overly dramatic? Absolutely. Will it last? Probably not. Does it give us a vivid example of how quickly impressions are made, and on what basis? Yep.

There's a sneak peek of the premiere available on the website. There are also, to their credit, links to information about prejudice, including two questionnaires you can take to probe your own beliefs.

Friday, July 13, 2007

New issue of Human Performance: Typical v. maximum performance

There's a new issue of Human Performance out (Volume 20, Issue 3) and it's devoted to a very worthy topic--typical versus maximum performance.

What is the distinction, you say? Well it's pretty much what it sounds like. From Smith-Jentsch's article in this issue:

"Typical performance in its purest sense reflects what a person "will do" on the job over a sustained period of time, when they are unaware that their performance is being evaluated. By contrast, maximum performance reflects what a person "can do" when they are explicitly aware that they are being evaluated, accept instructions to maximize effort, and are required to perform for a short-enough period of time that their attention remains focused on the task."

The recent interest in this area stems largely from Sackett, Zedeck, & Fogli's 1988 article in the Journal of Applied Psychology. Previous research suggested that measures of ability (e.g., cognitive ability tests) more accurately predict maximum performance whereas non-ability measures (e.g., personality tests) are correlated more with typical performance. This of course has implications for who we recruit and how we assess: Are we trying to predict what people can do or will do? The answer, I think, depends on the job--for aircraft pilot or police officer, you want to know what people can do when they're exerting maximum effort. For customer service representatives, you may be more interested in their day-to-day performance.

This topic is mentioned often in I/O textbooks but (as the authors point out) hasn't been researched nearly enough. The authors of this volume attempt to remedy that in some part. Let's look at the articles in the recruitment/assessment area:

First, Kimberly Smith-Jentsch opens with a study of transparency of instructions in a simulation. Analyzing data from two samples of undergraduates, she validates previous findings: Making assessment dimensions transparent (i.e., telling candidates what they're being measured on) allows for better measurement of the abilities necessary for maximum performance, while not making this information transparent appears to result in better measurement of traits that motivate typical performance. So if the question is, "Do we tell people what we're measuring?" the answer is: It depends on what your goal is!

Next, Marcus, Goffin, Johnston, and Rothstein tackle the personality-cognitive ability test issue with a sample of candidates for managerial positions in a large Canadian forestry products organization. The results underline how important it is to recognize that "personality" (measured here by the 16PF and PRF) has several components. While cognitive ability scores (measured by the EAS) consistently outperformed personality scores in predicting maximum performance, measures of extraversion, conscientiousness, dominance, and rule-consciousness substantially outperformed cognitive ability when predicting typical performance.

Third, Ones and Viswesvaran investigate whether integrity tests can predict maximal, in addition to typical, performance. The answer? Yes--at least with this sample of 110 applicants to skilled manufacturing jobs. Integrity test scores (measured using the Personnel Reaction Blank) correlated .27 with maximal performance (measured, as is typical, with a work sample test). The caveat here, IMHO, is that job knowledge scores correlated .36 with maximal performance. So yes, integrity test scores (a "non-ability" test) can predict maximal performance, but perhaps still not as well as cognitively-loaded tests.

Last but not least, Witt & Spitzmuller look at the relationship between cognitive ability and perceived organizational support (POS) on typical and maximum performance. Results from two samples (programmers and cash vault employees) reinforce the other results we've seen: Cognitive ability (measured by the Wonderlic Personnel Test) was correlated with maximum performance but not typical performance, while POS was related to two out of three measures of typical performance but not with maximum performance.

Overall, the results reported here support previous findings: maximum performance is predicted well by ability tests while typical performance has stronger correlations with non-ability tests. But Ones & Viswesvaran are correct when they state (about their own study): "Future research is needed with larger samples, different jobs, and different organizations to test the generalizability of the findings." Let's hope these articles motivate others to follow their lead.

Wednesday, July 11, 2007

FBI settles sex discrimination case for $11M

The Federal Bureau of Investigation (FBI) has settled a class action sex discrimination lawsuit brought on behalf of current and former female employees who were prohibited from applying for numerous GS-14 and GS-15 promotions. The case was titled Boord v. Gonzales.

These higher-paying jobs were restricted to individuals who had experience as a Special Agent and included a range of position titles, including EAP counselor, laboratory analyst, and lawyer. For many of these, according to the plaintiffs, requiring Special Agent experience was neither job-related nor consistent with business necessity, as required under Title VII of the Civil Rights Act of 1964.

The total cost: approximately $11 million.

Dr. Kathleen Lundquist, well known in I/O circles, served as a neutral expert to review positions that required Special Agent experience. The request for position review form can be found here.

Lesson? Make sure your minimum qualifications are valid. They are open to scrutiny--just like every other part of the screening process.

More details can be found in the settlement agreement.

Monday, July 09, 2007

TalentSpring takes on peer ratings

A while back I mentioned RecruitmentRevolution, a UK site focused on temporary employment that allows previous employers to input reference scores for use by future employers. Creative idea, if ya ask me. I've often wondered if someday there will be a general database of verified work history that employers could easily check.

Now along comes TalentSpring with a similar idea. This time it's not previous employers, it's peers. TalentSpring uses something it calls a "Merit Score." From the website:

"TalentSpring creates accurate merit scores by using the votes from candidates. Advanced mathematics are used to detect inaccurate votes and remove them while still accurately ranking candidates. The top resume in an industry receives a merit score of 2,000 and the most entry level candidate receives a merit score of 1,000."

How does it work?

"The voting process used to generate the Merit Score rankings is very simple. Voters are shown a series of resume pairs. With each pair the voter is asked which of the two candidates is most likely to be brought into an interview for the typical job opening in this job category. It is that simple - is Candidate A or Candidate B better in this job category. There is no worrying about previous pairs or what resumes are going to show up next. Each pair is considered in isolation."

And who's voting?

"Your resume is voted on by other people seeking to be ranked in the same category you are. Just as you are voting on other candidates in the same job category you are in. Since TalentSpring offers quite a few job categories to choose from, on occasion you may be voting on (and be voted on by) candidates in related job categories. For example, a C++ programming candidate might end up voting on Java programmers."

What about accuracy?

"We know when people are voting outside the "normal" range and remove these votes from the ranking calculations. We think that the ability to accurately vote is a skill that recruiters are interested in because it reflects both your understanding of the position you are interested in and your attention to detail. That is why we calculate and post your voting score as part of your Candidate Overview."


So what do you think? I was disappointed at who's doing the ranking--I assumed by "peers" they meant one's actual co-workers. Now that would be interesting, given that those types of peer ratings are at least partially trustworthy. I wonder how accurate ratings by competing job hunters will be. With no control over subject matter expertise, this relies solely on (what I assume is) detection of statistical abnormalities. Not particularly encouraging. In addition, selecting one person over another for a general job category may prove to be an impossible task, as even jobs in a single category can vary substantially in terms of competencies/KSAOs required.

BUT, that said, it is encouraging to see steps taken in a different direction. If we could just combine some of these approaches, we may be making our way slowly toward a database that employers could feel good about. Of course that means a whole other discussion on rating formats...

Hat tip.

Friday, July 06, 2007

EEOC Releases Updated ADEA Regulations

Today the U.S. Equal Employment Opportunity Commission (EEOC) issued final revised regulations on the Age Discrimination in Employment Act (ADEA) that take into account the Supreme Court's 2004 decision in General Dynamics v. Cline. In Cline the court determined that employers could lawfully give preference to an older worker over a younger worker in situations where both were covered by the Act (i.e., both are over 40).

So what's changed? Changes have been made to three sections in Title 29 of the Code of Federal Regulations:

- Section 1625.2 has been re-worded to clarify that employers may give preference to the older applicant in the scenario described above but states there is no requirement to do so and that this does not impact applicable state, municipal, or local laws that prohibit such preferences.

- Section 1625.4 has been similarly re-worded to clarify that employers may use language in job advertisements such as "over age 60", "retirees", or "supplement your pension"--this is exactly the opposite of what the rule stated previously.

- Section 1625.5 has been re-worded but no significant changes to content were made.

The revised regulations are available in either PDF or text format (PDF is much easier to read). Anyone with an interest in this area should read this Federal Register entry because it goes (briefly) into more detail about what the revision does and doesn't do.

EEOC press release is here.

An idea for checking false credentials

We all know how important it is to validate education and experience claimed by candidates. I've seen numbers as high as 50% for the frequency of, shall we say, embellishments, on resumes and applications.

Reference and background checks are the typical route for this check on applicant honesty, but they're time-consuming and it can be challenging to get high-quality information. Here's an idea to consider: how about using the interview as part of the background check process?

Have you considered asking questions like:

"I see you went to Texas A&M. Tell us a little about the types of courses you took and projects you worked on."

Job-related, specific, and forces the candidate to do a little more digging.

Or if you wanted to be more blatant:

"I see you went to UCLA. Tell us a little about the campus--where were the majority of your classes? What did you enjoy about the school?"

Not so job-related, perhaps, but certainly defensible as a check on their truthfulness.

Yes, deceivers could still prepare pat answers for these types of questions, but my guess is many won't and you'll save yourself a lot of time and headaches.

Thursday, July 05, 2007

A solution to discrimination lawsuits?

Dr. Anne Marie Knott, assistant professor at Washington University (Olin School of Business), has come up with an idea she says will reduce employment discrimination claims: an "anti-discrimination bond." That's bond as in financial instrument, not bond as in promise.

The "bond" is purchased by applicants and acts similarly to a 401k (payroll contributions are put into individual accounts) but with a hitch: the contributions are forfeited if the employee files a discrimination claim.

The idea seems to be that "litigious people" will find this distasteful and not apply to the organization. In fact, according to Knott, experiments suggest the bond may reduce litigation by 96%. And she's serious about this solution--she's even filed a patent for it.

I must admit this is pretty creative--the organization is not requiring people to sign away their right to sue (which has been upheld, but the EEOC can still go after you)--but instead tying it to a benefit. I'd be curious what the legal minds out there have to say about this one.

Looking at it from a different angle, I wonder what sort of message this sends to applicants? I know if I was asked to sign something like this I might wonder why they're even bothering--do they assume I'm a litigious type of person? Are they planning on discriminating against me?

Wednesday, July 04, 2007

Where does my traffic come from?

Given that it's a holiday (here in the U.S.), I thought I would post something on the lighter side...

Because I use Feedburner and Google Analytics, I'm able to see how my readers reach my home page. I thought it might be informative to show you how people get (t)here...

Here are, in descending order, the most popular ways people find HR Tests:

1. Through Wikipedia

2. By searching for Jobfox (a very promising job matching service that I wrote a post about)

3. From my colleague Michael Harris' ex-blog, EASI-HR blog (now he's over at HRMplus)

4. Through IPMAAC (a great professional organization devoted to public sector assessment)

5. Through recruiting.com (clearinghouse for recruitment-related matters)

6. From selectionmatters.com written by fellow blogger Jamie Madigan (who also writes for TIP)

7. Through searches for Hogan assessments

8. Through recruitingblogs.com

If you own a blog, you might consider becoming more visible through these avenues. Yahoo! Site Explorer will give you similar, albeit non-rolled-up, information.

Alternatively, you could now point out to me that I'm missing some obvious source of traffic :)

Other random information:

Visitor location: The majority of my visitors are from the states, with Pittsburgh being the most common source. I also get a fair amount of traffic from Bombay (Mumbai), India. Other visitor locations include Bangkok, Canada, Dubai, London, and Singapore (to name a few).

Search engine: Overwhelmingly Google. The other search engines don't even come close.

Web browser: Internet Explorer has the lion's share (83%) with Firefox at 12%.

That's all for now--thanks for reading!

Monday, July 02, 2007

JPSP, Vol. 92, Issue 6

There's a new issue of the Journal of Personality and Social Psychology out (volume 92, #6), with some juicy research for us...


First up, a fascinating study by Kawakami et al. that may assist with efforts to eliminate or minimize discriminatory behavior. Participants in the study were trained to either pull a joystick toward themselves or push it away when shown pictures of Black, Asian, or White individuals. They then took the Implicit Association Test (a measure of how connected things in your memory are, used in this context to measure bias) or were observed for nonverbal behavior in an interracial context. Results suggested that simply engaging in approach behavior reduced "implicit racial prejudice" (as measured by the IAT) and increased "immediacy" in the nonverbal situation. Could this be incorporated into some type of training to reduce recruitment and selection bias? We'll see. (Mere exposure may be the more likely training route.)


Second, an article that directly relates to the current focus in assessment circles on measures of training and experience (dovetailing with the increase in ATS). Moore & Small note that people generally believe they are better than others on easy tasks and worse on difficult tasks. The authors propose that these differences occur because people have much more information about themselves than about others. The effect is even stronger when people have accurate information about themselves (!). The solution, it would seem, is to provide people with accurate information about how others perform.

What might this look like? A simplistic example: instead of having people simply select categories such as Expert-Journey-Learning-Beginner, provide some data on how many folks tend to fall into each category. Unfortunately, I doubt this would be enough to overcome our built-in inaccuracy when it comes to self-rating--but everything helps.


Last but not least, a study of a non-cognitive trait--and it's not one of the Big Five! No, this time it's grit, defined by Duckworth et al. as "perseverance and passion for long-term goals." Across several studies, the authors show that measures of grit added incremental variance to the prediction of a variety of criteria, including:

- Educational attainment among two samples of adults
- GPA among ivy league undergrads
- Retention in two classes of West Point cadets
- Ranking in the National Spelling Bee

Grit was not correlated with IQ, but was highly correlated with conscientiousness. It only accounted for about 4% of the variance in predicting the above outcomes, but the incremental validity added was beyond both IQ and conscientiousness. Is this practically meaningful? Depends on your point of view. If you're dealing with a large candidate group, or a particularly sensitive one (e.g., peace officers), could be worth a second look. Methinks more research is needed, particularly research on any subgroup differences.
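
For the curious, "incremental variance" here means the change in R-squared when grit enters a regression that already contains IQ and conscientiousness. A sketch with simulated data (the paper's actual analyses are more involved, but the logic is the same):

```python
import numpy as np

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit (intercept included)."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Simulated stand-ins for the Duckworth et al. variables:
rng = np.random.default_rng(0)
n = 500
iq = rng.normal(size=n)
consc = rng.normal(size=n)
grit = 0.6 * consc + 0.8 * rng.normal(size=n)    # grit overlaps conscientiousness
gpa = 0.4 * iq + 0.3 * consc + 0.25 * grit + rng.normal(size=n)

base = r_squared(np.column_stack([iq, consc]), gpa)
full = r_squared(np.column_stack([iq, consc, grit]), gpa)
print(f"delta R^2 for grit: {full - base:.3f}")  # grit's incremental variance
```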

Friday, June 29, 2007

New SIOP Journal in 2008

In 2008 SIOP will begin publishing its own scholarly journal, Industrial and Organizational Psychology: Perspectives on Science and Practice.

Here are the vitals:

Cost: Free to members (unknown fee for nonmembers)

Editor: Dr. Paul Sackett

Focus: Articles will cover basic science, applied science, practice, public policy, and (most likely) a blend.

Format: Focal article-peer response. Each article will be followed by 8-10 responses. From the website: "[Peer responses] could challenge or critique the original article, expand on issues not addressed in the focal article, or draw out implications not developed in the focal article. The goal is to include commentaries from various perspectives, including science, practice, and international perspectives. These commentaries will be followed by a response from the original author of the focal paper."

Publisher: Blackwell (which hopefully means we'll have online access)


The first two article abstracts are available here, with members having full access:

The meaning of employee engagement - Macey and Schneider (a particularly hot topic)

Why assessment centers don't work the way they're supposed to - Lance

Wednesday, June 27, 2007

EEOC issues FY 2006 Federal Workforce Report

The EEOC has released its Annual Report on the Federal Work Force for FY 2006 (10/05-09/06).

For federal agencies it's a treasure trove of benchmark information covering everything from EEO policies to ADR statistics.

Not in the federal workforce, or otherwise find this a yawner? Check it out anyway. The report is filled with tips on topics such as:

- Reasonable accommodation procedures
- Sexual harassment policies
- Barrier analysis

- Improving participation rate of individuals with disabilities

And if you like tables and graphs...Whoa, Nelly, you're in for a treat.

The report is available in HTML or PDF.

Tuesday, June 26, 2007

New blog to watch

Michael Harris, previously of EASI-HR Blog, has started a new blog titled HRMplus.

Dr. Harris is a professor at the University of Missouri-St. Louis where he teaches HRM. He has also served as an expert witness on discrimination issues, as a trainer, and as a consultant. He presented at the most recent IPMAAC conference on Disparate Impact and Employment Testing: A Legal Update.

Check it out!