Tuesday, April 27, 2010

Assessments: More than meets the eye


So I quasi-randomly completed the 'personality quiz' over at the LA Times page. I did it just for fun, but it actually did something useful with my results--it tailored my news. Although the label it gave me ("hot shot") is questionable, the stories it returned were based on the pictures I selected, things like politics, vacationing in Hawaii, and spending time with family. This led to a couple of thoughts:

1) Why aren't more tests like this? Those of us on the professional side of testing often forget that there are tests people actually enjoy taking. In fact, people take "personality tests" all the time, through Facebook or on random websites. It's the kind of thing people pass around via e-mail. When was the last time you looked forward to taking an employment test?

2) I wonder what theory (if any) this is based on? It uses Imagini's VisualDNA technology, but I wasn't able to determine much from their website other than it took over three years to develop. Oh, and that apparently it's used by a number of sites, including match.com and hotels.com, for marketing purposes.

Taking this quiz also made me think not only about the "fun" side of testing but about alternate uses of assessment tools. These measures don't have to be used for selecting in and out. They can be used for many purposes, including some that are obvious (development) and some that perhaps aren't, like placement.

Using assessments for placement is something career counselors do all the time, but it's relatively rare for organizations. It shouldn't be. Imagine the value of putting some of your old-but-still-good assessments on the web and allowing people to take them, get feedback about their results, and receive information that would help them self-select in or out of various positions. It's a tool for insight, a realistic job preview, and an efficient way to populate the top of your selection funnel--all at the same time.

But wait, there's more. Imagine if you could populate your applicant tracking system with the results of said assessments. Imagine if, at the end of the assessment(s), the results strongly indicated the individual would be a good fit for a certain type of job. You could store their results for contacting in the future, provide them with additional recruiting material, lead them to relevant vacancies, and/or encourage them to apply.
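To make that concrete, here's a minimal sketch of what such routing logic might look like. Everything in it is hypothetical--the fit scores, the thresholds, and the follow-up actions are stand-ins for whatever your assessment platform and applicant tracking system actually expose--so treat it as an illustration of the idea, not a recipe.

# Hypothetical sketch: routing candidates in an ATS based on assessment results.
# The data model, thresholds, and actions are illustrative assumptions, not any vendor's API.
from dataclasses import dataclass, field

@dataclass
class AssessmentResult:
    candidate_id: str
    job_family: str      # e.g., "customer service", "analyst"
    fit_score: float     # 0-100, higher = stronger indicated fit

@dataclass
class FollowUp:
    candidate_id: str
    actions: list = field(default_factory=list)

def route_candidate(result: AssessmentResult, strong_fit: float = 75, possible_fit: float = 50) -> FollowUp:
    """Decide what the ATS should do with a completed assessment."""
    follow_up = FollowUp(candidate_id=result.candidate_id)
    if result.fit_score >= strong_fit:
        follow_up.actions += [
            f"show current {result.job_family} vacancies",
            "send targeted recruiting material",
            "invite to apply",
        ]
    elif result.fit_score >= possible_fit:
        follow_up.actions += [
            "store results for future contact",
            f"send realistic job preview for {result.job_family}",
        ]
    else:
        follow_up.actions.append("provide developmental feedback only")
    return follow_up

# Example: a strong indicated fit for a customer service role
print(route_candidate(AssessmentResult("c-123", "customer service", 82)).actions)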

Aside from some of the bleeding edge video game-type assessments, I haven't seen any selection tests that come close to fun (yes, I know we like to think that assessment centers are "fun" for applicants but we're fooling ourselves). And I don't recall seeing anyone using tests for placement in the way I described.

Have you?

Thursday, April 15, 2010

The perils of testing incumbents


Many of us plan or administer promotional tests on a daily basis. It's a normal way for the organization to figure out whether people are ready for the next step--journey-level worker, lead, supervisor, manager, etc.

What isn't so usual is testing incumbents to determine whether they can keep their jobs (drug testing aside). Amtrak, which is poised to take control of Metrolink, Southern California's commuter rail service, recently ran into fierce opposition when it announced that train crews will be required to take and pass two personality tests traditionally used only for screening applicants.

Why now? Amtrak is trying to prevent a repeat of the Chatsworth crash, in which a Metrolink engineer ran head-on into a freight train, leaving 25 dead (including the engineer) and 135 injured. Records revealed the engineer had a history of sending and receiving text messages hundreds of times while operating trains (in violation of safety rules), including seconds before he ran a red light and crashed. Amtrak officials hope to identify "psychological issues," particularly those that manifest themselves during times of stress.

The tests in question are the Applicant Personality Inventory (API) and the Hogan Personality Inventory (HPI). The tests have been used since 2004 (API) and 2002 (HPI) to select among applicants for engineer and conductor positions, and the pass rate for the API is around 80%.

Union leaders have aggressively resisted the idea, claiming that the tests aren't valid or "relevant" measures of a trained and experienced employee's ability to safely operate trains. Instead, they claim the tests are a "witch hunt."

There are several things I find interesting about this situation:

1) Union leaders don't believe the tests are valid for incumbents, but have no problem with using them to select among potential hires. Assuming the personality aspects tested for by these measures are relatively stable, why would they be okay with testing applicants but not members? Methinks the issue is less validity and more membership.

2) If the vast majority of applicants, who are presumably a more heterogeneous group than incumbents, pass these exams, why would the union think that any incumbents would "fail" them? If it is indeed a witch hunt, what evidence do they believe management is relying upon to pre-judge certain individuals?

3) On a related note, given the high pass rates, why would Metrolink/Amtrak think that any incumbents will "fail" these exams? (One wonders whether the pass point would remain the same.) This could be a situation where a great deal of political capital is expended with very little utility to show for the assessments, as the rough arithmetic sketch below illustrates.
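A quick back-of-the-envelope calculation makes the utility point. The roughly 80% figure is the applicant pass rate reported for the API; the incumbent pass rate and headcount below are purely my own assumptions for illustration.

# Rough utility sketch using assumed numbers. The ~80% applicant pass rate comes from
# the reporting above; the incumbent pass rate and headcount are illustrative assumptions.
applicant_pass_rate = 0.80   # reported pass rate among (more heterogeneous) applicants
incumbent_pass_rate = 0.90   # assumption: incumbents, already screened and experienced, pass at a higher rate
incumbents_tested = 200      # assumed number of train crew members tested

expected_failures = incumbents_tested * (1 - incumbent_pass_rate)
print(f"Expected incumbents screened out: ~{expected_failures:.0f} of {incumbents_tested}")
# With numbers like these, the organization risks a great deal of political capital
# to screen out only a small fraction of the workforce.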

Incumbent testing is always a risky business. Even in the best of situations, those who fail or are passed over may harbor feelings of resentment, even anger. HR professionals must treat these selection situations with particular care and plan how to communicate about the exam, both before and after results are announced and used. But as this case demonstrates, when the tests in question are used to determine continued employment in one's current position--and the tests are personality inventories--tempers may flare particularly high.


Hat tip to my friend Warren Bobrow for this story.

Saturday, April 10, 2010

April 2010 TIP and latest EEO Insight

The April 2010 issue of The Industrial/Organizational Psychologist (TIP) is now available here. Let's check out some of what's inside:

- Several great, touching dedications to the late great Frank Landy. I found Rick Jacobs' eulogy particularly moving. My first real job out of graduate school was working for Frank doing research for expert witness testimony. Fabulous experience, amazing individual.

- Joel Wiesen proposes a novel approach to promotional selection in fire department settings.

- A great little article about moving from an I/O position to an HR generalist one. Very timely given all the changes HR departments are experiencing.

- The current status of legal protection for sexual minorities in the workplace.

- A great article about the Bridgeport, CT case and why both the media and the city got things wrong.

- A fascinating breakdown of the major activities of I/O consultants v. internal practitioners v. academics. Check out Table 3.


In addition, there's a new issue of EEO Insight with some great content, and one article in particular I'd like to point out: A comprehensive look at how to successfully develop diversity initiatives and testing programs post-Ricci, including the essential elements of a Croson study (see Table 2) and the "strong basis in evidence" standard as applied to several practical examples (Table 3). It's a keeper and starts on page 27.

Some other articles to check out include:

- Strategies for defending and framing the issue of adverse impact in selection

- How to avoid adverse impact when choosing a test

Wednesday, April 07, 2010

Some blogs to follow + FB group

Here are three blogs you may have missed:

Shaker Consulting Group (makers of the Virtual Job Tryout) has a new blog out written by my friend Joe Murphy. Recent posts covered the ERE conference, the importance of data-based decision making (can't agree with that enough!), and the recruiting value proposition.

Another is I/O at work, sponsored by HR Catalyst. They do a great job summarizing recent research across a wide spectrum of topics. Recent posts covered evidence-based management, the importance of positive feedback, and what job ads say about organizational culture.

Finally, those of you who subscribe to my shared items know I'm a fan of the blog from Recruitment Directory, an Australian consulting firm. Recent topics include checking out your job site on the iPad, website security, and the Australian HR Tech Report.

On a different note, if Facebook is a big part of your life, you may want to join the HR Tests fan page. It's a great way to get your news in your feed, learn who else reads HR Tests, and comment on any post without waiting for moderation.

Saturday, April 03, 2010

ClicFlic offers assessment innovation


A while back I posted about a creative use of technology that Vestas was using for onboarding. At the time I wrote about the potential I saw for the use of such technology for assessment, but actually creating these videos was a bit of a mystery from the customer side. Now I've come across a vendor that allows us to create these tools.

I don't post about specific products very often--usually I focus on research and best practices--but I have made occasional exceptions. When I see a product that I think has the potential to be innovative, highly effective, and highly valid, I want to share the wealth.

Such is the case with ClicFlic. In a nutshell, ClicFlic allows customers to create customized interactive web-based videos that can be used for things like situational judgment tests (SJTs). But we've seen that before, right? What I hadn't seen was the branching ability of ClicFlic.

Historically, video-based testing, whether Internet-enabled or not, has presented all candidates with the same content: a situation is presented, and the candidate is given either several pre-determined responses or an open-ended response area. But much like traditional computerized adaptive testing (CAT), ClicFlic allows for the creation of branching videos. In other words, what the user sees in the next segment varies depending on how they respond to the current one.
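For those who think in data structures, a branching video assessment is essentially a scored decision tree: each segment offers a few responses, and each response carries points and a pointer to the next segment. The sketch below is my own generic illustration of that idea, not ClicFlic's actual data model or API.

# Illustrative sketch of a branching, scored video SJT: a generic decision-tree model
# of the concept, not ClicFlic's actual format. Segment and file names are made up.
segments = {
    "intro": {
        "video": "angry_customer_intro.mp4",
        "responses": [
            {"label": "Apologize and ask clarifying questions", "points": 3, "next": "deescalate"},
            {"label": "Explain the policy immediately",          "points": 1, "next": "pushback"},
        ],
    },
    "deescalate": {
        "video": "customer_calms_down.mp4",
        "responses": [
            {"label": "Offer a concrete fix",  "points": 3, "next": None},
            {"label": "Hand off to a manager", "points": 2, "next": None},
        ],
    },
    "pushback": {
        "video": "customer_escalates.mp4",
        "responses": [
            {"label": "Acknowledge frustration and restart", "points": 2, "next": None},
            {"label": "End the interaction",                 "points": 0, "next": None},
        ],
    },
}

def run_sjt(choices):
    """Walk the branching structure for a fixed list of choice indices and return the total score."""
    node, score = "intro", 0
    for choice in choices:
        response = segments[node]["responses"][choice]
        score += response["points"]
        node = response["next"]
        if node is None:
            break
    return score

print(run_sjt([0, 0]))  # best path through this toy scenario: 6 points

The scoring piece is what turns a branching video from a training exercise into an assessment: because each response carries points, you get a comparable score even though candidates may travel different paths.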

Although most of the examples you'll see on their website involve customer service or training applications, the technology is easily adaptable to assessment situations, as you can see from this example.

I had an opportunity to speak with Mike Russiello, President and CEO of ClicFlic (and co-founder of Brainbench), and he let me peek "under the hood"--what I saw looked plug-and-play easy. The scripting branches are easy to generate, videos are simple to upload, and you can quickly assign points to different responses. The videos are Flash-based, and you can easily generate the HTML to place them on a webpage.

Want to learn more? Check out the examples on their website--the demo on the front page will give you a good feel for the technology. Here are some others that will give you an idea of the possibilities. For assessment-specific usages, here you can select several different types of items with some characters you may recognize.

Questions? You can learn more about how the tools are built here. You may also run into Mike at SIOP if you have questions. Finally, they're also planning on an upcoming webcast through tmgov.org.

I hope this sparks some interest for you and maybe even some ideas about where this technology could be taken even further (RJPs anyone?).