Monday, October 25, 2010

Don't ask candidates to judge themselves


Imagine you're buying a car. The salesperson throws out a price on the car you're interested in. And here are the questions you ask to try to determine whether it's a good deal:

- Is this a good price?
- How good of a salesperson are you?
- Compared to other sales you've made, how good is this one?

Think this is silly? Well, it's essentially what many employers are doing when they interview people or otherwise rely on descriptions of experience when screening. They rely far too heavily on self-descriptions when they should be taking a more rigorous approach. Think of questions like these, which should be stricken from your inventory:

"What's something you're particularly good at?"

"How would you describe your skills compared to other people?"

There are two main problems with asking these types of questions in a high-stakes situation like a job interview (or buying a car):

1) People are motivated to inflate their answers, or just plain lie, in these situations. You know that. I know that. But it's surprising how many people forget it.

2) People are bad at accurately describing themselves. We know this from years of research, but if you're interested, check out a recent study published in the Journal of Personality and Social Psychology that compared Big 5 personality ratings from four countries and found that people generally hold more favorable opinions of themselves compared to how others see them.

But it's even worse than it appears. It's not just that people inflate themselves, it's that some of your best candidates deflate themselves. Think about your star performers: if you asked them how good they were in a particular area, what do you think they'd say?


You essentially want to know two things about candidates:
1) What they've done
2) What they're capable of doing


To get at the first, you have several options, including:

1a) Asking them to describe what they've done--the so-called "behavioral interviewing" technique. Research shows that these types of questions generally add a significant amount of validity to the process. But they're not perfect by any means, particularly with people who have poor memories of their own performance. And keep in mind that at that point you're still taking their word for it.

1b) Asking them for examples of what they've done. Best used as a follow-up to a claim, but tricky in any situation where there's even a remote possibility that someone else did it or did most of it (so practically everything outside of the person being videotaped).

1c) Asking others (e.g., co-workers, supervisors) what the candidate's done. Probably the most promising but most difficult data to accurately capture. If the person has any job history at all, they've left a trail of accomplishments and failures, as well as a reliable pattern of responding to situations. This is the promise of reference checks that is so often either squandered ("I don't have time") or stymied ("They just gave me name, title, and employment dates"). Don't use these excuses--investigate.

As for the second, you have several options as well, including:

2a) Asking knowledge-based questions in an interview. For whatever reason these seem to have fallen out of favor, but if there is a body of knowledge that is critical to know prior to employment, ask about it. At the worst you'll weed out those who have absolutely no idea.

2b) Using another type of assessment, such as a performance test/work sample, on-site writing exercise, role play, simulation, or written multiple choice test (to name a few). Properly developed and administered, these will give you a great sense of what people are capable of--just make sure the tests are tied back to true job requirements.

2c) Using the initial period of employment (some call it probation) to throw things at the person and see what they're capable of. It's important not to test their ability to deal with overload (unless that's critical to the job), but to get them involved in a diverse set of projects. Ask for their input. Ask them to do some research. See what they are capable of delivering, even if it's a little rough.


Whatever you do, triangulate on candidate knowledge, skills, and abilities. Use multiple measures to get an accurate picture of what they bring. And consider using the interview as much for a two-way job preview as for an assessment device.

But above all, don't take one person's word for things. Unless you like being sold a lemon.

Wednesday, October 20, 2010

Unvarnished, now Honestly.com, opens up, rates Meg and Carly


Unvarnished, the Web 2.0 answer to reference checks that I've written about before, has changed its name to the more web-friendly Honestly.com.

But that's not all. They also unveiled two other big changes:

1) $1.2m in seed funding to hire more engineers and to do more product development.

2) The site is now available to anyone 21 or older with a Facebook account (the age limit reflects the focus on professional achievements, not inappropriate content). Previously it was invitation-only through existing users.


The other interesting development is that they've waded into politics. You can now see what previous co-workers are saying about California gubernatorial candidate (and former eBay CEO) Meg Whitman as well as senatorial candidate (and former head of HP) Carly Fiorina.

So how are they doing? Last time I checked, Meg had an overall rating of 3 out of 5 stars, with ratings of 7, 6, 6, and 5 (out of 10) for Skill, Relationships, Productivity, and Integrity, respectively. Carly had an overall rating of 2.5 out of 5, with ratings on the dimensions of...all ones. That could be because she only had 3 raters so far compared to Meg's 20. No word yet on whether Jerry Brown (Whitman's opponent) and Barbara Boxer (Fiorina's opponent), both with long careers in public service, will have pages.

Don't (presumably irrelevant) political opinions taint the ratings? It's a strong possibility. But the reviews I've read were surprisingly balanced, which is what the site owners are seeing as a general pattern. Will it stay this way? Only time will tell. I suspect the ratings have much to do with the current user community.

Overall I'm a fan of the name change, although there was something refreshingly complex about Unvarnished. I can't see or hear the word "honestly" without thinking of Austin Powers. Mostly that's a good thing, and I think these changes will be too.

Saturday, October 16, 2010

Q&A with Piers Steel: Part 2

Last time I posted the first part of my Q&A with Piers Steel, co-author of a recent piece in Industrial and Organizational Psychology (that I wrote about here) on synthetic validity and a fascinating proposition to create a system that would greatly benefit both employers and candidates. Read on for the conclusion.

Q4) Describe the system/product--what does it look like? For applicants? Employers? Governments?

A4) How do we do it? Well, that’s what our focal article in Perspectives on Science and Practice was about. Essentially, we break overall performance into the smallest piece people can reliably discern, like people, data, things (note: our ability to do this got some push back from one reviewer – that is, he was arguing we can’t tell the difference if people are good at math but not good at sales and vice-versa – it is a viewpoint that became popular because researchers assumed that “if it ain’t trait, it’s error”). We get a good job analysis tool that assesses every relevant aspect of the job, such as job complexity. We get a good performance battery, naturally including GMA and personality. We then have lots of people in about 300 different jobs take the performance battery, have their performance on every dimension as well as overall assessed to a gold standard (i.e., train those managers!), and have their jobs analyzed with equal care with that job analysis tool. From that, we can create validity coefficients for any new job simply by running the math. It is basically like validity generalization plus a moderator search, where once we know the work, we can figure out the worker. Again, read the article for more details, but this was basically it.

Once built, all employers need to do to get a top-notch selection system is describe their job using the job analysis tool, and then, as fast as electrons fly through a CPU, they get their selection system--essentially instantly. It is several orders of magnitude better in almost every way than what we have now, on almost any criterion.
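[To make the "running the math" step a bit more concrete, here is a minimal sketch of the kind of calculation a synthetic validity database could drive. The component names, weights, and validity figures below are all hypothetical, and the simple weighted composite ignores refinements like predictor intercorrelations--it illustrates the idea, not the authors' actual method.]

```python
# Illustrative sketch of the synthetic validity idea (all numbers hypothetical).
# component_validities: how well each predictor in the battery predicts each
#   job component, estimated once from a large multi-job database.
# job_weights: importance of each component for a *new* job, taken from the
#   job analysis tool.

component_validities = {
    # predictor -> {job component: estimated validity}
    "gma":               {"data": 0.50, "people": 0.20, "things": 0.35},
    "conscientiousness": {"data": 0.20, "people": 0.25, "things": 0.20},
    "extraversion":      {"data": 0.05, "people": 0.30, "things": 0.00},
}

# Hypothetical job analysis for a new job: mostly "people" work, some "data".
job_weights = {"data": 0.3, "people": 0.6, "things": 0.1}

def synthetic_validity(predictor):
    """Weight a predictor's component validities by the job's component weights."""
    validities = component_validities[predictor]
    total_weight = sum(job_weights.values())
    return sum(w * validities[c] for c, w in job_weights.items()) / total_weight

for p in component_validities:
    print(f"{p:>17}: estimated validity for this job = {synthetic_validity(p):.2f}")
```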

Q5) What are the benefits--to candidates, employers, and society?

A5) Everyone has had a friend who struggled through life before finding out what they should have been doing in the first place. Or who changed jobs for a new company only to find they hated it there. Or who never found anything they truly excelled at and just tried to live their life through recreational activities. Everyone has experienced lousy service or botched jobs because the employee wasn’t in a profession they were capable of excelling in. Everyone has heard of talented people who were down and out because no one recognized how good they really were.

Synthetic validity is all about making this happen less. How much less is the real question. If we match people to jobs and jobs to people wonderfully now, then perhaps not at all. But of course, we know that presently it's pretty terrible.

Now synthetic validity won’t be able to predict people’s work future perfectly, but it will do a damn sight better job than what we have now. Also, the best thing about synthetic validity is that it is going to start off good and then get better every year. Because it is a consolidated system, incremental improvements, “technical economies,” are cost effective to pursue and once discovered and developed, they are locked in every time synthetic validity is used.

Right now, we have a system that can only detect the largest and most obvious of predictors (e.g., GMA) because of sample size issues, but can’t pursue other incremental predictors because they aren’t cost effective for just one job site. By the very nature of selection today, we are never going to get much better. As I mentioned, nothing major has changed in 50 years and nothing major will change in the next 50 if we continue with the same methodology. Synthetic validity is a way forward. With synthetic validity, the costs are dispersed across all users, potentially tens of millions, making every inch of progress matter.

So, what will we get? Higher productivity. If synthetic validity results in just a few thousand dollars of extra productivity per employee each year, multiply that by 130 million--the US workforce. Take a second to work out the number; it’s a big number.
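[For a sense of scale, take a hypothetical figure of $2,000 of extra productivity per employee per year: $2,000 x 130 million workers comes to roughly $260 billion annually.]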

Also, people should be happier in their jobs too, creating greater life satisfaction. They will stay in their jobs longer, creating real value and expertise. Similarly, unemployment will go down as people more rapidly find new work appropriate to their skills. In fact, I can only think of one group that won’t like this – the truly bad performer. They are the only group that wouldn’t want better selection.

Q6) Finally, what do you need to move forward?

A6) So far, no one I know is doing this. There are some organizations who think they are doing synthetic validity, though it is really just transportability, and they aren’t interested in pursuing the real thing. Partly, I think it is because the real decision makers don’t know about synthetic validity or don’t understand it. I could do more to communicate synthetic validity, though I have done quite a bit already. I have sent out a few press releases, given a dozen or two newspaper interviews (Google it), contacted a few government officials on both sides of our border, and pursued a dozen or so private organizations. Part of my reason for doing this interview is to try to get the word out. So far, all I have gotten back is a few “interesting” replies but no actual action.

I used to think this lack of pursuit was because synthetic validity was so hard to build, requiring 30,000 people -- but we know a lot more now. In the Perspectives article, McCloy pointed out we could allow ourselves to use subject matter experts to estimate some of the relationships. That won’t be as good as if we gathered the data ourselves, but we could get something running real quick, though later we would upgrade with empirical figures. Consequently, the reason why this isn’t built isn’t because it is too difficult. Also, the payoff would eventually be cornering the worldwide selection and vocational counseling market. I am not sure what that is worth but I imagine you could buy Facebook with change left over for MySpace if you wanted to. The value of it then isn’t the problem either.

I am coming to the conclusion that despite the evidence, to most people it is just my word as an individual. I’m a good scientist, winner of the Killam award for best professor at my entire university, but it still isn’t enough. You need the backing of a professional association, and so far ours [SIOP] hasn’t taken a stand. As a professional organization, we should be promoting this, using the full resources of our association. I admit that I am “a true believer,” but this seems to be one of the bigger breakthroughs in all the social sciences in the last 100 years. Failing the backing of a professional association, we need a groundswell where hundreds of voices repeat the message. I will do my bit, but hopefully I will have a lot of company.

If you think I am overstating the case regarding synthetic validity, show me where I’m wrong. We handled all the technical critiques and issues in the Perspectives article. Right now, you have to make the argument that “human capital” doesn’t matter, that being good or bad at your work doesn’t matter. And if you try to make that case, I don’t think you are the type of person who would be even worth arguing with.


I'd like to thank Dr. Steel for his time and energy. I truly hope this idea sees the light of day. If you are interested in moving this forward, leave a comment and I can put you in touch with him.

Monday, October 11, 2010

Q&A with Piers Steel: Part 1

A few weeks ago I wrote about a research article that I think proposes a revolutionary idea: the creation of a synthetic validity database that would generate ready-made selection systems rivaling or exceeding the results of a traditional criterion validation study.

I had the opportunity to connect with one of the article's authors, Piers Steel, Associate Professor of Human Resources and Organizational Dynamics at the University of Calgary. Piers is passionate about the proposal and believes strongly that the science of selection has reached a ceiling. I wanted to dig deeper and get some details, so I posed some follow-up questions to him. Read on for the first set of questions, and I'll post the rest next time:

Q1) What is the typical state of today's selection system--what do we do well, and what don't we?

A1) Here is a quote from a well-respected selection journal, Personnel Psychology: “Psychological services are being offered for sale in all parts of the United States. Some of these are bona fide services by competent, well-trained people. Others are marketing nothing but glittering generalities having no practical value.... The old Roman saying runs, Caveat emptor--let the buyer beware. This holds for personnel testing devices, especially as regard to personality tests.”

Care to try and date it? It is from the article “The Gullibility of Personnel Managers,” published in 1958. Did you guess a different date? You might have, as the observation is as relevant today as it was then -- nothing fundamental has changed. Just compare that with this more recent 2005 excerpt from HR Magazine, Personality Counts: “Personality has a long, rich tradition in business assessment,” says David Pfenninger, CEO of the Performance Assessment Network Inc. “It’s safe, logical and time-honored. But there has been a proliferation of pseudo tests on the market: Caveat emptor.”

Selection is typically terrible, with good being the exception. The biggest reason is that top-notch selection systems are financially viable only for large companies with a high-volume position. Large companies can justify the $75,000 cost and the months it takes to develop and validate a system, and perhaps, if they are lucky, have the in-house expertise to identify a good product. Most other employers don’t have the skill to differentiate the good from the bad, as both look the same when confronted with nearly identical glossy brochures and slick websites. And then the majority of hires are done with a regular unstructured job interview -- it is the only thing employers have the time and resources to implement. Interviews alone are better than nothing but not much better -- candidates are typically better at deceiving the interviewer than the interviewer is at revealing the candidate.

The system we have right now can’t even be described as broken. That implies it once worked or could be fixed. Though ideally we could do good selection, typically it is next to useless, right up there with graphology, which about a fifth of professional recruiters still use during their selection process. For example, Nick Corcodilos reviews how effective internet job sites are at getting people a position. He asks us to consider, “is it a fraud?”

Q2) What's keeping us from getting better?

A2) Well, there are a lot of things. First, sales and marketing work, even if the product doesn’t. When you have a technical product and an untechnical employer or HR office, you have a lot of room for abuse. I keep hearing calls for more education and that management should care more. You are right, they should care more and know more. People should also care and know more about their retirement funds. Neither is going to change much.

Second, the unstructured job interview has a lot of “truthiness” to it. Every professional selection expert I know includes a job interview component in the process even when it doesn’t do much, as the employer simply won’t accept the results of the selection system without it. There are some cases where people “have the touch” and add value, but this is the exception. Still, everyone thinks they are gifted, discerning, and thorough. This is the classic competition between clinical and statistical prediction, with the evidence massively favoring the superiority of the latter over the former, but people still preferring the former over the latter (here are a few cites to show I’m not lying, since if you are like everyone else, you won’t believe me: Grove, 2005; Kuncel, Klieger, Connelly, & Ones, 2008).

Third, it just costs too much and takes too much time to do it right. Also, most jobs don’t have enough incumbents to do any criterion validation.

Q3) What might the future look like if we used the promise of synthetic validity?

A3) Well, to quote an article John Kammeyer-Mueller and I wrote, our selection systems would be “inexpensive, fast, high-quality, legally defensible, and easily administered.” Furthermore, every year they would noticeably improve, just like computers and cars. A person would have their profile taken and updated whenever they want, with initial assessments done online and more involved ones conducted in assessment centers. Once they have the profile, they would get a list of jobs they would likely be good at, ones they would likely be good at and enjoy, and ones they would likely be good at, enjoy, and that are in demand.

Furthermore, using the magic of person-organization fit, you could inform them of what type of organization they would like to work for. If someone submitted their profile to a job database, relevant job openings would come to them automatically every day, each with the likelihood of them succeeding at it. These jobs could come in their morning email if they wanted. Organizations would also automatically receive appropriate job applicants and a ready-built selection system to confirm that the profile submitted by the applicant was accurate.

Essentially, we would efficiently match people to jobs and jobs to people. I would recommend people update their profile as they get older or go through a major life change to improve the accuracy of the system, but even initially it would be far more accurate than anything available today -- a true game changer.
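[For readers who like to see the mechanics, here is a rough sketch of what the matching step might look like, assuming each job in the database already carries synthetically derived weights over the same predictor battery. All names, weights, and scores below are invented for illustration; this is not the authors' implementation.]

```python
# Hypothetical sketch of the matching step: score a candidate's profile
# against every job's (synthetically derived) predictor weights and return
# jobs ranked by predicted performance. All names and numbers are invented.

candidate_profile = {"gma": 0.8, "conscientiousness": 0.6, "extraversion": 0.2}

job_database = {
    # job -> predictor weights from the synthetic validity database
    "data analyst":   {"gma": 0.6, "conscientiousness": 0.3, "extraversion": 0.1},
    "sales rep":      {"gma": 0.2, "conscientiousness": 0.3, "extraversion": 0.5},
    "lab technician": {"gma": 0.4, "conscientiousness": 0.5, "extraversion": 0.1},
}

def predicted_performance(profile, weights):
    """Weighted sum of the candidate's (standardized) predictor scores."""
    return sum(w * profile.get(p, 0.0) for p, w in weights.items())

ranked = sorted(
    job_database,
    key=lambda job: predicted_performance(candidate_profile, job_database[job]),
    reverse=True,
)

for job in ranked:
    fit = predicted_performance(candidate_profile, job_database[job])
    print(f"{job:>15}: predicted fit = {fit:.2f}")
```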

Follow-up: Some might see a contradiction here. You cite an article that bashes internet-based job matching, yet this is what you're suggesting. Would your system be more effective or simply supplement traditional recruiting methods (e.g., referrals)?

A: Yup, we can do better. The internet is just a delivery mechanism and no matter how high-speed and video enabled, it is just delivering the same crap. This would provide any attempt to match people to jobs or jobs to people with the highest possible predictiveness.


Next time: Q&A Part 2

References:
Grove, W. M. (2005). Clinical versus statistical prediction: The contribution of Paul E. Meehl. Journal of Clinical Psychology, 61(10), 1233-1243. doi: 10.1002/jclp.20179

Kuncel, N. R., Klieger, D., Connelly, B., & Ones, D. S. (2008, April). Mechanical versus clinical data combination in I/O psychology. In I. H. Kwaske (Chair), Individual Assessment: Does the research support the practice? Symposium conducted at the annual meeting of the Society for Industrial and Organizational Psychology, San Francisco, CA.

Stagner, R. (1958). The Gullibility of Personnel Managers. Personnel Psychology, 11(3), 347-352.

Sunday, October 03, 2010

How to hire an attorney


What's the best way for an organization to hire an attorney with little job experience? What should they look for? LSAT scores? Law school grades? Interviewing ability? A multi-year project that issued its final report in 2008 gives us some guidance. And while the study focused on ways law schools should select among applicants, it's also instructive for the hiring process. (By the way, individuals looking for personal representation may find the following interesting as well.)

Recall that the formalization of the "accomplishment record" approach occurred in 1984 with a publication by Leaetta Hough. Using a sample of attorneys, she showed that scores from this behavioral consistency technique correlated with job performance but not with aptitude tests or grades, and exhibited smaller ethnic and gender differences.

But in my (limited) experience, many hiring processes for attorneys have consisted of a resume/application, writing sample, and interview. Is that the best way to predict how well someone will perform on the job?

Assessment research strongly points to cognitive ability tests being strong predictors of performance for cognitively complex jobs. This is at least part of the logic behind hurdles like the Law School Admission Test (LSAT), a very cognitively loaded assessment. When you're at the point of hire, however, LSAT scores are relatively pointless. Applicants have--at the very least--been through law school, and may have previous experience (such as an internship) you can use to determine their qualifications.

So what we appear to have at the point of hire is a mish-mash of assessment tools: heavy reliance on unproven filters (e.g., resume review), followed by a measure of questionable value (the writing sample) and an interview that in many cases isn't conducted in a structured way that would maximize validity.

So what should we do to improve the selection of attorneys (besides using better interviews)? Some research done by a psychology professor and law school dean at UC Berkeley may offer some answers.

The investigators took a multi-phase approach to the study. The first phase resulted in 26 factors of lawyer effectiveness--things like analysis and reasoning, writing, and integrity/honesty. In the second phase they identified several off-the-shelf assessments they wanted to investigate for usefulness and developed new assessments--a situational judgment test (SJT) and a biodata measure (BIO)--along with other measures, including optimism and a measure of emotional intelligence (facial recognition). In the final phase, they administered the assessments online to over 1,000 current and former law students and looked at the relationship between predictors and job performance (N for that part was about 700, using self, peer, and supervisor ratings).

Okay, so enough with the preamble--what did they find?

1) LSAT scores and undergraduate GPA (UGPA) predicted only a few of the 26 performance factors, mainly ones that overlapped with LSAT factors such as analysis and reasoning, and rarely higher than r=.1. Results using first-year law school GPA (1L GPA) were similar.

2) The scores from the BIO, SJT, and several scales of the Hogan Personality Inventory predicted many more dimensions of job performance compared to LSAT scores, UGPA, and 1L GPA.

3) The correlations between the BIO and SJT and job performance were substantially higher--in the .2-.3 range--than those for the LSAT, UGPA, and 1L GPA (a rough sense of what that difference means in practice follows below). The BIO measure was particularly effective, predicting a large number of performance dimensions across multiple rating sources.

4) In general, there were no race or gender subgroup differences on the new predictors.
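To get a feel for what moving from roughly r = .1 to r = .2-.3 buys you in practice, one common translation is the binomial effect size display: it converts a correlation into the share of above-median performers you would get by selecting the top half of candidates on the predictor. The little sketch below plugs in approximate values from the findings above; the code is mine, purely for illustration.

```python
# Translate a validity coefficient into a rough "hit rate" using the
# binomial effect size display (BESD): if you hire the top half of
# candidates on the predictor, what share turn out to be above-median
# performers? (0.50 would be chance.)

def besd_hit_rate(r):
    """Share of above-median performers among those selected on the predictor."""
    return 0.5 + r / 2.0

for label, r in [("LSAT / UGPA (r ~ .10)", 0.10), ("BIO / SJT (r ~ .25)", 0.25)]:
    print(f"{label}: about {besd_hit_rate(r):.1%} of selected hires above the median")
```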

These results strongly suggest that when it comes to hiring attorneys with limited work experience, organizations would be well advised to use professionally developed assessments, such as biodata measures, situational judgment tests, and personality inventories, rather than rely exclusively on "quick and dirty" measures such as grades and LSAT scores. Yet more evidence for the rule that the more care spent developing a measure, the better the results.


On a final note, several years back I did a small exploratory study looking at the correlation between law school quality and job performance. I found two small to moderate results: law school quality was positively correlated with job knowledge, but negatively correlated with "relationships with people."

References:
Here is the project homepage.
You can see an executive summary of the final report here.
A listing of the reports and biographies is here.
The final report is here.