Thursday, July 29, 2010

Human Performance, v23 (3)

Three interesting articles in the most recent issue of Human Performance:

Kanar et al. studied the impact of organizational reputation on applicant attraction with a sample of college-level job seekers and found that negative information had a greater impact than positive information on attraction to the organization as well as on recall, and this effect persisted for one week. Implication? Recognize that negative information about your organization may be processed differently than positive information, and thus is not easily balanced out. It also reinforces the value of keeping an eye out for negative remarks and addressing them.

Second, a fascinating study by Kell et al. on how Big 5 personality factors differentially predict various aspects of performance within a single job. Specifically, the authors found (as judged by raters) that emotionally stable and conscientious actions were more effective in task situations, while open and agreeable actions were more effective in interpersonal situations. Implications? Not only do different assessment types predict different general aspects of performance (e.g., personality measures are usually better at predicting contextual performance), but there also appear to be prediction differences within the same job, such that there is value in considering each facet of the Big 5 and its relationship to different aspects of performance.

Lastly, if you're looking for a way to predict performance in jobs that require "multi-tasking", you'll be interested in Poposki & Oswald's description of the development of the Multitasking Preference Inventory. In addition to the development process, the authors describe a study of the measure's convergent and discriminant validity as well as its initial criterion-related validity. I found it interesting that this is not an ability test but rather a preference inventory, which makes sense given that the brain doesn't actually focus on multiple things at once very well!

Saturday, July 24, 2010

Learning from orchestra hiring

Lessons come from all sorts of places. Recently an article in the New York Times described the surprising number of vacancies nationwide in major symphony orchestras, including the N.Y. Philharmonic, which will have an unprecedented 12 vacancies next season (12% of its "workforce").

Here are some things that stood out from the article:

1) The customers (audience) may not notice. Top seats do not go vacant because assistant principals step up or substitutes are hired.

2) Economic problems have impacted an incredibly wide breadth of industries. Some positions are left vacant because it is cheaper to do so.

3) Leaders sometimes defer the decision. Some conductors have passed on filling positions to give their successor an opportunity to make the pick.

4) The selection process is resource intensive and difficult to coordinate. This is primarily due to the difficulty in getting the conductor and a committee of players (let's call them SMEs) together to do the judging.

5) Sometimes no decision is made. Even after months of advertising and auditioning, a suitable person sometimes isn't found. Kudos to those organizations with the wisdom to pass, assuming a valid selection process.

6) Which raises the next point--the selection process is suspect at best. Why? Well, aside from the historical gender discrimination, the process used to select musicians relies upon a single hour-long work sample test. What if the person is having a bad day? Perhaps more importantly, the most important criterion--how well the symphony performs--depends upon all the players working together.

7) On a related point, applicants aren't being judged solely on technical proficiency. Committee members also judge them on fit, or as one described it, they look for "thinking, thoughtful musicians who are the whole package". What does that mean? It's hard to say, since, as stated in the article, "orchestra officials and musicians are loath to discuss the auditioning process in detail." Here's a question: why? Afraid that the "correct answers" will get out there--or afraid of the critical eye that may be focused on them?

Hat tip.

Friday, July 16, 2010

July 2010 J.A.P.


A new round of journals is out, so let's start with the July issue of the Journal of Applied Psychology.

First up, Schleicher et al. looked at whether there were demographic differences in how much candidate scores improved upon retesting. Turns out there were several. Whites showed larger improvements than Blacks or Hispanics on several assessments, particularly on written tests. Women and applicants under 40 showed greater improvements than men and applicants 40+. Implications? In some situations allowing applicants to retest may exacerbate adverse impact.
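If you've never actually run the numbers, here's a minimal sketch of the four-fifths (80%) rule to show how unequal retest gains could move the needle--the figures and function names are purely hypothetical, not from the study.

```python
# Hypothetical illustration of the four-fifths (80%) rule; numbers are made up
# purely to show how group differences in retest gains could shift the ratio.
def selection_rate(passed: int, applied: int) -> float:
    return passed / applied

def adverse_impact_ratio(minority_rate: float, majority_rate: float) -> float:
    """Ratio of the minority selection rate to the majority rate; < 0.80 flags potential adverse impact."""
    return minority_rate / majority_rate

# Before retesting, pass rates are similar across groups (hypothetical numbers)...
before = adverse_impact_ratio(selection_rate(40, 100), selection_rate(45, 100))
# ...after retesting, larger score gains in one group raise its pass rate more.
after = adverse_impact_ratio(selection_rate(42, 100), selection_rate(55, 100))
print(f"AI ratio before retest: {before:.2f}, after retest: {after:.2f}")  # 0.89 vs. 0.76
```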

Next, an important piece by Aguinis et al. (that you can read here) about test bias. This follows on the heels of the June IOP articles on the same topic and seems to represent a resurgence of interest in an area that had seemed dormant. In this article the authors report the results of a very large Monte Carlo simulation (billions and billions of data points) in which they found that bias measured using slope-based techniques is likely to go undetected, while intercept-based bias favoring minority group members is likely to be "found" when in fact it does not exist. This study, combined with the points made in the IOP articles, suggests that some of the "established" conclusions regarding test bias may not be as solid as we thought.
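If you're curious what slope- and intercept-based bias testing actually involves, here's a minimal sketch of the standard Cleary-style moderated regression--my own illustration with made-up column names, not the authors' Monte Carlo code.

```python
# Minimal sketch of the usual slope/intercept (Cleary-style) bias test, not the
# authors' simulation. Assumes a pandas DataFrame with hypothetical columns:
# 'perf' (criterion), 'score' (predictor), and 'group' (0/1 group membership).
import pandas as pd
import statsmodels.formula.api as smf

def test_predictive_bias(df: pd.DataFrame):
    # Common regression line ignoring group membership.
    common = smf.ols("perf ~ score", data=df).fit()
    # Adding the group main effect tests intercept differences -- the kind of
    # bias Aguinis et al. argue is often "found" when it isn't really there.
    intercepts = smf.ols("perf ~ score + group", data=df).fit()
    # Adding the score-by-group interaction tests slope differences -- the kind
    # they argue typically goes undetected because power is so low.
    slopes = smf.ols("perf ~ score * group", data=df).fit()
    return common, intercepts, slopes
```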

Third, for those of you interested in differential functioning (of items or scales), you should check out the piece by Adam Meade where he presents a taxonomy of potential differential functioning effect sizes and also describes a software program created for computing the indices and graphing differential functioning.

Next, a piece by Wang et al. on locus of control. Importantly, they found that when locus of control (LOC) is specific to work-related issues, it correlates more strongly with work-related criteria such as job satisfaction and commitment. Similarly, when LOC is defined more broadly to include non-work issues, it correlates more strongly with non-work criteria such as life satisfaction. Implications? Much like research on contextualized personality items, specifying a work-related context would seem to increase the predictive power of LOC measures.

Last but not least, an important article on counterproductive work behavior (CWB) and organizational citizenship behavior (OCB) by Spector et al. CWB and OCB seem like they should be opposites of each other--one demonstrated by disengaged, unhappy workers, the other by engaged, happy ones--right? Not so fast. The authors report the results of an experiment suggesting that the two concepts are unrelated and do not necessarily have opposite relationships with other variables. The authors also recommend that when measuring these behaviors, ratings be based on frequency of performance rather than level of agreement.

Sunday, July 11, 2010

Unvarnished Unwrapped


A few months ago I mentioned a website called Unvarnished, which was getting a lot of mixed press. The basic concept is (as they describe it) Yelp mixed with LinkedIn. You provide anonymous reviews of people you've worked with. Call it a social resume, call it a web-based reference, I call it fascinating. And something that anyone interested in recruitment and assessment should pay attention to.

I had a chance recently to test out the site, and then I met with the co-founder, Peter Kazanjy. I'm still not sure which direction this will go, but I think you'll agree after reading what follows that the concept merits our attention.

The Test Drive
First, the test drive, which started with an invitation through Facebook from Peter. After spending some time on the site, I'm more optimistic about Unvarnished in some respects, more cautious in others.

Why optimism? The site knocks it out of the park on two counts: it's simple and fast (at least someone learns from Google). Simple and fast is good, because one of the biggest challenges will be building a large community, and making reviews easy helps immensely.

Even better, the ratings are relevant. This isn't a popularity contest; it's an honest attempt to provide a useful description of someone's performance. While it's unlikely the rating scales were developed after reading a Personnel Psychology meta-analysis, I was pleased to discover that they pass the smell test and some even have benchmarks.

The ratings consist of an overall performance rating (5-point, anchors described), what job you are rating the person in, four 10-point scales that are described but not anchored (skill, relationships, productivity, integrity), and an open-ended strengths/areas for improvement box. That's it. It's like a super-basic reference check form that takes all of about a minute. You can see what this process looks like below.
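In data terms, the form boils down to something like the record below (a rough sketch; the field names are my own shorthand, not Unvarnished's actual schema).

```python
# Rough sketch of the review form as a data structure; field names are my
# invention, not Unvarnished's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    job_title: str                  # the job you're rating the person in
    overall: int                    # 5-point overall performance rating (anchored)
    skill: int                      # 10-point scale, described but not anchored
    relationships: int              # 10-point scale
    productivity: int               # 10-point scale
    integrity: int                  # 10-point scale
    comments: Optional[str] = None  # open-ended strengths / areas for improvement
```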

Your Unvarnished homepage is fed by your network and generates suggestions for your review (PYMK--People You Might Know). You can review people at any time (even people who haven't claimed a profile) or request reviews using your Facebook contacts. The open comments section is limited to 500 characters to encourage people to review and move on.

The site was developed to be heavily reliant on algorithms. A reviewer's reputation is based in part on the pattern of reviews they have generated as well as how their reviews have been rated. Recommendations for reviews are backed by similar math. A smaller (but important) feature is a profanity filter, which may allay some concerns regarding people looking to settle a score.
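I have no inside knowledge of the actual math, but a reviewer-reputation weight might look something like this minimal sketch--pure guesswork on my part:

```python
# Purely hypothetical sketch of a reviewer-reputation weight; the real
# algorithm isn't public. The idea: reviewers whose reviews are rated as
# helpful, and who don't give everyone identical ratings, count for more.
from statistics import pstdev

def reviewer_weight(helpful_votes: int, total_votes: int, ratings_given: list[int]) -> float:
    helpfulness = helpful_votes / total_votes if total_votes else 0.5
    # A reviewer who rates everyone identically carries less information.
    spread = min(pstdev(ratings_given) / 2.0, 1.0) if len(ratings_given) > 1 else 0.5
    return 0.5 * helpfulness + 0.5 * spread
```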

Speaking of Facebook, one of my major concerns is that the site relies HEAVILY on Facebook (not unlike Quora). At least in its current iteration, your identity is verified through having a Facebook account, you invite people through your Facebook contacts, and invitations are posted to Facebook. This is both a potentially good thing (e.g., it cuts down on spammers) and a bad thing (e.g., not everyone wants to use their Facebook profile in the service of another website). It also raises the question of what would happen if Facebook went belly up. Read on to see what Peter had to say about this.

The interview

I happened to be in San Francisco (interestingly enough, to go to the Exploratorium) and had an opportunity to talk with Peter about the product and his company. After talking to him for about 10 minutes, one thing became abundantly clear: this guy has thought a great deal about online reputation management. I'd love to get him and Bob Hogan together.

Background
The impetus for the site came from a couple of directions. One was a previous job where he was continually surprised that competitors (e.g., a certain company we'll call Bicrosoft) would recruit away the less talented employees. Why? His theory is that they lacked important information--namely, the reputation people had within the organization. (One could argue that they should have done better reference checks, but we all know how easy/productive those can be.)

On the other end of the spectrum, he saw excellence not being properly recognized and questioned whether upper management really knew what their talent looked like (a point not lost, btw, on purveyors of performance management software with their 9-box grids and helicopter views).

His aha moment (or one of them) came when he realized that if social ratings can work for things like books and software, why couldn't they work for people? If he could develop a site that aggregated high-quality information about people's performance, talent decisions would be both higher quality and fairer (it's hard to argue with that goal).

Peter also believes there is an important employer segment not being served by existing background/reference checking processes. Employers that hire hourly workers rely largely on criminal/credit checks. Those hiring for executive-level positions often rely on high-cost search firms. But for employers hiring large numbers of employees in the middle, there isn't really a good option.

Such a site would have three primary users: those being reviewed, those providing reviews, and those using the information (e.g., employers checking out candidates and vice-versa). And quite commonly a single individual could be in all three roles at various times. The site would have to accommodate all three perspectives.

Concerns/criticism
So what about my concerns? When it comes to the reliance on Facebook, Peter pointed out that it's a good bet Facebook will be around for a while, but the site is not being built to rely solely upon it. It has--or will have--the ability to use contact information from sites like Gmail and LinkedIn. I still have a concern about forcing people through Facebook, so it will be interesting to see whether this impacts the ability to generate reviews.

Concern from others has focused on the potential for abuse. But Peter made several important points. First, this isn't like an online newspaper comment space--those are anonymous, with no repercussions for inaccuracy. On Unvarnished your reputation would suffer and your reviews would carry less weight (assuming your reviews are themselves reviewed). Second, this information is largely out there already (or potentially, at least) on sites like Twitter--but it's not in a central location that can be easily managed, and it's not objective. By guarding invites and relying on anonymity, the goal is to make legitimate reviewers feel safe in leaving honest feedback (whether it's an A+ or a D-) without worrying about the interpersonal implications.

What about stickiness--why would someone want to keep coming back? They're working on several (re)engagement initiatives. One idea is to provide people with periodic updates letting them know when people in their network have been updated (let's hope it doesn't turn into those updates from LinkedIn that are so easily deleted). They're also working on the ability to follow an individual based on your "gestures" (e.g., you've reviewed them).

Future directions
The team is considering adding several features. One is the ability for reviewers to better define their relationship with the person being reviewed--e.g., describing not only the organization they worked together in but also the nature of that relationship. This would factor into the behind-the-scenes algorithms but would not be published.

They're also discussing allowing people to identify themselves, but this raises all of the associated issues such as accuracy, thoroughness, and feeling a need for reciprocity.

Another is allowing access to premium features (at a cost), such as making trusted reviews more obvious--something that "super users" such as recruiters would likely be willing to pay for.

In terms of opening up access, they're in no big rush to expand access beyond Facebook invites. While this may hinder their growth, it helps keep the data quality high, and they're willing (smartly, I think) to make this trade off.

Conclusion
Currently the company is focused on acquiring the talent it needs to succeed (check out the way they recently advertised for new engineers). One of Peter's primary concerns is that the community evolve in the right direction. Right now it's somewhat of a "love fest" with lots of positive reviews. The site will gain in usefulness when reviews are a combination of pros and cons.

Given what we know about performance ratings, it will be interesting to see if the existing invitation and rating process is sufficient to generate that depth. It also remains to be seen whether some type of incentive will need to be given to generate reviewers.

My overriding concern continues to be the size and diversity of the user group (right now primarily filled with Silicon Valley IT folk). Accuracy, something other writers have been obsessed with, is less of a concern for me after kicking the tires and talking with Peter. But we'll see how the promise and concerns ebb and flow with the user base as well as changes to the service.

At the very least, I hope you'll agree with me that this website is a fascinating development and one we should watch. After all, you might want to use Unvarnished to provide feedback on someone. Or research a potential boss. Or research an applicant. Or...you may be the applicant.

Saturday, July 03, 2010

Three on EI

A couple of posts ago I wrote about the most recent issue of IOP and the focal article on emotional intelligence (EI).

Now there's another meta-analysis out by O'Boyle et al. in JOB, which may lend some support to fans of EI. Here are the main findings:

1) Corrected correlations of between .24 and .30 with job performance.

2) The three "streams" of measures (ability, self- or peer-report, and "mixed models") correlated differently with cognitive ability and personality measures.

3) Perhaps most interestingly, self-/peer-report and "mixed models" had the most incremental validity beyond cognitive ability and personality.

More evidence that these measures--whatever they're measuring--are correlated with job performance, and they seem to be picking up something new. But do we know what's being measured? And what aspect of job performance is being predicted (contextual rather than task performance seems likely)? And what about other considerations, such as face validity/applicant reaction?
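For those wondering what "incremental validity" means operationally, here's a minimal sketch of the usual hierarchical regression approach--assumed column names, and not the meta-analytic path models O'Boyle et al. actually used.

```python
# Minimal sketch of incremental validity as a change in R-squared; column names
# are assumed, and this is ordinary regression rather than the meta-analytic
# approach used by O'Boyle et al.
import pandas as pd
import statsmodels.formula.api as smf

def incremental_r2(df: pd.DataFrame) -> float:
    base = smf.ols("performance ~ cognitive_ability + conscientiousness", data=df).fit()
    full = smf.ols("performance ~ cognitive_ability + conscientiousness + ei", data=df).fit()
    return full.rsquared - base.rsquared  # delta R-squared attributable to EI
```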

For those of you into EI, here are a couple articles that you may have missed:

Mayer et al.'s overview in the 2008 issue of Annual Review of Psychology (PDF; full text)

Cote and Miners' 2006 piece in Administrative Science Quarterly (PDF; full text).