Sunday, October 03, 2010

How to hire an attorney


What's the best way for an organization to hire an attorney with little job experience? What should they look for? LSAT scores? Law school grades? Interviewing ability? A multi-year project that issued its final report in 2008 gives us some guidance. And while the study focused on ways law schools should select among applicants, it's also instructive for the hiring process. (By the way, individuals looking for personal representation may find the following interesting as well.)

Recall that the "accomplishment record" approach was formalized in 1984 in a publication by Leaetta Hough. Using a sample of attorneys, she showed that scores from this behavioral consistency technique correlated with job performance but not with aptitude tests or grades, and exhibited smaller ethnic and gender differences.

But in my (limited) experience, many hiring processes for attorneys have consisted of a resume/application, writing sample, and interview. Is that the best way to predict how well someone will perform on the job?

Assessment research points strongly to cognitive ability tests as top predictors of performance for cognitively complex jobs. This is at least part of the logic behind hurdles like the Law School Admission Test (LSAT), a very cognitively loaded assessment. At the point of hire, however, LSAT scores are relatively pointless. Applicants have--at the very least--been through law school, and may have previous experience (such as an internship) you can use to determine their qualifications.

So what we appear to have at the point of hire is a mish-mash of assessment tools: heavy reliance on unproven filters (e.g., resume review), followed by a measure of questionable value (the writing sample) and an interview that, in many cases, isn't conducted in the structured way that would maximize validity.

So what should we do to improve the selection of attorneys (besides using better interviews)? Some research done by a psychology professor and law school dean at UC Berkeley may offer some answers.

The investigators took a multi-phase approach. The first phase resulted in 26 factors of lawyer effectiveness--things like analysis and reasoning, writing, and integrity/honesty. In the second phase they identified several off-the-shelf assessments they wanted to investigate, and they developed new ones--a situational judgment test (SJT) and a biodata measure (BIO)--in addition to other measures such as optimism and a measure of emotional intelligence (facial recognition). In the final phase, they administered the assessments online to over 1,000 current and former law students and looked at the relationship between the predictors and job performance (an N of about 700 for that part, using self, peer, and supervisor ratings).
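
(If you're not used to thinking in validity coefficients: the "relationship between predictors and job performance" boils down to a correlation between an assessment score and a criterion rating. The Python sketch below is just my own toy illustration of that arithmetic--the variable names and scores are made up and have nothing to do with the study's data or analysis.)

# Toy illustration of a criterion-related validity check: correlate a
# predictor score (say, a biodata scale) with a supervisor rating.
# All numbers below are hypothetical.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical predictor scores and supervisor ratings for ten associates.
biodata_scores = [52, 61, 47, 70, 58, 65, 49, 73, 55, 68]
supervisor_ratings = [3.5, 3.1, 3.4, 4.0, 2.9, 3.6, 3.8, 3.9, 3.2, 3.3]

print(f"validity coefficient r = {pearson_r(biodata_scores, supervisor_ratings):.2f}")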

Okay, so enough with the preamble--what did they find?

1) LSAT scores and undergraduate GPA (UGPA) predicted only a few of the 26 performance factors, mainly ones that overlap with what the LSAT measures (such as analysis and reasoning), and the correlations were rarely higher than r = .1. Results using first-year law school GPA (1L GPA) were similar.

2) Scores from the BIO, the SJT, and several scales of the Hogan Personality Inventory predicted many more dimensions of job performance than did LSAT scores, UGPA, and 1L GPA.

3) The correlations between the BIO and SJT and job performance were substantially higher than those for LSAT, UGPA, and 1L GPA--in the .2-.3 range. The BIO measure was particularly effective, predicting a large number of performance dimensions across multiple rating sources.

4) In general, there were no race or gender subgroup differences on the new predictors (see the sketch below for how differences like these are typically quantified).
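
("No subgroup differences" is usually established by looking at the standardized mean difference between groups' scores--Cohen's d, where values near zero mean the groups score about the same. Another toy Python sketch of that calculation follows; the groups and numbers are invented for illustration, not taken from the study.)

# Toy illustration of a subgroup-difference check: Cohen's d between two
# groups' scores on a predictor. All numbers are hypothetical.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

group_a_scores = [55, 62, 48, 66, 59, 71, 53, 64]
group_b_scores = [57, 60, 50, 68, 56, 69, 54, 63]

print(f"d = {cohens_d(group_a_scores, group_b_scores):.2f}")  # near zero = little subgroup difference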

These results strongly suggest that when it comes to hiring attorneys with limited work experience, organizations would be well advised to use professionally developed assessments--such as biodata measures, situational judgment tests, and personality inventories--rather than rely exclusively on "quick and dirty" measures such as grades and LSAT scores. It's yet more evidence for the rule that the more time spent developing a measure, the better the results.


On a final note, several years back I did a small exploratory study looking at the correlation between law school quality and job performance. I found two small-to-moderate relationships: law school quality was positively correlated with job knowledge, but negatively correlated with "relationships with people."

References:
Here is the project homepage.
You can see an executive summary of the final report here.
A listing of the reports and biographies is here.
The final report is here.

4 comments:

Anonymous said...

It appears to me that this study actually shows which traits result in a good performance evaluation rather than in actual competence or job performance (which most partners or managers would have no idea how to evaluate--beyond hours billed). The confident/agreeable/yes-man/incompetent blowhard receives a workplace "A" while the wallflower/cynic/competent drone receives a workplace "C" or worse.

The above view of the study is borne out by my own (admittedly anecdotal) experience. I spent many of my early years of practice attempting to straighten out the problems created by a number of "legal geniuses." The frustration of both client and management at my inability to quickly rectify the Gordian knot of problems, created by the geniuses' collective years of horrible drafting and inattention, never tarnished the geniuses' reputations--only mine.

BryanB said...

You raise an excellent point about how job performance is (mis)measured. It is interesting to note that they used peer evaluations as well as supervisor ratings, though of course those may be subject to their own biases or inaccuracies.

Anonymous said...

In your small exploratory study, how did you measure job knowledge and "relationships with people"?

and,

Have you seen any technology that has improved on writing sample testing?

BryanB said...

re: my study: job knowledge and relationships with people were measured simply as part of the standard performance evaluation, so it was a 5-point scale. Not the best, hence the exploratory nature.

As far as technology improving on the writing sample goes, my concern is not with the technology but with the concept: if you're really interested in how someone writes, have them write something on site rather than submit something that they may or may not have actually written themselves.