Friday, November 28, 2014

Why leadership in the public sector is harder to find--but more important

Occasionally I post about things that are related to recruitment and assessment, but not focused exclusively on them.  This is one of those times.

I have the following quote from Valve Software's New Employee Handbook (a fascinating document) posted on my office door:

"Hiring well is the most important thing in the universe.
Nothing else comes close. It’s more important than breathing.
So when you’re working on hiring—participating in
an interview loop or innovating in the general area of
recruiting—everything else you could be doing is stupid
and should be ignored!"

The older I get, the more I wonder whether I should cross out "hiring" and write in "leadership".  I just can't bring myself to do it because they're both so darn important.

But this post will be about leadership.  Specifically, leadership in the public (i.e., government) sector.  More specifically, lack of leadership and what to do about it.  I don't pretend that leaders in the private sector are uniformly outstanding, but the public sector is what I'm most familiar with.

First things first: in some important ways, leadership in the public sector (PS) is different from the private sector.  Not night-and-day I grant you, but there are some relatively unique boundary conditions that apply, namely:

- Not only are PS leaders bound by normal organizational policies and procedures, they labor under an additional layer of laws and rules, whether federal, state, or local.  Unlike policies and procedures, they cannot be easily changed--in fact in many cases this requires moving heaven and earth.  As one (very important) example, typically there are laws/rules about how you can hire someone.  Many of these laws and rules were created 50+ years ago in reaction to spoils systems and haven't been seriously evaluated since.

- Many PS employees have civil service protections.  This isn't a bad thing, but it means moving employees who are bad fits (either over to a better-fitting role or out of the organization) is difficult.  This greatly inhibits your talent mobility strategy.

- In the PS, leadership is often treated as an afterthought, rather than the linchpin upon which organizational success relies.  The assumption seems to be that the organizational systems and processes are so strong that it almost doesn't matter who's in charge.  This means things like leadership development and training are half-hearted.

These conditions combine with several other factors to result in true leadership being relatively rare in the PS:

- Failure is invisible.  There is very little measurement of leadership success and very little transparency and accountability, absent a media storm.

- There are fewer people in management positions who have the important leadership competencies--things like listening ability, strategic planning ability, and emotional intelligence.  Instead, managers are usually chosen based on technical ability, without the benefit of rigorous assessment results.

- There is a lack of understanding of leadership.  This stems from the lack of attention paid to it as a serious discipline; without operational definitions of leadership, there is no measurement and no accountability.

- An unwillingness to treat leadership seriously.  For whatever reasons--politics, lack of motivation, entrenched cultures--leadership is relegated to second-class status when it comes to analyzing department/agency success.  Focus tends to rest on line-level employees, technology, and unions--and only on top-level leaders when there is a phenomenally bad outcome.

So why is leadership more important in the PS than the private sector?

- Governments regulate many aspects of our lives.  They're not making consumer products.  Leaders in PS organizations have purview over things like public safety, the environment, education, housing, and taxes--things that touch your life every day.

- There is less accountability, less transparency.  PS leaders often do not report to a board.  They don't have to produce annual reports that detail their successes and failures.  What they do is often mysterious, poorly defined, and rarely sees the light of day.

- PS leaders work for you.  Elected or not, their salary typically comes from taxpayers.  They ultimately report to the citizens.  This means you should care about what they're doing, and whether they are worthy stewards of your investment.

So what can be done?  As with "how do we hire well?", the answers are known.  They're just not practiced very well:

1.  Publicly acknowledge the scope of the problem.  Like frogs in a pot, we have slowly come to accept the current state of affairs as normal.  It's time to stop pretending that all PS managers are leaders.  They're not.  And we must look in the mirror and acknowledge that we are likely part of the problem.

2.  Acknowledge the urgency to improve.  Stop pretending that leadership is a secondary concern.  Sub-par leadership has a negative impact on our lives every day.  Improving the quality of that leadership is one of the most critical things we can do as a society.

3.  Publicly commit to change, and actually follow through.  Specifically describe what you will change, and when, and provide regular status updates.

4.  Define leadership in measurable terms and behaviors.  Here's just a sample list of what real leaders do (and not a particularly good one):

= continuously improve operations
= champion and reward innovation
= hold their people accountable for meeting SMART goals
= continually seek feedback and signs of their own success and failure
= create and sustain a culture that attracts high performers and dissuades poor fits
= make hiring and promoting the most qualified people THE most important part of their job

5.  Hire and promote those with leadership competencies, not the best technicians.  While knowledge of the work being performed is important, it is far from the most important competency.

6.  Make the topic of leadership a core activity for every management team.  Eliminate "information sharing" meetings and replace them with discussions on how to be better leaders.

7.  Set clear goals for leaders up front, and hold them accountable.  What does this mean?

= consequences for hiring poor fits
= consequences for poor morale on their team
= consequences for not setting and meeting SMART goals
= recognition for doing all of the above well

8.  Measure leadership success and make the results transparent.  Develop plans to address gaps and follow through.

9.  Instill a culture of boldness and innovation.  Banish fear, often born of laboring under layers of red tape.  Encourage risk-taking, and learn from mistakes rather than punishing them.

10.  Relentlessly seek out and banish inefficiencies, especially related to the use of time.  Critically evaluate how email and meetings are used; establish rules regarding their use.

11.  Stop pretending that all of this applies only to first-line supervisors.  If anything, these expectations matter more the higher you go in the organizational chart.

12.  When it comes to recruiting, stop focusing on low relative salaries, and capitalize on the enormous benefit of the PS as an employer--namely the mission of public service.

13.  View leadership as a competency, not a position.  Leadership behaviors can be found everywhere in an organization--they should be recognized and promoted.


My intent here is not to be a downer, but to emphasize how much more focus needs to be placed on leadership in the public sector.  The current state of affairs is unacceptable.  And for those of us familiar with research and best practices in organizational behavior, it's painful.

So I apologize for the decidedly un-Thanksgivingy nature of this post.  But I am thankful for free speech and open minds.  Thanks for reading.

Monday, October 27, 2014

Just kidding...more research updates!

Seriously?  Just two days ago I did my research update, ending with a note that the December 2014 issue of the International Journal of Selection and Assessment should be out soon.

Guess what?  It came out today.

So that means--you guessed it--another research update!  :)

- First, a test of Spearman's hypothesis, which states that the magnitude of White-Black mean differences on tests of cognitive ability varies with the test's g loading.  Using a large sample of GATB test-takers, these authors found support for Spearman's hypothesis, and that reducing g saturation lowered validity and increased prediction errors.

So does that mean practitioners have to choose between high-validity tests of ability or increasing the diversity of their candidate pool?  Not so fast.  Remember...there are other options.
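
(For the curious: the standard way to test Spearman's hypothesis is Jensen's "method of correlated vectors"--correlate each subtest's g loading with its standardized group difference.  Here's a minimal sketch of that computation; the numbers are made up for illustration and aren't from the study.)

    import numpy as np

    # Hypothetical g loadings for eight subtests (e.g., from a factor analysis)
    g_loadings = np.array([0.45, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85])

    # Hypothetical standardized White-Black mean differences (d) per subtest
    d_values = np.array([0.30, 0.42, 0.47, 0.55, 0.58, 0.66, 0.72, 0.78])

    # Spearman's hypothesis predicts a strong positive correlation between the two
    r = np.corrcoef(g_loadings, d_values)[0, 1]
    print(f"Correlation between g loadings and group differences: r = {r:.2f}")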

- Next, international (Croatian) support for the Conditional Reasoning Test of Aggression, which can be used to predict counterproductive work behaviors.  I can see this increasingly being something employers are interested in.

- Applicants who do well on tests have favorable impressions of them, while those who do poorly don't like them.  Right?  Not necessarily.  These researchers found that above and beyond how people actually did on a test, certain individual differences predict applicant reactions, and suggest these be taken into account when designing assessments.

- Although personality testing continues to be one of the most popular topics, concerns remain about applicants "faking" their responses (i.e., trying to game the test by responding inaccurately in hopes of increasing the chances of obtaining the job).  This study investigates the use of blatant extreme responding--consistently selecting the highest or lowest response option--to detect faking, and looked at how this behavior correlated with cognitive ability, other measures of faking, and demographic factors (level of job, race, and gender).

- Next, a study of assessment center practices in Indonesia.

- Do individuals high in neuroticism have higher or lower job performance?  Many would guess lower performance, but according to this research, the impact of neuroticism on job performance is moderated by job characteristics.  This supports the more nuanced view that the relationship between personality traits and performance is in many cases non-linear and depends on how performance is conceptualized.

- ...which leads oh so nicely into the next article!  In it, the authors studied air traffic controllers and found results consistent with previous studies--ability primarily predicted task performance while personality better predicted citizenship behavior.  Which raises an interesting question: which version of "performance" are you interested in?  My guess is for many employers the answer is both--which suggests of course using multiple methods when assessing candidates.

- Last but not least, an important study of using cognitive ability and personality to predict job performance in three studies of Chilean organizations.  Results were consistent with studies conducted elsewhere, namely that ability and personality significantly predicted performance.

Okay, I think that's it for now!

Saturday, October 25, 2014

Research update

Okay, so it's been a couple months, huh?  Well, what say we do a research update then.

But before I dive in, I discovered something interesting and important.  Longtime readers know that one of my biggest pet peeves is how difficult research articles are to get a hold of.  And by difficult I mean expensive.  Historically, unless you were affiliated with a research institution or were a subscriber, you had to pay exorbitant (IMHO) fees to see research articles.  So imagine my pleasure when I discovered that at least one publisher--Wiley, who publishes several of the research journals in this area--now offers read access to an article for as low as $6.  Now that's only for 48 hours and you can't print it, but hey--that's a heck of a lot better than something like $30-40, which historically has been the case!  So kudos.

Moving on.

Let's start with a bang with an article from the Autumn 2014 issue of Personnel Psych.  A few years back several researchers argued that the assumption that performance is distributed normally was incorrect, and it got a bit of press.  Not so fast, say new researchers, who show that when defined properly, performance is in fact more normally distributed.

For those of you wondering "why do I care?": whether we believe performance is normally distributed or not significantly impacts not only the statistical analyses performed on selection mechanisms but also the theories and practices surrounding HRM.
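
To make the measurement point concrete, here's a quick simulation (mine, not from the article) showing how the choice of performance measure can change the apparent shape of the distribution:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(42)
    n = 10_000

    # "Results"-style measure: counts driven by heavy-tailed individual rates
    # (the kind of metric behind claims of power-law performance)
    rates = rng.lognormal(mean=1.0, sigma=1.0, size=n)
    result_counts = rng.poisson(rates)

    # "Behavior"-style measure: the average of many bounded ratings, which
    # the central limit theorem pushes toward normality
    ratings = rng.integers(1, 6, size=(n, 20)).mean(axis=1)

    print(f"Skewness of count-based measure:  {skew(result_counts):.2f}")  # strongly right-skewed
    print(f"Skewness of rating-based measure: {skew(ratings):.2f}")        # near zero

Same (simulated) people, very different pictures--which is why this definitional debate matters.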


Moving to the July issue of the Journal of Applied Psychology:

- If you're going to use a cognitively-loaded selection mechanism (which in many cases has some of the highest predictive validity), be prepared to accept high levels of adverse impact.  Right?  Not so fast, say these researchers, who show that by weighting the subtests, you can increase diversity in selection decisions without sacrificing validity.

- Here's another good one.  As you probably know, the personality trait of conscientiousness has shown value in predicting performance in certain occupations.  Many believe that conscientiousness may in fact have a curvilinear relationship with performance (meaning after a certain point, more conscientiousness may not help)--but this theory has not been consistently supported.  According to these researchers, this may have to do with the assumption that higher scores equal more conscientiousness.  In fact, when using an "ideal point" model, results were incredibly consistent in terms of supporting the curvilinear relationship between conscientiousness and performance.

- Range restriction is a common problem in applied selection research, since you only have performance data on a subset of the test-takers, requiring us to draw inferences.   A few years back, Hunter, Schmidt, and Le proposed a new correction for range restriction that requires less information.  But is it in fact superior?  According to this research, the general answer appears to be: yes.
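
(For reference, the classic Thorndike Case II formula handles the simpler case of direct range restriction; the Hunter, Schmidt, and Le correction evaluated here addresses indirect restriction and takes more steps.  A sketch of the classic version, just to show the mechanics:)

    def correct_direct_range_restriction(r_restricted, u):
        """Thorndike Case II correction for direct range restriction.

        r_restricted: observed validity in the selected (restricted) sample
        u: SD of predictor in applicant pool / SD among those selected (u >= 1)
        """
        r = r_restricted
        return (u * r) / (1 + r**2 * (u**2 - 1)) ** 0.5

    # Example: an observed validity of .25 among hires, with an applicant-pool
    # predictor SD twice that of the hires, corrects to roughly .46
    print(round(correct_direct_range_restriction(0.25, 2.0), 2))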


Let's move to the September issue of JAP:

- Within-person variance in performance is important, both conceptually and practically.  Historically, short-term and long-term performance variance have been treated separately, but these researchers show the advantage of integrating the two.

- Next, a fascinating study of the choice of (and persistence in) STEM fields as a career, the importance of both interest and ability, and how gender plays an important role.  In a nutshell, as I understand it, interest and ability seem to play a more important role in predicting STEM career choices for men than for women, whereas ability is more important in the persistence in STEM careers for women.


Let's take a look at a couple from recent issues of Personnel Review:

- From volume 43(5), these researchers found support for ethics-based hiring decisions resulting in improved work attitudes, including organizational commitment.

- From 43(6), an expanded conceptual model of how hiring supervisors perceive "overqualification", which I would love to see more research on.


Last but not least, for you stats folks, what's new from PARE?

- What happens when you have missing data on multiple variables?

- Equivalence testing: samples matter!

- What sample size is needed when using regression models?  Here's one suggestion on how to figure it out.
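
I won't reproduce the article's method here, but for context, the textbook approach (Cohen, 1988) is a power analysis of the overall F test in multiple regression.  A rough sketch:

    from scipy.stats import f as f_dist, ncf

    def n_for_regression(f2, n_predictors, alpha=0.05, target_power=0.80):
        """Smallest n for the overall F test in multiple regression.
        f2 is Cohen's f-squared (0.02 small, 0.15 medium, 0.35 large)."""
        u = n_predictors
        n = u + 2                            # smallest n with positive error df
        while True:
            v = n - u - 1                    # denominator (error) df
            nc = f2 * (u + v + 1)            # noncentrality parameter
            f_crit = f_dist.ppf(1 - alpha, u, v)
            power = 1 - ncf.cdf(f_crit, u, v, nc)
            if power >= target_power:
                return n
            n += 1

    # Example: medium effect, 5 predictors, alpha = .05, 80% power
    print(n_for_regression(0.15, 5))  # required total n, around 90 for these inputs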


The December 2014 issue of IJSA should be out relatively soon, so look for a post on that!



Wednesday, August 06, 2014

Research update

I can't believe it's been three months since a research update.  I was waiting until I had critical mass, and with the release of the September issue of IJSA, I think I've hit it.

So let's start there:

- Experimenting with using different rating scales on SJTs (with "best and worst" response format doing the best of the traditional scales)

- Aspects of a semi-structured interview added incremental validity over cognitive ability in predicting training performance

- Studying the use of preselection methods (e.g., work experience) prior to assessment centers in German companies

- The proposed general factor of personality may be useful in selection contexts (this one was a military setting)

- Evidence that effective leaders show creativity and political skill

- Investigating the relationship (using survey data) between personality facets and CWBs (with emotional stability playing a key role)

- Corrections for indirect range restriction boosted the upper end of structured interview validity substantially

- A method of increasing the precision of simulations that analyze group mean differences and adverse impact

- A very useful study that looked at the prediction of voluntary turnover as well as performance using biodata and other applicant information, including recruitment source, among a sample of call center applicants.  Results?  Individuals who had previously applied, chose to submit additional information, were employed, or were referrals had significantly less voluntary turnover.



Moving on...let's check out the May issue of JAP; there are only two articles, but both are worth looking at:

- First, a fascinating study of the firm-level impact of effective staffing and training, suggesting that the former allows organizations greater flexibility and adaptability (e.g., to changing financial conditions).

- Second, another study of SJT response formats.  The researchers found, using a very large sample, the "rate" format (e.g., "rate each of the following options in terms of effectiveness") to be superior in terms of validity, reliability, and group differences.


Next, the July issue of JOB, which is devoted to leadership:

- You might want to check out this overview/critique of the various leadership theories.

- This study suggests that newer models proposing morality as an important component of leadership success have methodological flaws.

- Last, a study of why Whites oppose affirmative action programs


Let's move to the September issue of Industrial and Organizational Psychology:

- The first focal article discusses the increasing movement of I/O psychology to business schools.  The authors found evidence that this is due in large part to some of the most active and influential I/O researchers moving to business schools.

- The second is about stereotype threat--specifically its importance as a psychological construct and the paucity of applied research about it.


Coming into the home stretch, the Summer issue of Personnel Psych:

- The distribution of individual performance may not be normal if, as these researchers suggest, "star performers" have emerged

- Executives with high levels of conscientiousness and who display transformational leadership behavior may directly contribute to organizational performance


Rounding out my review, check out a few recent articles from PARE:

- I'm not even gonna attempt to summarize this, so here's the title: Multiple-Group confirmatory factor analysis in R – A tutorial in measurement invariance with continuous and ordinal indicators

- Improving exploratory factor analysis for ordinal data

- Improving multidimensional adaptive testing


Last but not least, it's not related to recruitment or assessment, but check out this study that found productivity increases during bad weather :)

That's all folks!

Sunday, June 22, 2014

Are job ads a relic?


Lately at my day job we've been working with a few customers on replacing their traditional "one-shot" job ads with continuously posted career opportunities.  Why?

- It helps capture qualified candidates regardless of where they are in the search process (i.e., it helps solve the "I didn't see the ad" problem).

- It gives hiring supervisors a persistent, fresh pool of applicants that they can immediately draw from.

- It saves a ton of time that gets wasted in the traditional model due to requesting to fill a position, tailoring the duty statement, determining the assessment strategy, etc.

- It changes the focus--importantly and appropriately--from filling a single position to career opportunities.

- It presents an opportunity to critically review the way we advertise our jobs, which too often are boring and uninspired.

- With the appropriate technology, it can create another community of minds; for businesses this means customers, for the public sector it means solution generators.

- With the appropriate technology, connections can be potentially tapped to increase reach.

Apparently we're not alone in going down this road.  As this article describes, online retailer Zappos has created Zappos Insider, with the goal being to create more of a talent community than a one-time transactional relationship.  This move toward "candidate relationship management" is not new but seems to be gaining steam, which is also reflected in HR technology as vendors build this approach into their products.


So what are some challenges associated with the model?

- Without specific application dates, it becomes more critical that applicants can determine their status at any time.

- It may dissuade applicants who are actively seeking work, who may see this model as too slow.

- It requires significant up-front work to design and determine the administration (but pays dividends on the back-end).

- Hiring supervisors may be skeptical of the change.


Here are some related issues that moving to this model doesn't automatically solve:

- Engaging in a timely manner with candidates so they understand the status of their application/interest.

- Communicating effectively with those not selected.

- Giving applicants a real person to contact if they have questions (Zappos makes these contacts very clear).

- Creating attractive yet realistic descriptions of positions in the organization.

- Focusing on the KSAOs that are most strongly linked to job performance.

- Developing an assessment strategy that most effectively measures those KSAOs.


Until there is a free worldwide talent pool that matches high-quality candidate assessment with realistic job profiles (yes, that's my dream of how to replace the current extremely wasteful job matching process), things like this may have the best shot at streamlining and updating a process that is holding us back rather than helping both applicants and organizations achieve their goals.

Saturday, May 10, 2014

WRIPAC's 35th Anniversary Meeting: Something old, something new, something borrowed, something to do


Over the last two days I had the pleasure of attending the Western Region Intergovernmental Personnel Assessment Council's (WRIPAC) 35th anniversary meeting in San Francisco.  I thought I would share with you some of my observations, as it's been a while, unfortunately, since I attended one of their events.

For the uninitiated, WRIPAC was founded in 1979 and is one of several regional associations in the United States devoted to personnel assessment and selection in the public sector.  Other examples include PTC-NC, PTC-SC, PTC/MW, and IPAC.  I have been involved with several of these organizations over the years and they provide a tremendous benefit to members and attendees, from networking to best practice to the latest research.

WRIPAC serves public agencies in Arizona, California, Nevada, and Oregon.  They have a somewhat unique membership model in that there is no membership fee and the meetings are free, but in order to become a member you have to attend two consecutive meetings.  They maintain their energy in large part through a commitment to their committees and providing excellent training opportunities.

So, about the 35th anniversary WRIPAC meeting, hosted by the City and County of San Francisco:

Something old:  One of my observations over the day-and-a-half meeting was the remarkable number of issues that seem to be the same, year after year.  For example:

* how to effectively screen large numbers of applicants
* how to handle increased workload with a reduced number of HR staff
* how to convince supervisors and managers to use valid assessment methods
* which assessment vendors provide the best products
* how to design selection systems that treat candidates fairly but don't unduly burden the agency

We also were treated to a retrospective by three previous WRIPAC presidents, which highlighted some really stark ways the workforce and the assessment environment have changed over the years.  This included everything from sexual harassment in the workplace being commonplace to agencies trying to figure out how to comply with the new (at the time) Uniform Guidelines.

Something new:  Dr. Scott Highhouse from Bowling Green State University presented results from several fascinating studies.

The first looked at "puzzle interview questions"--those bizarre questions like "Why are manhole covers round?" made famous by companies like Microsoft and Google.  Using a sample of participants from Amazon's Mechanical Turk (the first time I've heard this done for a psychological study), he was particularly interested in whether individual "dark side" traits such as Machiavellianism and narcissism might explain why certain people prefer these types of questions.

Results?  First, men were more likely to endorse these types of items.  Why?  Well it may have to do with the second finding that respondents higher in narcissism and sadism were more likely to endorse these types of questions.  From what I can tell from my massive (ten minute) search of the internet, men are more likely to display both narcissism and sadism.  Maybe more importantly, the common denominator seemed to be callousness, as in being insensitive or cruel toward others.

So what does this mean?  Well I had two thoughts: one, if you like to ask these types of questions you might ask yourself WHY (because there's no evidence I know of to support using them).  Second, if you work with supervisors who show high levels of callousness, they might need additional support/nudging to use appropriate interview questions.  Looking forward to these results getting published.

The second study Dr. Highhouse described looked at worldviews and whether they impact beliefs about the usefulness of testing--in particular cognitive ability and personality tests.  This line of research basically is trying to discover why people refuse to entertain or use techniques that have proven to work (there's a separate line of study about doctors who refuse to use decision aids--it's all rather disturbing).

Anyway, in this study, also using participants from Mechanical Turk, the researchers found that individuals who had strong beliefs in free will (i.e., people have control of their choices and should take personal responsibility) were more open to using conscientiousness tests, and people with strong beliefs in scientific determinism (i.e., behavior stems from genetics and the environment) were more open to using cognitive ability tests.  This adds valuable insight into why certain supervisors may be resistant to using assessment methods that have a proven track record--they're not simply being illogical; their resistance may be based on their fundamental beliefs about human behavior.

The last study he talked about looked at whether presenting people with evidence of test validity would change their related worldviews.  You wouldn't expect strong effects, but they did find a change.  Implication?  More support for educating test users about the strengths and weaknesses of various assessments--something we do routinely, and for good reason!

Last but not least, Dr. Highhouse introduced a journal that will hopefully be coming out next year, titled Journal of Personnel Assessment and Decisions, sponsored by IPAC and Bowling Green State University.  Aside from the excellent subject matter, it will have two key features: it will be free and open to everyone.  I'm very excited about this, and long-time readers will know I've railed for years about how difficult it is for people to access high-quality research.

Something borrowed:  one of the big benefits of being involved with professional organizations--particularly ones like WRIPAC that are practitioner-focused--is that it gives you access to others facing similar challenges who have come up with great solutions.  There was a lot of information shared at the roundtable about a wide variety of topics, including:

- reasonable accommodation during the testing process (e.g., armless chairs for obese individuals)
- how to use automated pre-screening to narrow candidate pools
- how agencies will adjust to the new requirements in California related to criminal background information (i.e., "ban the box" and other similar legislation)
- how to efficiently assess out-of-state applicants (e.g., video interviewing, remote proctors)
- how and when to verify a driver's license if required for the job
- how to effectively use 360-degree evaluations
- how MQs (the subject of June's free PTC-NC meeting) should best be placed in the selection process
- cloud vs. on-premise IT solutions
- centralization vs. decentralization of HR functions
- use of item banks (e.g., WRIB, CODESP)

In addition, there was an excellent session, spanning an entire afternoon, devoted to succession and leadership planning that featured three speakers describing the outstanding programs they administer at their agencies.  I took a ton of information away from these, and it came at exactly the right time, as we're looking at implementing the same kinds of programs.

Something to do:  One of the main things I took away from this conference is how important it is to maintain your participation in professional associations.  It's so easy to get sucked into your daily fires and forget how valuable it is, both personally and professionally, to meet with others and tackle our shared challenges.  I plan on sharing what I learned back in the office and upping my expectation that as HR professionals we need to be active in our professional community.  I encourage you to do the same!

Sunday, April 27, 2014

Mobile assessment comes of age + research update

The idea of administering employment tests on mobile devices is not new.  But serious research into it is in its infancy.  This is to be expected for at least two reasons: (1) historically it has taken a while with new technologies to have enough data to analyze (although this is changing), and (2) it takes a while for researchers to get through the arcane publishing process (this, to my knowledge, isn't changing, but please prove me wrong).

Readers interested in the topic have benefited from articles elsewhere, but we're finally at a point where good research is being published on this topic.  Case in point: the June issue of the International Journal of Selection and Assessment.

The first article on this topic in this issue, by Arthur, Doverspike, Munoz, Taylor, & Carr, studied data from over 3.5 million applicants who completed unproctored internet-based tests (UIT) over a 14-month period.  And while the percentage that completed them on mobile devices was small (2%), it still yielded data on nearly 70,000 applicants.

Results?  Some in line with research you may have seen before, but some may surprise you:

- Mobile devices were (slightly) more likely to be used by women, African-Americans and Hispanics, and younger applicants.  (Think about that for a minute!)

- Scores on a personality inventory were similar across platforms.

- Scores on a cognitive ability test were lower for those using mobile devices.  Without access to the entire article, I can only speculate on proffered reasons, but it's interesting to think about whether this is a reflection of the applicants or the platform.

- Tests of measurement invariance found equivalence across platforms (which basically means the same thing(s) appeared to be measured).

So overall, I think this is promising in terms of including a mobile component when using UITs.


The next article, by Morelli, Mahan, and Illingworth, also looked at measurement invariance of mobile versus non-mobile (i.e., PC-delivered) internet-based tests, with respect to four types of assessment: cognitive ability, biodata, a multimedia work simulation, and a text-based situational judgment test.  Data was gathered from nearly 600,000 test-takers in the hospitality industry who were applying for maintenance and customer-facing jobs in 2011 and 2012 (note the different job types).  Nearly 25,000 of these applicants took the assessment on mobile devices.

Results?  The two types of administrations appeared to be equivalent in terms of what they were measuring.  However, interestingly, mobile test-takers did worse on the SJT portion.  The authors reasonably hypothesize this may be due to the nature of the SJT and the amount of attention it may have required compared to the other test types.  (btw this article appears to be based on Morelli's dissertation, which can be found here--it's a treasure trove of information on the topic)

Again, overall these are promising results for establishing the measurement equivalence of mobile assessments.  What does this all mean?  It suggests that unproctored tests delivered using mobile devices are measuring the same things as tests delivered using more traditional internet-based methods.  It also looks like fakability or inflation may be a non-issue (compared to traditional UIT).  This preliminary research means researchers and practitioners should be more confident that mobile assessments can be used meaningfully.

I agree with others that this is only the beginning.  In our mobile and app-reliant world, we're only scratching the surface not only in terms of research but in terms of what can be done to measure competencies in new--and frankly more interesting--ways.  Not to mention all the interesting (and important) associated research questions:

- Do natively developed apps differ in measurement properties--and potential--compared to more traditional assessments simply delivered over mobile?

- How does assessment delivery model interact with job type?  (e.g., may be more appropriate for some, may be better than traditional methods for others)

- What competencies should test developers be looking for when hiring?  (e.g., should they be hiring game developers?)

- What do popular apps, such as Facebook (usage) and Candy Crush (score), measure--if anything?

- Oh, and how about: does mobile assessment impact criterion-related validity?


Lest you think I've forgotten the rest of this excellent issue...

- MacIver, et al. introduce the concept of user validity, which uses test-taker perceptions to focus on ways we can improve assessments, score interpretation, and the provision of test feedback.

- Bing, et al. provide more evidence that contextualizing personality inventory items (i.e., wording the items so they more closely match the purpose/situation) improves the prediction of job performance--beyond noncontextual measures of the same traits.

- On the other hand, Holtrop, et al. take things a step further and look at different methods of contextualization.  Interestingly, this study of 139 pharmacy assistants found a decrease in validity compared to a "generic" personality inventory!

- This study by Ioannis Nikolaou in Greece of social networking websites (SNWs) found that job seekers still use job boards more than SNWs, that SNWs may be particularly effective for passive candidates (!), and that HR professionals find LinkedIn to be more effective than Facebook.

- An important study of applicant withdrawal behavior by Brock Baskin, et al., that found withdrawal tied primarily to obstructions (e.g., distance to test facility) rather than minority differences in perception.

- A study of Black-White differences on a measure of emotional intelligence by Whitman, et al., that found (N=334) Blacks had higher face validity perceptions of the measure, but Whites performed significantly better.

- Last, a study by Vecchione that compared the fakability of implicit personality measures to explicit personality measures.  Implicit measures are somewhat "hidden" in that they measure attitudes or characteristics using perceptual speed or other tools to discover your typical thought patterns; you may be familiar with Project Implicit, which has gotten some media coverage.  Explicit measures are, as the name implies, more obvious items--in this case, about personality aspects.  In this study of a relatively small number of security guards and semiskilled workers, the researchers found the implicit measure to be superior in terms of fakability resistance.  (I wonder how the test-takers felt?)


That's it for this excellent issue of IJSA, but in the last few months we also got some more great research care of the March issue of the Journal of Applied Psychology:

- An important (but small N) within-subjects study by Judge, et al. of the stability of personality at work.  They found that while traits exhibited stability across time, there were also deviations that were explained by work experiences such as interpersonal conflict, which has interesting implications for work behavior as well as measurement.  In addition, the authors found that individuals high in neuroticism exhibited more variation in traits over time compared to those who were more emotionally stable.  You can find an in press version here; it's worth a read, particularly the section beginning on page 47 on practical implications.

- Smith-Crowe, et al. present a set of guidelines for researchers and practitioners looking to draw conclusions from tests of interrater agreement that may assume conditions that are rarely true.

- Another interesting one: Wille & De Fruyt investigate the reciprocal relationship between personality and work.  The researchers found that while personality shapes occupational experiences, the relationship works in both directions and work can become an important source of identity.

- Here's one for you assessment center fans: this study by Speer, et al. adds to the picture through findings that ratings taken from exercises with dissimilar demands actually had higher criterion-related validity than ratings taken from similar exercises!

- Last but not least, presenting research findings in a way that is understandable to non-researchers poses an ongoing--and important--challenge.  Brooks et al. present results of their study that found non-traditional effect size indicators (e.g., a common language effect size indicator) were perceived as more understandable and useful when communicating results of an intervention.  Those of you that have trained or consulted for any length of time know how important it is to turn correlations into dollars or time (or both)!
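
(If you haven't run into it, one such indicator is McGraw and Wong's common language effect size: the probability that a randomly chosen person from one group outscores a randomly chosen person from the other.  Converting from Cohen's d is a one-liner, assuming normal distributions with equal variances:)

    from scipy.stats import norm

    def common_language_es(d):
        """Probability that a random member of group 1 outscores a random
        member of group 2, given standardized mean difference d."""
        return norm.cdf(d / 2 ** 0.5)

    # Example: d = 0.5 translates to roughly a 64% chance
    print(f"{common_language_es(0.5):.0%}")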

That's it for now!

Saturday, March 29, 2014

Facial analysis for selection: An old idea using new technology?


Selecting people based on physical appearance is as old as humankind.  Mates were selected based in part on physical features.  People were hired because they were stronger.

This seems like an odd approach to selection for many jobs today because physical characteristics are largely unrelated to the competencies required to perform the job, although there are exceptions (e.g., firefighters).  But employers have always been motivated to select based on who would succeed (and/or make them money), and many have been interested in the use of gross physical characteristics to help them decide: who's taller, whose head is shaped better (phrenology), etc.  The general name for this topic is physiognomy.

Of course nowadays we have much more sophisticated ways of measuring competencies that are much more related to job success, including things like online simulations of judgment.  But this doesn't mean that people have stopped being interested in physical characteristics and how they might be related to job performance.  This is due, in part I think, to the powerful hold that visual stimuli have on us as well as the importance of things like nonverbal communication.  We may be a lot more advanced in some ways, but parts of our brain are very old.

The interest in judgment based on physical appearance has been heightened by the introduction of different technologies, and perhaps no better example of this exists than facial feature analysis.  With the advent of facial recognition technology and its widespread adoption in major cities around the globe, in law enforcement, and at large sporting events, a very old idea is once again surfacing: drawing inferences from relatively* stable physical characteristics--specifically, facial features.  In fact this technology is being used for some very interesting applications.  And I'm not sure I want to know what Facebook is planning on doing with this technology.

With all this renewed interest, it was only a matter of time until we circled back to personnel selection, and sure enough a new website called FaceReflect is set to open to the public this year and claims to be able to infer personality traits from facial features, already drawing a spotlight.  But have we made great advances in the last several thousand years or is this just hype?  Let's look deeper.

What we do know is that certain physical characteristics reliably result in judgment differences.  Attractiveness is a great example: we know that individuals considered to be more attractive are judged more positively, and this includes evaluative situations like personnel selection.  It even occurs with avatars instead of real people.  And the opposite is true: for example it has been shown that applicants with facial stigmas are viewed less favorably.

Another related line of research has been around emotional intelligence, with assessments such as the MSCEIT including a component of emotional recognition.

More to the point, there's research suggesting that more fine-tuned facial features such as facial width may be linked to job success in certain circumstances.  Why?  The hypothesis seems to be two-fold: certain genes and biological mechanisms associated with facial features (e.g., testosterone) are associated with other characteristics, such as assertiveness or aggression.  This could mean that men with certain facial features (such as high facial width-to-height ratio) are more likely to exhibit these behaviors, or--and this is a key point--they are perceived that way. (By the way, there is similar research showing that voice pitch is also correlated with company success in certain circumstances)
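
(For reference, the facial width-to-height ratio used in this research is simple arithmetic: bizygomatic width divided by upper-face height.  A sketch with hypothetical landmark coordinates--purely illustrative, and emphatically not an endorsement:)

    def fwhr(left_zygion, right_zygion, mid_brow, upper_lip):
        """Facial width-to-height ratio from four (x, y) landmark points."""
        width = abs(right_zygion[0] - left_zygion[0])   # bizygomatic width
        height = abs(upper_lip[1] - mid_brow[1])        # upper-face height
        return width / height

    # Hypothetical pixel coordinates from a face-detection library
    print(round(fwhr((120, 300), (420, 300), (270, 220), (270, 380)), 2))  # 1.88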

Back to FaceReflect.  This company claims that by analyzing certain facial features, they can reliably draw inferences about personality characteristics such as generosity, decision making, and confidence.

What seems to be true is that people reliably draw inferences about characteristics based on facial features.  But here's the key question: are these inferences correct?  That's where things start to break down.

The problem is there simply isn't much research showing that judgments about job-relevant characteristics based on facial features are accurate--in fact, what research we have suggests that at best the accuracy is low, and at worst the judgments are simply wrong.  To some extent you could argue this doesn't matter--what matters is whether people are reliably coming to the same conclusion.  But this assumes that what drives performance is purely other people's perceptions, and this is obviously missing quite a lot of the equation.

In addition, even if it were true that people's perceptions were accurate, it would apply only to a limited number of characteristics--i.e., those that could logically be linked to biological development through a mechanism such as testosterone.  What about something like cognitive ability, obviously a well-studied predictor of performance for many jobs?  The research linking testosterone and intelligence is complicated, some of it indicating the reverse relationship (e.g., less testosterone leading to higher cognitive ability), and some showing no relationship between facial features and intelligence in adults--and again, it is primarily men who have been studied.  (While estrogen also impacts facial characteristics, its impact has been less studied)

Finally, the scant research we do have indicates the link between facial features and performance is true only in certain circumstances, such as organizations that are not complex.  This is increasingly not true of modern organizations.  Circling back to the beginning of this article, you could liken this to selection based on strength becoming less and less relevant.

One of the main people behind FaceReflect has been met with skepticism before.  Not to mention that the entire field of physiognomy (or the newer term "personology") is regarded with skepticism.  But that hasn't stopped interest in the idea, including from the psychological community.

Apparently this technology is being used by AT&T for assessment at the executive levels, which I gotta say makes me nervous.  There are simply much more accurate and well-supported methods for assessing managerial potential (e.g., assessment centers).  But I suspect the current obsession with biometrics is going to lead to more interest in this area, not less.

At the end of the day, I stand by my general rule: there are no shortcuts in personnel selection (yet**).  To get the best results, you must determine job requirements and you must take the time required to get an accurate measurement of the KSAOs that link to those requirements.  It's easy to be seduced by claims that seem attractive but unfortunately lack robust research support; after all, we're all susceptible to magical thinking, and there is a tendency to think that technology can do everything.  But when it comes to selection, I vote less magic, more logic.


* Think about how plastic surgery or damage to the face might impact this approach.

** As I've said many times before, we have the technology to create a system whereby a database could be created with high-quality assessment scores of many individuals that would be available for employers to match to their true job requirements.  The likelihood--or wisdom--of this idea is debatable.


Thursday, February 27, 2014

Workday unveils recruiting platform

Note: I posted this a while back but had to take it down because it was pre-release.  Now that it's available I'm re-posting it.

This post is going to be a bit different from my normal ones.  I'm not going to talk about research, but instead focus on technology.  Long time readers know HR technology is another passion of mine, and as recruitment/assessment professionals I think it behooves us to know "what's out there."

Recently in my day job we've been looking at automated HR systems, primarily to replace our manual time and attendance process, but it's impossible to not consider other applications once you start looking.  For the uninitiated, these systems go by various names like HCM (Human Capital Management), HRMS (Human Resource Management System) or HRIS (Human Resource Information System).

In my opinion, now is a very exciting time to be looking at automated HR systems.  Why?  Because unlike years past, when using these systems was about as pleasant as reading FMLA regulations, recent applications have taken a decidedly more "consumer" approach, borrowing heavily from popular websites like Amazon and Facebook.

One of the companies that has been the most trailblazing in this regard is Workday.  Workday was founded in 2005 by the former CEO of PeopleSoft along with its former Chief Strategist following Oracle's hostile takeover.  Workday provides cloud-based SaaS software for a variety of functions, primarily around finance, HR, and analytics.  One of Workday's big differentiators is that it maintains a single code line, meaning every customer is using the same version all the time (again, just like a website).  Those of you that are used to being on Release x.2 while others are on x.6, and planning on how to upgrade, know what a big deal this is.

(If you're thinking "cloud-based whatnow?" this basically means delivering software over the web rather than relying on locally hosted systems; obvious benefits include a potential massive reduction in local IT support, particularly attractive I think for the public sector)

For me, considering a large IT project implementation, I've seen enough to know that the user experience is essential.  Obviously the product has to work as advertised, but if users (including HR) don't like using the system--usually because it's unintuitive or overly complicated--chances of ultimate success are slim.  At best people will tolerate it.  I certainly don't want my name attached to that project.

That leads me to why companies like Workday are adding so much value to HR software.  Because their interface looks like this:

[screenshot of Workday's clean, consumer-style interface]

Not like this:

[screenshot of a cluttered legacy HR system]


Up until now, Workday's HR offerings have focused on things like benefits, time tracking, and internal talent management.  Their recruiting module, announced back in 2012 and eagerly anticipated, has just been rolled out (GA, or general availability, to Workday customers).  Several weeks ago I had the opportunity to see a pretty-much-finished version, and here are my observations:

1.  It's clean.  As evidenced by the screenshot above, Workday prides itself on a clean UI, and the recruiting module is no exception.  I don't have any shots to share with you because, well, I couldn't find any.  But there's plenty of white space, the eye knows where to go, and you won't get overwhelmed by sub-menu upon sub-menu.  Candidates are displayed using a "baseball card"-like interface, with key stats like years of job experience, skills, social feeds, and attachments.

2.  It's mobile- and social-friendly.  These were clear marching orders to the developers, and it shows.  Workday's mobile app is great, and SNWs like Facebook, LinkedIn, and Twitter are consistently integrated.  One feature they consistently stressed (for good reason) is how easy it is for candidates to upload their info from their LinkedIn account, saving a ton of time.

3.  At this time it's basically an ATS (applicant tracking system).  This isn't a bad thing, but don't expect qualified candidates to magically jump out of your monitor.  It's a very clean way to manage applicants for requisitions, and it's integrated into their core HR.  For many long-time users of other ATS products, this is a big deal.  Additional features, such as being able to quickly change candidate status and do mass emails, will also be popular.  Finally, you can easily search your candidate pool by competency, location, etc., similar to the employee search function in their HCM product.

4.  It will be particularly useful for organizations with dedicated recruiters.  I commented in the demo that in many organizations (including my own), we don't have dedicated recruiters; rather recruiting happens locally, driven by the hiring supervisor and their staff.  So anything these systems can do to engage and reward proper behavior (dare I say gamification here?) will pay huge dividends, and I think this is a development opportunity.  On the other hand, organizations with full-time recruiters will immediately "get it".

5.  It's a work in progress.  The career portal of the system wasn't up and running yet, although I was assured it would be by GA.  To me this is a huge missing piece, and I look forward to seeing how they integrate this with the back end.  There were also clearly plans for future features like assessments (e.g., video interviewing), job board aggregation, and CRM.  Definitely features to watch.

So at the end of the day, it wouldn't solve all our problems, but it offers an enormous potential for us, as HR, to get a better handle on what our hiring supervisors are doing.  Not only will this help with compliance, it will allow us to gather information to make more strategic decisions about resources.  The built-in business intelligence functions have the potential to transform our practices. You can get more details here: http://www.workday.com/applications/human_capital_management/recruiting.php

Now lest I leave you thinking that I'm a Workday shill, it's not the only game out there; there are plenty of competitors, including newer players like Ultimate as well as more established ones like Oracle, both having lots of satisfied customers.  But Workday is--at this point--one of our finalists and has been on a crazy growth spurt over the last few years.

Want to know more about this technology?  I've found CedarCrestone's annual report to be extremely helpful, as well as HRE's technology articles.  The HR tech industry is huge (see my earlier post about one of the conferences) and you can very easily spend your entire career in this space.

I can honestly say it's technology like this that has the potential to evolve much of HR from unpredictable and frustrating to exciting and engaging.  I'm ready.

Thursday, February 20, 2014

March '14 IJSA

In my last research update just a couple days ago, I mentioned that the new issue of IJSA should be coming out soon.

I think they heard me because it came out literally the next day.

So let's take a look:

- This study adds to our (relatively little) knowledge of sensitivity reviews of test items and finds much room for improvement

- More evidence that the utility of UIT isn't eliminated by cheating, this time with a speeded ability test

- Applicant motivation may be impacted by the intended scoring mechanism (e.g., objective vs. ratings).

- The validity of work experience in predicting performance is much debated*, but this study found support for it among salespersons, with personality also playing a moderating role.

- A study of the moderating effect of "good impression" responding on personality inventories

- This review provides a great addition to our knowledge of in-baskets (a related presentation can be found through IPAC)

- Another excellent addition, this time a study of faux pas on social networking websites in the context of employer assessment

- According to this study, assessors may adjust their decision strategy for immigrants (non-native language speakers)

- Letters of recommendation, in this study of graduate students in nonmedical programs at medical schools, provided helpful information in predicting degree attainment

- Interactive multimedia simulations are here to stay, and this study adds to our confidence that these types of assessments can work well

Until next time!

* Don't forget to check out the U.S. MSPB's latest research study on T&Es!

Monday, February 17, 2014

Research update

Okay, past time for another research update, so let's catch up!

Let's start with the Journal of Applied Social Psychology (Dec-Feb):

- Cultural intelligence plays a key role in multicultural teams

- Theory of Planned Behavior can be used to explain intent to submit video resumes

- More on weight-based discrimination, including additional evidence that this occurs more among women (free right now!)

- Does the physical attractiveness bias hold in same-sex evaluative situations?  Not so much, although it may depend on someone's social comparison orientation

- "Dark side" traits play a role in predicting career preference

- Evidence that efficacy beliefs play a significant role not only in individual performance, but in team performance


Next up, the January issue of JAP:

- The concept of differential validity among ethnic groups in cognitive ability testing has been much debated, and this study adds to the discussion by suggesting that the effects are largely artifactual due to range restriction.

- Or are they?  This study on the same topic found that range restriction could not account for observed differential validity findings.  So the debate continues...

- A suggestion for how to increase the salience of dimension ratings in assessment centers

- Ambition and emotional stability appear related to adaptive performance, particularly for managers


Spring Personnel Psych (free right now!)

- First, a fascinating study of P-E fit across various cultures.  Turns out relational fit may be more important in collectivistic and high power distance cultures (e.g., East Asia), whereas rational fit may be more important in individualistic and lower power distance cultures (e.g., the U.S.).

- Next, a study of recruitment messaging for jobs involving international travel.

- Last but definitely not least, an extensive narrative and quantitative review of the structured interview


Not quite done: One from Psychological Science on statistical power in testing mediation, and just in case you needed more evidence, the Nov/Dec issue of HRM has several research articles supporting the impact of line manager behavior and HRM practices on things like employee engagement.

The Spring issue of IJSA should be out shortly, so see ya soon!



Saturday, February 01, 2014

MQs: An idea whose time has passed?

For better or worse, I've spent nearly my entire career working under merit systems.  For the uninitiated, these systems were created many years ago to combat employment decisions based on favoritism, familial relation, or other similarly non-job-related factors.  For example, California's civil service system was originally created in 1913 (and strengthened in 1934) to combat the "spoils" system, whereby hiring and promotion were too often based on political affiliation and patronage.

Part of most merit systems is the idea of minimum qualifications, or MQs.  Ideally, MQs are the true minimum amount of experience and/or education (along with any licenses/certifications) required for a job.  They set the requirements for participating in civil service exams, and scale up depending on the level (or "classification").  An entry-level attorney, for example, would need a Bar license.  For a journey-level attorney, you might be required to have several years of experience before being allowed to compete in the exam and be appointed.  The idea is that MQs force hiring and promotion decisions to be based on job-related qualifications rather than who you know or what political party you belong to.  Makes sense, right?

But recently, I've had the opportunity to be involved in a task force looking at minimum qualifications and it spurred a lot of discussion and thought.  I'd like to spend just a moment digging into the concept a bit more and asking: are they still the right approach?

This task force was formed because of a recent control agency decision that places increased importance on applicants meeting MQs and reduces the ability of employees to obtain positions by simply transferring from one classification to another based on similarity in level, salary, etc.  Because this will result in fewer options for employees--and hiring supervisors--the discussion around this decision has been rigorous, and at times heated, but without a doubt intellectually stimulating.

As part of my participation in this task force, I reached out to my colleagues in IPAC for their thoughts, and got a ton of thoughtful responses.  While there were arguments for and against MQs, the overall sense seemed to be that they are a necessary evil.  Perhaps most of all, though, I was reminded just how consequential MQs are--and thus how much attention should be paid when establishing them.

So where does this lead me?  To play my hand, over time I've become less and less of a fan of MQs, and my participation on this task force has cemented some of the reasons why, however well intentioned they may be:

- They are overly rigid and inflexible.  If an MQ states you must have 2 years as an Underwater Basketweaver, it doesn't matter that you have 1 year and 11 months and you just attended the Basketweaver Olympics--sorry, you don't qualify to test for the next level.

- They are often difficult to apply, resulting in inconsistencies.  What exactly is a four-year degree in "Accounting"?  What is "clerical" work?  If someone worked overtime, does that count as additional experience?  How shall we consider education from other countries?  And what about fake degrees and candidates who, shall we say, embellish their experience?

- They serve as barriers to talented individuals.  This results in fewer opportunities for people as well as a smaller talent pool for supervisors to draw from (ironically undermining the very concept of the merit system).

- They serve as barriers to groups with a history of being discriminated against, such as women and ethnic minorities.  Take a look at any census study of education, for example, and compare the graduation rates of different groups.  Implication?  Any job requiring a college degree has adverse impact built into the selection process.

- Most were likely not developed as rigorously as they should have been.  Like any other selection mechanism, MQs are subject to laws and rules (e.g., the Civil Rights Act and the Uniform Guidelines in the U.S.) that require them to be based on job analytic information and set based on data, not hunches or guesses.

- Without a process to update them quickly, they rapidly become outdated and less and less relevant.  Many classifications in the California state system, for example, haven't been effectively updated in thirty years (or longer).  This becomes particularly painful in jobs like IT, where educational paths and terminology change constantly.

- They require an enormous amount of resources to administer.  At some point someone, somewhere, needs to validate that the applicant has the qualifications required to take the exam.  You can imagine what this looks like for an exam involving hundreds (sometimes thousands) of applicants--and the costs associated with this work.

- From an assessment perspective, MQs are a very blunt instrument--and not a particularly good one at that.  As we know, experience and education are generally poor predictors of job performance.  Experience predicts best at low levels but its value drops off quickly; education typically shows very small correlations with performance.  As anyone with hiring experience knows, a college degree doth not an outstanding employee make.  So basically what you're doing is front-loading your "select out" decisions with a tool that has very low validity (see the quick simulation after this list).  Sound good?

- The ultimate result of all this is that employers with MQ systems are often unable to attract, hire, and promote the most qualified candidates, while spending an enormous amount of time and energy administering a system that does little to identify top talent.  This becomes particularly problematic for public sector employers as defined benefit plans are reduced or eliminated and salaries fail to keep pace, making these organizations less and less attractive.
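
To put a rough number on that last point, here's a back-of-the-envelope simulation.  The validity figures below are illustrative assumptions on my part rather than estimates from any particular study, though they're in the neighborhood of what's typically reported for experience-based screens versus stronger assessments.

```python
# A back-of-the-envelope look at screening out applicants with a
# low-validity tool (an MQ-style experience screen) versus a stronger
# assessment.  The validities of .10 and .40 are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                            # applicant pool size (assumed)
performance = rng.standard_normal(n)   # applicants' true job performance

def surviving_pool_quality(validity, pass_rate=0.50):
    """Mean true performance (in SD units) of applicants who survive a
    screen that correlates `validity` with performance, keeping only the
    top `pass_rate` fraction of scores on the screen."""
    screen = validity * performance + np.sqrt(1 - validity**2) * rng.standard_normal(n)
    survivors = screen >= np.quantile(screen, 1 - pass_rate)
    return performance[survivors].mean()

print(f"MQ-style screen (validity ~.10):     {surviving_pool_quality(0.10):+.2f} SD")
print(f"Stronger assessment (validity ~.40): {surviving_pool_quality(0.40):+.2f} SD")
# Roughly +0.08 SD versus +0.32 SD: the low-validity screen rejects just as
# many people while adding almost nothing to the quality of the pool.
```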

Recognizing these limitations, some merit systems (the State of Washington comes to mind) have recently moved away from MQs, instead evolving into things like desirable or preferred qualifications.  This presumably still outlines the approximate experience and education that should prepare someone for the position, but relies on other types of assessments to determine someone's true qualifications, abilities, and competitiveness.  I like this idea in concept as long as an effective system is put in place to deal with the likely resulting increase in applications to sift through.

The private sector, of course, does not operate under merit system rules, and has had to deal with the challenges--as well as reap the benefits--associated with a lack of rigid MQs.  It does this through increased use of technology and, frankly, significantly more expenditure on HR to support the recruitment and assessment function (particularly among larger employers).  Of course, some private sector employers adhere to strict MQs as a matter of course, and they would do well to think about the challenges I outlined above.

So where does this leave us?  Do MQs still serve a valuable purpose?  Perhaps.  They hypothetically prevent patronage, although anyone who has worked in a merit system can tell you it still happens.  Perhaps the strongest argument is that as more employers move to online training and experience measures (another example of an assessment device with little validity, but quick and cheap), MQs serve as a check, presumably helping to ensure that at least some of the folks who end up on employment lists are qualified.

But I would argue that any system that still employs MQs is basically fooling itself, doing little to control favoritism and ultimately contributing to the inability of hiring supervisors to get the best person--which is what a system of merit is ultimately about.  Particularly with what we know about the effectiveness of a properly administered system of internet testing, MQs are an antiquity, serving as a barrier to more job-related assessments and simply not worth the time we spend on them.  If we don't reform these systems in some way to modernize the selection process, we will wake up some day and wonder why fewer and fewer people are applying for jobs in the public sector, and why the candidate pools seem less and less qualified.  That day may already be here and we just haven't realized it.