Friday, November 28, 2014

Why leadership in the public sector is harder to find--but more important

Occasionally I post about things that are related to recruitment and assessment, but not focused exclusively on them.  This is one of those times.

I have the following quote from Valve Software's New Employee Handbook (a fascinating document) posted on my office door:

"Hiring well is the most important thing in the universe.
Nothing else comes close. It’s more important than breathing.
So when you’re working on hiring—participating in
an interview loop or innovating in the general area of
recruiting—everything else you could be doing is stupid
and should be ignored!"

The older I get, the more I wonder whether I should cross out "hiring" and write in "leadership".  I just can't bring myself to do it because they're both so darn important.

But this post will be about leadership.  Specifically, leadership in the public (i.e., government) sector.  More specifically, lack of leadership and what to do about it.  I don't pretend that leaders in the private sector are uniformly outstanding, but public sector is what I'm most familiar with.

First things first: in some important ways, leadership in the public sector (PS) is different from the private sector.  Not night-and-day I grant you, but there are some relatively unique boundary conditions that apply, namely:

- Not only are PS leaders bound by normal organizational policies and procedures, they labor under an additional layer of laws and rules, whether federal, state, or local.  Unlike internal policies and procedures, these cannot be easily changed--in many cases it requires moving heaven and earth.  As one (very important) example, there are typically laws and rules governing how you can hire someone.  Many of them were created 50+ years ago in reaction to spoils systems and haven't been seriously evaluated since.

- Many PS employees have civil service protections.  This isn't a bad thing, but it means moving employees who are bad fits (either over or out) is difficult, which greatly inhibits your talent mobility strategy.

- In the PS, leadership is often treated as an afterthought, rather than the linchpin upon which organizational success relies.  The assumption seems to be that the organizational systems and processes are so strong that it almost doesn't matter who's in charge.  This means things like leadership development and training are half-hearted.

These conditions combine with several other factors to result in true leadership being relatively rare in the PS:

- Failure is invisible.  There is very little measurement of leadership success and very little transparency and accountability, absent a media storm.

- Fewer people in management positions have the important leadership competencies--things like listening ability, strategic planning ability, and emotional intelligence.  Instead, managers are usually chosen based on technical ability, without the benefit of rigorous assessment results.

- There is a lack of understanding of leadership.  This stems from the lack of attention paid to it as a serious discipline; without operational definitions of leadership, there is no measurement and no accountability.

- An unwillingness to treat leadership seriously.  For whatever reasons--politics, lack of motivation, entrenched cultures--leadership is relegated to second-class status when it comes to analyzing department/agency success.  Focus tends to rest on line-level employees, technology, and unions--and only on top-level leaders when there is a phenomenally bad outcome.

So why is leadership more important in the PS than the private sector?

- Governments regulate many aspects of our lives.  They're not making consumer products.  Leaders in PS organizations have purview over things like public safety, the environment, education, housing, and taxes--things that touch your life every day.

- There is less accountability, less transparency.  PS leaders often do not report to a board.  They don't have to produce annual reports that detail their successes and failures.  What they do is often mysterious, poorly defined, and rarely sees the light of day.

- PS leaders work for you.  Elected or not, their salary typically comes from taxpayers.  They ultimately report to the citizens.  This means you should care about what they're doing, and whether they are worthy stewards of your investment.

So what can be done?  As with "how do we hire well?", the answers are known.  They're just not practiced very well:

1.  Publicly acknowledge the scope of the problem.  Like frogs in a slowly heating pot, we have gradually come to accept the current situation as normal.  It's time to stop pretending that all PS managers are leaders.  They're not.  And we must look in the mirror and acknowledge that we are likely part of the problem.

2.  Acknowledge the urgency to improve.  Stop pretending that leadership is a secondary concern.  Sub-par leadership has a negative impact on our lives every day.  Improving the quality of that leadership is one of the most critical things we can do as a society.

3.  Publicly commit to change, and actually follow through.  Specifically describe what you will change, and when, and provide regular status updates.

4.  Define leadership in measurable terms and behaviors.  Here's just a sample list of what real leaders do (and not a particularly good one):

= continuously improve operations
= champion and reward innovation
= hold their people accountable for meeting SMART goals
= continually seek feedback and signs of their own success and failure
= create and sustain a culture that attracts high performers and dissuades poor fits
= make hiring and promoting the most qualified people THE most important part of their job

5.  Hire and promote those with leadership competencies, not the best technicians.  While knowledge of the work being performed is important, it is far from the most important competency.

6.  Make the topic of leadership a core activity for every management team.  Eliminate "information sharing" meetings and replace them with discussions on how to be better leaders.

7.  Set clear goals for leaders up front, and hold them accountable.  What does this mean?

= consequences for hiring poor fits
= consequences for poor morale on their team
= consequences for not setting and meeting SMART goals
= recognition for doing all of the above well

8.  Measure leadership success and make the results transparent.  Develop plans to address gaps and follow through.

9.  Instill a culture of boldness and innovation.  Banish fear, often borne of laboring under layers of red tape.  Encourage risk-taking, and learn from mistakes rather than punishing them.

10.  Relentlessly seek out and banish inefficiencies, especially related to the use of time.  Critically evaluate how email and meetings are used; establish rules regarding their use.

11.  Stop pretending that all of this applies only to first-line supervisors.  If anything, these expectations matter more the higher you go in the organizational chart.

12.  When it comes to recruiting, stop focusing on low relative salaries, and capitalize on the enormous benefit of the PS as an employer--namely, the mission of public service.

13.  View leadership as a competency, not a position.  Leadership behaviors can be found everywhere in an organization--they should be recognized and promoted.


My intent here is not to be a downer, but to emphasize how much more focus needs to be placed on leadership in the public sector.  The current state of affairs is unacceptable.  And for those of us familiar with research and best practices in organizational behavior, it's painful.

So I apologize for the decidedly un-Thanksgivingy nature of this post.  But I am thankful for free speech and open minds.  Thanks for reading.

Monday, October 27, 2014

Just kidding...more research update!

Seriously?  Just two days ago I did my research update, ending with a note that the December 2014 issue of the International Journal of Selection and Assessment should be out soon.

Guess what?  It came out today.

So that means--you guessed it--another research update!  :)

- First, a test of Spearman's hypothesis, which states that the magnitude of White-Black mean differences on tests of cognitive ability varies with the test's g loading.  Using a large sample of GATB test-takers, these authors found support for Spearman's hypothesis, and found that reducing g saturation lowered validity and increased prediction errors.

So does that mean practitioners have to choose between high-validity tests of ability or increasing the diversity of their candidate pool?  Not so fast.  Remember...there are other options.
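
For the curious, the classic way to test Spearman's hypothesis is Jensen's "method of correlated vectors": correlate each subtest's g loading with the size of the group difference on that subtest.  A minimal sketch with invented numbers (the study may well have used additional analyses):

```python
# Sketch of the method of correlated vectors. The g loadings and the
# standardized mean differences (Cohen's d) below are made up for
# illustration -- they are NOT values from the GATB study.
import numpy as np

# Hypothetical g loadings for five subtests (from a factor analysis)
g_loadings = np.array([0.82, 0.75, 0.61, 0.55, 0.43])

# Hypothetical subgroup mean differences (d) on those same subtests
d_values = np.array([0.95, 0.88, 0.70, 0.62, 0.51])

# Spearman's hypothesis predicts a strong positive correlation:
# the more g-saturated a subtest, the larger the group difference
r = np.corrcoef(g_loadings, d_values)[0, 1]
print(f"Correlation between g loadings and d values: r = {r:.2f}")
```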

- Next, international (Croatian) support for the Conditional Reasoning Test of Aggression, which can be used to predict counterproductive work behaviors.  I can see this increasingly being something employers are interested in.

- Applicants that do well on tests have favorable impressions of them, while those that do poorly don't like them.  Right?  Not necessarily.  These researchers found that above and beyond how people actually did on a test, certain individual differences predict applicant reactions, and suggest these be taken into account when designing assessments.

- Although personality testing continues to be one of the most popular topics, concerns remain about applicants "faking" their responses (i.e., trying to game the test by responding inaccurately in hopes of increasing their chances of obtaining the job).  This study investigates the use of blatant extreme responding--consistently selecting the highest or lowest response option--to detect faking, and looks at how this behavior correlates with cognitive ability, other measures of faking, and demographic factors (level of job, race, and gender).
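
To make "blatant extreme responding" concrete, here's a rough sketch of how such a flag might work on a 5-point inventory--the 90% cutoff is an arbitrary choice for illustration, not a threshold from the study:

```python
# A rough sketch of flagging "blatant extreme responding" on a
# 5-point personality inventory. The 90% cutoff is an arbitrary
# choice for illustration, not a threshold from the study.
import numpy as np

def extreme_response_rate(responses, low=1, high=5):
    """Proportion of items answered at the lowest or highest option."""
    responses = np.asarray(responses)
    return np.mean((responses == low) | (responses == high))

applicant_answers = [5, 5, 1, 5, 5, 5, 1, 5, 4, 5]  # hypothetical
rate = extreme_response_rate(applicant_answers)
if rate >= 0.90:
    print(f"Flag for review: {rate:.0%} extreme responses")
```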

- Next, a study of assessment center practices in Indonesia.

- Do individuals high in neuroticism have higher or lower job performance?  Many would guess lower performance, but according to this research, the impact of neuroticism on job performance is moderated by job characteristics.  This supports the more nuanced view that the relationship between personality traits and performance is in many cases non-linear and depends on how performance is conceptualized.

- ...which leads oh so nicely into the next article!  In it, the authors studied air traffic controllers and found results consistent with previous studies--ability primarily predicted task performance while personality better predicted citizenship behavior.  Which raises an interesting question: which version of "performance" are you interested in?  My guess is for many employers the answer is both--which suggests of course using multiple methods when assessing candidates.

- Last but not least, an important look at using cognitive ability and personality to predict job performance in three studies of Chilean organizations.  Results were consistent with studies conducted elsewhere: ability and personality significantly predicted performance.

Okay, I think that's it for now!

Saturday, October 25, 2014

Research update

Okay, so it's been a couple months, huh?  Well, what say we do a research update then.

But before I dive in, I discovered something interesting and important.  Longtime readers know that one of my biggest pet peeves is how difficult research articles are to get a hold of.  And by difficult I mean expensive.  Historically, unless you were affiliated with a research institution or were a subscriber, you had to pay exorbitant (IMHO) fees to see research articles.  So imagine my pleasure when I discovered that at least one publisher--Wiley, who publishes several of the research journals in this area--now offers read-only access to an article for as little as $6.  Granted, that's only for 48 hours and you can't print it, but hey--that's a heck of a lot better than the $30-40 that historically has been the case!  So kudos.

Moving on.

Let's start with a bang with an article from the Autumn 2014 issue of Personnel Psych.  A few years back several researchers argued that the assumption that performance is normally distributed was incorrect, and it got a bit of press.  Not so fast, say newer researchers, who show that when performance is defined properly, it is in fact more normally distributed.

For those of you wondering "why do I care?": whether we believe performance is normally distributed significantly impacts not only the statistical analyses performed on selection mechanisms but also the theories and practices surrounding HRM.
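
To see why the shape matters, here's a toy simulation (all parameters invented) comparing how much of total output the top 5% of performers account for under a normal model versus a heavy-tailed (Pareto) one:

```python
# Illustrative simulation: share of total output produced by the top
# 5% of performers under a normal vs. a heavy-tailed (Pareto) model.
# All parameters are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

normal_perf = rng.normal(loc=100, scale=15, size=n)  # bell curve
pareto_perf = (rng.pareto(a=2.0, size=n) + 1) * 50   # heavy tail

def top_share(perf, pct=0.05):
    """Share of total output produced by the top `pct` of performers."""
    cutoff = np.quantile(perf, 1 - pct)
    return perf[perf >= cutoff].sum() / perf.sum()

print(f"Top 5% share, normal: {top_share(normal_perf):.1%}")
print(f"Top 5% share, Pareto: {top_share(pareto_perf):.1%}")
```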


Moving to the July issue of the Journal of Applied Psychology:

- If you're going to use a cognitively-loaded selection mechanism (which in many cases has some of the highest predictive validity), be prepared to accept high levels of adverse impact.  Right?  Not so fast, say these researchers, who show that by weighting the subtests, you can increase diversity in selection decisions without sacrificing validity.

- Here's another good one.  As you probably know, the personality trait of conscientiousness has shown value in predicting performance in certain occupations.  Many believe that conscientiousness may in fact have a curvilinear relationship with performance (meaning after a certain point, more conscientiousness may not help)--but this theory has not been consistently supported.  According to these researchers, this may have to do with the assumption that higher scores equal more conscientiousness.  In fact, when using an "ideal point" model, results were incredibly consistent in terms of supporting the curvilinear relationship between conscientiousness and performance.

- Range restriction is a common problem in applied selection research: we typically have performance data only on a subset of the test-takers (those hired), which requires us to draw inferences.  A few years back, Hunter, Schmidt, and Le proposed a new correction for range restriction that requires less information.  But is it in fact superior?  According to this research, the general answer appears to be: yes.
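
For readers newer to the concept, here's a sketch of the classic direct (Thorndike Case II) correction, just to make the idea concrete--note that the Hunter-Schmidt-Le proposal concerns indirect range restriction and uses different inputs:

```python
# Sketch of the classic direct (Thorndike Case II) correction for
# range restriction. This is the textbook version, shown only to make
# the concept concrete; it is NOT the Hunter-Schmidt-Le correction,
# which addresses indirect restriction.
import math

def correct_direct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Estimate the unrestricted validity from the restricted correlation."""
    u = sd_unrestricted / sd_restricted  # how much the variance was cut
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# Hypothetical numbers: observed validity of .30 among hires, with the
# applicant pool's test SD being 1.5x the SD among those hired
print(f"{correct_direct_range_restriction(0.30, 1.5, 1.0):.3f}")  # ~0.427
```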


Let's move to the September issue of JAP:

- Within-person variability in performance is important, both conceptually and practically.  Historically, short-term and long-term performance variance have been treated separately, but these researchers show the advantage of integrating the two.

- Next, a fascinating study of the choice of (and persistence in) STEM fields as a career, the importance of both interest and ability, and how gender plays an important role.  In a nutshell, as I understand it, interest and ability seem to play a more important role in predicting STEM career choices for men than for women, whereas ability is more important for women's persistence in STEM careers.


Let's take a look at a couple from recent issues of Personnel Review:

- From volume 43(5), these researchers found support for ethics-based hiring decisions resulting in improved work attitudes, including organizational commitment.

- From 43(6), an expanded conceptual model of how hiring supervisors perceive "overqualification", which I would love to see more research on.


Last but not least, for you stats folks, what's new from PARE?

- What happens when you have missing data on multiple variables?

- Equivalence testing: samples matter!

- What sample size is needed when using regression models?  Here's one suggestion on how to figure it out.
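
For context, the standard way to attack this question runs through Cohen's f-squared effect size and the noncentral F distribution; the article's specific proposal may differ.  A minimal sketch:

```python
# Sketch of the standard power-analysis approach to the regression
# sample-size question (Cohen's f-squared plus the noncentral F
# distribution). Illustrates the general method, not necessarily the
# specific procedure proposed in the PARE article.
from scipy.stats import f as f_dist, ncf

def regression_power(r_squared, n_predictors, n, alpha=0.05):
    """Power of the overall F test for a multiple regression."""
    f2 = r_squared / (1 - r_squared)  # Cohen's effect size
    df1, df2 = n_predictors, n - n_predictors - 1
    ncp = f2 * n                      # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)

# Smallest N giving ~80% power to detect R^2 = .10 with 3 predictors
n = 10
while regression_power(0.10, 3, n) < 0.80:
    n += 1
print(n)  # roughly 100 under these assumptions
```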


The December 2014 issue of IJSA should be out relatively soon, so look for a post on that soon!



Wednesday, August 06, 2014

Research update

I can't believe it's been three months since a research update.  I was waiting until I reached critical mass, and with the release of the September issue of IJSA, I think I've hit it.

So let's start there:

- Experimenting with using different rating scales on SJTs (with "best and worst" response format doing the best of the traditional scales)

- Aspects of a semi-structured interview added incremental validity over cognitive ability in predicting training performance

- Studying the use of preselection methods (e.g., work experience) prior to assessment centers in German companies

- The proposed general factor of personality may be useful in selection contexts (this one was a military setting)

- Evidence that effective leaders show creativity and political skill

- Investigating the relationship (using survey data) between personality facets and CWBs (with emotional stability playing a key role)

- Corrections for indirect range restriction boosted the upper end of structured interview validity substantially

- A method of increasing the precision of simulations that analyze group mean differences and adverse impact (a worked example of the basic adverse impact computation follows this list)

- A very useful study that looked at the prediction of voluntary turnover as well as performance using biodata and other applicant information, including recruitment source, among a sample of call center applicants.  Results?  Individuals who had previously applied, chose to submit additional information, were employed, or were referrals had significantly less voluntary turnover.
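
As promised above, here's the basic adverse impact computation such simulations revolve around: the "four-fifths rule" impact ratio from the Uniform Guidelines, with made-up selection numbers:

```python
# Minimal sketch of the adverse impact ("four-fifths rule")
# computation from the Uniform Guidelines, using made-up numbers.
def adverse_impact_ratio(selected_a, applicants_a, selected_b, applicants_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = adverse_impact_ratio(selected_a=30, applicants_a=100,   # 30% rate
                             selected_b=18, applicants_b=100)   # 18% rate
print(f"Impact ratio: {ratio:.2f}")  # 0.60
print("Four-fifths rule flagged" if ratio < 0.80 else "OK")
```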



Moving on...let's check out the May issue of JAP; there are only two articles, but both are worth looking at:

- First, a fascinating study of the firm-level impact of effective staffing and training, suggesting that the former allows organizations greater flexibility and adaptability (e.g., to changing financial conditions).

- Second, another study of SJT response formats.  The researchers found, using a very large sample, the "rate" format (e.g., "rate each of the following options in terms of effectiveness") to be superior in terms of validity, reliability, and group differences.


Next, the July issue of JOB, which is devoted to leadership:

- You might want to check out this overview/critique of the various leadership theories.

- This study suggests that newer models proposing morality as an important component of leadership success have methodological flaws.

- Last, a study of why Whites oppose affirmative action programs


Let's move to the September issue of Industrial and Organizational Psychology:

- The first focal article discusses the increasing movement of I/O psychology to business schools.  The authors found evidence that this is due in large part to some of the most active and influential I/O researchers moving to business schools.

- The second is about stereotype threat--specifically its importance as a psychological construct and the paucity of applied research about it.


Coming into the home stretch, the Summer issue of Personnel Psych:

- The distribution of individual performance may not be normal if, as these researchers suggest, "star performers" have emerged

- Executives with high levels of conscientiousness and who display transformational leadership behavior may directly contribute to organizational performance


Rounding out my review, check out a few recent articles from PARE:

- I'm not even gonna attempt to summarize this, so here's the title: Multiple-Group confirmatory factor analysis in R – A tutorial in measurement invariance with continuous and ordinal indicators

- Improving exploratory factor analysis for ordinal data

- Improving multidimensional adaptive testing


Last but not least, it's not related to recruitment or assessment, but check out this study that found productivity increases during bad weather :)

That's all folks!

Sunday, June 22, 2014

Are job ads a relic?


Lately at my day job we've been working with a few customers on replacing their traditional "one-shot" job ads with continuously posted career opportunities.  Why?

- It helps capture qualified candidates regardless of where they are in the search process (i.e., it helps solve the "I didn't see the ad" problem).

- It gives hiring supervisors a persistent, fresh pool of applicants that they can immediately draw from.

- It saves a ton of time that gets wasted in the traditional model due to requesting to fill a position, tailoring the duty statement, determining the assessment strategy, etc.

- It changes the focus--importantly and appropriately--from filling a single position to career opportunities.

- It presents an opportunity to critically review the way we advertise our jobs, which too often are boring and uninspired.

- With the appropriate technology, it can create another community of minds; for businesses this means customers, for the public sector it means solution generators.

- With the appropriate technology, connections can be potentially tapped to increase reach.

Apparently we're not alone in going down this road.  As this article describes, online retailer Zappos has created Zappos Insider, with the goal being to create more of a talent community than a one-time transactional relationship.  This move toward "candidate relationship management" is not new but seems to be gaining steam, which is also reflected in HR technology as vendors build this approach into their products.


So what are some challenges associated with the model?

- Without specific application dates, it becomes more critical that applicants can determine their status at any time.

- It may dissuade applicants who are actively seeking work and who may see this model as too slow.

- It requires significant up-front work to design and determine the administration (but pays dividends on the back-end).

- Hiring supervisors may be skeptical of the change.


Here are some related issues that moving to this model doesn't automatically solve:

- Engaging in a timely manner with candidates so they understand the status of their application/interest.

- Communicating effectively with those not selected.

- Giving applicants a real person to contact if they have questions (Zappos makes these contacts very clear).

- Creating attractive yet realistic descriptions of positions in the organization.

- Focusing on the KSAOs that are most strongly linked to job performance.

- Developing an assessment strategy that most effectively measures those KSAOs.


My dream is a free worldwide talent pool that matches high-quality candidate assessment with realistic job profiles--that's how I'd replace the current extremely wasteful job-matching process.  Until that exists, approaches like this may have the best shot at streamlining and updating a process that too often holds applicants and organizations back rather than helping them achieve their goals.

Saturday, May 10, 2014

WRIPAC's 35th Anniversary Meeting: Something old, something new, something borrowed, something to do


Over the last two days I had the pleasure of attending the Western Region Intergovernmental Personnel Assessment Council's (WRIPAC) 35th anniversary meeting in San Francisco.  I thought I would share with you some of my observations, as it's been a while, unfortunately, since I attended one of their events.

For the uninitiated, WRIPAC was founded in 1979 and is one of several regional associations in the United States devoted to personnel assessment and selection in the public sector.  Other examples include PTC-NC, PTC-SC, PTC/MW, and IPAC.  I have been involved with several of these organizations over the years and they provide a tremendous benefit to members and attendees, from networking to best practice to the latest research.

WRIPAC serves public agencies in Arizona, California, Nevada, and Oregon.  They have a somewhat unique membership model in that there is no membership fee and the meetings are free, but in order to become a member you have to attend two consecutive meetings.  They maintain their energy in large part through a commitment to their committees and providing excellent training opportunities.

So, about the 35th anniversary WRIPAC meeting, hosted by the City and County of San Francisco:

Something old:  One of my observations over the day-and-a-half meeting was the remarkable number of issues that seem to be the same, year after year.  For example:

* how to effectively screen large numbers of applicants
* how to handle increased workload with a reduced number of HR staff
* how to convince supervisors and managers to use valid assessment methods
* which assessment vendors provide the best products
* how to design selection systems that treat candidates fairly but don't unduly burden the agency

We also were treated to a retrospective by three previous WRIPAC presidents, which highlighted some really stark ways the workforce and the assessment environment have changed over the years.  This included everything from sexual harassment in the workplace being commonplace to agencies trying to figure out how to comply with the new (at the time) Uniform Guidelines.

Something new:  Dr. Scott Highhouse from Bowling Green State University presented results from several fascinating studies.

The first looked at "puzzle interview questions"--those bizarre questions like "Why are manhole covers round?" made famous by companies like Microsoft and Google.  Using a sample of participants from Amazon's Mechanical Turk (the first time I've heard of this being done for a psychological study), he was particularly interested in whether individual "dark side" traits such as Machiavellianism and narcissism might explain why certain people prefer these types of questions.

Results?  First, men were more likely to endorse these types of items.  Why?  Well it may have to do with the second finding that respondents higher in narcissism and sadism were more likely to endorse these types of questions.  From what I can tell from my massive (ten minute) search of the internet, men are more likely to display both narcissism and sadism.  Maybe more importantly, the common denominator seemed to be callousness, as in being insensitive or cruel toward others.

So what does this mean?  Well, I had two thoughts: first, if you like to ask these types of questions, you might ask yourself WHY (because there's no evidence I know of to support using them).  Second, if you work with supervisors who show high levels of callousness, they might need additional support/nudging to use appropriate interview questions.  I'm looking forward to these results getting published.

The second study Dr. Highhouse described looked at worldviews and whether they impact beliefs about the usefulness of testing--in particular cognitive ability and personality tests.  This line of research basically is trying to discover why people refuse to entertain or use techniques that have proven to work (there's a separate line of study about doctors who refuse to use decision aids--it's all rather disturbing).

Anyway, in this study, also using participants from Mechanical Turk, the researchers found that individuals who had strong beliefs in free will (i.e., people have control of their choices and should take personal responsibility) were more open to using conscientiousness tests, while people with strong beliefs in scientific determinism (i.e., behavior stems from genetics and the environment) were more open to using cognitive ability tests.  This adds valuable insight into why certain supervisors may be resistant to using assessment methods with a proven track record--they're not simply being illogical; their resistance may be rooted in fundamental beliefs about human behavior.

The last study he talked about looked at whether presenting people with evidence of test validity would change their related worldviews.  You wouldn't expect strong effects, but they did find a change.  Implication?  More support for educating test users about the strengths and weaknesses of various assessments--something we do routinely, and for good reason!

Last but not least, Dr. Highhouse introduced a journal that will hopefully be coming out next year, titled Journal of Personnel Assessment and Decisions.  It will be sponsored by IPAC and Bowling Green State University and, aside from the excellent subject matter, will have two key features: it will be free and open to everyone.  I'm very excited about this; long-time readers will know I've railed for years about how difficult it is for people to access high-quality research.

Something borrowed:  one of the big benefits of being involved with professional organizations--particularly ones like WRIPAC that are practitioner-focused--is that they give you access to others facing similar challenges who have come up with great solutions.  There was a lot of information shared at the roundtable about a wide variety of topics, including:

- reasonable accommodation during the testing process (e.g., armless chairs for obese individuals)
- how to use automated pre-screening to narrow candidate pools
- how agencies will adjust to the new requirements in California related to criminal background information (i.e., "ban the box" and other similar legislation)
- how to efficiently assess out-of-state applicants (e.g., video interviewing, remote proctors)
- how and when to verify a driver's license if required for the job
- how to effectively use 360-degree evaluations
- how MQs (the subject of June's free PTC-NC meeting) should best be placed in the selection process
- cloud vs. on-premise IT solutions
- centralization vs. decentralization of HR functions
- use of item banks (e.g., WRIB, CODESP)

In addition, an excellent afternoon-long session was devoted to succession and leadership planning, featuring three speakers describing the outstanding programs they've managed to administer at their agencies.  I took a ton of information away from these, and it came at exactly the right time, as we're looking at implementing these exact same programs.

Something to do:  One of the main things I took away from this conference is how important it is to maintain your participation in professional associations.  It's so easy to get sucked into your daily fires and forget how valuable it is, both personally and professionally, to meet with others and tackle our shared challenges.  I plan on sharing what I learned back in the office and upping my expectation that as HR professionals we need to be active in our professional community.  I encourage you to do the same!

Sunday, April 27, 2014

Mobile assessment comes of age + research update

The idea of administering employment tests on mobile devices is not new.  But serious research into it is in its infancy.  This is to be expected for at least two reasons: (1) historically it has taken a while for new technologies to generate enough data to analyze (although this is changing), and (2) it takes a while for researchers to get through the arcane process of publishing (this, to my knowledge, isn't changing, but please prove me wrong).

Readers interested in the topic have benefited from articles elsewhere, but we're finally at a point where good research is being published on this topic.  Case in point: the June issue of the International Journal of Selection and Assessment.

The first article on this topic in this issue, by Arthur, Doverspike, Munoz, Taylor, & Carr, studied data from over 3.5 million applicants who completed unproctored internet-based tests (UIT) over a 14-month period.  And while the percentage that completed them on mobile devices was small (2%), it still yielded data on nearly 70,000 applicants.

Results?  Some in line with research you may have seen before, but some may surprise you:

- Mobile devices were (slightly) more likely to be used by women, African-Americans and Hispanics, and younger applicants.  (Think about that for a minute!)

- Scores on a personality inventory were similar across platforms.

- Scores on a cognitive ability test were lower for those using mobile devices.  Without access to the entire article I can only speculate on the reasons proffered, but it's interesting to think about whether this is a reflection of the applicants or the platform.

- Tests of measurement invariance found equivalence across platforms (which basically means the same thing(s) appeared to be measured).

So overall, I think this is promising for including a mobile component when using UITs.


The next article, by Morelli, Mahan, and Illingworth, also looked at the measurement invariance of mobile versus non-mobile (i.e., PC-delivered) internet-based tests, with respect to four types of assessment: cognitive ability, biodata, a multimedia work simulation, and a text-based situational judgment test.  Data were gathered from nearly 600,000 test-takers in the hospitality industry who were applying for maintenance and customer-facing jobs in 2011 and 2012 (note the different job types).  Nearly 25,000 of these applicants took the assessment on mobile devices.

Results?  The two types of administration appeared to be equivalent in terms of what they were measuring.  However, interestingly, mobile test-takers did worse on the SJT portion.  The authors reasonably hypothesize this may be due to the nature of the SJT and the amount of attention it may have required compared to the other test types.  (By the way, this article appears to be based on Morelli's dissertation, which can be found here--it's a treasure trove of information on the topic.)

Again, overall these are promising results for establishing the measurement equivalence of mobile assessments.  What does this all mean?  It suggests that unproctored tests delivered using mobile devices are measuring the same things as tests delivered using more traditional internet-based methods.  It also looks like fakability or inflation may be a non-issue (compared to traditional UIT).  This preliminary research means researchers and practitioners should be more confident that mobile assessments can be used meaningfully.

I agree with others that this is only the beginning.  In our mobile and app-reliant world, we're only scratching the surface not only in terms of research but in terms of what can be done to measure competencies in new--and frankly more interesting--ways.  Not to mention all the interesting (and important) associated research questions:

- Do natively developed apps differ in measurement properties--and potential--compared to more traditional assessments simply delivered over mobile?

- How does assessment delivery model interact with job type?  (e.g., may be more appropriate for some, may be better than traditional methods for others)

- What competencies should test developers be looking for when hiring?  (e.g., should they be hiring game developers?)

- What do popular apps, such as Facebook (usage) and Candy Crush (score), measure--if anything?

- Oh, and how about: does mobile assessment impact criterion-related validity?


Lest you think I've forgotten the rest of this excellent issue...

- MacIver, et al. introduce the concept of user validity, which uses test-taker perceptions to focus on ways we can improve assessments, score interpretation, and the provision of test feedback.

- Bing, et al. provide more evidence that contextualizing personality inventory items (i.e., wording the items so they more closely match the purpose/situation) improves the prediction of job performance--beyond noncontextual measures of the same traits.

- On the other hand, Holtrop, et al. take things a step further and look at different methods of contextualization.  Interestingly, this study of 139 pharmacy assistants found a decrease in validity compared to a "generic" personality inventory!

- This study by Ioannis Nikolaou in Greece of social networking websites (SNWs) found that job seekers still use job boards more than SNWs, that SNWs may be particularly effective for passive candidates (!), and that HR professionals find LinkedIn to be more effective than Facebook.

- An important study of applicant withdrawal behavior by Brock Baskin, et al., that found withdrawal tied primarily to obstructions (e.g., distance to test facility) rather than minority differences in perception.

- A study of Black-White differences on a measure of emotional intelligence by Whitman, et al., that found (N=334) Blacks had higher face validity perceptions of the measure, but Whites performed significantly better.

- Last, a study by Vecchione that compared the fakability of implicit personality measures to explicit personality measures.  Implicit measures are somewhat "hidden" in that they measure attitudes or characteristics using perceptual speed or other tools to discover your typical thought patterns; you may be familiar with Project Implicit, which has gotten some media coverage.  Explicit measures are, as the name implies, more obvious items--in this case, about personality traits.  In this study of a relatively small number of security guards and semiskilled workers, the researchers found the implicit measure to be superior in terms of resistance to faking.  (I wonder how the test-takers felt?)


That's it for this excellent issue of IJSA, but in the last few months we also got some more great research care of the March issue of the Journal of Applied Psychology:

- An important (but small N) within-subjects study by Judge, et al. of the stability of personality at work.  They found that while traits exhibited stability across time, there were also deviations that were explained by work experiences such as interpersonal conflict, which has interesting implications for work behavior as well as measurement.  In addition, the authors found that individuals high in neuroticism exhibited more variation in traits over time compared to those who were more emotionally stable.  You can find an in press version here; it's worth a read, particularly the section beginning on page 47 on practical implications.

- Smith-Crowe, et al. present a set of guidelines for researchers and practitioners looking to draw conclusions from tests of interrater agreement that may assume conditions that are rarely true.

- Another interesting one: Wille & De Fruyt investigate the reciprocal relationship between personality and work.  The researchers found that while personality shapes occupational experiences, the relationship works in both directions and work can become an important source of identity.

- Here's one for you assessment center fans: this study by Speer, et al. adds to the picture through findings that ratings taken from exercises with dissimilar demands actually had higher criterion-related validity than ratings taken from similar exercises!

- Last but not least, presenting research findings in a way that is understandable to non-researchers poses an ongoing--and important--challenge.  Brooks et al. present results of their study, which found that non-traditional effect size indicators (e.g., a common language effect size indicator) were perceived as more understandable and useful when communicating the results of an intervention.  Those of you who have trained or consulted for any length of time know how important it is to turn correlations into dollars or time (or both)!
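
If you haven't run into it, the common language effect size (McGraw & Wong's version) converts Cohen's d into the probability that a randomly chosen person from one group outscores a randomly chosen person from the other, assuming normal distributions.  A quick sketch with an invented d:

```python
# Sketch of the common language effect size (McGraw & Wong, 1992):
# the probability that a randomly chosen person from one group
# outscores a randomly chosen person from the other, assuming
# normality. The d value below is made up for illustration.
from math import sqrt
from scipy.stats import norm

def common_language_es(d):
    """Convert Cohen's d to P(X > Y) for two normal distributions."""
    return norm.cdf(d / sqrt(2))

d = 0.50  # hypothetical intervention effect
print(f"d = {d} means a randomly chosen trained employee outperforms "
      f"a randomly chosen untrained one {common_language_es(d):.0%} of the time")
```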

That's it for now!