Wednesday, August 06, 2014

Research update

I can't believe it's been three months since a research update.  I was waiting until I got critical mass, and with the release of the September issues of IJSA, I think I've hit it.

So let's start there:

- Experimenting with different rating scales on SJTs (with the "best and worst" response format performing best among the traditional scales)

- Aspects of a semi-structured interview added incremental validity over cognitive ability in predicting training performance

- Studying the use of preselection methods (e.g., work experience) prior to assessment centers in German companies

- The proposed general factor of personality may be useful in selection contexts (this one was a military setting)

- Evidence that effective leaders show creativity and political skill

- Investigating the relationship (using survey data) between personality facets and CWBs (with emotional stability playing a key role)

- Corrections for indirect range restriction boosted the upper end of structured interview validity substantially

- A method of increasing the precision of simulations that analyze group mean differences and adverse impact

- A very useful study that looked at the prediction of voluntary turnover as well as performance using biodata and other applicant information, including recruitment source, among a sample of call center applicants.  Results?  Individuals who had previously applied, chose to submit additional information, were employed, or were referrals had significantly less voluntary turnover.
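
The range restriction item above is worth lingering on.  The published study used corrections for indirect restriction (which require additional information about the selection process), but the mechanics are easiest to see with the classic direct correction (Thorndike's Case II).  A hedged sketch with made-up numbers:

```python
import math

def correct_direct_range_restriction(r_obs: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r_obs: validity observed in the restricted (selected) sample
    u:     ratio of unrestricted to restricted predictor SDs (u > 1
           when the hired group is more homogeneous than applicants)
    """
    return (r_obs * u) / math.sqrt(1 + r_obs**2 * (u**2 - 1))

# An observed interview validity of .30, in a selected sample where the
# applicant SD was twice the incumbent SD, corrects to roughly .53.
print(round(correct_direct_range_restriction(0.30, 2.0), 2))
```

Even this simpler correction shows how a modest observed validity can climb substantially once you account for the narrowed range of the people you actually hired.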

Moving on...let's check out the May issue of JAP; there are only two articles but both worth looking at:

- First, a fascinating study of the firm-level impact of effective staffing and training, suggesting that the former allows organizations greater flexibility and adaptability (e.g., to changing financial conditions).

- Second, another study of SJT response formats.  The researchers found, using a very large sample, the "rate" format (e.g., "rate each of the following options in terms of effectiveness") to be superior in terms of validity, reliability, and group differences.

Next, the July issue of JOB, which is devoted to leadership:

- You might want to check out this overview/critique of the various leadership theories.

- This study suggests that newer models proposing morality as an important component of leadership success have methodological flaws.

- Last, a study of why Whites oppose affirmative action programs

Let's move to the September issue of Industrial and Organizational Psychology:

- The first focal article discusses the increasing movement of I/O psychology to business schools.  The authors found evidence that this is due in large part to some of the most active and influential I/O researchers moving to business schools.

- The second is about stereotype threat--specifically its importance as a psychological construct and the paucity of applied research about it.

Coming into the home stretch, the Summer issue of Personnel Psych:

- The distribution of individual performance may not be normal if, as these researchers suggest, "star performers" have emerged

- Executives with high levels of conscientiousness and who display transformational leadership behavior may directly contribute to organizational performance
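
The "star performer" finding lends itself to a quick simulation.  This is a hedged sketch using illustrative distributions of my own choosing (not the ones the researchers fit): it contrasts how much of total output the top 10% of performers produce under a normal model versus a heavy-tailed (Pareto) model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Normal model: performance clusters tightly around the mean.
normal_perf = np.clip(rng.normal(loc=100, scale=15, size=n), 0, None)

# Heavy-tailed model: a few "stars" account for outsized output.
pareto_perf = 100 * (1 + rng.pareto(a=1.5, size=n))

def top_decile_share(perf: np.ndarray) -> float:
    """Fraction of total output produced by the top 10% of performers."""
    cutoff = np.quantile(perf, 0.90)
    return perf[perf >= cutoff].sum() / perf.sum()

print(f"normal: {top_decile_share(normal_perf):.2f}")  # ~0.13
print(f"pareto: {top_decile_share(pareto_perf):.2f}")
```

If performance really is heavy-tailed in some jobs, practices calibrated to a normal curve (forced rankings, average-based pay bands) will systematically misvalue the stars.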

Rounding out my review, check out a few recent articles from PARE:

- I'm not even gonna attempt to summarize this, so here's the title: Multiple-Group confirmatory factor analysis in R – A tutorial in measurement invariance with continuous and ordinal indicators

- Improving exploratory factor analysis for ordinal data

- Improving multidimensional adaptive testing

Last but not least, it's not related to recruitment or assessment, but check out this study that found productivity increases during bad weather :)

That's all folks!

Sunday, June 22, 2014

Are job ads a relic?

Lately at my day job we've been working with a few customers on replacing their traditional "one-shot" job ads with continuously posted career opportunities.  Why?

- It helps capture qualified candidates regardless of where they are in the search process (i.e., it helps solve the "I didn't see the ad" problem).

- It gives hiring supervisors a persistent, fresh pool of applicants that they can immediately draw from.

- It saves a ton of time that the traditional model wastes on requesting to fill a position, tailoring the duty statement, determining the assessment strategy, etc.

- It changes the focus--importantly and appropriately--from filling a single position to career opportunities.

- It presents an opportunity to critically review the way we advertise our jobs, which too often are boring and uninspired.

- With the appropriate technology, it can create another community of minds; for businesses this means customers, for public sector, it means solution generators.

- With the appropriate technology, connections can be potentially tapped to increase reach.

Apparently we're not alone in going down this road.  As this article describes, online retailer Zappos has created Zappos Insider, with the goal being to create more of a talent community than a one-time transactional relationship.  This move toward "candidate relationship management" is not new but seems to be gaining steam, which is also reflected in HR technology as vendors build this approach into their products.

So what are some challenges associated with the model?

- Without specific application dates, it becomes more critical that applicants can determine their status at any time.

- It may dissuade applicants who are actively seeking work, who may see this model as too slow.

- It requires significant up-front work to design and determine the administration (but pays dividends on the back-end).

- Hiring supervisors may be skeptical of the change.

Here are some related issues that moving to this model doesn't automatically solve:

- Engaging in a timely manner with candidates so they understand the status of their application/interest.

- Communicating effectively with those not selected.

- Giving applicants a real person to contact if they have questions (Zappos makes these contacts very clear).

- Creating attractive yet realistic descriptions of positions in the organization.

- Focusing on the KSAOs that are most strongly linked to job performance.

- Developing an assessment strategy that most effectively measures those KSAOs.

Until there is a free worldwide talent pool that matches high-quality candidate assessment with realistic job profiles (yes, that's my dream replacement for our current, extremely wasteful job-matching process), models like this may have the best shot at streamlining and updating a process that is holding back both applicants and organizations rather than helping them achieve their goals.

Saturday, May 10, 2014

WRIPAC's 35th Anniversary Meeting: Something old, something new, something borrowed, something to do

Over the last two days I had the pleasure of attending the Western Region Intergovernmental Personnel Assessment Council's (WRIPAC) 35th anniversary meeting in San Francisco.  I thought I would share with you some of my observations, as it's been a while, unfortunately, since I attended one of their events.

For the uninitiated, WRIPAC was founded in 1979 and is one of several regional associations in the United States devoted to personnel assessment and selection in the public sector.  Other examples include PTC-NC, PTC-SC, PTC/MW, and IPAC.  I have been involved with several of these organizations over the years and they provide a tremendous benefit to members and attendees, from networking to best practice to the latest research.

WRIPAC serves public agencies in Arizona, California, Nevada, and Oregon.  They have a somewhat unique membership model in that there is no membership fee and the meetings are free, but in order to become a member you have to attend two consecutive meetings.  They maintain their energy in large part through a commitment to their committees and providing excellent training opportunities.

So, about the 35th anniversary WRIPAC meeting, hosted by the City and County of San Francisco:

Something old:  One of my observations over the day-and-a-half meeting was the remarkable number of issues that seem to be the same, year after year.  For example:

* how to effectively screen large numbers of applicants
* how to handle increased workload with a reduced number of HR staff
* how to convince supervisors and managers to use valid assessment methods
* which assessment vendors provide the best products
* how to design selection systems that treat candidates fairly but don't unduly burden the agency

We also were treated to a retrospective by three previous WRIPAC presidents, which highlighted some really stark ways the workforce and the assessment environment have changed over the years.  This included everything from sexual harassment in the workplace being commonplace to agencies trying to figure out how to comply with the new (at the time) Uniform Guidelines.

Something new:  Dr. Scott Highhouse from Bowling Green State University presented results from several fascinating studies.

The first looked at "puzzle interview questions"--those bizarre questions like "Why are manhole covers round?" made famous by companies like Microsoft and Google.  Using a sample of participants from Amazon's Mechanical Turk (the first time I've heard of this being done for a psychological study), he was particularly interested in whether "dark side" traits such as Machiavellianism and narcissism might explain why certain people prefer these types of questions.

Results?  First, men were more likely to endorse these types of items.  Why?  Well it may have to do with the second finding that respondents higher in narcissism and sadism were more likely to endorse these types of questions.  From what I can tell from my massive (ten minute) search of the internet, men are more likely to display both narcissism and sadism.  Maybe more importantly, the common denominator seemed to be callousness, as in being insensitive or cruel toward others.

So what does this mean?  Well, I had two thoughts.  First, if you like to ask these types of questions, you might ask yourself WHY (because there's no evidence I know of to support using them).  Second, if you work with supervisors who show high levels of callousness, they might need additional support/nudging to use appropriate interview questions.  Looking forward to these results getting published.

The second study Dr. Highhouse described looked at worldviews and whether they impact beliefs about the usefulness of testing--in particular cognitive ability and personality tests.  This line of research basically is trying to discover why people refuse to entertain or use techniques that have proven to work (there's a separate line of study about doctors who refuse to use decision aids--it's all rather disturbing).

Anyway, in this study, also using participants from Mechanical Turk, the researchers found that individuals with strong beliefs in free will (i.e., people have control of their choices and should take personal responsibility) were more open to using conscientiousness tests, while people with strong beliefs in scientific determinism (i.e., behavior stems from genetics and the environment) were more open to using cognitive ability tests.  This adds valuable insight into why certain supervisors may be resistant to using assessment methods with a proven track record--they're not simply being illogical; their resistance may be rooted in fundamental beliefs about human behavior.

The last study he talked about looked at whether presenting people with evidence of test validity would change their related world views.  You wouldn't expect strong effects, but they did find a change.  Implication?  More support for educating test users about the strengths and weaknesses of various assessments--something we do routinely but for good reason!

Last but not least, Dr. Highhouse introduced a journal that will hopefully launch next year: the Journal of Personnel Assessment and Decisions, sponsored by IPAC and Bowling Green State University.  Aside from the excellent subject matter, it will have two key features: it will be free and open to everyone.  I'm very excited about this, and long-time readers will know I've railed for years about how difficult it is for people to access high-quality research.

Something borrowed:  one of the big benefits of being involved with professional organizations--particularly ones like WRIPAC that are practitioner-focused--is it gives you access to others facing similar challenges that have come up with great solutions.  There was a lot of information shared at the roundtable about a wide variety of topics, including:

- reasonable accommodation during the testing process (e.g., armless chairs for obese individuals)
- how to use automated pre-screening to narrow candidate pools
- how agencies will adjust to the new requirements in California related to criminal background information (i.e., "ban the box" and other similar legislation)
- how to efficiently assess out-of-state applicants (e.g., video interviewing, remote proctors)
- how and when to verify a driver's license if required for the job
- how to effectively use 360-degree evaluations
- how MQs (the subject of June's free PTC-NC meeting) should best be placed in the selection process
- cloud vs. on-premise IT solutions
- centralization vs. decentralization of HR functions
- use of item banks (e.g., WRIB, CODESP)

In addition, an excellent session spanning an entire afternoon was devoted to succession and leadership planning, featuring three speakers describing the outstanding programs they've built and administered at their agencies.  I took a ton of information away from these talks, and it came at exactly the right time, as we're looking at implementing these exact same programs.

Something to do:  One of the main things I took away from this conference is how important it is to maintain your participation in professional associations.  It's so easy to get sucked into your daily fires and forget how valuable it is, both personally and professionally, to meet with others and tackle our shared challenges.  I plan on sharing what I learned back in the office and upping my expectation that as HR professionals we need to be active in our professional community.  I encourage you to do the same!

Sunday, April 27, 2014

Mobile assessment comes of age + research update

The idea of administering employment tests on mobile devices is not new.  But serious research into it is in its infancy.  This is to be expected for at least two reasons: (1) historically it has taken a while with new technologies to have enough data to analyze (although this is changing), and (2) it takes a while for researchers to get through the arcane publishing process (this, to my knowledge, isn't changing, but please prove me wrong).

Readers interested in the topic have benefited from articles elsewhere, but we're finally at a point where good research is being published on this topic.  Case in point: the June issue of the International Journal of Selection and Assessment.

The first article on this topic in this issue, by Arthur, Doverspike, Munoz, Taylor, & Carr, studied data from over 3.5 million applicants who completed unproctored internet-based tests (UITs) over a 14-month period.  And while the percentage that completed them on mobile devices was small (2%), it still yielded data on nearly 70,000 applicants.

Results?  Some in line with research you may have seen before, but some may surprise you:

- Mobile devices were (slightly) more likely to be used by women, African-Americans and Hispanics, and younger applicants.  (Think about that for a minute!)

- Scores on a personality inventory were similar across platforms.

- Scores on a cognitive ability test were lower for those using mobile devices.  Without access to the entire article, I can only speculate on proffered reasons, but it's interesting to think about whether this is a reflection of the applicants or the platform.

- Tests of measurement invariance found equivalence across platforms (which basically means the same thing(s) appeared to be measured).
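
To make a platform score gap like the one above concrete, here's a hedged sketch using made-up summary statistics (not Arthur et al.'s actual numbers) of computing a standardized mean difference (Cohen's d) between device types, the usual way such gaps are quantified:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d from summary statistics, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical cognitive-test means: PC takers vs. mobile takers.
d = cohens_d(m1=52.0, s1=10.0, n1=3_400_000,
             m2=48.0, s2=10.0, n2=70_000)
print(f"d = {d:.2f}")  # d = 0.40
```

A d in that neighborhood would be practically meaningful in a selection context, which is why the "applicants or platform?" question matters so much.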

So overall, for organizations using UITs, I think this is promising news for including a mobile component.

The next article, by Morelli, Mahan, and Illingworth, also looked at measurement invariance of mobile versus non-mobile (i.e., PC-delivered) internet-based tests, with respect to four types of assessment: cognitive ability, biodata, a multimedia work simulation, and a text-based situational judgment test.  Data was gathered from nearly 600,000 test-takers in the hospitality industry who were applying for maintenance and customer-facing jobs in 2011 and 2012 (note the different job types).  Nearly 25,000 of these applicants took the assessment on mobile devices.

Results?  The two types of administrations appeared to be equivalent in terms of what they were measuring.  However, interestingly, mobile test-takers did worse on the SJT portion.  The authors reasonably hypothesize this may be due to the nature of the SJT and the amount of attention it may have required compared to the other test types.  (btw this article appears to be based on Morelli's dissertation, which can be found here--it's a treasure trove of information on the topic)

Again, overall these are promising results for establishing the measurement equivalence of mobile assessments.  What does this all mean?  It suggests that unproctored tests delivered using mobile devices are measuring the same things as tests delivered using more traditional internet-based methods.  It also looks like fakability or inflation may be a non-issue (compared to traditional UIT).  This preliminary research means researchers and practitioners should be more confident that mobile assessments can be used meaningfully.

I agree with others that this is only the beginning.  In our mobile and app-reliant world, we're only scratching the surface not only in terms of research but in terms of what can be done to measure competencies in new--and frankly more interesting--ways.  Not to mention all the interesting (and important) associated research questions:

- Do natively developed apps differ in measurement properties--and potential--compared to more traditional assessments simply delivered over mobile?

- How does assessment delivery model interact with job type?  (e.g., may be more appropriate for some, may be better than traditional methods for others)

- What competencies should test developers be looking for when hiring?  (e.g., should they be hiring game developers?)

- What do popular apps, such as Facebook (usage) and Candy Crush (score), measure--if anything?

- Oh, and how about: does mobile assessment impact criterion-related validity?

Lest you think I've forgotten the rest of this excellent issue...

- MacIver, et al. introduce the concept of user validity, which uses test-taker perceptions to focus on ways we can improve assessments, score interpretation, and the provision of test feedback.

- Bing, et al. provide more evidence that contextualizing personality inventory items (i.e., wording the items so they more closely match the purpose/situation) improves the prediction of job performance--beyond noncontextual measures of the same traits.

- On the other hand, Holtrop, et al. take things a step further and look at different methods of contextualization.  Interestingly, this study of 139 pharmacy assistants found a decrease in validity compared to a "generic" personality inventory!

- A study by Ioannis Nikolaou in Greece of social networking websites (SNWs) found that job seekers still use job boards more than SNWs, that SNWs may be particularly effective for passive candidates (!), and that HR professionals rate LinkedIn as more effective than Facebook.

- An important study of applicant withdrawal behavior by Brock Baskin, et al., that found withdrawal tied primarily to obstructions (e.g., distance to test facility) rather than minority differences in perception.

- A study of Black-White differences on a measure of emotional intelligence by Whitman, et al., that found (N=334) Blacks had higher face validity perceptions of the measure, but Whites performed significantly better.

- Last, a study by Vecchione that compared the fakability of implicit personality measures to explicit personality measures.  Implicit measures are somewhat "hidden" in that they measure attitudes or characteristics using perceptual speed or other tools to discover your typical thought patterns; you may be familiar with Project Implicit, which has gotten some media coverage.  Explicit measures are, as the name implies, more obvious items--in this case, about personality aspects.  In this study of a relatively small number of security guards and semiskilled workers, the researchers found the implicit measure to be superior in terms of fakability resistance.  (I wonder how the test-takers felt?)

That's it for this excellent issue of IJSA, but in the last few months we also got some more great research care of the March issue of the Journal of Applied Psychology:

- An important (but small N) within-subjects study by Judge, et al. of the stability of personality at work.  They found that while traits exhibited stability across time, there were also deviations that were explained by work experiences such as interpersonal conflict, which has interesting implications for work behavior as well as measurement.  In addition, the authors found that individuals high in neuroticism exhibited more variation in traits over time compared to those who were more emotionally stable.  You can find an in press version here; it's worth a read, particularly the section beginning on page 47 on practical implications.

- Smith-Crowe, et al. present a set of guidelines for researchers and practitioners looking to draw conclusions from tests of interrater agreement that may assume conditions that are rarely true.

- Another interesting one: Wille & De Fruyt investigate the reciprocal relationship between personality and work.  The researchers found that while personality shapes occupational experiences, the relationship works in both directions and work can become an important source of identity.

- Here's one for you assessment center fans: this study by Speer, et al. adds to the picture through findings that ratings taken from exercises with dissimilar demands actually had higher criterion-related validity than ratings taken from similar exercises!

- Last but not least, presenting research findings in a way that is understandable to non-researchers poses an ongoing--and important--challenge.  Brooks et al. present results of their study that found non-traditional effect size indicators (e.g., a common language effect size indicator) were perceived as more understandable and useful when communicating results of an intervention.  Those of you that have trained or consulted for any length of time know how important it is to turn correlations into dollars or time (or both)!
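
The Brooks et al. point is easy to demo.  A common language (CL) effect size re-expresses Cohen's d as the probability that a randomly chosen person from one group outscores a randomly chosen person from the other, via CL = Φ(d/√2).  A minimal sketch, assuming normal score distributions and an illustrative d of my choosing:

```python
import math

def common_language_es(d: float) -> float:
    """Probability that a random draw from the higher-scoring group
    beats a random draw from the other group, assuming normality."""
    z = d / math.sqrt(2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# "d = 0.50" becomes "a randomly picked trainee beats a randomly
# picked non-trainee about 64% of the time"--far easier to explain.
print(f"{common_language_es(0.50):.0%}")
```

Telling a manager "64 times out of 100" lands in a way that "r = .24" never will.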

That's it for now!

Saturday, March 29, 2014

Facial analysis for selection: An old idea using new technology?

Selecting people based on physical appearance is as old as humankind.  Mates were selected based in part on physical features.  People were hired because they were stronger.

This seems like an odd approach to selection for many jobs today because physical characteristics are largely unrelated to the competencies required to perform the job, although there are exceptions (e.g., firefighters).  But employers have always been motivated to select based on who would succeed (and/or make them money), and many have been interested in the use of gross physical characteristics to help them decide: who's taller, whose head is shaped better (phrenology), etc.  The general name for this topic is physiognomy.

Of course nowadays we have much more sophisticated ways of measuring competencies that are much more related to job success, including things like online simulations of judgment.  But this doesn't mean that people have stopped being interested in physical characteristics and how they might be related to job performance.  This is due, in part I think, to the powerful hold that visual stimuli have on us as well as the importance of things like nonverbal communication.  We may be a lot more advanced in some ways, but parts of our brain are very old.

The interest in judgment based on physical appearance has been heightened by the introduction of new technologies, and perhaps no better example exists than facial feature analysis.  With the advent of facial recognition technology and its widespread adoption in major cities around the globe, in law enforcement, and at large sporting events, a very old idea is once again surfacing: drawing inferences from relatively* stable physical characteristics--specifically, facial features.  In fact this technology is being used for some very interesting applications.  And I'm not sure I want to know what Facebook is planning on doing with this technology.

With all this renewed interest, it was only a matter of time until we circled back to personnel selection, and sure enough a new website called FaceReflect is set to open to the public this year and claims to be able to infer personality traits from facial features, already drawing a spotlight.  But have we made great advances in the last several thousand years or is this just hype?  Let's look deeper.

What we do know is that certain physical characteristics reliably result in judgment differences.  Attractiveness is a great example: we know that individuals considered to be more attractive are judged more positively, and this includes evaluative situations like personnel selection.  It even occurs with avatars instead of real people.  And the opposite is true: for example it has been shown that applicants with facial stigmas are viewed less favorably.

Another related line of research has been around emotional intelligence, with assessments such as the MSCEIT including a component of emotional recognition.

More to the point, there's research suggesting that more fine-tuned facial features such as facial width may be linked to job success in certain circumstances.  Why?  The hypothesis seems to be two-fold: certain genes and biological mechanisms associated with facial features (e.g., testosterone) are associated with other characteristics, such as assertiveness or aggression.  This could mean that men with certain facial features (such as high facial width-to-height ratio) are more likely to exhibit these behaviors, or--and this is a key point--they are perceived that way. (By the way, there is similar research showing that voice pitch is also correlated with company success in certain circumstances)

Back to FaceReflect.  This company claims that by analyzing certain facial features, they can reliably draw inferences about personality characteristics such as generosity, decision making, and confidence.

What seems to be true is that people reliably draw inferences about characteristics based on facial features.  But here's the key question: are these inferences correct?  That's where things start to break down.

The problem is there simply isn't much research showing that judgments about job-relevant characteristics based on facial features are accurate--in fact we have research that at best the accuracy is low, and at worst shows the opposite.  To some extent you could argue this doesn't matter--what matters is whether people are reliably coming to the same conclusion.  But this assumes that what drives performance is purely other peoples' perceptions, and this is obviously missing quite a lot of the equation.

In addition, even if it were true that peoples' perceptions were accurate, it would apply only to a limited number of characteristics--i.e., those that could logically be linked to biological development through a mechanism such as testosterone.  What about something like cognitive ability, obviously a well-studied predictor of performance for many jobs?  The research linking testosterone and intelligence is complicated, some indicating the reverse relationship (e.g., less testosterone leading to higher cognitive ability), and some showing no relationship between facial features and intelligence in adults--and again, this is primarily men that have been studied.  (While estrogen also impacts facial characteristics, its impact has been less studied)

Finally, the scant research we do have indicates the link between facial features and performance is true only in certain circumstances, such as organizations that are not complex.  This is increasingly not true of modern organizations.  Circling back to the beginning of this article, you could liken this to selection based on strength becoming less and less relevant.

One of the main people behind FaceReflect has been met with skepticism before.  Not to mention that the entire field of physiognomy (or the newer term "personology") is regarded with skepticism.  But that hasn't stopped interest in the idea, including from the psychological community.

Apparently this technology is being used by AT&T for assessment at the executive levels, which I gotta say makes me nervous.  There are simply much more accurate and well-supported methods for assessing managerial potential (e.g., assessment centers).  But I suspect the current obsession with biometrics is going to lead to more interest in this area, not less.

At the end of the day, I stand by my general rule: there are no shortcuts in personnel selection (yet**).  To get the best results, you must determine job requirements and you must take the time required to get an accurate measurement of the KSAOs that link to those requirements.  It's easy to be seduced by claims that seem attractive but lack robust research support; after all, we're all susceptible to magical thinking, and there is a tendency to think that technology can do everything.  But when it comes to selection, I vote less magic, more logic.

* Think about how plastic surgery or damage to the face might impact this approach.

** As I've said many times before, we have the technology to create a system whereby a database could be created with high-quality assessment scores of many individuals that would be available for employers to match to their true job requirements.  The likelihood--or wisdom--of this idea is debatable.

Thursday, February 27, 2014

Workday unveils recruiting platform

Note: I posted this a while back but had to take it down because it was pre-release. Now that it's available I'm re-posting it.

This post is going to be a bit different from my normal ones.  I'm not going to talk about research, but instead focus on technology.  Long time readers know HR technology is another passion of mine, and as recruitment/assessment professionals I think it behooves us to know "what's out there."

Recently in my day job we've been looking at automated HR systems, primarily to replace our manual time and attendance process, but it's impossible to not consider other applications once you start looking.  For the uninitiated, these systems go by various names like HCM (Human Capital Management), HRMS (Human Resource Management System) or HRIS (Human Resource Information System).

In my opinion, now is a very exciting time to be looking at automated HR systems.  Why?  Because unlike years past, when using these systems was about as pleasant as reading FMLA regulations, recent applications have taken a decidedly more "consumer" approach, borrowing heavily from popular websites like Amazon and Facebook.

One of the companies that has been the most trailblazing in this regard is Workday.  Workday was founded in 2005 by the former CEO of PeopleSoft along with its former Chief Strategist following Oracle's hostile takeover.  Workday provides cloud-based SaaS software for a variety of functions, primarily around finance, HR, and analytics.  One of Workday's big differentiators is that it runs a single code line, meaning every customer is using the same version all the time (again, just like a website).  Those of you that are used to being on Release x.2 while others are on x.6, and planning on how to upgrade, know what a big deal this is.

(If you're thinking "cloud-based whatnow?", this basically means delivering software over the web rather than relying on locally hosted systems; obvious benefits include a potentially massive reduction in local IT support, which I think is particularly attractive for the public sector.)

For me, considering a large IT project implementation, I've seen enough to know that the user experience is essential.  Obviously the product has to work as advertised, but if users (including HR) don't like using the system--usually because it's unintuitive or overly complicated--chances of ultimate success are slim.  At best people will tolerate it.  I certainly don't want my name attached to that project.

That leads me to why companies like Workday are adding so much value to HR software.  Because their interface looks like this:

Not like this:

Up until now, Workday's HR offerings have focused on things like benefits, time tracking, and internal talent management.  Their recruiting module, announced back in 2012 and eagerly anticipated, has just been rolled out (GA, or general availability) to Workday customers.  Several weeks ago I had the opportunity to see a pretty-much-finished version, and here are my observations:

1.  It's clean.  As evidenced by the screenshot above, Workday prides itself on a clean UI, and the recruiting module is no exception.  I don't have any shots to share with you because, well, I couldn't find any.  But there's plenty of white space, the eye knows where to go, and you won't get overwhelmed by sub-menu upon sub-menu.  Candidates are displayed using a "baseball card"-like interface, with key stats like years of job experience, skills, social feeds, and attachments.

2.  It's mobile- and social-friendly.  These were clear marching orders to the developers, and it shows.  Workday's mobile app is great, and social networking websites (SNWs) like Facebook, LinkedIn, and Twitter are consistently integrated.  One feature they repeatedly stressed (for good reason) is how easy it is for candidates to upload their info from their LinkedIn account, saving a ton of time.

3.  At this time it's basically an ATS (applicant tracking system).  This isn't a bad thing, but don't expect qualified candidates to magically jump out of your monitor.  It's a very clean way to manage applicants for requisitions, and it's integrated into their core HR.  For many long-time users of other ATS products, this is a big deal.  Additional features, such as being able to quickly change candidate status and do mass emails, will also be popular.  Finally, you can easily search your candidate pool by competency, location, etc., similar to the employee search function in their HCM product.

4.  It will be particularly useful for organizations with dedicated recruiters.  I commented in the demo that in many organizations (including my own), we don't have dedicated recruiters; rather, recruiting happens locally, driven by the hiring supervisor and their staff.  So anything these systems can do to engage and reward proper behavior (dare I say gamification here?) will pay huge dividends, and I think this is a development opportunity.  On the other hand, organizations with full-time recruiters will immediately "get it".

5.  It's a work in progress.  The career portal of the system wasn't up and running yet, although I was assured it would be by GA.  To me this is a huge missing piece, and I look forward to seeing how they integrate this with the back end.  There were also clearly plans for future features like assessments (e.g., video interviewing), job board aggregation, and CRM.  Definitely features to watch.

So at the end of the day, it wouldn't solve all our problems, but it offers enormous potential for us, as HR, to get a better handle on what our hiring supervisors are doing.  Not only will this help with compliance, it will allow us to gather information to make more strategic decisions about resources.  The built-in business intelligence functions have the potential to transform our practices. You can get more details here:

Now lest I leave you thinking that I'm a Workday shill, it's not the only game in town; there are plenty of competitors, including newer players like Ultimate as well as more established ones like Oracle, both with lots of satisfied customers.  But Workday is--at this point--one of our finalists and has been on a crazy growth spurt over the last few years.

Want to know more about this technology?  I've found CedarCrestone's annual report to be extremely helpful, as well as HRE's technology articles.  The HR tech industry is huge (see my earlier post about one of the conferences) and you can very easily spend your entire career in this space.

I can honestly say it's technology like this that has the potential to evolve much of HR from unpredictable and frustrating to exciting and engaging.  I'm ready.

Thursday, February 20, 2014

March '14 IJSA

In my last research update just a couple days ago, I mentioned that the new issue of IJSA should be coming out soon.

I think they heard me because it came out literally the next day.

So let's take a look:

- This study adds to our (relatively little) knowledge of sensitivity reviews of test items and finds much room for improvement

- More evidence that the utility of unproctored internet testing (UIT) isn't eliminated by cheating, this time with a speeded ability test

- Applicant motivation may be impacted by the intended scoring mechanism (e.g., objective vs. ratings).

- The validity of work experience in predicting performance is much debated*, but this study found support for it among salespersons, with personality also playing a moderating role.

- A study of the moderating effect of "good impression" responding on personality inventories

- This review provides a great addition to our knowledge of in-baskets (a related presentation can be found through IPAC)

- Another excellent addition, this time a study of faux pas on social networking websites in the context of employer assessment

- According to this study, assessors may adjust their decision strategy for immigrants (non-native language speakers)

- Letters of recommendation, in this study of graduate students in nonmedical programs at medical schools, provided helpful information in predicting degree attainment

- Interactive multimedia simulations are here to stay, and this study adds to our confidence that these types of assessments can work well

Until next time!

* Don't forget to check out the U.S. MSPB's latest research study on T&Es!