Saturday, March 29, 2014

Facial analysis for selection: An old idea using new technology?

Selecting people based on physical appearance is as old as humankind.  Mates were selected based in part on physical features.  People were hired because they were stronger.

This seems like an odd approach to selection for many jobs today because physical characteristics are largely unrelated to the competencies required to perform the job, although there are exceptions (e.g., firefighters).  But employers have always been motivated to select based on who would succeed (and/or make them money), and many have been interested in the use of gross physical characteristics to help them decide: who's taller, whose head is shaped better (phrenology), etc.  The general name for this topic is physiognomy.

Of course nowadays we have much more sophisticated ways of measuring competencies that are much more related to job success, including things like online simulations of judgment.  But this doesn't mean that people have stopped being interested in physical characteristics and how they might be related to job performance.  This is due, in part I think, to the powerful hold that visual stimuli have on us, as well as the importance of things like nonverbal communication.  We may be a lot more advanced in some ways, but parts of our brain are very old.

The interest in judgment based on physical appearance has been heightened by the introduction of new technologies, and perhaps no better example exists than facial feature analysis.  With the advent of facial recognition technology and its widespread adoption in major cities around the globe, in law enforcement, and at large sporting events, a very old idea is once again surfacing: drawing inferences from relatively* stable physical characteristics--specifically, facial features.  In fact this technology is being used for some very interesting applications.  And I'm not sure I want to know what Facebook is planning on doing with this technology.

With all this renewed interest, it was only a matter of time until we circled back to personnel selection, and sure enough a new website called FaceReflect, set to open to the public this year, claims to be able to infer personality traits from facial features and is already drawing a spotlight.  But have we made great advances in the last several thousand years, or is this just hype?  Let's look deeper.

What we do know is that certain physical characteristics reliably result in judgment differences.  Attractiveness is a great example: we know that individuals considered to be more attractive are judged more positively, and this includes evaluative situations like personnel selection.  It even occurs with avatars instead of real people.  And the opposite is true: for example it has been shown that applicants with facial stigmas are viewed less favorably.

Another related line of research has been around emotional intelligence, with assessments such as the MSCEIT including a component of emotional recognition.

More to the point, there's research suggesting that finer-grained facial features such as facial width may be linked to job success in certain circumstances.  Why?  The hypothesis seems to be two-fold: certain genes and biological mechanisms associated with facial features (e.g., testosterone) are associated with other characteristics, such as assertiveness or aggression.  This could mean that men with certain facial features (such as a high facial width-to-height ratio) are more likely to exhibit these behaviors, or--and this is a key point--they are perceived that way.  (By the way, there is similar research showing that voice pitch is also correlated with company success in certain circumstances.)
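For the curious, the facial width-to-height ratio is just a geometric measure: the distance between the cheekbones divided by the distance from brow to upper lip.  Here's a minimal sketch of the computation; the function name and the landmark coordinates are hypothetical, not taken from any real face-analysis product:

```python
def facial_width_to_height_ratio(left_zygion, right_zygion, brow_midpoint, lip_top):
    """Compute fWHR: bizygomatic width divided by upper-face height.

    Each argument is an (x, y) pixel coordinate for a facial landmark.
    Width  = horizontal distance between the cheekbones (zygions).
    Height = vertical distance from the brow midpoint to the top of the lip.
    """
    width = abs(right_zygion[0] - left_zygion[0])
    height = abs(lip_top[1] - brow_midpoint[1])
    return width / height

# Hypothetical landmark coordinates (in pixels):
ratio = facial_width_to_height_ratio((100, 250), (280, 250), (190, 180), (190, 280))
print(round(ratio, 2))  # width 180 / height 100 -> 1.8
```

Reported "high" ratios in this literature tend to be in the 1.8-2.0 range, which is why the toy numbers above land there--but the hard part in practice is locating the landmarks reliably, not the arithmetic.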

Back to FaceReflect.  This company claims that by analyzing certain facial features, they can reliably draw inferences about personality characteristics such as generosity, decision making, and confidence.

What seems to be true is that people reliably draw inferences about characteristics based on facial features.  But here's the key question: are these inferences correct?  That's where things start to break down.

The problem is there simply isn't much research showing that judgments about job-relevant characteristics based on facial features are accurate--in fact the research we have shows that at best the accuracy is low, and at worst the opposite.  To some extent you could argue this doesn't matter--what matters is whether people are reliably coming to the same conclusion.  But this assumes that what drives performance is purely other people's perceptions, and this is obviously missing quite a lot of the equation.

In addition, even if it were true that people's perceptions were accurate, it would apply only to a limited number of characteristics--i.e., those that could logically be linked to biological development through a mechanism such as testosterone.  What about something like cognitive ability, obviously a well-studied predictor of performance for many jobs?  The research linking testosterone and intelligence is complicated, with some studies indicating an inverse relationship (e.g., less testosterone being associated with higher cognitive ability), and some showing no relationship between facial features and intelligence in adults--and again, it is primarily men who have been studied.  (While estrogen also impacts facial characteristics, its impact has been less studied.)

Finally, the scant research we do have indicates the link between facial features and performance holds only in certain circumstances, such as organizations that are not complex.  This is increasingly untrue of modern organizations.  Circling back to the beginning of this article, you could liken this to selection based on strength becoming less and less relevant.

One of the main people behind FaceReflect has been met with skepticism before.  Not to mention that the entire field of physiognomy (or the newer term "personology") is regarded with skepticism.  But that hasn't stopped interest in the idea, including from the psychological community.

Apparently this technology is being used by AT&T for assessment at the executive levels, which I gotta say makes me nervous.  There are simply much more accurate and well-supported methods for assessing managerial potential (e.g., assessment centers).  But I suspect the current obsession with biometrics is going to lead to more interest in this area, not less.

At the end of the day, I stand by my general rule: there are no shortcuts in personnel selection (yet**).  To get the best results, you must determine job requirements and you must take the time required to get an accurate measurement of the KSAOs that link to those requirements.  It's easy to be seduced by claims that seem attractive but unfortunately lack robust research support; after all, we're all susceptible to magical thinking, and there is a tendency to think that technology can do everything.  But when it comes to selection, I vote less magic, more logic.

* Think about how plastic surgery or damage to the face might impact this approach.

** As I've said many times before, we have the technology to create a system whereby a database could be created with high-quality assessment scores of many individuals that would be available for employers to match to their true job requirements.  The likelihood--or wisdom--of this idea is debatable.

Thursday, February 20, 2014

March '14 IJSA

In my last research update just a couple days ago, I mentioned that the new issue of IJSA should be coming out soon.

I think they heard me because it came out literally the next day.

So let's take a look:

- This study adds to our (relatively little) knowledge of sensitivity reviews of test items and finds much room for improvement

- More evidence that the utility of UIT isn't eliminated by cheating, this time with a speeded ability test

- Applicant motivation may be impacted by the intended scoring mechanism (e.g., objective vs. ratings).

- The validity of work experience in predicting performance is much debated*, but this study found support for it among salespersons, with personality also playing a moderating role.

- A study of the moderating effect of "good impression" responding on personality inventories

- This review provides a great addition to our knowledge of in-baskets (a related presentation can be found through IPAC)

- Another excellent addition, this time a study of faux pas on social networking websites in the context of employer assessment

- According to this study, assessors may adjust their decision strategy for immigrants (non-native language speakers)

- Letters of recommendation, in this study of graduate students in nonmedical programs at medical schools, provided helpful information in predicting degree attainment

- Interactive multimedia simulations are here to stay, and this study adds to our confidence that these types of assessments can work well

Until next time!

* Don't forget to check out the U.S. MSPB's latest research study on T&Es!

Monday, February 17, 2014

Research update

Okay, past time for another research update, so let's catch up!

Let's start with the Journal of Applied Social Psychology (Dec-Feb):

- Cultural intelligence plays a key role in multicultural teams

- Theory of Planned Behavior can be used to explain intent to submit video resumes

- More on weight-based discrimination, including additional evidence that this occurs more among women (free right now!)

- Does the physical attractiveness bias hold in same-sex evaluative situations?  Not so much, although it may depend on someone's social comparison orientation

- "Dark side" traits play a role in predicting career preference

- Evidence that efficacy beliefs play a significant role not only in individual performance, but in team performance

Next up, the January issue of JAP:

- The concept of differential validity among ethnic groups in cognitive ability testing has been much debated, and this study adds to the discussion by suggesting that the effects are largely artifactual due to range restriction.

- Or are they?  This study on the same topic found that range restriction could not account for observed differential validity findings.  So the debate continues...

- A suggestion for how to increase the salience of dimension ratings in assessment centers

- Ambition and emotional stability appear related to adaptive performance, particularly for managers
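Since two of the JAP studies above hinge on range restriction, it's worth showing the mechanics.  The standard correction for direct range restriction (Thorndike's Case II) inflates an observed validity coefficient based on how much the predictor's standard deviation shrank in the selected sample.  A quick sketch, with illustrative numbers only:

```python
import math

def correct_for_range_restriction(r_observed, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction.

    u is the ratio of the applicant-pool SD to the restricted (incumbent)
    SD on the predictor; when u > 1, the corrected estimate exceeds the
    observed correlation.
    """
    u = sd_unrestricted / sd_restricted
    return (r_observed * u) / math.sqrt(1 - r_observed**2 + (r_observed * u)**2)

# Illustrative: an observed r of .25 in a sample whose predictor SD was
# cut from 10 (applicants) to 6 (hires) corrects to roughly .40:
print(round(correct_for_range_restriction(0.25, 10, 6), 3))  # -> 0.395
```

The debate in those two studies is essentially about whether corrections like this one can fully account for observed validity differences across groups--the formula itself is uncontroversial; the assumptions behind applying it are not.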

Spring Personnel Psych (free right now!)

- First, a fascinating study of P-E fit across various cultures.  Turns out relational fit may be more important in collectivistic and high power distance cultures (e.g., East Asia), whereas rational fit may be more important in individualistic and lower power distance cultures (e.g., the U.S.).

- Next, a study of recruitment messaging for jobs involving international travel.

- Last but definitely not least, an extensive narrative and quantitative review of the structured interview

Not quite done: One from Psychological Science on statistical power in testing mediation, and just in case you needed more evidence, the Nov/Dec issue of HRM has several research articles supporting the importance of line manager behavior and HRM practices on things like employee engagement.

The Spring issue of IJSA should be out soon, so see ya then!

Saturday, February 01, 2014

MQs: An idea whose time has passed?

For better or worse, I've spent nearly my entire career working under merit systems.  For the uninitiated, these systems were created many years ago to combat employment decisions based on favoritism, familial relation, or other similarly non-job related factors.  For example, California's civil service system was originally created in 1913 (and strengthened in 1934) to combat the "spoils" system, whereby hiring and promotion was too often based on political affiliation and patronage.

Part of most merit systems is the idea of minimum qualifications, or MQs.  Ideally, MQs are the true minimum amount of experience and/or education (along with any licenses/certifications) required for a job.  They set the requirement to participate in civil service exams, and scale up depending on the level (or "classification").  For an entry-level attorney, for example, one would need to have a Bar license.  For a journey-level attorney, you might be required to have several years of experience before being allowed to compete in the exam and be appointed.  The idea is that MQs force hiring and promotion decisions to be based on job-related qualifications rather than who you know or what political party you belong to.  Makes sense, right?

But recently, I've had the opportunity to be involved in a task force looking at minimum qualifications and it spurred a lot of discussion and thought.  I'd like to spend just a moment digging into the concept a bit more and asking: are they still the right approach?

This task force was formed because of a recent control agency decision that places increased importance on applicants meeting MQs and reduces the ability of employees to obtain positions by simply transferring from one classification to another based on similarity in level, salary, etc.  Because this will result in fewer options for employees--and hiring supervisors--the discussion around this decision has been rigorous, and at times heated, but without a doubt intellectually stimulating.

As part of my participation in this task force, I reached out to my colleagues in IPAC for their thoughts, and got a ton of thoughtful responses.  While there were arguments for and against MQs, the overall sense seemed to be that they are a necessary evil.  Perhaps most importantly, though, I was reminded how important they are and thus the amount of attention that should be paid while establishing them.

So where does this lead me?  To play my hand, over time I've become less and less of a fan of MQs, however well intentioned, and my participation on this task force has cemented some of the reasons why:

- They are overly rigid and inflexible.  If an MQ states you must have 2 years as an Underwater Basketweaver, it doesn't matter that you have 1 year and 11 months and you just attended the Basketweaver Olympics--sorry, you don't qualify to test for the next level.

- They are often difficult to apply, resulting in inconsistencies.  What exactly is a four-year degree in "Accounting"?  What is "clerical" work?  If someone worked overtime, does that count as additional experience?  How shall we consider education from other countries?  And what about fake degrees and candidates who, shall we say, elaborate on their experience?

- They serve as barriers to talented individuals.  This results in fewer opportunities for people as well as a smaller talent pool for supervisors to draw from (ironically actually cannibalizing the very concept of the merit system).

- They serve as barriers to groups that have a history of discrimination, such as women and ethnic minorities.  Take a look at any census study of education, for example, and look at the graduation rates of different groups.  Implication?  Any job requiring a college degree has discrimination built into the selection process.

- Most were likely not developed as rigorously as they should have been.  Like any other selection mechanism, MQs are subject to laws and rules (e.g., the Civil Rights Act and the Uniform Guidelines in the U.S.) that require them to be based on job analytic information and set based on data, not hunches or guesses.

- Without a process to update them quickly, they rapidly become outdated, becoming less and less relevant.  Many classifications in the California state system, for example, haven't been effectively updated in thirty years (or longer).  This becomes particularly painful in jobs like IT, where educational paths and terminology change constantly.

- They require an enormous amount of resources to administer.  At some point someone, somewhere, needs to validate that the applicant has the qualifications required to take the exam.  You can imagine what this looks like for an exam involving hundreds (sometimes thousands) of applicants--and the costs associated with this work.

- From an assessment perspective, MQs are a very blunt instrument--and not a particularly good one at that.  As we know, experience and education are poor predictors of job performance.  Experience predicts performance best during the first few years on a job, with its value tapering off quickly after that.  Education typically shows very small correlations with performance.  As anyone who has experience hiring knows, a college degree doth not an outstanding employee make.  So basically what you're doing is front-loading your "select out" decisions with a tool that has very low validity.  Sound good?

- The ultimate result of all this is that employers with MQ systems are often unable to attract, hire, and promote the most qualified candidates, while spending an enormous amount of time and energy administering a system that does little to identify top talent.  This becomes particularly problematic for public sector employers as defined benefit plans are reduced or eliminated and salaries fail to keep pace, resulting in these organizations becoming less and less attractive.

Recognizing these limitations, some merit systems (the State of Washington comes to mind) have recently moved away from MQs, instead evolving into things like desirable or preferred qualifications.  This presumably still outlines the approximate experience and education that should prepare someone for the position, but relies on other types of assessments to determine someone's true qualifications, abilities, and competitiveness.  I like this idea in concept as long as an effective system is put in place to deal with the likely resulting increase in applications to sift through.

The private sector, of course, does not operate under merit system rules, and has had to deal with the challenges--as well as reap the benefits--associated with a lack of rigid MQs.  It does this through increased use of technology and, frankly, significantly more expenditure on HR to support the recruitment and assessment function (particularly among larger employers).  Of course some private sector employers adhere to strict MQs as a matter of course, and they would do well to think about the challenges I outlined above.

So where does this leave us?  Do MQs still serve a valuable purpose?  Perhaps.  They hypothetically prevent more patronage, although anyone who has worked in a merit system can tell you this still happens.  Perhaps the strongest argument is that as more employers move to online training and experience measures (another example of an assessment device with little validity, but quick and cheap), MQs serve as a check, presumably helping to ensure that at least some of the folks who end up on employment lists are qualified.

But I would argue that any system that still employs MQs is basically fooling itself, doing little to control favoritism and ultimately contributing to the inability of hiring supervisors to get the best person--which is what a system of merit is ultimately about.  Particularly with what we know about the effectiveness of a properly administered system of internet testing, MQs are an antiquity, serving as a barrier to more job-related assessments and simply not worth the time we spend on them.  If we don't reform these systems in some way to modernize the selection process, we will wake up some day and wonder why fewer and fewer people are applying for jobs in the public sector, and why the candidate pools seem less and less qualified.  That day may already be here and we just haven't realized it.

Sunday, December 15, 2013

Top 10 Assessment and Recruitment Research of 2013

So I'm gonna try something a little different this year.  I'm going to present the research from 2013 that I think has the best chance of fundamentally changing research directions, has the biggest implication for practice, or is just plain interesting.

So without further ado, and in no particular order, here are my choices for the Best Assessment and Recruitment Research of 2013:

1) Murphy et al. call into question one of the fundamental assumptions of test development: that judgments of subject matter experts have a direct relationship to test utility.

2) Kim et al. demonstrate a real value of age diversity in work groups: better emotional regulation.

3) Ghumman and Barnes with a simple but elegant study that demonstrated how important sleep is in preventing a persistent thorn: prejudicial assessments by raters.

4) Konradt et al. showed that perceptions of fairness matter in web-based assessments too.

5) Personality research continued to dominate in 2013, and one of the best studies was by Shaffer and Postlewaite.  In it, they demonstrate that conscientiousness is best used as a predictor of performance in highly routinized jobs.

6) Mrazek et al. focused on a topic near and dear to my heart: mindfulness.  They showed that training in mindfulness increased GRE scores.  The implication for employment testing is clear; we just need more research in that direction.

7) Early in the year, Bobko and Roth gave us one of those "I better print this out" articles, showing that assessment methods historically assumed to result in lower levels of adverse impact, like biodata and work samples, may produce larger subgroup differences (d) than we thought.  Side note: this article is still free in its entirety!

8) Kuhn et al. presented the results of an elegantly simple experiment illustrating that the impact of a minor resume embellishment depended on the pre-existing perception of the applicant.

9) Discrimination, sadly, knows no demographic boundaries.  In this study, Conley found that non-Whites described White women as attractive, blonde, ditsy, shallow, privileged, sexually available, and appearance focused.

10) In a study of M.B.A. program admission judgments that has implications for employment selection, Simonsohn and Gino found that as the day progressed, fewer applicants were rated as highly recommended if several were recommended earlier in the day.

There were, of course, many other well-done and interesting studies in 2013, but these were some of my favorites.  Here's to a productive, stimulating, and successful 2014!

Sunday, November 24, 2013

Research update

Well, it's that time of year again.  No, not the holidays.  No, not winter (or summer, depending on where you are!).  Research update time!  And I think you will agree with me that there is a lot of interesting research being reported, on traditional topics as well as emerging ones.

First, the November issue of JOB:

- Do transformational leaders increase creative performance and the display of OCBs?  Well, that may depend on how much trait affectivity they had to begin with. A reminder to not make blanket statements like "X type of leadership causes Y type of behavior."

- There is seemingly endless debate about the utility of personality inventories.  This study reminds us--again--that in assessment research there are few simple answers.  The authors describe how a particular combination of personality measures correlated with task performance among professional employees, but not non-professionals.  (yes, I said task performance)

Next, the Winter issue of Personnel Psychology (free right now!), much of which is devoted to corporate social responsibility (CSR):

- Do perceptions of CSR drive job pursuit intentions?  It may depend on the applicant's previous justice experiences and their moral identity.

- Oh, and it may also depend on the extent to which applicants desire to have an impact through their work.

- There is a debate in the assessment center literature about whether competency dimensions are being measured or if it's purely a function of the assessment type.  This study suggests that previous research has been hamstrung by a methodological artifact and that measured properly, assessment centers do in fact assess dimensions.

Let's switch to the November issue of the Journal of Applied Social Psychology:

- Engagement is all the rage, having seemingly displaced the age-old concept of job satisfaction (we'll see).  This study reminds us that personality plays an important role in predicting engagement (so by extension our ability to increase engagement may be bounded).

- Here's another good one and it's related to internal motivations.  The authors developed an instrument that helps organizations measure the "perception of the extant motivational climate."  What does that mean?  As I understand it, it's essentially whether most people are judging their performance against their peers or their own internal standards.  It seems the latter may result in better results, such as less burnout.

- On to something more closely tied to assessment: letters of recommendation (LORs).  There's surprisingly little research on these, but this study adds to our knowledge by suggesting that gender and racial bias can occur in their review, but requiring a more thorough review of them may reduce this (I don't know how likely this is for the average supervisor).

- Finally, a study looking at the evaluation of job applicants who voluntarily interrupted their college attendance.  Unfortunately this does not appear to have been perceived as a good thing, and the researchers found a gender bias such that women with interrupted attendance had the lowest evaluations.

Next, the November issue of Industrial and Organizational Psychology, where the second focal article focuses on eradicating employment discrimination.  This article looks pretty juicy.  I haven't received this one in the mail yet, so I may have more to say after digesting it.  There are, as always, several commentaries following the focal article, on topics including background checks, childhood differences, and social networks.

Okay, let's tackle the 800-pound gorilla: the December issue of IJSA:

- Are true scores and construct scores the same?  According to this Monte Carlo study, it seems how the scales were constructed makes a difference.

- Can non-native accents impact the evaluation of job applicants?  Sure seems that way according to this study.  But the effect was mediated by similarity, interpersonal attraction, and understandability.

- Here's a fascinating one.  A study of applicants for border rangers in the Norwegian Armed Forces showed that psychological hardiness--particularly commitment--predicted completion of a rigorous physical activity above and beyond physical fitness, nutrition, and sensation seeking.

- Psst....recruiters...make sure when you're selling your organization you stay positive.

- Spatial ability.  It's a classic KSA that's been studied for a long time, for various reasons including its tie to military assessments and the finding that measures can result in sex differences.  But not so fast: spatial ability is not a unitary concept.

- Another study of assessment centers, this time in Russia and using a consensus scoring model.

- And let's round it out with one that should rock some worlds: the authors present results suggesting that subject matter expert judgments of ability/competency importance bore little relation to test validity!  Okay, I'm really curious about what the authors say about the implications, so if anyone reads this one, let us know!

Last but not least, the November issue of the Journal of Applied Psychology:

- Another on personality testing, this one underlining the important distinction between broad and narrow traits.  This is another article I'm very curious about.

- Here's one on leadership: specifically, on the impact of different power distance values between leader and subordinates on team effectiveness

- And another on nonnative speakers!  This one found discriminatory judgments made against nonnative speakers applying for middle management positions as well as venture funding.  Interestingly, it appears to be fully mediated by perceptions of political skill--a topic that is hot right now.

- Okay, let's leave on a big note.  This meta-analysis found an improvement in performance prediction of 50% when a mechanical combination of assessment data was used rather than a holistic (judgment-based) method.  BOOM!  Think about that the next time a hiring supervisor derides your spreadsheet.
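To make the mechanical-versus-holistic distinction concrete: mechanical combination just means applying the same fixed weights to every candidate's assessment scores, instead of forming an overall gut impression.  A minimal sketch, with weights and scores invented purely for illustration:

```python
def mechanical_composite(scores, weights):
    """Combine standardized assessment scores using fixed, predetermined
    weights--the 'mechanical' method--rather than holistic judgment.

    Every candidate is scored by the same rule, which is what drives the
    prediction advantage over case-by-case clinical combination.
    """
    return sum(weights[k] * scores[k] for k in weights)

# Illustrative weights (e.g., derived from a regression on past hires)
# and one candidate's standardized (z) scores:
weights = {"ability_test": 0.5, "structured_interview": 0.3, "work_sample": 0.2}
candidate = {"ability_test": 1.2, "structured_interview": 0.4, "work_sample": -0.5}
print(round(mechanical_composite(candidate, weights), 2))  # -> 0.62
```

The spreadsheet a hiring supervisor derides is doing exactly this: every candidate gets the same arithmetic, so no one's score moves because of a vivid interview moment or a well-timed anecdote.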

Until next time!

Sunday, November 03, 2013

Will robots replace assessment professionals?

Technology and assessment have had a close relationship for years.  From the earliest days of computers, we were using them to calculate statistics, store items, and put applicants into spreadsheets.

Over time as computers advanced, we used them for more advanced tasks, such as multiple regression, applicant tracking, and computer-based testing.

With the advent of the Internet, a whole new area of opportunity opened for us: web-based recruitment and testing.  People began "showing off for the world" by creating personal webpages, commenting on articles, writing blogs, and living their lives through online social networks.  We developed Internet testing, allowing applicants to test more conveniently.  And new forms of assessment opened up, such as advanced simulations.

We now find ourselves evolving yet again to take advantage of another significant technology advance: the social web.  As millions--and then billions--of people began living their lives publicly on the web, they developed web identities and left footprints all over the place.  It was only a matter of time before recruiters (historically some of the first in HR to embrace technology) figured out how to harvest this information.

One of the hottest trends now in HR technology is scouring the web to seek out digital footprints and making this information readily available to recruiters.  It's the latest iteration of Big Data applied to HR, and it's a creative way to make Internet recruiting more efficient.  Companies like Identified, TalentBin, Gild, and Entelo offer solutions that purport to lay qualified applicants at your doorstep, without all the hassle of spending hours manually searching the web.  They claim an additional benefit of targeting passive job seekers, who are obviously more challenging to attract.

But just exactly how big of an evolutionary step is this?  How big of a solution will this be?  Will this next evolutionary step result in us working ourselves out of a job?

I don't think so.  And let me explain why.

Fundamentally, assessment is about measuring--in a valid, reliable way--competencies key for successful performance in a field, job and/or organization.  Assessment can be performed using a number of different methods, the biggest ones being:

- Ability testing.  Measuring things like critical thinking, reading comprehension, and physical agility.  These tests seek the boundaries of individuals, the maximum they are capable of demonstrating related to a variety of constructs.  When properly developed and used, these tests have been shown to be highly predictive of performance, although some can result in adverse impact.

- Interviews.  One of the oldest forms of assessment and still probably the most popular.  Like ability tests, interview questions can seek "maximum" performance (i.e., knowledge-based), but they can also be used to probe creativity (i.e., situational) as well as gain a better understanding of someone's background and accomplishments (i.e., behavioral).  Interviews have also been shown to be valid predictors of performance, although they rely heavily on potentially unrelated competencies such as memory and verbal skills.

- Knowledge testing.  SAT or GRE anyone?  Multiple-choice tests have been around a long time, and with newer technologies like computer adaptive testing, don't show any signs of going away any time soon.  While these used to be quite common in employment testing, they have fallen out of favor in many places, which is odd given that they too have been shown to be successful predictors of performance (I suspect it is due to their "unsexy" nature and the fact that they require a significant amount of time to prepare).

- Personality inventories.  Although these haven't been used nearly as much as the methods above, there is enormous interest in measuring personality characteristics related to job performance.  While they sometimes suffer from a lack of face validity (although contextualizing them seems to help), they have been shown to be useful, and typically demonstrate low adverse impact.

- Applications.  Also extremely popular, and the most relevant for this topic.  The assumption here is that qualifications and (like behavioral questions) past accomplishments predict future performance.  There is potential truth here, but as we know relying on applications (and resumes) is fraught with risks, from irrelevant content to outright lies.
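Since adverse impact comes up a couple of times above, it's worth noting how it's commonly screened in US practice: the EEOC's "four-fifths rule," under which a selection rate for one group that is less than 80% of the rate for the highest-scoring group is taken as initial evidence of adverse impact.  A minimal sketch of that arithmetic (the group labels and applicant numbers here are purely hypothetical):

```python
def adverse_impact_ratio(selected_a, applicants_a, selected_b, applicants_b):
    """Ratio of group A's selection rate to group B's selection rate.

    Under the four-fifths rule, a ratio below 0.8 (comparing the lower
    rate to the higher) is treated as initial evidence of adverse impact.
    """
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    return rate_a / rate_b

# Hypothetical numbers: 10 of 50 group-A applicants hired (20%),
# versus 30 of 100 group-B applicants hired (30%).
ratio = adverse_impact_ratio(10, 50, 30, 100)
flagged = ratio < 0.8  # 0.20 / 0.30 ≈ 0.67, below the 4/5ths threshold
```

The rule is only a screening heuristic, of course; a flagged ratio triggers closer statistical and practical scrutiny, not an automatic conclusion.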

An important thing that all of these assessment types have in common is that they are employer-generated.  One of the fundamental changes society has seen in the last ten years is an enormous shift to user-generated content at the grassroots level.  Anyone can have a blog, regardless of qualifications, and many blogs of questionable veracity are read more than those written by people who actually know what they're talking about.  Content has become, if it wasn't already, king/queen.

But therein lies the fundamental challenge for aggregating digital footprints/content for use in assessment.  Relying on user-generated content, whether from social networks, blogs, comments, or other sources, is predicated on the assumption that qualified candidates are leaving digital versions of themselves--in places you have access to, that are accurate, and that predict performance.  This may work decently in certain industries, like IT, where it may be nearly universal--and expected--that professionals live their lives publicly on the web.  But many people in many different professions may have neither the time nor the inclination to reveal their qualifications online.  In contrast, you can always test someone's ability, and a significant advantage of ability testing is that it gives candidates an opportunity to demonstrate what they can do even if they haven't had the chance to do it yet.

I should note that using this information for recruitment is a different--but related--animal.  In this context, concerns about replacing tried-and-true assessment methods are moot.  However, we should carry the same concerns about content generation, both frequency and veracity.

As I've said before, taking technology to its logical endpoint would result in a massive database of everyone on the planet and their competency levels.  This database would empower users to generate and control their content, but allow organizations the widest possible field of qualified candidates.  At this point I'm aware of only one thing that comes close, and honestly I don't see anything approaching this scope anytime soon, particularly with more and more concerns over digital privacy.

Which leaves us...where exactly?  Will robots replace assessment professionals?  Not anytime soon.  At least not if we want hiring to work.  But we should be active observers of these trends, looking both for opportunities as well as pitfalls.  We shouldn't fear technology, but rather the way it's used.  Any important endeavor that requires human analysis should use technology as an assistive tool, not a sexy replacement.

I also want to give props to these companies for taking advantage of user-generated content.  It's a much more efficient way of assessing (i.e., it doesn't require applicants to in some sense double their efforts by completing a separate assessment).  And it's not surprising that these companies have sprouted up, given the trend in HR to automate user-initiated activities that lend themselves to automation, such as leave requests, benefit changes, and training.  But importantly, the science of whether digital footprints predict real-world job performance is in its infancy.  With something as important--operationally as well as legally--as hiring, we have to be careful that our addiction to technology doesn't outstrip our evidence that it works.