Tuesday, December 27, 2011

Final research update of 2011


Welcome to the last HR Tests research update of 2011! This year has been filled with research developments, building on previous thinking as well as venturing out into new areas. Let's see what the end of the year brings us (hint: a lot)...

We'll start with the Winter 2011 issue of Personnel Psychology:

- If you're looking for executives who excel at strategic thinking, you'll want to pay attention to not only their cognitive ability but their accumulated work history, according to Dragoni, et al.

- I tend to think of realistic job previews (RJPs) as occurring pre-hire, but research by Earnest, et al. suggests that an effective technique is to conduct an oral and written RJP post-hire--and be as honest as possible.

- Meta-analyses are relied upon heavily as summaries of large bodies of research. Roth, et al. point out ways we can make them better, particularly with respect to estimates of validity and group differences.

- One of the downsides of cognitive ability tests is they're not always perceived very well by applicants. Sumanth & Cable show how this perception is influenced by the status of the organization as well as individual status.

- Next, a fascinating description of a scale to measure the idea of calling by Dobrow and Tosti-Kharas. What I found most interesting was how the measure was associated with different criteria for different domains (e.g., art, music, management). You can read a draft version here.


Let's turn now to the January issue of JOB:

- Political skill is a hot topic, and Gentry, et al. demonstrate how the perception of promotability related to this skill varies between bosses, peers, and direct reports. Long story short: it differs, and also depends on behavior (attitude will only take you so far, right?). You can read the full version (at least right now!) here.

- Anyone who's worked with (or has in their family) engineers or scientists knows that they often share some strong traits. This suggests that leading groups engaged in creative work may require specific attributes, which is what Robledo, et al. set out to describe.


On to the January issue of JOM:

- Core self-evaluation (CSE) is another hot topic, and Chang, et al. provide an important review of 15 years of research, including the meta-analytic support for CSE predicting in-role and extra-role performance.

- Boswell, et al. provide an integrative review of the concept of job search across different situations (e.g., following job loss, while employed).

- One area that deserves more attention is selection into the highest positions within organizations. Withers, et al. provide a review of the process of selecting a director for a board.


Let's not forget v41(2) of Personnel Review:

- Hoglund delves into the topic of talent management as a strategic HRM practice. A fascinating topic that reinforces the importance of HRM as an influence over employee perceptions and behavior.

- The wording of job ads can have important impacts on applicant perceptions and behavior. De Cooman and Pepermans analyze the differences between for-profit and non-profit job ads, and show how only a fraction of the information potentially relevant for job-person match is published.

- Another topic that deserves additional attention is the motivation to expatriate. Altman and Baruch describe results of a qualitative study that may be useful to organizations thinking about attracting and selecting for positions that require this substantial move.


Don't forget this one:

- Johnson, et al. with an important reminder that when looking at the issue of discrimination, using single categories to define groups is probably not the best strategy.


Whew! And last but certainly not least, in the December issue of IOP, Michael McDaniel and colleagues present an argument (similar to ones made elsewhere) that the Uniform Guidelines are outdated and, worse than that, a detriment to the field of selection. The commentaries are many and range from support to passionate disagreement, with a healthy dose of caution (and dare I say...intransigence?) thrown in. Worth a read, particularly for those following the Guidelines literally and those engaged in related litigation. You can read a draft version here.


I hope everyone has a great New Year's; here's to a wonderful 2012!

Saturday, December 10, 2011

Forget leadership assessment centers: Use the fat face test


All that time your organization spends on leadership assessment? Throw it out. New research indicates that CEOs with fat faces perform better, so just get out your measuring tape.

Okay, I'm oversimplifying. But this research by Wong and her colleagues is fascinating.

They looked at 55 photos of male CEOs and linked facial width to firm performance. Results? CEOs with higher "facial ratios" (face width relative to face height) led organizations that had "significantly greater firm financial performance."

Why facial width? Previous research has indicated this characteristic is related to aggression and sense of power. People who feel powerful tend to focus more on the big picture rather than little details, and tend to be better at staying on task.

Why men? The relationship between face ratio and behavior has been found to be important only in men. Something to do with testosterone.

But wait, don't throw out your work sample tests quite yet. The researchers also found that the relationship between facial ratio and firm performance was moderated by the style of the leadership team. The relationship was stronger in teams that were "cognitively simple" and saw things in black-or-white terms, apparently due to deference to authority. Not exactly something most organizations dream of. Presumably leadership teams with more nuanced views of the world rely on things beyond what shape their leader's head is.

Yes, I'm being tongue-in-cheek here. But if nothing else, this type of research reinforces the complex and important relationships between genetics, perceptions, and behavior.

Wednesday, November 30, 2011

Bad hires in hindsight


Lately I've been thinking about hires I've seen that went bad. What seemed like the right selection at the time turned into a huge disappointment, or worse--a total nightmare. This is particularly the case when selecting for supervisors and managers.

Looking back at those decisions, what happened? So far I've identified several classic cases:

1) Smallville Syndrome. While the individual may have had success in a smaller role, they were completely unfit to take on the bigger scope of the new job. The strategies that worked before are no longer adequate in the new position. This is similar to thinking your strongest specialist will make the best supervisor, but it can happen even with people that are currently in a similar role.

2) The Loch Ness Monster Effect. On the surface, everything seems fine with the applicant. But lurking below is a monster, just waiting to rear its head. Once in the right position, you will see a side of them you weren't anticipating--and it won't be good.

3) Gumby Complex. A supreme lack of flexibility leads the individual to spurn suggestions to do things differently or be open to new ideas. This often results in a toxic combination of alienating their bosses as well as their subordinates.

4) Prima donna-itis. A hard-charging and competitive individual contributor who consistently performs at the top of their group can yield big returns. But this typically isn't what you're looking for in a supervisor or manager. Instead what can happen is they build their own island where subordinates either conform or get kicked out.

5) Iwannajob condition. The individual wants a job--any job. And they don't care what they do as long as it pays. Except they do. And sooner or later they will realize this, start disengaging, and you've lost valuable training, relationships, and resources.

Turning these around, we see some ideas for improving the selection process:

1) Dig, dig, dig. Resist the urge to hire quickly. Do your homework and gather as much information as you can about potential hires--particularly ones going into leadership positions. Have the courage to not hire anyone if you don't like your candidate pool.

2) Pay attention to the minor details. Something may seem like a minor personality quirk that is overshadowed by the person's strengths. But that quirk may end up making their strengths irrelevant if their new environment causes the quirk to grow into a full-blown condition. Think about how a personality trait might exhibit itself in a new environment.

3) Focus on the job at hand. It's trendy to focus on someone's performance history. And many times this can predict performance in a new job. But when the new position requires new competencies--or a whole new level of them--what happened in the past may not be so instructive. Use a variety of strong assessment devices tied to the position, not the applicants.

4) Consider personality inventories. Yes, they can be challenging to adopt, administer, and interpret. But a lot of organizations use them successfully to help get underneath the shiny exterior. Just be very careful when selecting the particular tool to use.

5) When hiring for a supervisor or manager position, spend 80% of your time looking at personality and communication style, 20% on technical competencies. People want to be led, motivated, and engaged, not micromanaged by someone who feels like they can do the job better than their staff can.

6) Try to find out why the person wants the job. This isn't easy, and generally won't be had by simply asking, "So...why do you want this job?" You can start with that question, but follow it up with a bunch more that get at whether the person has really thought about their fit and what they hope to accomplish in the position.

This is of course just a sample of what can go wrong and some quick suggestions. As those of you that have been doing this a while know, hiring the right person is usually not quick and easy--particularly for leadership positions. But boy is the effort spent up front worth it. Just ask anyone who's ever had a bad boss.

Sunday, November 20, 2011

Research update #583: Impression management and a lot more

Okay, I've got a lot of ground to cover this time, so buckle up...

Let's start with the December issue of IJSA:

- Looks like how much applicants try to make themselves look good varies by country

- Is applicant faking behavior related to job performance? Kinda depends on your definitions.

- Research has found that emotional intelligence can be related to work attitudes. This appears to be due in part to increased effectiveness in situational judgment.

- Speaking of situational judgment...in terms of job knowledge, knowing what to do is different from knowing what not to do

- What impact does a resume have on a recruiter? Depends on what assumptions they make about you after reading it.

- How do people select--and continue with--an executive coach? By looking at things like their ability to forge a partnership.

- How do Canadian firms do in terms of using tests other than interviews? Not so well, it turns out.


Let's move to the October issue of JASP, where there's just one article but it's a good one. Researchers extended the (depressing) finding that applicant names impact pre-interview impressions. Specifically, the more a name was Anglicized, the more favorable the impression was when hiring for an outside sales job.


Next comes the November issue of JAP:

- A new meta-analysis of the FFM of personality and its relationship to OCBs and task performance.

- Measures of interest haven't gotten a lot of love as selection devices. Looks like we need to tease out the constructs a little because they could be more helpful than we thought.

- Applicants trying to create a certain image during an interview are better off doing this after an initial flub or relying solely on self-promotion rather than making up an image.


A few from the November issue of JPSP:

- Another on impression management (not selection-specific) that goes into more detail about the topic (e.g., how many tactics people use, their accuracy)

- A caution about using the Revised NEO-PI in different cultures due to DIF (differential item functioning).


Next, a call for more transparency to curb false-positive findings.


Last but not least, those of you interested in the potential of social ratings of performance being used for selection might be interested in this study of RateMyProfessors.com, which found student ratings are likely to be useful measures of teacher quality.

Sunday, November 06, 2011

How important is assessment, really?


Prepare for a little blasphemy.

Over the last few months my job--and focus--has changed dramatically. Historically I've been a "testing" guy. Question about job analysis? Item writing? I'm there.

Then, a few years ago, I started managing a team that did more than assessment--a lot more. In fact, even though assessment is in their job description, the team spends most of their time counseling supervisors on performance management. Of course some of this is because the testing workload is down, but it's also a function of demand for advice in this area.

In July we found out our department's budget was being cut--to the tune of about 10%. We adjusted and tightened our belts, but in the end it wasn't enough and we had to plan for layoffs. I was recruited to be one of the coordinators of said layoff, and thus began the dramatic work shift.

That's all a really long way of saying that my focus lately has not been on recruitment and hiring. I've been thinking a lot more about what keeps people going in difficult times. Sure, the KSAOs they bring to the table are important, but other things rise in importance during times of uncertainty and lack of control.

Which got me to thinking: how important IS assessment, really? Even at our best, we can predict only about a third of the variance in individual job performance. What's going on with the other two-thirds?
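
(Quick arithmetic for the skeptical: that "third" comes from squaring a validity coefficient. A one-line sketch in Python, assuming a best-case composite validity of around .58--an illustrative number in the ballpark of classic meta-analytic estimates, not a figure from any single study:)

validity = 0.58  # assumed best-case composite validity (illustrative)
print(f"variance in performance explained: {validity ** 2:.2f}")  # ~0.34, about a third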

You're probably familiar with models of job performance, so I won't bore you. Suffice it to say that a lot goes into job performance. So that person you hired who aced your assessments? Not guaranteed to be a superstar. If they end up being supervised by an incompetent manager, their inner greatness may never reach the surface. If they have a death in the family, you better believe their focus is not going to be on work for a while, and job performance may not be at its maximum.

Let's think about job performance as a pie. Top-notch assessment can predict about a third of that pie. What else is in that pie--and more importantly, how big are the slices? Things like:

- motivation
- supervision style
- role clarity
- co-worker support
- mood
- resources
- performance feedback
- stress

(I tried with no luck to track down a comprehensive path model, maybe one of you can point one out)

Now we can't control all of these things (although as HR professionals we certainly can consult on a lot of them--clear duty statements, supervisor training and accountability, engagement surveys, etc.), but what we CAN do is take the rigor we bring to the study of assessment and apply it to other aspects related to job performance. If you look at HR research outside of assessment I think you'll find that the level of analysis is, shall we say, sometimes lacking.

Don't get me wrong: assessment will always be important. The legal rationale is IMHO the least compelling. Instead, there is proven, substantial utility in implementing best practices for employee assessment.

But lately I can't help but think: Are we spending too much time thinking about--and studying--how we bring people into an organization, and not enough time thinking about what happens once they get there?

Sunday, October 16, 2011

Advice to bballstud_23@mail.com: Apply using a different email address


It's not that I don't appreciate your desire to be different, but I worry about your chances of getting a job.

Why? Because if you apply for a job using that email address, you're going to look:

a) anti-social
b) unprofessional
c) careless
d) inexperienced

This may seem obvious, but according to a little gem tucked away in the latest TIP by Sachau and colleagues, not only are people with, shall we say, "casual" email addresses perceived negatively, they actually score lower on preemployment tests.

The authors used data from over 15,000 actual job applicants who had completed various tests from SHL. Email addresses were coded on a variety of scales such as craziness/insanity, drugs/alcohol, and sci-fi/geeky/nerdy. Then they analyzed the differences in test scores between people with more professional email addresses and those with more casual ones.

Not only did raters agree on the lack of professionalism these types of addresses exhibit, individuals with these addresses actually scored lower on measures of conscientiousness, professionalism, work experience, and total score.

And while the effect sizes weren't huge (there was about a 10% difference in group means between the high and low groups), it should give you pause: even if your address is merely questionable rather than outright silly, you may be perceived as unprofessional--and assumed to have lower test scores.

...unless that's what you're going for.

Saturday, September 24, 2011

Research update + happy anniversary...to me!

Two things this time: we've got a lot of research to go over, and then a bit of a celebration! First the research.

The September issue of the Journal of Applied Psychology is out. Let's see what it has to offer:

- Using performance ratings as an assessment or a criterion? You'll want to look at Ng, et al.'s study of leniency and halo errors among superiors, peers, and subordinates of a sample of military officers.

- Speaking of criteria, you may be interested in Northcraft et al.'s study of how characteristics of the feedback environment influence resource use among competing tasks. Interesting stuff.

- Okay, let's turn to something more traditional. Berry, et al. look at correlations between cognitive ability tests and performance among different ethnic groups. Not surprising to those of you familiar with the research, the largest difference found was between White and Black samples.

- Another traditional (but always interesting) topic: designing Pareto-optimal selection systems when applicants belong to a mixture of populations. Check out De Corte, et al.'s piece. Oh, you might be interested in the in-press version.

- Dr. Lievens (a co-author on the previous study) has been busy. He and Fiona Patterson collaborate on a study of the incremental validity of simulations, both low fidelity (SJTs in this case) and high fidelity (assessment centers), beyond knowledge tests. Yes, both had incremental validity, and interestingly ACs showed incremental validity beyond SJTs. Check out the in press version as well.

- Wondering whether re-testing degrades criterion-related validity or impacts group differences? You're in luck because Van Iddekinge, et al. present the results of a study of just that. Short version? Re-testing actually did a lot of good.

- I know what you're thinking: "Might Lancaster's mid-P correction to Fisher's exact test improve adverse impact analysis?" Check out Biddle & Morris' study for an answer; a sketch of the correction itself follows this list.

- And now that you've had your fill of that statistical analysis, you find your mind wandering to effect size indices for analyzing measurement equivalence. I'm right there with ya. So are Nye & Drasgow.
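
For those who want the mechanics behind that mid-P bullet: Fisher's exact test evaluates a 2x2 pass/fail table against the hypergeometric distribution, and Lancaster's mid-P correction counts only half the probability of the observed table, making the test less conservative. Here's a minimal Python sketch of the general technique using made-up numbers--not Biddle & Morris's data or their exact procedure:

from scipy.stats import hypergeom

# Hypothetical 2x2 selection table (pass/fail counts by group)
maj_pass, maj_fail = 40, 10   # majority group
min_pass, min_fail = 20, 20   # minority group

N = maj_pass + maj_fail + min_pass + min_fail  # total applicants
K = maj_pass + min_pass                        # total passers
n = min_pass + min_fail                        # minority applicants

# With the margins fixed, the count of minority passers follows a
# hypergeometric distribution under the hypothesis of no adverse impact
rv = hypergeom(N, K, n)

# One-sided exact p-value: this few minority passers or fewer
p_exact = rv.cdf(min_pass)

# Lancaster's mid-P: count only half the probability of the observed table
p_mid = p_exact - 0.5 * rv.pmf(min_pass)

print(f"exact p = {p_exact:.4f}, mid-P = {p_mid:.4f}")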


Let's turn now to the October issue of the Journal of Personality and Social Psychology because there are a few articles I think might interest you...

- First, Lara Kammrath with a fascinating study of how people's understanding of trait behaviors influences their anticipation of how others will react.

- Speaking of fascinating, George, et al. present the results of a 50-year longitudinal study of personality traits predicting career behavior and success among women. It makes you realize again how much has changed since the 1960s!

- I tell ya, this issue is chock-full of goodness. Carlson et al. demonstrate that people can make valid distinctions between how they see themselves and how others see them--potentially informing the debate on personality inventories.

- Lastly, a piece by Specht et al. on how personality changes over the life span and why this might be. Fascinating implications for using personality inventories for selection.

Bonus article: remember how I mentioned using performance ratings above? Well you might be interested in an article by Lynn & Sturman in the most recent Journal of Applied Social Psychology where they found that restaurant customers sometimes rated the performance of same-race servers as better than those of different races--but it depended on the criterion.



FINALLY, I'm proud to announce that this blog has officially been going strong for five years. My first post (now incredibly hard to read) was in September of 2006. Back then the only other similar blog was Jamie Madigan's great (but now sadly defunct) blog, Selection Matters. My first email subscriber (from DDI if you're curious) came on board a month later. Now I have almost 150 email subscribers and at least a couple hundred more who follow the feed. Around 3,000 individuals visit the site each month from over a hundred countries/territories (U.S., India, and Canada are 1-2-3). It's a labor of love and I thank you for reading!

Saturday, September 10, 2011

One last time: It's all subjective


While reading news about a court decision recently, I was struck again by how the U.S. court system continues to make a false and largely unhelpful distinction between "objective" and "subjective" assessment processes--and the courts are certainly not the only ones. Presumably this distinction is meant to highlight how some processes are based on judgment while others are free from it.

One last time: it's all subjective.

I challenge you to come up with a single aspect of the personnel assessment process that is not based on judgment. Not only that, "degree of judgment" is not a useful metric in determining the validity of a selection process--for many reasons including but not limited to whose judgment is being used.

Here is a sample of assessment components that are based on judgment:

- how to conceptualize the job(s) to be filled

- how to study the job(s)

- how to recruit for the job(s)

- which subject matter experts to use

- how to select the KSAs to measure

- how to measure those KSAs

- how items are created or products are selected

- how to set a pass point, if used

- how to administer the assessments

- how to score the assessments

- how to combine assessments

- how to make a final selection decision

- what type of feedback to give the candidates

And this is just the tip of the iceberg. The entire process is made up, like all complex decisions, of smaller decisions that can impact the selection process (what software do I use to analyze the data? Is this one KSA or two?).

So what WOULD be helpful in describing how an assessment process is developed and administered? I can think of a few yardsticks:

1. Extent to which processes and decisions are based on evidence. To what extent is the assessment and selection process based on methods that have been shown scientifically to have significant utility?

2. Degree of structure. To what extent are the assessment methods pre-determined? How flexible are the processes during the assessment?

3. Multi-dimensionality. How many KSAs are being measured? Is it sufficient to predict performance?

4. Measurement method. How many assessment methods are being used? Do they make sense given the targeted KSAs?

5. Transparency. Is the entire process understandable and documented?

I'm not inventing the wheel here. I'm not even reinventing it. I'm pointing out how most cars have four of them. It's obvious stuff. But it's amazing to me that some continue to perpetuate artificial distinctions that fail to point out the truly important differences between sound selection and junk.

Saturday, August 27, 2011

August Research update

Okay, past time for a research update.

- The September issue of IJSA has several articles on job discrimination, including pieces by Anderson, Anseel, and Patterson & Zibarras. Looks like the focus is not only on actual job discrimination but perceived job discrimination.

- Krause, et al. describe assessment center practices in South Africa

- Ree and Carretta get serious about incremental validity versus unique prediction

- What do you need to be a top notch recruiter? Well if your measure of job performance is revenue, don't dismiss emotional intelligence, according to Downey, et al.

- Chen, et al. help us understand how applicant personality interacts with impression management tactics to influence interviewer perceptions

- What type of student succeeds at a campus recruitment drive? Gokuladas studied nearly 600 engineering graduates in South India and found that success correlated with engineering GPA and English proficiency. Interestingly female graduates outperformed their male counterparts for jobs in the software services industry.

- Trying to predict counterproductive work behaviors? Bowling et al. suggest that the interaction between conscientiousness, agreeableness, and neuroticism is important and complex.

- One of the focal articles in the September 2011 Industrial and Organizational Psychology is all about individual psychological assessment--something I don't write about very often. Silzer and Jeanneret focus on executive assessment but cover a lot of ground on the broader topic, including why IPA has its critics and how we can advance the science. It is, as with all the focal articles, followed by several commentaries.

- Son Hing, et al. present a fascinating study of meritocracy and how it differs from other hierarchy-legitimizing ideologies

- Stats fans: Bonett and Wright present a method for determining the sample size needed to achieve a certain confidence interval width when conducting multiple regression.

- van Vianen, et al. describe a study that contributes to our understanding of the importance of person-organization and person-supervisor fit

Sorry there haven't been more updates lately. New baby + potential layoffs at work = little time for blogging!

Saturday, August 06, 2011

How can assessment help in a downturn?


Exam workload is down at my job. Not "pack up your bags and go home" down, but we're running significantly fewer exams than we were 5-6 years ago. And I doubt we're the only organization in this situation.

Some employers are hiring. But many organizations are still running very lean, and the public sector in particular continues to face budget cuts and layoffs.

So if you have staff with skills in assessment that aren't being used to their full capacity (a dangerous recipe for several reasons, budgetary and morale-wise), what's an organization to do?

Well, personnel assessment is all about measuring people. So any activity in your organization that involves measuring people hypothetically already has a built-in staff with the competencies needed to get the job done.

Let's look at some examples of how assessment can help organizations during a downturn:

1) Help determine who to let go (if possible). If the organization is able to use competency levels to determine who to keep, this is obviously the preferred route over, say, letting go those with the most (or least) seniority. The tests should measure KSAs relevant for the critical work the organization needs done now--and in the future.

2) Help internal employees objectively assess their skill levels--both those being let go as well as those staying who wish to enhance their career mobility and stability. A well-designed and scored assessment should be able to give employees some insight into their strengths and areas for improvement.

3) Help organizations get a better sense of their talent. Sometimes called a "talent inventory", assessment can be used strategically to help organizational leadership conduct workforce planning and identify areas of skill discrepancies. This includes succession planning. What percentage of key leadership positions in your organization could be backfilled tomorrow--successfully? What skills do up-and-comers need to develop to help them get ready for the next step? Assessment can help answer that.

4) For those that are hiring, many hiring supervisors are still receiving large volumes of applications. This is where assessment shines--helping them identify who is truly the most qualified for the position.

5) Use this opportunity to make sure your competency models and/or job analyses are complete and updated. Jobs change--are you still using a description from 10 years ago? Are you still hiring based on outdated duty statements?

Now let's think a bit more broadly, beyond traditional personnel assessment into some other areas where the skills your team has developed are just as relevant:

6) Engagement surveys. Getting a better sense of the attitudes and emotions of your workforce helps you do a variety of things, including avoiding turnover of high performers, targeting organizational sub-units that need improving, and identifying under-performing supervisors. It also--when done properly--gives your employees a sense of voice, which can be key in times of anxiety and uncertainty. Just make sure you do something with the results.

7) Organizational change initiatives. Assessments can be used for a variety of purposes during change initiatives, such as measuring the "pre" and "post" states, identifying key sources of resistance, keeping track of success measures, and accurately measuring outcomes.

8) New product implementation and user satisfaction surveys. Do you have key pieces of technology you've implemented recently? How is that working out for the users? Do the users have additional ideas for products or services that would help them get their jobs done and be innovative? I'm thinking of technology here, but you can see how we could go beyond that.

9) Team building. Most people in organizations depend on others to get their jobs done thoroughly. This can be invigorating or frustrating--do you have a good measure of team satisfaction? Are there assessments that can help team members interact more effectively? You betcha.

10) Program evaluation. Whether formative, summative, theory-based, or some other type, assessment can help identify needs, clarify paths, and determine whether the money and time invested are giving the organization the outcomes it had in mind. It can also help uncover unanticipated consequences.

11) Entrance and exit surveys. When resources are scarce, it's even more important to maximize the return on the investment we make in hiring and make sure those entering the organization have the resources they need. On the other end, capturing good data from those leaving the organization can help identify key areas of weakness or provide insight for hiring the next time.

I'm sure I missed a few. But you get the idea. Any time you have a slowdown in part of the organization, use it as an opportunity to expand the scope of your HR strategy. Test your ability to be flexible and innovative. Whatever you do, don't waste your resources.

Saturday, July 23, 2011

July Research Update

Okay, it's past time for a research update. Let's see what's going on out there:

- Looking to get a better salary offer? Research by Thorsteinson suggests that you shouldn't be afraid to aim high...

- Trying to hire for creativity? You'll be interested in Madjar et al.'s research that indicated that depending on the type of creativity you're after (i.e., radical or incremental) both personal and environmental factors play a role. For example, if you're after radical creativity, look for a willingness to take risks and career commitment.

- You stats guys and gals out there will want to check out Johnson et al.'s study that investigated common method variance (CMV). Specifically, they found that by applying different types of remedies for CMV, they altered the relationship between two variables (core self-evaluation and job satisfaction). Something to consider when investigating criterion-related validity.

- Speaking of core self-evaluation, Chen disputes the inclusion of several factors in the concept, and argues that problems exist with its convergent and discriminant validity.

- Observer ratings of personality are hot. And Oh, Wang, and Mount get in on the action with their study that found (as others have) that observer ratings yield higher predictive validities than self-reports. Now if we only had an easily-accessible database of observer ratings...

- Do hiring managers really discriminate against obese individuals? Yep, as Agerstrom and Rooth show. Specifically, scores on the implicit association test predicted whether hiring managers were likely to invite obese applicants for an interview.

- Looking to increase the number of staff that exhibit organizational citizenship behaviors? (ya know, things like helping out a co-worker when they don't have to) You might look first to the supervisor, as Yaffe and Kark demonstrate.

- A recent study that found that more women on a team increased group performance garnered some press, but the real story was the predictive role played by social competence (which women happened to score higher on). The authors pondered whether social competence could be trained, thus increasing group performance. Well, a new study by Kotsou et al. seems to suggest just that, showing that a group-format intervention increased emotional competence in adults (it also resulted in lower cortisol secretion and better subjective and physical well-being).

- Speaking of trying to maximize team performance, a study by Humphrey et al. found that both short- and long-term team performance was highest when variance in conscientiousness scores was lowest but variance in extroversion was highest. What does this mean? It suggests that teams perform best when members have similar levels of conscientiousness but varied levels of extroversion. Fascinating, and something to consider when hiring for or building teams (a sketch of these team-level variance variables follows this list).

- And speaking (again) of emotional competence, Seal et al. describe the development of a new measure of social and emotional development.

- Finally there is Hee et al. with an important study of prejudice. Specifically, the authors found that prejudice against out-group members increases with in-group size and perceptions of homogeneity among both the in-group and out-group. In addition to validating the importance of intergroup contact, this research suggests prejudice may be reduced by keeping groups small and by increasing understanding of the differences within each group. Makes a lot of sense.
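
Since a couple of those bullets turn on team-level variance, here's a quick Python sketch of how such composition variables are typically computed from member trait scores--a generic illustration with invented scores, not Humphrey et al.'s method:

import numpy as np

# Hypothetical Big Five scores for a five-person team (1-5 scale)
conscientiousness = np.array([4.1, 4.0, 4.2, 3.9, 4.1])  # similar levels
extroversion      = np.array([1.8, 4.6, 2.9, 4.9, 2.2])  # varied levels

# Common team composition operationalizations: the mean and the variance
for name, scores in [("conscientiousness", conscientiousness),
                     ("extroversion", extroversion)]:
    print(f"{name}: mean = {scores.mean():.2f}, variance = {scores.var(ddof=1):.2f}")

# Humphrey et al.'s pattern: low conscientiousness variance combined with
# high extroversion variance was associated with the best team performance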


Last but not least, for you IPAC members: don't forget that presentations from the just-concluded conference (which I heard was a rousing success) are starting to be posted in the members-only portion of the website. Good stuff!

Tuesday, July 12, 2011

Is Google+ what we've been waiting for?

No doubt by now you've heard about Google+ ("Google plus"). It's essentially Google's latest stab at trying to topple Facebook as the global social networking leader. While Facebook holds a lot of promise in the way of recruiting, selection, and other core HR functions, its use has been sporadic due to a number of issues. Can Google+ give us the functionality we've always wanted--and stick around long enough to be a major player?

If you haven't heard of Google+ (or haven't read enough), you can get more information here or here or here or here. Or heck, just watch their intro video. By some estimates it already has around 10 million users, which is pretty amazing considering it only came out a few weeks ago, although it's not anywhere near Facebook which claims it has 750 million.

But let's not get ahead of ourselves. Let's first review why Facebook isn't the holy grail we once thought it could be.

At first blush, a public social networking site holds a heck of a lot of promise. It allows organizations to learn more about potential candidates, things beyond a test score. It allows people to network, potentially increasing the speed and efficiency of information sharing. And it allows applicants to learn more about organizations--faster and more informally than a career webpage.

This all sounds good in theory. And some organizations have made Facebook work for them, by using things like fan pages. But for many, the promise has never been fulfilled. Let's look at the main reasons why, and how Google+ may be the answer.

The main problem is that on Facebook, there's just one you. When you post something on Facebook, it goes to all your connections. Friends, family, co-workers, acquaintances--everyone, unless they've "hidden" you. (Yes, you can post to a smaller group but it's a pain and who actually does this?) This has several implications:

- You watch what you say. Do you really want to say the same thing to your friends that you would to your family? Do you really want to post that picture for everyone to see?

- It increases "lurkers", who simply read but never post, thus not being true participants in the online social interaction.

- This hyper-openness serves as a barrier to some users, who are simply uncomfortable sharing their life with people.

- You have friend request anxiety. You dread getting an email that your boss or someone you don't really know wants to be "friends" on Facebook. If you say no you feel like a jerk; if you say yes then your openness goes down a notch. What fun is that?

Google+ deals with all this by allowing you to create something called "circles" where you add individuals to certain groups (e.g., friends, colleagues, family) which thereby allows you to post things to only those groups. It even allows you to view your own profile as if you were someone else (IMHO one of the most innovative features).

What are the implications of these circles? There are many:

1. People are more likely to create profiles. Along with the increased attention to security attracting new users, once word gets out that Google+ lets you manage your social identity much better (and more easily), people will be more likely to create a profile in the first place.

2. Individuals will be more likely to accept invitations to connect from organizations and recruiters. Why? Because they'll be able to manage what information gets shared with those individuals.

3. The information employers have access to will be less--but of higher quality. Assuming people think before posting (yes I'm giving people the benefit of the doubt here), while there may be less information on a profile for an employer to view, it will be more relevant. Instead of posts about children or parties, it will be opinions or accomplishments--things that might actually be job related and much less likely to get employers into legal hot water.

4. It could help with referrals, as individuals will feel more comfortable sharing information about jobs--or their own interest in jobs--without fear of what their management might think.

5. It could give potential applicants a more realistic job preview. With less concern about torpedoing your current job, people will be more likely to be open about the good--and the bad--things about where they work.

You can envision how Google+ has broader applicability for organizations. It allows organizations to create employee-only groups. It allows employees to create informal social groups--or more formal interest groups. It adds another way for colleagues to share knowledge. And it helps create that intangible bond that connects co-workers in a way that meetings and off-sites never can.

So there is quite a bit of promise, but there are still many questions. Could Facebook add design elements to mimic aspects of Google+? Absolutely (and I strongly suspect they are in the process of doing just that). Could Google fail to attract enough followers to its new site to make it the killer app that Facebook is? Sure; in fact there's a good example of this in Orkut (although it is quite popular in Brazil and India). Are there examples of many social networking sites that have flared and fizzled? You bet (heard of MySpace?). And the juggernaut that is Facebook is not to be ignored.

But there are reasons to believe that this could be the real deal. Google spent a lot of time testing this thing out and appears to be listening intently to users on issues from design to privacy--something Facebook has been grilled about for as long as I can remember. And I didn't even touch on the other features of Google+, such as real-time group video chat.

The bottom line is when it comes to websites, most of us are followers. All it takes is your friends and colleagues to start posting somewhere else (heck, it's just another bookmark), and before you know it Facebook could start looking a lot like another casualty in the hyper-competitive web wars. Fortunately, organizations will be the better for it.

Saturday, July 02, 2011

Should you hire more women for your teams?


Should you hire more women for the teams within your organization? You might think so after reading an article in the June 2011 Harvard Business Review. It's an interview with the authors of some research that came out last year in Science. In fact this hiring strategy has even been suggested based on this research.

But let's take a deeper look.


The takeaways from the HBR article (and the published study) suggest:

- there is a "collective intelligence" factor (c) that is related to team success

- this (c) factor predicts team success better than the average team intelligence score, the highest intelligence score among the team members, or other logical factors such as group cohesion and satisfaction

- the (c) factor is primarily related to the average social sensitivity of the team members, the equality of distribution of turn-taking during team conversation, and...(drumroll please) the proportion of females in the group

In the studies, the authors had nearly 700 individuals (one assumes students? the subjects aren't described) participate in teams of two to five on a variety of tasks, such as completing puzzles, brainstorming, and negotiating. At the end of the session they had them complete the criterion task--in the first study a video game of checkers against a computer opponent, in the second, an architectural design task.

So what did they find? As Kai Ryssdal would say, let's do the numbers:

- There did seem to be some general factor that predicted a significant amount of variance in the criteria (43% and 44% respectively).

- (c) seems to be modestly related to both average individual intelligence (r=.15) and maximum member intelligence (r=.19), but the authors stress the higher correlations with average social sensitivity of group members (r=.26), variance in the number of speaking turns (r=-.41), and proportion of females in a group (r=.23). That last correlation was largely a result of the women scoring much higher on the measure of social sensitivity, which the authors stress came out on top in terms of unique predictive power.

- the instrument used to measure social sensitivity, the "Reading the mind in the eyes" test, has subjects identify the emotion being displayed by a set of eyes (reminiscent of some emotional intelligence tests I've seen). It would be interesting to see how well other measures of social sensitivity (e.g., body language, tone of voice) predicted team decision making, and tie this with other research that has shown emotional intelligence measures predicting team performance.

- The standardized regression coefficients (betas) for (c) were .51 and .36 for the two criteria, substantially above average member intelligence (.08, .05) and maximum member intelligence (.01, .12). (A sketch of how betas like these fall out of the underlying correlations follows this list.)

- (c)'s relationship with performance on the various tasks in Study 1 varied pretty widely, from .38 to .86. This, combined with the differential prediction of the criteria, suggests (c) as conceptualized may be more useful for predicting performance on certain group tasks. It's worth noting that the lowest correlation was with brainstorming--a task that requires less team interaction.

- The authors do not say what instrument was used to measure individual intelligence. This may or may not matter.
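
For the curious, standardized betas like these fall straight out of the correlation matrix. A toy Python sketch--the criterion correlations below are hypothetical stand-ins chosen to mimic the reported pattern, since the full matrix isn't reproduced here:

import numpy as np

r_c_crit  = 0.55  # (c) with the criterion (hypothetical)
r_iq_crit = 0.20  # average member intelligence with the criterion (hypothetical)
r_c_iq    = 0.15  # (c) with average intelligence (roughly as reported)

# Standardized coefficients solve R_xx @ beta = r_xy
R_xx = np.array([[1.0, r_c_iq],
                 [r_c_iq, 1.0]])
r_xy = np.array([r_c_crit, r_iq_crit])
betas = np.linalg.solve(R_xx, r_xy)

print(f"beta for (c): {betas[0]:.2f}")               # ~.53
print(f"beta for avg intelligence: {betas[1]:.2f}")  # ~.12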


There are some important lessons here:

1. As is often the case, the farther we get from the actual publication, the more important it is to view the interpretation with caution. In this case, I believe some writers have over-emphasized and over-played the "flashy" result (more women on team -> better decisions) and failed to consider things like effect sizes or relationships among variables. What I'm more interested in is why the women scored higher.

2. From this research the concept of a collective intelligence factor does seem promising (and has been the subject of other recent popular publications). In reality this line of research is old as well as thriving, and includes such well-researched concepts as groupthink as well as several lines of research around what makes an effective team.

3. It is important to remember that job performance is multi-faceted. We know this from (among other things) previous research that has shown intelligence tests do a better job predicting task performance than contextual performance, where non-cognitive tests are at their best (this fact has interesting implications for the study that is the subject of this post). The results of this study remind us to carefully consider what behaviors we're hiring for.

4. It's studies like this that, when improperly analyzed, muddy the waters of our profession. Using this research to say you should hire more women is like saying you should hire more Whites than Blacks because they tend to score higher on intelligence tests. Aside from the obvious discriminatory intent, this is just plain bad decision making: it over-emphasizes differences at the group level and assumes that you have clear evidence that intelligence tests are highly correlated with performance in the job you are hiring for (and similarly valid tests with smaller mean group differences are unavailable).

I have to give the authors credit for going beyond gender as a causal factor in predicting team performance and looking for root relationships, and for not leaping to conclusions like organizations should hire more women, but instead focusing our attention on the implications for team development (they suggest electronic collaboration tools may increase collective intelligence).

This type of press is great for getting us to talk more about what matters. Let's just make sure when we do so we start with the research and consider all the important angles.

Monday, June 20, 2011

New emotional intelligence meta-analysis: Now with 65% more studies!

Emotional intelligence (EI) continues to be a hot topic in the I/O and HR communities. Some are big fans, some (including me) are more skeptical.

Last year's article in IOP by Cherniss, et al. and the accompanying commentaries provided a great overview of the current situation, with the bottom line that while many seem to agree on a definition of EI, measurement is all over the place, with different methods measuring different things and a distinct lack of convergent and discriminant validity.

EI has been shown to have important relationships with a variety of criteria, but many researchers and practitioners have been left wondering, what exactly is being measured?

In the latest Journal of Organizational Behavior, O'Boyle et al. attempt to shed light on the situation, primarily by looking at whether measures of EI add predictive validity above and beyond cognitive ability and five-factor personality measures.

You may remember a similar study published back in early 2010 by Joseph and Newman. So why the need for this one? The current authors point out that in addition to updating the data and using more current estimates of other relationships, "our data set includes 65 per cent more studies that examine the relationship between EI and job performance, with an N that is over twice as large." I don't know why I found this so funny--there's nothing wrong with trying to get a better handle on things--I guess it just reminded me of packaging on dishwasher detergent or something. In case you're curious, their final sample included 43 effect sizes relating EI to job performance.

ANYWAY, what did they find? In the words of the authors: "We found that all three streams of EI correlated with job performance. Streams 2 and 3 incrementally predicted job performance over and above cognitive intelligence and the FFM. In addition, dominance analyses showed that when predicting job performance, all three streams of EI exhibited substantial relative importance in the presence of the FFM and intelligence." [Stream 1 = ability measures, Stream 2 = self report, Stream 3 = mixed models]

Let's do the numbers: the correlation between EI and job performance varied depending on which "stream" of EI research was analyzed, from about .24 to about .30. So if you're considering using a measure of EI, any of the reputably-built measures should add validity to your process (and in fact these numbers may be low since the job performance measures were primarily task-based). Since the numbers were pretty similar, it seems they support a similar construct, right? Not so fast.

When they looked at how the streams related to personality measures and cognitive ability, they were all over the place. In their words, "For all six correlates (FFM and cognitive ability), we found significant Q-values indicating that the three streams relate to other personality and cognitive ability measures differently...These differences in how the EI streams related to other dispositional traits provide a contrasting perspective to the assertion that the various measures of EI assess the same construct."

What about the incremental validity? It ranged from pretty much nothing (stream 1) to a respectable .07 (stream 3) with stream 2 in the middle at .05. The stream 1 measures seem to have much more overlap with cognitive ability measures, and since ability measures dominate the prediction of performance, well...there you go.
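
If "incremental validity" is fuzzy, it's just the change in R-squared when EI enters a regression that already contains cognitive ability and personality. A minimal simulation of that hierarchical logic in Python--the effect sizes are invented for illustration, not O'Boyle et al.'s values:

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated predictors: cognitive ability, a personality composite, and EI
cog = rng.normal(size=n)
ffm = rng.normal(size=n)
ei  = 0.4 * ffm + rng.normal(size=n)  # EI overlaps with personality

# Simulated performance with a small unique EI contribution
perf = 0.5 * cog + 0.2 * ffm + 0.15 * ei + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_step1 = r_squared(np.column_stack([cog, ffm]), perf)      # ability + FFM
r2_step2 = r_squared(np.column_stack([cog, ffm, ei]), perf)  # ...plus EI

print(f"incremental validity (delta R-squared) = {r2_step2 - r2_step1:.3f}")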

So does that mean employers should avoid stream 1 measures (e.g., MSCEIT)? Again, not so fast. The authors point out that this type of EI measure may be more resistant to social desirability and faking effects--it primarily seems to lose its value when you're already using a cognitive ability measure. (Although again, we're talking task performance here)

All of which leaves me still wondering: what exactly is being measured? More to come I suppose; after all, EI is a newbie in the history of I/O constructs. The authors themselves point out that "much more work is still needed" on the construct validity of EI.

On the other hand, the more practical among you may be thinking, "who cares? it works!" And to them, I would only say, remember what Kurt Lewin said: "There is nothing so practical as a good theory." Without the theory...

You can see an in-press version here. It's good stuff.

Oh, and on a totally unrelated point, I highly recommend upgrading your Adobe Reader to version X if you haven't already. Highlighting with bookmarking at the click of a button.

Sunday, June 12, 2011

Mega research update

Time for a big research update. There's a lot to catch up on, so I'll go quickly and give you just enough for you to follow up on if you're interested.

- More support for the attraction-selection-attrition model

- A 25-year review of the study of leadership outcomes

- Faces with stereotype-relevant features are more subject to prejudice

- Early socioeconomic experience plays a big role in future risk-taking behavior

- Taking the perspective of another may be key to reducing racial bias

- Generational differences in Big 5 personality factors

- How to improve management research

- Pros and cons of using social networking sites to make HR decisions

- Research directions for talent management

- Organizational branding and "best employer" surveys

- Using social networking sites for hiring decisions (if we just had a single database...)

- How computer adaptive testing aids in delivering unproctored internet tests

- How broad dimension factors may improve assessment centers

- Practical intelligence predicts success as an entrepreneur

- Psychological capital (efficacy, hope, optimism, resilience) is related to job performance

- Expression of personality factors varies with the situation

Last but not least, this gem, a meta-analysis of the efficacy of simulation games for instruction. Relevance for us? As active engagement went up, so did learning. Lessons? Consider interactive simulations for recruitment and selection, but make sure the viewer is truly involved and not just a spectator.

Sunday, June 05, 2011

Can an applicant be TOO qualified? And...does it matter?


I've worked with people that seemed overqualified--you probably have too. They brought a bazooka when all that was needed was a squirt gun. Cognitively speaking.

They work amazingly fast, are incredibly innovative, but often seem bored or unhappy with their jobs. It's made me wonder what organizations can do with these folks to (1) get the most out of their skills, and (2) help them be satisfied with their work life.

Of course it's not just me who has ruminated about this issue. Recruiters and screeners are often faced with this question: do we take a chance on someone who seems to have much greater education, training, or competencies than the job calls for? Are they likely to simply turn around and leave? This issue takes on increased importance in this day and age with higher unemployment and more people simply looking for *A* job.

There have even been lawsuits about this issue. You may remember the Jordan v. New London case, where a law enforcement applicant sued the City for age discrimination after they failed to hire him because he scored too high (yes, you're reading that right) on a cognitive ability exam. (BTW, the City won)

But despite this case and recruiter perceptions, we're still left with a legitimate question: can an applicant be too qualified in ways that matter for an organization?

In the June 2011 issue of Industrial and Organizational Psychology, Erdogan, et al. provide an overview of this issue and argue that it deserves more attention than the scant amount it has received so far--particularly in comparison to other I/O topics.

What the research does seem to indicate is this: individuals that are overqualified (either objectively through, for example, having "too much" education, or through subjective perceptions) have more negative job attitudes. Specifically, they have been shown to have:

- lower job satisfaction
- lower life and career satisfaction
- lower organizational commitment
- higher turnover intentions

This makes intuitive sense. If you believe your qualifications far out-strip those required for the job, you're likely to feel underutilized and under-challenged, which in turn is likely to make you feel unsatisfied and cause you to look elsewhere.

What about actual turnover? Are overqualified applicants actually more likely to leave? This is one of the primary research questions and there is some research that supports this. However, the authors point out several problems with this line of thinking, including its cross-occupational nature.

In addition, the authors argue (and I agree) that more important than turnover is job performance. I think we'd all take a chance on someone that would have a huge positive impact, even if their stay was short.

So what about the performance of the overqualified; what does the research say? To quote the authors:

"There is...a small, but growing literature suggesting that overqualified individuals perform their jobs better than their less-qualified coworkers...whether turnover is good or bad for the organization needs to be considered within the context of how well individuals are performing their jobs." [emphasis added]

The authors also point out several additional advantages to hiring overqualified workers:

- they may be prime candidates for future roles, including leadership positions

- even if their tenure is short, their impact may be significant

They also point out that this is just the tip of the iceberg: much remains to be done in defining overqualification, studying how recruiters make these determinations, and examining situational factors, measurement, and other important issues.

So, bottom line: yes, there may be downsides to hiring someone who appears to be overqualified--but how certain are you that this is the case? And even if it's true, would the benefits outweigh the risks?

If an organization chooses to go with someone it perceives as overqualified, it becomes even more important to communicate openly and frequently with that person about job expectations as well as future possibilities. And management needs to realize that it may be getting more than its money's worth.


(Interestingly, the other focal article in this issue has to do with performance management and how difficult it is to get right. One of the points the authors make loud and clear is that it's important for supervisors to offer frequent, informal feedback. The common thread between the two articles: the importance of engaged, communicative supervisors who have an accurate view of their team's competencies.)

Tuesday, May 31, 2011

Recipe for losing a lawsuit

Ingredients

One large, diverse candidate pool
One cognitive ability OR physical agility test
One protective services job (police or fire if in season, otherwise corrections)

Optional: large, aggressive employee union
Optional: history of litigation


Instructions

1. Begin by deciding what type of exam--cognitive ability or physical agility--you feel like giving; don't worry about performing a job analysis first, since job analyses are time consuming and boring. If you must, select a small sample of current employees (preferably the poor performers) and provide minimal instruction. Don't worry about whether they are "true" job experts, and whatever you do, don't link tasks to KSAs--everyone hates doing it.

2. Select what you will be measuring. Base your decision on what you feel like, or whatever's easiest. Usually this is just doing what you did last time.

3. Have untrained analysts prepare the exams. Because anyone can do hiring, select whoever has time on their hands. Optional: search the Internet for a test that catches your eye. Rule of thumb is one question per content area (if you have more than one question, you're wasting applicant time).

4. Make sure the reading level of the exam is graduate school-level. After all, isn't reading an important part of any job? And don't you want the best?

5. Next, choose weighting of your exam components either randomly or based on gut feeling. When in doubt, place the largest weight on the test that is most related to cognitive ability.

6. Select a pass point. It should either be: (a) 70 percent; (b) based on administrative convenience; or (c) chosen at random. (For what steps 5 and 6 look like when done right, see the sketch after the recipe.)

7. Administer the exam, preferably with limited advertising. If you must advertise, give applicants a very short amount of time to prepare--after all, this isn't grade school. Do not pre-test the exam--if you've followed these instructions, it should be fine.

8. Score the exam--if you can, avoid "right/wrong" questions and go with ones where you can personally judge the quality of the answer. Don't worry about a boring "benchmark"--you know a good response when you see it.

9. Keep all scoring results and details regarding the process to yourself. Candidates don't need to know (and won't understand).

10. Make final selection decisions. Do not administer yet another test before making your selection, or if you must because of boring rules, make it an unstructured interview. Ask lots of questions like, "If you were a book, what would your title be?" If you have any women or minorities, ALWAYS ask questions about their ability to perform the job.

11. Do not document any of this process. Everyone involved will be with the organization for a long time, and people have really good memories.

Above all: have fun! After all, it's just people's livelihoods.
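
Satire aside, here's a minimal sketch of what steps 5 and 6 look like when done defensibly. Everything in it--component names, weights, norms, pass point--is hypothetical and purely illustrative; in practice, the weights come out of a job analysis and the pass point out of a criterion-referenced standard-setting procedure.

```python
# A sketch of a defensible exam composite. All numbers are hypothetical.

# Step 5, done right: weights derived from job analysis (e.g., SME ratings
# of KSA importance), not from gut feeling or habit.
WEIGHTS = {
    "written_knowledge": 0.30,
    "structured_interview": 0.45,
    "work_sample": 0.25,
}

# Applicant-pool norms used to put the components on a common scale.
MEANS = {"written_knowledge": 70.0, "structured_interview": 3.2, "work_sample": 82.0}
SDS = {"written_knowledge": 10.0, "structured_interview": 0.8, "work_sample": 7.5}

def composite(raw_scores):
    """Standardize each component, then apply the job-analysis-based weights."""
    return sum(
        w * (raw_scores[c] - MEANS[c]) / SDS[c]
        for c, w in WEIGHTS.items()
    )

# Step 6, done right: the pass point comes from a criterion-referenced
# procedure (e.g., a modified Angoff workshop), not "70 percent because
# that's what we always use."
PASS_POINT = 0.25  # hypothetical composite z-score from the standard-setting study

applicant = {"written_knowledge": 81, "structured_interview": 4.0, "work_sample": 88}
print("passes" if composite(applicant) >= PASS_POINT else "fails")
```

The point isn't the particular numbers; it's that every number has a documented rationale someone could defend in a deposition.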


A good example of how this type of thing plays out (albeit in a less horrific manner) is the recent decision in Easterling v. Connecticut Dept. of Corrections.

And for those of you that want to read more about how the law applies to selection, look for an upcoming IPAC monograph written by yours truly!

Saturday, May 21, 2011

Shameless plug: The 2011 IPAC conference

And now for something completely different. A man with a marketing video.

It's getting toward the end of May, so if you haven't made up your mind about attending this year's premier event for practitioners of assessment and selection methods, you might want to do it soon.

Details: July 17-20 at the Dupont Hotel in Washington, D.C. There are so many great presentations I can't even begin to summarize them. Check out the details here.

If that doesn't convince you, I doubt amateurish marketing tactics will work, but since you're still reading and I have you captive...

Sunday, May 15, 2011

IJSA v.19 #2: Personality, personality, personality (and more)


The June 2011 issue of the International Journal of Selection and Assessment (IJSA, volume 19, issue 2) is out. It's chock-full of articles on personality measurement, but it includes other topics as well, so let's jump in! Warning: lots of content ahead.

- O'Brien and LaHuis analyzed applicant and incumbent responses to the 16PF personality inventory and found differential item functioning for over half the items (but of those only 20% were in the hypothesized direction!).

- Reddock, et al. report on an interesting study of personality scores and cognitive ability predicting GPA among students. "At school" frame-of-reference instructions increased validity and, even more interesting, within-person inconsistency on personality dimensions increased validity beyond conscientiousness and ability.

- Fein & Klein introduce a creative approach: using combinations of facets of Five-Factor Model traits to predict outcomes. Specifically, the authors found that a combination (e.g., assertiveness, activity, deliberation) did as well or better in predicting behavioral self-regulation compared to any single facet or trait.

- Think openness to experience is the runt of the FFM? Mussel, et al. would beg to differ. The authors argue that subdimensions and facets of openness (e.g., curiosity, creativity) are highly relevant for the workplace and understudied--and demonstrate differential criterion-related and construct validity.

- So just when you're thinking to yourself, "hey, I'm liking this subdimension/facet approach," along comes van der Linden, et al. with a study of the so-called General Factor of Personality (GFP), which is proposed to occupy the top of the personality structure hierarchy. The authors studied over 20,000 members of the Netherlands armed forces (fun facts: active force of 61,000, 1.65% of GDP) and found evidence supporting a GFP and value in measuring it (i.e., it predicted dropping out of military training). Unsurprisingly, not everyone is on the GFP bus.

- Next, another fascinating study, this one by Robie et al., on the impact of the economy on incumbent leaders' personality scores. In their sample of US bank employees, as unemployment went up, so did scores on the personality inventory. Faking or environmental impact? A fun coffee-break discussion.

- Recruiters, through training and years of experience, are better at judging applicant personality than laypersons, right? Sort of. Mast, et al. found that while recruiters were better at judging the "global personality profile" of videotaped applicants as well as detecting lies, laypeople (students in this case) were better at judging specific personality traits.

- Last one on the personality front: Iliescu, et al. report the results of a study of the Employee Screening Questionnaire (ESQ), a well-known covert, forced-choice integrity measure. Scores showed high criterion-related validity, particularly for counterproductive work behaviors.

- Okay, let's move away from personality testing. Ziegler, et al. present a meta-analysis of predicting training success using g, specific abilities, and interviews. The authors were curious whether the dominant paradigm--that g is the single best predictor--would hold up in a single sample. Answer? Yep. But specific abilities and structured interviews were valuable additions (unstructured interviews--not so much), and job complexity moderated some of the relationships.

- Given their popularity and long history, it's surprising that there isn't more research on role-players in assessment centers (ACs). Schollaert and Lievens aim to rectify this by investigating the utility of predetermined prompts for role-players during ACs. Turns out there are advantages for measuring certain dimensions (problem solving, interpersonal sensitivity). Sounds promising to me. Fortunately you can read the article here.

- What's the best way to combine assessment scores into an overall profile? Depends on whom you ask. Diab, et al. gathered information from a sample of adults and found that those in the U.S. preferred holistic over mechanical integration for both interview scores and other test scores, whereas those outside the U.S. preferred holistic integration for interview scores only.

- Still with me? Last but not least: re-testing effects are a persistent concern, particularly on knowledge-based tests. Dunlop et al. looked at a sample of firefighter applicants and found the largest practice effects for abstract reasoning and mechanical comprehension (both timed)--although even those were only about two-fifths of a standard deviation. Smaller effects were found for a timed test of numerical comprehension and an untimed situational judgment test. For all four tests, practice effects diminished to non-significance by the third session.
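
If "two-fifths of a standard deviation" feels abstract, the arithmetic behind a practice effect is simple: the session-to-session gain divided by the spread of first-attempt scores. A quick sketch with made-up numbers:

```python
import statistics

# Hypothetical first- and second-attempt scores for the same applicants.
session1 = [18, 22, 25, 20, 24]
session2 = [19, 23, 26, 21, 25]  # each applicant gains one point on retest

gain = statistics.mean(session2) - statistics.mean(session1)
d = gain / statistics.stdev(session1)  # practice effect in SD units
print(f"practice effect d = {d:.2f}")  # about 0.35 with these numbers
```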

Sunday, May 08, 2011

Hiring HR professionals: What are we thinking?

When you hire someone for your Accounting department, what do you look for? Accounting experience, undoubtedly, but presumably you also look for someone with some college-level accounting training as well as basic competencies such as facility with numbers, conscientiousness, etc.

What about IT support? Again, in most cases you're probably looking for experience with specific hardware or software or general support experience, but in many cases you're searching that resume for formal education/training in IT-related topics.

The connection? For many organizational "support" functions, we look not only for experience but also for educational experiences that give the individual a grounding in the basics of the field and (hopefully) train their mind to recognize historical developments as well as connections between concepts.

So why is it that when we hire for HR, another support function, our brains fall out of our ears and we focus primarily on past experience? This weakness seems common in the public sector, but I'm guessing the private sector is not immune.

Phrased another way: Why don't more organizations place value on formal HR education when hiring?

I'm not suggesting that one needs a degree in HR to be good at it, although I do think not having one limits people. What concerns me is the apparent lack of importance placed on these degrees and what that says about the profession.

Is it because formal HR educational programs don't exist? Nope. According to the College Board, more than 350 schools offer a major in HRM.

Is it because formal education in HR isn't as important for job performance as experience? I'm not aware of any research that shows this to be true (if you are, please enlighten me).

No, I suspect the following:

1) Many HR leaders themselves do not have formal educational training in HR, so they tend not to think of it as a screening tool (or place much value on it).

2) Similarly, there is a lack of knowledge about HR educational programs--what they offer, the advantage of having gone through one, and how to connect with the schools.

3) Relatively few of the candidates who apply for HR vacancies have a relevant degree (whether because few individuals hold HR degrees or because many applicants believe anyone can do HR).

4) HR is still seen as largely transactional and/or not a critical business function, so the qualifications sought have more to do with customer service than with formal training. (I believe this is a large reason why HR outsourcing is so easy for many executives to contemplate.)

5) Many are simply passing through HR. Many incumbents see HR not as a "career" but as a stopping point on their way to...something else. But much like Lightning McQueen (or Doc Hollywood, if you prefer), they find they have a hard time leaving, either because they come to like it or because they're not as employable as they thought.

6) The professional HR organizations and HR publications focus on anecdotes, opinion, and news bits rather than formal study and analysis. SHRM is not SIOP.

So why do I care about this topic? Because I see HR stagnating until it truly becomes a profession and not a loose collection of people who vaguely care about things relating to people management. And part of becoming a true profession is placing formal structure around the path from education to employment.

I'm also concerned because of the relationship between I/O and HR. Ultimately much of what is researched in I/O gets practiced through HR, and there is a close relationship in many people's minds--in fact I would wager most managers haven't the foggiest idea what the difference is. So what impacts HR ultimately impacts I/O.

Maybe it's just not there yet. Maybe I need to be patient. HR's a relatively new field and maybe it just needs time to develop, and to figure out questions like its relationship to I/O.

But given what I've seen, I'm not feeling optimistic. I see HR shops being outsourced or automated, resulting in more IT skills being required than knowledge about research on human behavior. Inevitably this will lead many organizations to lose out on important efficiencies they could be gaining (not to mention improvements in the work environment).

What can be done? I don't have all the answers, just some suggestions:

1) A wider promotion of the value of formal HR education. SHRM, I'm looking at you, as well as the other HR professional organizations.

2) More research on the connection between formal HR education and job performance.

3) Effort on the part of HR leaders to at least consider the potential importance of HR education when hiring for their teams.

4) More effort on the part of HR leaders to establish connections to schools that offer HR degrees and begin programs like internships and formal recruiting.

5) More organizational support (e.g., tuition reimbursement) for staff to obtain HR degrees.


To read more about this issue, I highly recommend starting with the 2007 piece by Sara Rynes and her colleagues.

Hat tip to this HR Examiner article, which helped me crystallize something that's been bothering me for a long time.

Sunday, May 01, 2011

Tech tools: Brainshark and Greenshot

Brainshark and Greenshot. Sounds like a kids' cartoon about a pair of superheroes.

But no, this post is all about a pair of simple tools that you can use in a variety of ways to enhance your recruitment and selection efforts and just plain make your life easier.

Brainshark is a remarkably simple, and free, tool you can use to create slideshows or videos with audio in a matter of minutes. Did I mention it's free?

Simply upload your file, add audio (if it's a PowerPoint) using either your computer microphone or phone, name your slides, and you're good to go! You can see a simple example of a promotional spot I whipped up below--remember, my specialty is information filtering, not multimedia! Make sure to check out the "Contents" menu to the right of the fast-forward button.


[Embedded Brainshark presentation]

This took maybe thirty minutes to develop and record, and I used my computer's built-in microphone, hence the less-than-stellar audio quality. But I think you get the idea and see some of the possibilities:

- realistic job preview
- job advertising
- instructions for applying

And so on. For $10 or $20 a month, respectively, you can upgrade to Brainshark Pro or Brainshark Pro Trainer, which add options like private sharing, lead capture, testing, and LMS integration. Just pretty darn cool all around, I say.


The next tool is simpler but no less useful on a day-to-day basis. I know I'm not the only one out there who likes SnagIt. It's an easy way to take screenshots of parts of the screen and quickly add borders, arrows, or other accents. And it's very reasonably priced.

But I'm all about the free stuff whenever possible, which is why I was pleased to learn about Greenshot, a scaled-down tool that gives you pretty much what you'd get in SnagIt, if a little less snazzy. Simply install the software, and whenever you press Print Screen, you can capture a specific region instead of the whole screen. I use this all the time for showing others what I'm talking about, for presentations, and for user guides. You can also capture just the active window, or the whole screen.

One feature unique (to my knowledge) to Greenshot is "obfuscate", which lets you blur parts of the picture (e.g., names, SSNs) you may wish to hide. See the screenshot below for an example where I obfuscated part of the blog post title:

[Screenshot: a Greenshot capture with part of this post's title blurred]
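
If you ever need to do the same thing in bulk--say, redacting names across a pile of screenshots--a few lines of scripting will do it. Here's a minimal sketch using Python's Pillow imaging library; to be clear, Greenshot has no scripting interface that I know of, so this is a stand-in for the idea, not how Greenshot itself works.

```python
from PIL import Image, ImageFilter  # Pillow: pip install Pillow

def obfuscate(in_path, box, out_path, radius=8):
    """Blur a rectangular region of an image.

    box is (left, upper, right, lower) in pixels--the area you want hidden.
    """
    img = Image.open(in_path).convert("RGB")
    region = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
    img.paste(region, box[:2])  # paste the blurred patch back at its corner
    img.save(out_path)

# Hypothetical usage: blur a name printed near the top of a screenshot.
obfuscate("screenshot.png", (40, 10, 400, 50), "screenshot_redacted.png")
```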
The one feature I'd still use SnagIt for is capturing a scrolling webpage with its links intact. Very handy. But other than that, Greenshot would do ya just fine.

So there you have it, two simple tools that have the potential to add tremendous value to your life. Hope you enjoy.

Hat tip to my colleagues at CODESP for turning me on to Brainshark, and my friends at Biddle for Greenshot.