Wednesday, October 31, 2007
Good reading for this October 31, 2007--enough to frighten us into paying renewed attention to our recruiting and assessment:
Testing without analysis...now that's scary!
It's scary, all the things that get in the way of job performance
What's that sound? Oh, just ATS reports running.
Be afraid...very afraid...of drop-down menus!
Hope your career portal isn't frighteningly bad
The number of job boards is truly monstrous
Monday, October 29, 2007
October '07 ACN
The October 2007 issue of the Assessment Council News is out with two great articles:
First, Dr. Mike Aamodt tackles the issue of validity coefficients with Beauty May Be in the Eye of the Beholder, but is the Same True of a Validity Coefficient? (begins page 2)
In the article, Dr. Aamodt gathers data from experts in the assessment community on questions such as:
- Is there a minimum value for a validity coefficient that would generally be accepted by testing experts? If so, what is it?
(includes a great table summarizing where certain validity coefficient values have been referenced)
- What is the lowest uncorrected validity coefficient that you believe would indicate that an inference from a test has acceptable criterion validity?
- If a validity coefficient is statistically significant, is that enough to imply job relatedness? (see the quick simulation below)
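The third question is worth pausing on, because statistical significance says almost nothing about a coefficient's size. Here's a quick simulation (my own illustration, not from the article; Python with NumPy and SciPy) showing that in a large applicant sample, even a validity coefficient most experts would call trivial comes out highly "significant":

```python
# Illustrative only: significance vs. magnitude of a validity coefficient.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000          # a large applicant sample
true_r = 0.05     # a validity few experts would call practically useful

# Simulate test scores and job performance sharing a weak true correlation.
test = rng.standard_normal(n)
perf = true_r * test + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

r, p = stats.pearsonr(test, perf)
print(f"r = {r:.3f}, p = {p:.5f}")  # p is typically < .001 despite the tiny r
```

In other words, job relatedness is a judgment about the size and meaning of the coefficient, not just its p-value.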
The second article is by Natasha Riley and covers a topic near and dear to our hearts--Unproctored Internet Testing--The Technological Edge: Panacea or Pandora's Box? (page 11)
If the title seems a little odd and/or familiar, it's because it's a combination of various presentation titles from the 2007 IPMAAC conference where unproctored Internet testing was a hot topic. In the article, Natasha covers some pros and cons of this type of testing and describes how Riverside County in California is having some success with it.
Remember, Internet-based testing does not have to be about selecting people out. It can be about giving candidates tools they can use to determine whether they would be a good fit. Cheating stops being an obstacle once we remove the motivation for it. Consider giving applicants the "exam" and letting them decide whether to move forward given their results and how they compare to successful job incumbents.
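In that spirit, here's a hypothetical sketch (plain Python; the scores and wording are invented) of the kind of feedback such a self-assessment "exam" could give an applicant:

```python
# Hypothetical self-assessment feedback: compare an applicant's practice-exam
# score to those of successful incumbents and let the applicant decide.
import numpy as np

incumbent_scores = np.array([72, 78, 81, 84, 85, 88, 90, 91, 93, 95])  # made-up data

def self_assessment_feedback(applicant_score):
    percentile = (incumbent_scores < applicant_score).mean() * 100
    return (f"Your score of {applicant_score} falls at roughly the "
            f"{percentile:.0f}th percentile of successful incumbents. "
            "Whether to move forward with your application is up to you.")

print(self_assessment_feedback(83))
```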
Anyway, kudos to IPMAAC for another illuminating issue of ACN. Keep 'em comin', and folks, watch out for the call for proposals for the 2008 IPMAAC conference! (to be announced shortly)
Wednesday, October 24, 2007
October '07 TIP: Alternatives and Title VII
You legal buffs out there know that under Title VII of the Civil Rights Act of 1964 (as amended in 1991) there is a "burden-shifting" framework that lays out how an employment discrimination case (hypothetically) proceeds:
1 - The plaintiff must show that the employer is using a particular employment practice (e.g., a selection test) that results in disparate (or adverse) impact against a legally protected group; if successful,
2 - The employer must show that the practice was/is job related for the position in question and consistent with business necessity; if successful,
3 - The plaintiff must show that there is an alternative employment practice (e.g., a different type of test) that would serve the employer's needs, be equally valid, and have less adverse impact, and that the employer refuses to adopt it. The classic case is plaintiffs suing over a written knowledge test and arguing that a personality or performance test should have been used.
You may also know that plaintiffs rarely win employment lawsuits (for many reasons, one of which is that employers are getting better at #2 above), and there seems to be a shift toward the third prong of the case--showing that there are alternative testing mechanisms out there that are equally valid and have less adverse impact.
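For those who want to see what the disparate impact showing in prong 1 typically looks like in practice, here's a minimal sketch (plain Python, invented numbers, and certainly not legal advice) of the EEOC's "four-fifths rule," a common first screen:

```python
# Four-fifths (80%) rule of thumb: flag adverse impact when a group's
# selection rate is less than 80% of the highest group's rate.
def selection_rate(selected, applicants):
    return selected / applicants

focal = selection_rate(24, 100)      # e.g., protected-group applicants
reference = selection_rate(40, 100)  # e.g., the highest-selected group

impact_ratio = focal / reference
print(f"Impact ratio = {impact_ratio:.2f}",
      "-> flagged for adverse impact" if impact_ratio < 0.80 else "-> passes")
```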
The October issue of the Industrial-Organizational Psychologist (TIP) contains two articles (both by individuals who have served as expert witnesses in discrimination cases) that touch on this subject and are worth a read:
Slippery slope of "alternatives" altering the topography of employment testing? by James Sharf
and
Less adverse alternatives: Making progress and avoiding red herrings by James Outtz
Also in this issue, a great analysis of the recent U.S. Supreme Court ruling in Parents Involved in Community Schools v. Seattle School District by Art Gutman and Eric Dunleavy that reviews in detail the current status of affirmative action.
Monday, October 22, 2007
Looking far and wide
When it comes to finding talented individuals, how far and wide do you look? Are you as creative as you could be?
In a recent article, James E. Challenger of the outplacement firm Challenger, Gray & Christmas described the results of a new study in which half of the 100 HR executives polled said their companies work informally with former employees, and 23% considered stay-at-home parents to be valuable recruiting targets. The goal? Finding folks who are experienced and easy to train.
What does your organization do when people leave? Does it go beyond getting a forwarding address? How about following a structured approach to keep track of talented individuals in case their next job fizzles?
Challenger cites Lehman Brothers as a leader in this area with its Encore program, which according to the website is "designed to facilitate networking and professional development opportunities for individuals interested in resuming their careers in the financial services industry. Ideal candidates are women and men, preferably with industry-related experience at a vice-president level or above, who have voluntarily left the workforce for a year or more."
Does your organization actively recruit people that have been out of the workforce for a year or more? Or are these people seen as "stale"?
The article also includes "resources for returning parents", including:
UCEAdirectory.org (searchable database of continuing education courses)
Meetup.com (real-world social networking)
Modernmom.com (advice on activities and work-life balance)
Showmomthemoney.com (money tips, degree links, and more)
Ladies Who Launch (networking and entrepreneurial advice)
Has your organization considered recruiting efforts that target these types of groups? Or is it hoping that qualified applicants find you?
Some things to think about as we all work on being more creative about reaching out to all qualified candidates. I bet there are a lot of folks out there who would love to see a list of employers willing to hire returning workers (as well as those open to part-time arrangements).
Wednesday, October 17, 2007
Increasing response rates to web surveys
I'm guessing I'm not the only one who has been using web surveys more and more (I'm a big fan of SurveyMonkey), and we'll probably all do more of this in the future. That's why this recent bit of research by Dr. Thomas Archer is so valuable. Titled "Characteristics associated with increasing the response rates of web-based surveys," it's based on results from 99 web-based surveys conducted over more than two and a half years using Zoomerang.
The results were somewhat surprising (to me at least). The length of the questionnaire wasn't particularly important in terms of response rate--neither the number of open-ended questions nor the length of the rating scales mattered much. Instead, the challenge is getting people to the survey in the first place.
How do we do that? The author recommends several strategies:
1 - Leave the questionnaire open for a while (say, three weeks), and send out a couple of reminders along the way (see the quick schedule sketch below).
2 - Pay attention to how you write your survey invitations. They should be written at a low grade level in terms of readability.
3 - Make it clear to survey participants "what's in it for them" (e.g., you'll get a copy of the results).
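Strategy #1 is easy to operationalize. A trivial sketch (plain Python; the dates are invented) of a three-week window with two reminders:

```python
# Simple survey schedule: open three weeks, remind at the end of weeks 1 and 2.
from datetime import date, timedelta

launch = date(2007, 10, 17)               # invitations go out
close = launch + timedelta(weeks=3)       # questionnaire closes
reminders = [launch + timedelta(weeks=k) for k in (1, 2)]

print(f"Launch {launch}; reminders {reminders[0]} and {reminders[1]}; close {close}")
```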
If you haven't played around with web-based surveys, I'd encourage you to. They're very easy to learn and typically inexpensive.
I'd be interested in hearing whether anyone out there is using web-based surveys as part of their recruitment/assessment process. Seems like a natural fit.
Saturday, October 13, 2007
Depressed workers
A new report by the U.S. Department of Health and Human Services' Substance Abuse and Mental Health Services Administration (SAMHSA; say that five times fast) states that an average of 7% of U.S. workers age 18 to 64 experienced a major depressive episode (MDE) in the past year. The data are based on surveys of full-time workers conducted in 2004 and 2006.
Importantly, the rate of MDEs varied by occupation, age, and gender:
- The highest rates of depression were found among personal care and service occupations (10.8%) and food preparation and serving related occupations (10.3%).
- The highest rate among female workers was in personal care and service (a whopping 14.8%). The highest rate among men was in arts, design, entertainment, sports, and media occupations (a much lower 6.7%).
- The lowest overall rates were found in engineering, architecture, and surveying (4.3%); life, physical, and social sciences (4.4%); and installation, maintenance, and repair (4.4%).
- The highest rate among those age 18-25 was in healthcare practitioners and technical (11.9%); lowest was in life, physical, and social sciences (4.3%).
- The highest rates among those age 50-64 were in financial (9.8%) and personal care and service (9.7%) while the lowest rates were in sales and related (3.6%) and production (3.7%).
(Keep in mind when looking at occupational differences that women typically report higher levels of depression and that depression rates decrease with age.)
Implications? Keep these statistics in mind if you have people in these occupations in your organization. Depression will impact productivity and your recruitment efforts, not to mention any other processes or initiatives that require high levels of employee energy and involvement. Wellness efforts specifically targeting individuals in these occupations may pay bigger dividends.
The report notes that U.S. companies lose an estimated $30-40 billion a year due to employee depression.
Thursday, October 11, 2007
September '07 issue of J.A.P.
The September 2007 Journal of Applied Psychology is out and it's got some research we need to look at...
First, Hogan, Barrett, and Hogan present the results of a study of over 5,000 job applicants who took a five-factor personality test on two separate occasions (6 months apart). Only 5% of applicants improved their score on the second administration, and scores were as likely to change in a negative direction as in a positive one. The authors suggest that given these results, faking on personality measures is not a significant issue in real-world settings. Comment: this is faking defined as improving your score to match the job, not faking as misrepresenting yourself consistently over time. Also, I can't help but think of the recent article by Morgeson et al. in Personnel Psych where they argued that faking isn't the issue; it's the low validities we should be concerned about.
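As an aside, one sensible way to define "improved their score" is relative to measurement error: a gain counts only if it exceeds what retest unreliability alone would produce. A rough sketch of that logic (my own illustration with made-up numbers, not the authors' method):

```python
# Reliable-change-style check: what share of retest gains exceeds chance?
import numpy as np

def sem_difference(sd, reliability):
    """Standard error of the difference between two administrations."""
    return sd * np.sqrt(2 * (1 - reliability))

rng = np.random.default_rng(0)
t1 = rng.normal(50, 10, size=5000)       # time-1 scores
t2 = t1 + rng.normal(0, 4, size=5000)    # retest: measurement noise only

threshold = 1.96 * sem_difference(sd=10, reliability=0.92)
print(f"Gains beyond chance: {np.mean((t2 - t1) > threshold):.1%}")  # ~2.5%
```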
Next, De Corte, Lievens, and Sackett present a procedure designed to determine predictor composites (i.e., how much each testing method should be "worth") that optimize the trade-off between validity and adverse impact. The procedure is tested with various scenarios with positive results. You can actually download the executable code here and an in-press version of the article is here (thank you, Drs. De Corte and Lievens!).
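The arithmetic at the heart of this kind of trade-off work is the standard composite-validity formula, R = w'v / sqrt(w'Rw), where w holds the predictor weights, v the predictor validities, and R the predictor intercorrelations. A small sketch (NumPy, with invented values; this is the textbook formula, not the authors' code):

```python
# Composite validity for different weightings of two predictors.
import numpy as np

v = np.array([0.50, 0.30])     # validities: e.g., cognitive test, personality test
Rxx = np.array([[1.0, 0.2],    # predictor intercorrelation matrix
                [0.2, 1.0]])

def composite_validity(w):
    w = np.asarray(w, dtype=float)
    return (w @ v) / np.sqrt(w @ Rxx @ w)

for w in ([1.0, 0.0], [0.5, 0.5], [0.3, 0.7]):
    print(w, f"-> composite R = {composite_validity(w):.3f}")
```

What makes the authors' procedure interesting is adding adverse impact to the objective; the sketch above covers only the validity side of the trade-off.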
Speaking of validity, the next article of interest is by Newman, Jacobs, and Bartram, and looks at the relative accuracy of three techniques for estimating validity and adverse impact (local validity studies, meta-analysis, and Bayesian analysis). The authors describe which method is optimal in different conditions, using measures of cognitive ability and conscientiousness as predictors. They even toss in recommendations for how to estimate local parameters.
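The Bayesian piece is easier to grasp with a toy example: treat the meta-analytic estimate as a prior and shrink a small local study toward it, weighting each by its precision. A sketch under normal-normal assumptions (my own simplification, not the authors' model):

```python
# Precision-weighted blend of a meta-analytic prior and a local validity study.
prior_mean, prior_var = 0.40, 0.0049    # meta-analytic r and its variance (invented)
local_r, n_local = 0.25, 60             # small local validation study (invented)
local_var = (1 - local_r**2) ** 2 / (n_local - 1)  # approx. sampling variance of r

w_prior, w_local = 1 / prior_var, 1 / local_var
posterior = (w_prior * prior_mean + w_local * local_r) / (w_prior + w_local)
print(f"Posterior validity estimate: {posterior:.3f}")  # lands between the two
```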
Next, a fascinating and useful study of counterproductive work behaviors (CWBs; things like theft, sabotage, or assault) by Roberts, Harms, Caspi, and Moffitt that tackles the issue from a developmental perspective. Using data from a 23-year longitudinal study of 930 individuals, the authors found that individuals diagnosed with childhood conduct disorder were more likely to commit CWBs as young adults. On the other hand, criminal convictions occurring at a young age were unrelated to CWBs demonstrated later on. Job conditions and personality traits had their own impact on CWBs, above and beyond background factors. Great stuff, especially for those of you with a particular interest in biodata and/or background checks.
Last but not least, a study of person-organization (P-O) fit by Resick, Baltes, and Shantz. Using data from 299 participants in a 12-week internship program, the authors found that the relationship between P-O fit on the one hand and job satisfaction, job choice, and job offer acceptance on the other depends on the type of fit (needs-supplies vs. demands-abilities) as well as the conscientiousness of the individual. Good food for thought when thinking about P-O fit, a consistently popular concept.
Honorable mention: This meta-analysis by Humphrey, Nahrgang, and Morgeson of 259 studies that investigated work design impacts on worker attitudes and behaviors. Think behavior is determined solely by individual ability and disposition? Ya might want to take a gander at this study; it'll change your tune. A great reminder that satisfaction and performance are the result of both the individual and his/her work environment. Also available here.
Monday, October 08, 2007
Checkster and SkillSurvey automate reference checking
When it comes to things that supervisors (and, frankly, HR) don't look forward to, reference checking probably ranks in the top 5. Checking references is time-consuming and difficult to do well, because many references refuse to do more than confirm name, job title, salary, and employment dates for fear of being sued. This is unfortunate, since lawsuits in this area are quite rare and reference checks can be a great source of information.
So it was with no small amount of excitement that I discovered two web-based services that are automating the reference check process--Checkster and SkillSurvey. The basic idea behind these services is that a brief survey consisting of both rating scales and open-ended questions is sent out electronically to references; responses to these surveys (generally at least 3) are combined by the services into an overall report for the employer. While the services are substantially similar, in this post I'll give you a brief overview of each.
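To make the workflow concrete, here's a hypothetical sketch of the aggregation step both services automate (field names are my stand-ins; the three-response minimum echoes how the services describe it):

```python
# Hypothetical report builder: release a summary only after enough references
# respond, keeping individual answers confidential.
from statistics import mean

MIN_RESPONSES = 3  # reports are compiled once at least three responses arrive

responses = [
    {"reference": "ref1@example.com", "teamwork": 4, "dependability": 5},
    {"reference": "ref2@example.com", "teamwork": 5, "dependability": 4},
    {"reference": "ref3@example.com", "teamwork": 4, "dependability": 4},
]

def build_report(responses):
    if len(responses) < MIN_RESPONSES:
        return None  # not enough responses yet; protect confidentiality
    competencies = [k for k in responses[0] if k != "reference"]
    return {c: round(mean(r[c] for r in responses), 2) for c in competencies}

print(build_report(responses))  # {'teamwork': 4.33, 'dependability': 4.33}
```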
Checkster
Checkster is the brainchild of CEO Yves Lermusi, formerly of Taleo (and frequent contributor to ERE). Lermusi noticed that the frequency and quality of the performance feedback most people receive drops dramatically when they move from school to work, making it difficult for people to understand their strengths and areas for development. To help remedy this, he developed Checkster to be a "personal feedback management tool"--a focus that he says distinguishes it from other services, whose bread and butter is employer-based reference checking. Applicants receive the results of the reference check, just as the employer does, with the idea that this information will be used to help people develop and make better decisions regarding their career.
With Checkster, the employer simply enters the name and e-mail address of each applicant along with the requisition and selects the type of survey to be delivered (Checkster also has a 360-degree survey). That's it. Simple, eh? The applicant takes it from there, logging into Checkster and entering reference names and contact information. References have 7 days to take the quick and confidential survey, and Checkster compiles the resulting information into a report after at least three responses have been collected. From the employer's side, a simple account screen allows you to manage your requisitions and see the status of each. You can see an overview of how it works here, and watch a demo here that includes pictures of a report.
Big bonus: Checkster also has a free employment verification feature which will send an e-mail to previous employers to verify dates of employment, reporting structure, compensation, and eligibility for rehire.
Price: $50 per requisition, which allows you to check references for up to five candidates with a maximum of 15 references per candidate (volume discounts are also available).
SkillSurvey
As I mentioned, SkillSurvey and Checkster work in a similar fashion--the employer enters candidate information, the candidate enters reference information (or the employer can), references evaluate the candidate, and SkillSurvey generates the report.
Differences between Checkster and SkillSurvey that I observed:
1) SkillSurvey allows you to choose different types of surveys depending on the job. Each includes competencies developed by SkillSurvey staff. For example, there are different surveys for sales positions, IT positions, and HR positions (click here for an example for Marketing Manager).
2) Each point on SkillSurvey's rating scale is anchored, which could potentially lead to better reliability.
3) SkillSurvey reports are not automatically available to the candidate (unlike Checkster)--this reflects the emphasis that SkillSurvey places on being primarily a tool for the employer, versus Checkster's focus on individual development.
4) SkillSurvey has a sourcing component built in--you can download a spreadsheet that contains all the information on reference-givers that you can sort and use to identify applicants (very cool).
5) Checkster's reports are a little shorter and more graphical, while SkillSurvey reports are more text-based and extensive.
6) In terms of customization, SkillSurvey offers many options for altering things like turnaround time, and even weighting questions.
7) The actual text that goes out from candidates to their references is easier to modify in Checkster.
A SkillSurvey overview video with screens of surveys and reports is available here, and sample reports are here. They even have a blog written by Doug LaPasta, their founder and chairman.
Price: $59 for one candidate with significant discounts for volume; usually charged in units of 100 candidates. The employer controls the number of reference givers required for completion of the report (anywhere from 2-15).
By the way, SkillSurvey was selected as a Top HR Product for 2007 by Human Resource Executive.
Summary
Both products have the potential to dramatically decrease the amount of time spent checking references, and they have the added benefits of standardization as well as signaling an affinity for technology. Both companies have taken steps to ensure reference givers feel comfortable giving out information. The information may also be of higher quality since the process is being handled by a third party.
Both services were extremely easy to use. I found representatives from both companies to be knowledgeable and helpful. I'm sure as both products mature we'll see great additions, including hopefully an increased ability to gather off-list checks and even more options for tailoring the surveys.
Some things to keep in mind: (1) Like all mass-mailing type services, make sure e-mails from these companies don't get blocked by firewalls; and (2) Because some candidates may provide false references, do periodic spot checks (e.g., by verifying name & e-mail address).
I hope this has piqued your interest; I suggest checking out both products to see if either would make your life easier!
Thursday, October 04, 2007
OPM Introduces Assessment Decision Tool
The U.S. Office of Personnel Management (OPM) continues to innovate in the area of web-based assessment offerings with the introduction of the Assessment Decision Tool.
This interactive application allows you to enter the competencies required for a particular position in any of seven occupational groups:
- Clerical/technical
- Human resources
- IT
- Leadership occupations
- Professional/administrative
- Science/engineering
- Trades/labor
After entering this information, the system creates a matrix that identifies which assessment tools would be the best fit for each competency, then gives you totals so you can see which methods might be best overall. These matches were identified by a panel of OPM psychologists who specialize in personnel assessment.
The final step is the generation of a comprehensive report that summarizes the position competencies, assessment matches, and information about each assessment method. There is also a reference section for those wishing to investigate further. The report can be downloaded in HTML format and OPM is working on PDF functionality as well.
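For the curious, the matrix logic itself is simple enough to sketch (the competency-method mappings below are invented for illustration; OPM's real ones came from its panel of psychologists):

```python
# Toy version of the Assessment Decision Tool's matrix step: map competencies
# to suitable assessment methods, then total by method.
from collections import Counter

suitability = {
    "Oral Communication": ["Structured Interview", "Work Sample"],
    "Problem Solving":    ["Cognitive Ability Test", "Work Sample"],
    "Teamwork":           ["Structured Interview", "Biodata"],
}

totals = Counter(m for methods in suitability.values() for m in methods)
for method, count in totals.most_common():
    print(f"{method}: covers {count} of {len(suitability)} competencies")
```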
A great idea that automates competency-exam linkages that many assessment professionals do routinely. Now if they can just have the system create the actual tests...
Monday, October 01, 2007
Links a go-go for October 1, 2007
Good reading for October 1, 2007
The new affirmative action (about schools, but lessons for employers)
2007 ILG National Conference Highlights
Don't automatically dismiss people who have been fired
Court rules EEOC may proceed with discrimination case against L.A. Weight Loss
Visa and using credit scores in the hiring process
Hiring supervisors and leaders (the #1 problem of most organizations, IMHO)
Deloitte demonstrates just how creative recruiting can be
How many names does it take to get to a hire?
Who does The Gap think it is? Monster?