Getting the Truth Into Workplace Surveys
There’s no doubt that companies can benefit from workplace surveys and questionnaires. A GTE survey in the mid-1990s, for example, revealed that the performance of its different billing operations, as measured by the accuracy of bills sent out, was closely tied to the leadership style of the unit managers. Units whose managers exercised a relatively high degree of control made more mistakes than units with more autonomous workforces. By encouraging changes in leadership style through training sessions, discussion groups and videos, GTE was able to improve overall billing accuracy by 22 percent in the year following the survey and another 24 percent the year after.
Unfortunately, not all assessments produce such useful information, and some of the failures are spectacular. In 1997, for instance, United Parcel Service was hit by a costly strike just 10 months after receiving impressive marks on its regular annual survey on worker morale. Although the survey had found that overall employee satisfaction was very high, it had failed to uncover bitter complaints about the proliferation of part-time jobs within the company, a central issue during the strike. In other cases, the questionnaires themselves cause the company’s problems. Dayton Hudson Corporation, one of the nation’s largest retailers, reached an out-of-court settlement with a group of employees who had won an injunction against the company’s use of a standardized personality test that employees had viewed as an invasion of privacy.
What makes the difference between a good workplace survey and a bad one? The difference, quite simply, is careful and informed design. And it’s an unfortunate truth that too many managers and HR professionals have fallen behind advances in survey design. Although the last decade has brought dramatic changes in the field and seen a fivefold increase in the number of publications describing survey results in corporations, many managers still apply design principles formulated 40 or 50 years ago.
In this article, we’ll explore some of the more glaring failures in design and provide 16 guidelines to help companies improve their workplace surveys. These guidelines are based on peer-reviewed research from education and the behavioral sciences, general knowledge in the field of survey design and our company’s experience designing and revising assessments for large corporations. Managers can use these rules either as a primer for developing their own questionnaires or as a reference to assess the quality of work they commission. These recommendations are not intended to serve as absolute rules. But applied judiciously, they will increase response rates and popular support along with accuracy and usefulness. Two years ago, International Truck and Engine Corporation (hereafter called "International") revised its annual workplace survey using our guidelines and saw a leap in the response rate from 33 percent to 66 percent of the workforce. These guidelines—and the problems they address—fall into five areas: content, format, language, measurement and administration.
Guidelines for Content
1. Ask questions about observable behavior rather than thoughts or motives. Many surveys, particularly those designed to assess performance or leadership skill, ask respondents to speculate about the character traits or ideas of other individuals. Our recent work with Duke Energy’s Talent Management Group, for example, showed that the working notes for a leadership assessment asked respondents to rate the extent to which their project leader "understands the business and the marketplace." Another question asked respondents to rate the person’s ability to "think globally."
While interest in the answers to those questions is understandable, the company is unlikely to obtain the answers by asking the questions directly. For a start, the results of such opinion-based questions are too easy to dispute. Leaders whose understanding of the marketplace was criticized could quite reasonably argue that they understood the company’s customers and market better than the respondents imagined. More important, though, the responses to such questions are often biased by associations about the person being evaluated. For example, a substantial body of research shows that people with symmetrical faces, babyish facial features and large eyes are often perceived to be relatively honest. Indeed, inferences based on appearance are remarkably common, as the prevalence of stereotypes suggests.
The best way around these problems is to ask questions about specific, observable behavior and let respondents draw on their own firsthand experience. This minimizes the potential for distortion. Referring again to the Duke Energy assessment, we revised the question on understanding the marketplace so that it asked respondents to estimate how often the leader "resolves complaints from customers quickly and thoroughly." Although the change did not completely remove the subjectivity of the evaluation—raters and leaders might disagree about what constitutes quick and thorough resolution—at least responses could be tied to discrete events and behaviors that could be tabulated, analyzed and discussed.
2. Include some items that can be independently verified. Clearly, if there is no relation between survey responses and verifiable facts, something is amiss. Conversely, verifiable responses allow you to reach conclusions about the survey’s validity, which is particularly important if the survey measures something new or unusual. For example, we formulated a customized 360-degree assessment tool to evaluate leadership skill at the technology services company EDS. In order to be sure that the test results were valid, we asked (among other validity checks) whether the leader "establishes loyal and enduring relationships" with colleagues and staff; we then compared these scores with objective measures, such as staff retention data, from the leader’s unit. The high correlation of these measures, along with others, allowed us to demonstrate the assessment’s validity when we reported the results and claimed that the survey actually measured what it was designed to measure. In other assessments, we frequently also ask respondents to rate the profitability of their units, which we can then compare with actual profits.
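The validity check described above boils down to correlating an item's survey scores with an objective measure from the same units. As a minimal sketch, the snippet below computes a Pearson correlation between hypothetical per-leader scores on the "loyal and enduring relationships" item and hypothetical staff retention rates; the data values and variable names are illustrative assumptions, not figures from the EDS study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: mean rating on the "establishes loyal and enduring
# relationships" item for eight leaders (1-5 scale), paired with the
# one-year staff retention rate (%) of each leader's unit.
item_scores = [2.1, 2.8, 3.0, 3.4, 3.6, 4.0, 4.3, 4.7]
retention = [71, 74, 78, 77, 84, 86, 90, 93]

r = pearson_r(item_scores, retention)
print(f"r = {r:.2f}")  # a high r supports the item's validity
```

A correlation near zero would suggest the item is not measuring what it claims to measure; in practice one would also test the correlation's statistical significance rather than eyeball a single coefficient.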
Originally published in the Harvard Business Review. To request a full report go to www.ExpertWitnessPsychology.com.