NLPES Question of the Month

June 2000

What approaches has your office used to try to increase response rates for surveys--and do you have any benchmarks for minimally acceptable survey response rates?


 From: Greg Rest, Virginia

Approaches our office has used to maximize response rates:

Keeping the instrument as short as possible (for example, having only the items that would fit on the front and back of a single sheet)

Follow-up efforts, depending on how central the survey is to the main analysis. In increasing order of intensity, these include:

- Sending postcards about two weeks after the due date
- Phone calls as reminders to people receiving the survey, generally about two weeks after the due date
- Asking the mail survey questions over the telephone (turning the mail survey instrument into a telephone survey instrument) among those cases that haven't sent in the mail survey

Having a two-phase research design (first sending out the mail survey, then contacting all respondents by telephone for follow-up questions--and letting survey recipients know that we plan to call everybody)

Benchmarks for minimally acceptable survey response rates:

- Among state and local government agencies in-state: realistic target is 100%; somewhere around 95% is generally acceptable.
- Among state employees surveyed on sensitive issues (e.g., morale or opinions on management): generally around 80%.
- Among private or out-of-state entities: rule of thumb is about 70%.


 From: Rick Riggs, Kansas

We try to create brief surveys with really good cover letters that explain what we're doing, why it's important, what benefit (if any) the respondent can expect, and so on. Senior managers review all surveys before they go out. We train staff to eschew "nice to know" questions and to tie each question to the objectives of the audit.

We don't have a benchmark per se, but we've recently begun tracking response rates. (Contact me if you want to see a spreadsheet.) We use this information to help determine, for particular types of respondents, how many surveys we'll need to send out in order to get a certain number back.
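
As a rough illustration of that kind of planning calculation (the numbers and the function name below are hypothetical, not taken from the Kansas spreadsheet): divide the number of completed surveys you need by the response rate you expect for that type of respondent, and round up.

    import math

    def surveys_to_send(needed_responses, expected_response_rate):
        # Estimate how many questionnaires to mail so that, at the response
        # rate observed for this type of respondent in past surveys, the
        # desired number of completed surveys comes back.
        if not 0 < expected_response_rate <= 1:
            raise ValueError("response rate must be a fraction between 0 and 1")
        return math.ceil(needed_responses / expected_response_rate)

    # Hypothetical example: past surveys of this respondent type came back
    # at about 60 percent, and the project needs roughly 150 completed surveys.
    print(surveys_to_send(150, 0.60))   # -> 250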


 From: Lois Sayrs, Arizona

Surveys are a real subspecialty for methodologists. The first thing to consider is the sampling component: surveys that end up with a potential response-rate bias come from overly broad sampling designs that attempt to "capture" individuals who have no salient tie to the subject matter. When an individual has an interest in the survey, response rates go up. Many offices tend to survey "experts" who have a natural interest in the topic. That is the true test of whether to use a survey--do you have a focused respondent group with a tie to the subject matter? If so, response rates are high. Without a focused respondent group, whose opinions are you getting anyway, and do you want them?

I also believe that the letter accompanying the survey is in some ways MORE important than the survey itself. Auditors tend to spend time developing the instrument but not enough time on the letter that sets the hook. Spend time developing the hook that will make the person--who should already have an interest in responding--answer the survey. For example, when surveying agency personnel to triangulate case workers' opinions on a benefits fraud rate against actual test work, we developed a survey that included a question about whether salaries were sufficient. Then, in the page accompanying the survey, we told respondents we were interested in their opinions on a number of issues, including salaries. We also used e-mail for this survey and followed up with a second large sample of non-respondents to determine whether non-respondents had different opinions from respondents (this is our estimate of response-rate bias).
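
A minimal sketch of that non-respondent comparison, assuming hypothetical data and a hypothetical function name: if a follow-up sample of non-respondents answers a key item very differently from the original respondents, the gap is a rough indicator of response-rate bias.

    from statistics import mean

    def response_bias_gap(respondent_answers, nonrespondent_followup_answers):
        # Compare answers from the original respondents with answers from a
        # follow-up sample of people who did not respond the first time.
        # A large gap suggests non-respondents hold different opinions,
        # i.e., potential response-rate bias.
        resp_mean = mean(respondent_answers)
        followup_mean = mean(nonrespondent_followup_answers)
        return resp_mean, followup_mean, followup_mean - resp_mean

    # Hypothetical 1-5 agreement ratings on a single survey item.
    respondents = [4, 5, 3, 4, 4, 5, 2, 4]
    followed_up_nonrespondents = [3, 2, 4, 3, 2]
    print(response_bias_gap(respondents, followed_up_nonrespondents))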

The old school of the big random-sample survey is not something we do in Arizona--too time-consuming for a fishing expedition. When we do survey and the results must be generalizable (e.g., University Administrative Support, where we needed to estimate the number of administrative staff at all three universities), we spent many hours identifying the strata, developing a sampling plan, and agonizing over the survey, and then chose a phone survey with a subcontractor whom we supervised on a daily basis until the data were collected. Don't presume a specialty contractor who hires college students at 6 bucks an hour is doing a bang-up job.
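
For readers unfamiliar with the mechanics, here is a minimal sketch of the standard stratified estimator such a sampling plan ultimately feeds; the strata sizes and sampled staff counts below are hypothetical, not Arizona's figures. Each stratum's sample mean is weighted by the number of units in that stratum and the results are summed.

    def stratified_total(strata):
        # Estimate a population total from a stratified sample: for each
        # stratum, multiply the number of units in the stratum (N_h) by the
        # mean of the sampled values (y-bar_h), then sum across strata.
        total = 0.0
        for stratum_size, sampled_values in strata:
            stratum_mean = sum(sampled_values) / len(sampled_values)
            total += stratum_size * stratum_mean
        return total

    # Hypothetical strata: (units in stratum, sampled administrative staff counts)
    strata = [
        (40, [12, 9, 15, 11]),    # large departments
        (120, [4, 6, 5]),         # medium departments
        (300, [1, 2, 1, 2, 1]),   # small departments
    ]
    print(stratified_total(strata))   # -> 1490.0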

The long and short of it is that we choose a survey only when it is critical, and we never use mail surveys. If the survey is that important, we phone or e-mail and follow up by phone.


 From: Rob Krell, Washington

Approaches Used to Increase Survey Response Rates: The obvious techniques we use include:
1) pre-testing the survey to make sure the questions are appropriate, clear, and "answerable";
2) including a self-addressed, stamped envelope; and
3) sending out follow-up letters to, or conducting telephone follow-up with, those who don't respond initially.

Less-obvious techniques include:

4) sending out an accompanying letter from a relevant trade or professional association--or in some cases, an appropriate state agency--that "endorses" the survey and urges cooperation; and
5) offering to send recipients an electronic version of the survey (which can really speed survey completion time).

Benchmarks For Minimal Response Rates: If the survey is based on a randomly selected sample (something we do only rarely), we would want at least an 85 percent response rate. When surveying a smaller, finite universe (for example, counties), we strive to achieve 100 percent.


From: Joel Alter, Minnesota

A first step in getting a good response rate is having a survey that is well-designed and easy to fill out. Typically, we ask several potential respondents and the state agency with which we are working to review a questionnaire before we finalize it. This is not a formal "pretest," but it is a way to flag survey questions that are unclear or difficult to answer. In addition, surveys that are visually appealing and have a limited number of open-ended questions tend to elicit higher response rates.

Second, the letter in our initial survey mailing gives respondents a specific deadline for returning their questionnaire responses--typically about two weeks from the date they receive the questionnaire in the mail.

Third, if the survey recipient is a public official or a firm contracting for state business, we often state in our letter that state law requires their participation in our survey. This is based on our general statutory authority to compel public officials, their employees, and contractors to answer our lawful inquiries in the course of performing our duties. Some survey recipients (including a few judges) have objected to our claim that they are required to respond. We stand by our interpretation, and this language seems to have been particularly effective in getting the attention of intended respondents.

Fourth, a few days after the initial survey deadline has passed, we send a follow-up letter to non-respondents with a new copy of the survey. The letter often reminds the recipient of the importance of their response--and their legal requirement to comply.

Fifth, whenever practical, we send survey cover letters that have personalized addresses and that are signed personally by the Legislative Auditor.

Finally, if a survey recipient has not responded to the initial mailing and follow-up letter, we often follow up with phone reminders, as time permits.

We generally aim to get responses from at least 70 percent of survey recipients--and much higher, if at all possible. We have often been able to get responses from 85 to 100 percent of public officials, such as county human services or corrections directors.