Survey Design and Administration for Healthcare Research
Designing Questions That Yield Reliable Data
The quality of survey data depends almost entirely on the quality of the questions asked. A well-designed question is clear, concise, and measures exactly what the researcher intends. Poorly worded questions introduce measurement error that no amount of statistical sophistication can correct after the fact.
Several principles guide effective question writing. Each item should address a single concept—double-barreled questions that ask about two things at once confuse respondents and produce uninterpretable answers. Language should be appropriate for the reading level of the target population, free of jargon, and culturally sensitive. Leading questions that suggest a preferred answer must be avoided, as they introduce systematic bias.
Pilot testing is an essential step that many novice researchers skip. Administering the survey to a small sample from the target population reveals ambiguous wording, confusing skip patterns, and unanticipated response options. Cognitive interviewing—where participants think aloud as they answer—provides especially rich feedback. Investing time in pilot testing dramatically improves data quality and is considered standard practice in rigorous healthcare survey research.
Choosing Response Formats and Scales
Response format decisions shape both the data collected and the analyses available. Closed-ended questions with predefined response options are easier to analyze statistically and ensure uniformity across respondents. Open-ended questions capture nuance and unexpected perspectives but require labor-intensive coding and are difficult to quantify.
Likert scales—typically five or seven ordinal points ranging from strongly disagree to strongly agree—are the workhorses of attitudinal measurement in healthcare research. When constructing these scales, researchers must decide whether to include a neutral midpoint, how many points to offer, and how to label them. Evidence suggests that fully labeled scales produce more reliable data than those with only endpoint labels.
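Written out as a data structure, a fully labeled five-point scale looks like the sketch below; the mapping name and the exact anchor wording are illustrative choices, not a prescribed standard.

```python
# A fully labeled five-point agreement scale: every point carries a verbal
# anchor, rather than labeling only the two endpoints.
AGREEMENT_SCALE = {
    1: "Strongly disagree",
    2: "Disagree",
    3: "Neither agree nor disagree",  # the optional neutral midpoint
    4: "Agree",
    5: "Strongly agree",
}
```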
Visual analog scales, numerical rating scales, and ranking questions each serve different measurement purposes. Patient-reported outcome measures often use validated multi-item scales that have undergone extensive psychometric testing. Whenever possible, students should use existing validated instruments rather than inventing new questions, as the reliability and validity evidence has already been established by prior research teams.
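The reliability of a multi-item scale is most often summarized with Cronbach's alpha, which compares the variance of the individual items to the variance of the summed score. A minimal sketch, assuming responses are stored as a respondents-by-items NumPy array with made-up values:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-item scale answered by four respondents (values are made up).
scores = np.array([[4, 5, 4, 4, 5],
                   [2, 2, 3, 2, 2],
                   [5, 4, 5, 5, 4],
                   [3, 3, 2, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

By convention, values above roughly 0.70 are taken as acceptable internal consistency, though the appropriate cutoff depends on the stakes of the measurement.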
Selecting the Right Administration Mode
How a survey reaches respondents influences who participates, how honestly they answer, and how much the project costs. Common modes include online platforms, mailed paper questionnaires, telephone interviews, and in-person administration. Each mode carries distinct advantages and limitations that must be weighed against the study's goals and budget.
Online surveys are fast and inexpensive but may exclude populations with limited internet access, including older adults and individuals in low-resource settings. Paper surveys reach these groups more effectively but incur printing and postage costs, and data entry introduces transcription errors. Telephone surveys allow clarification of confusing items but are constrained by declining response rates as people screen unknown callers.
Mixed-mode designs—offering respondents a choice between online and paper, for example—can boost overall response rates and reduce coverage bias. However, mode effects may introduce variability if respondents answer differently depending on the format. Researchers should pilot both modes and test for equivalence before combining data. Understanding these trade-offs helps students select an administration strategy that balances feasibility, cost, and data quality for their specific population.
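One simple equivalence check is to cross-tabulate responses to a key item by mode and test whether the distributions differ. The sketch below assumes hypothetical counts for a five-point item collected online and on paper; scipy's chi-square test of independence does the comparison:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of responses 1-5 to the same item, by mode.
online = np.array([12, 30, 45, 80, 33])
paper = np.array([20, 35, 50, 60, 35])

chi2, p, dof, expected = chi2_contingency(np.vstack([online, paper]))
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A small p-value would suggest the response distributions differ by mode,
# warranting caution before pooling the two datasets.
```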
Maximizing Response Rates and Data Integrity
Low response rates threaten the representativeness of survey findings. If non-respondents differ systematically from respondents—for example, if healthier individuals are less motivated to complete a health survey—the resulting data will not accurately reflect the target population. Researchers employ multiple strategies to encourage participation and protect data quality.
Personalized invitations, clear explanations of the study's purpose, and assurances of confidentiality increase willingness to respond. Offering incentives such as cash, gift cards, or entry into a prize drawing has been shown to improve response rates significantly. Sending reminders, typically two or three follow-up contacts at staggered intervals, is one of the most effective techniques available.

Data integrity extends beyond response rates. Skip logic ensures that respondents only see relevant questions, reducing fatigue and error. Range checks and forced-response settings in electronic surveys prevent illogical or missing entries. After collection, researchers should examine response patterns for evidence of straight-lining or implausibly short completion times. Flagging suspect responses and conducting sensitivity analyses with and without them protects the integrity of the final dataset.
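These post-collection checks are straightforward to script once responses are exported. The sketch below is a minimal illustration, assuming a hypothetical export with ten Likert items (q1 through q10) and a duration_sec column; the file name, column names, and speed threshold are all assumptions rather than fixed conventions:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent.
df = pd.read_csv("responses.csv")
items = df[[f"q{i}" for i in range(1, 11)]]

# Straight-lining: an identical answer on every item in the grid.
df["straight_lined"] = items.nunique(axis=1) == 1

# Speeding: completion time far below typical, here under a third of the median.
threshold = df["duration_sec"].median() / 3
df["too_fast"] = df["duration_sec"] < threshold

suspect = df[df["straight_lined"] | df["too_fast"]]
print(f"{len(suspect)} of {len(df)} responses flagged for review")
```

Flagged responses are candidates for review, not automatic deletion; the sensitivity analyses described above determine whether they materially change the results.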
Frequently Asked Questions
Why is pilot testing a survey so important?
Pilot testing reveals confusing wording, problematic skip patterns, and missing response options before the full study launches. Fixing these issues in advance prevents large-scale data quality problems that cannot be corrected after data collection.
Should I create my own survey questions or use existing instruments?
Use validated instruments whenever possible, since their reliability and validity have already been tested. Creating new questions is appropriate only when no suitable instrument exists, and any new tool should undergo its own psychometric evaluation.
What is a good response rate for a healthcare survey?
Acceptable rates vary by mode and population, but many researchers aim for at least 60 percent. More important than hitting a specific number is demonstrating that respondents and non-respondents are similar on key characteristics.
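When sampling-frame data are available for everyone invited, this comparison can be as simple as testing whether respondents and non-respondents differ on a recorded characteristic. A minimal sketch with hypothetical age data:

```python
from scipy.stats import ttest_ind

# Hypothetical: ages are known from the sampling frame for both groups.
respondent_ages = [34, 52, 47, 61, 29, 58, 44]      # illustrative values
nonrespondent_ages = [38, 49, 55, 63, 31, 60, 41]

t_stat, p_value = ttest_ind(respondent_ages, nonrespondent_ages)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A non-significant difference offers some reassurance, though not proof,
# that non-response has not skewed the sample on this variable.
```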
How many reminders should I send to non-respondents?
Two to three follow-up contacts at staggered intervals are standard practice. Additional reminders yield diminishing returns and may annoy potential respondents, so balancing persistence with respect for participants is important.
What are mode effects and why do they matter?
Mode effects occur when the survey administration method influences how people respond—for example, people may report more sensitive behaviors online than in a face-to-face interview. Researchers must test for and address these differences when combining data from multiple modes.