A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors
© Scott et al; licensee BioMed Central Ltd. 2011
Received: 11 October 2010
Accepted: 5 September 2011
Published: 5 September 2011
Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors.
A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost.
The online mode had a response rate of 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than the simultaneous mixed mode and 22% lower than the sequential mixed mode. The cost of the sequential mixed mode was 14% lower than the simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias.
Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.
Surveys of medical practitioners can provide important policy-relevant data and information that are often not captured by administrative data or registration databases. There is some suggestion that response rates for surveys of medical practitioners may be falling, with important implications for statistical inference, and for the extent to which results can be generalised and used to inform policy [1–3].
There is growing evidence in the literature about the most effective interventions to increase response rates in general population and doctor surveys. Interventions to improve response rates include incentive-based approaches (e.g. money, gifts, lottery and prize draws), design-based approaches (e.g. survey length, follow-up, content) and mode of administration (e.g. paper, internet, interview). Three key factors that influence doctors' decisions to complete a survey are the opportunity cost of their time, their trust that the results will be used appropriately, and the perceived relevance of the survey.
Although the literature about factors influencing response rates is growing, there are three important gaps that this paper aims to address: i) a lack of evidence on the use of mixed mode survey designs; ii) a lack of evidence examining response bias and item non-response, in addition to response rate; and iii) a lack of evidence on the cost-effectiveness of different strategies.
The use of online and web-based surveys is growing, including those where email is the method of contact. Web-based surveys may seem attractive as there are no printing or data-entry costs, but response bias may be an issue, particularly if older respondents are less likely to respond, and a lack of trust in the security of transmitting information over the internet may reduce response rates and increase item non-response. For doctors, emailed surveys have resulted in lower response rates than mailed surveys. This is also reflected in non-doctor populations, where meta-analyses have found that web-based surveys, mostly using email contact, have a 10-11% lower response rate compared to other modes [5, 6]. This is despite the fact that email surveys that include a weblink reduce the number of steps (and the time) needed to complete a survey. However, evidence also shows that email contact can be impersonal and reduce response rates. For doctors, there is the issue of whether an email will reach the respondent or be read first by administrative staff, who may not forward such emails to respondents, though this may also be an issue when mailed surveys are posted to a work address.
Furthermore, the use of different types of mixed mode surveys for doctors has not yet been investigated thoroughly [4, 8]. This is important if, for example, younger respondents are more likely to respond to an online survey, whilst older respondents are more likely to respond to a mailed survey. Accounting for doctors' preferences about which survey mode to complete may be important. For example, in a survey of doctors in the United States, paper surveys were preferred to email surveys when doctors were given the choice, and family physicians preferred mail surveys compared to surgeons. The ability of doctors to choose their preferred mode of response to fit with their busy schedules is likely to be important [4, 11]. Evidence from non-doctor populations suggests that offering a choice of mode does not increase response rates, but that the sequencing or switching of modes (e.g. paper followed by online) may matter [12–14]. A paper examining this for US physicians showed that mail first, followed by a web survey, had a higher response rate than web followed by mail.
Different modes of administration may also influence survey response bias (whether those responding are representative of the population) and item non-response (the extent to which all questions have been completed), as well as overall survey response rates. Response rates are frequently regarded as sentinel indicators of methodological quality in general, and of representativeness in particular. Although response rates are often used as a 'conventional proxy' for response bias, there is in fact no necessary relationship between response rate and response bias [16–19]. Despite this, less than half (44%) of published surveys of doctors discuss response bias, and only 18% provide some analysis of it. Item non-response is also an issue, with respondents less likely to answer sensitive questions and some skipping whole sections, depending on how the survey has been designed and administered. Higher item non-response was found in a web survey than in a face-to-face survey of university students, whilst health professionals who were younger, male, and worked in hospitals were more likely to complete a web survey than a mailed survey.
There is also a lack of rigorous evidence on the cost-effectiveness of the many different approaches to improving response rates and reducing bias. Email and web surveys may seem cheaper than mailed surveys, and the effects of mixed mode surveys on costs are less clear. Researchers often have limited resources, and adopting all possible measures to increase response rates is usually not feasible due to cost constraints and ethical considerations, especially when the study population or sample is widely dispersed. For these reasons, researchers must make choices as to which method leads to the largest increase in response rate (or other outcome) for each dollar spent. For example, up-front financial incentives may be the most effective, but are also costly compared with other approaches [7, 25–27]. Baron et al. examined the effect of a lottery for GPs in Canada and found a 6.4% increase in the response rate at a cost of $CAD16 per additional returned survey. Bjertnaes et al. examined the effects and costs of the number of reminders in a survey of Norwegian physicians, and found that costs per response increased dramatically with telephone follow-up. Erdogan and Baker (2002) examined the costs and effects of different methods of follow-up in a sample of advertising agency executives, but compared average rather than incremental costs and effects. A study that compared the costs of a mail and email survey in a group of academics found the email survey's costs were lower, but that mail had a 12% higher response rate.
Four hypotheses were tested: i) the online mode will result in a lower response rate and higher item non-response, compared with the two mixed modes; ii) the sequential mixed mode will have a higher response rate than the simultaneous mixed mode; iii) the costs of the online mode will be lower than those of the two mixed modes; and iv) the costs of the sequential mixed mode will be lower than those of the simultaneous mixed mode.
A randomised trial was conducted as part of the third and final pilot survey for Wave 1 of the Medicine in Australia: Balancing Employment and Life (MABEL) longitudinal cohort/panel study of the dynamics of the medical labour market in Australia, focusing on workforce participation and its determinants among Australian doctors. The first wave of data collection, establishing the baseline cohort for the study, was undertaken in 2008.
The questionnaire included eight sections: job satisfaction, attitudes to work, and intentions to quit or change hours worked; a discrete choice experiment (DCE) examining preferences and trade-offs for different types of jobs; characteristics of the work setting (public/private, hospital, private practice); workload (hours worked, on-call arrangements, number of patients seen, fees charged); finances (income, income sources, superannuation); geographic location; demographics (including specialty, qualifications, residency); and family circumstances (partner and children). There were four versions of the survey, each differing slightly to tailor it to the type of doctor: GPs, specialists, specialists-in-training, and hospital non-specialists. Although survey length also matters for response rates, the context of the survey and the research questions being tested required a long questionnaire, to ensure that sufficient data were collected to adequately test the study hypotheses. The length ranged from 58 questions in an eight-page booklet (for specialists-in-training) to 87 questions in a 13-page booklet (for specialists). In all modes, doctors in remote and rural areas (defined using the Rural, Remote and Metropolitan Area (RRMA) classification as those in RRMA 6 (remote centre with population > 5,000) and RRMA 7 (other remote centre with population < 5,000)), mainly GPs, were given a cheque for $100 enclosed with the invitation letter, to recognise both their importance from a policy perspective and the significant time pressures on these doctors. The purpose was to draw meaningful inferences about recruitment and retention in rural and remote areas. Pre-paid monetary incentives, not conditional on response, have been shown to double response rates. The survey described in this paper was also the main pilot survey for the main wave of MABEL, so it was important to pilot the administration of these incentives. However, the incentives did not influence the outcome of this trial, as randomisation ensured approximately equal numbers of cheques going out in each arm.
The process of logging in and completing the survey online was kept as simple as possible. Users were directed to the main web page (http://www.mabel.org.au), where they clicked on the 'Login' link, which directed them to a login page where they entered their username and password. They were then directed to the first page of the survey. Respondents could save their responses and log out, then log in again later to complete the survey, and they could skip questions. Once respondents had logged in, a padlock icon was visible, indicating that the connection was secure.
The primary outcomes of interest in the trial were response rates; survey response bias with respect to age, gender, doctor type and geographic location; and item response (the percentage of completed items). A three-arm parallel trial design was used with equal randomisation across arms. The sample size for the trial was calculated to detect a difference of 5 percentage points in the response rate at the 5% level of statistical significance and with a power of 80%. This indicated that a sample of 900 doctors in each arm of the trial would be required, 2,700 doctors in total. This represented just under 5% (2,700/54,160 = 0.04985) of all doctors undertaking clinical practice on the Australian Medical Publishing Company's (AMPCo) Medical Directory, which includes all doctors in all States and Territories of Australia and formed our sampling frame. This national database is used extensively for mailing purposes (e.g. for the Medical Journal of Australia). The Directory is updated regularly using a number of sources: AMPCo receives 58,000 updates to doctors' details per year through biannual telephone surveys, and checks medical registration board lists, Australian Medical Association membership lists and Medical Journal of Australia subscription lists to maintain accuracy. The Directory contains a number of key characteristics that can be used for checking the representativeness of the sample and for adjusting for any response bias in sample weighting. These characteristics include age, gender, location, and job description (used to group doctors into the four types).
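This sample size reasoning can be sketched with the standard two-proportion power formula. The 15% and 20% baseline response rates below are illustrative assumptions (the anticipated rates are not stated in the text), chosen to show how a 5 percentage point difference leads to roughly 900 doctors per arm:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Doctors needed per arm to detect p2 - p1 with a two-sided z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)

# Illustrative only: detecting a 5 percentage point difference around a
# 15-20% response rate requires on the order of 900 doctors per arm.
print(n_per_arm(0.15, 0.20))
```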
A 4.9% stratified random sample of doctors was therefore taken, with stratification by four doctor types (general practitioners (GPs), specialists, doctors enrolled in a specialist training program, and non-specialist hospital doctors (including interns and salaried medical officers)) and six rural/remoteness categories (Rural, Remote and Metropolitan Area (RRMA) classification). This produced a list of 2,702 doctors. Doctors in this sample were then randomly allocated to a response mode by AS using random numbers generated in Stata. The AMPCo unique identifiers for each of the three groups were sent to AMPCo, which conducted the mailing of invitation letters; survey materials were mailed in late February 2008. Survey invitation letters indicated the University of Melbourne and Monash University as responsible for the survey. AMPCo also provided individual-level data on the population of doctors so we could examine response bias. Doctors were aware of which mode they had been allocated to on receipt of the invitation letter. The survey manager (AL) recorded responses and organised data entry, and was blinded to group allocation. SH analysed the data and was not blinded to group allocation. Analysis was by 'intention to treat'.
Analysis included comparisons of response rates; estimation of means and proportions of respondents by age, gender, doctor type, and geographic location compared to the doctor population; logistic regression of response bias; and comparisons of the proportion of missing values (item non-response). The statistical significance of the differences between the response rates of the three response modes was analysed using a probit model with response (0/1) as the dependent variable and two dummy variables for response mode, with online as the reference category. The difference between the sequential and simultaneous modes was tested using the restriction that their coefficients be equal. Although respondents were randomly allocated across modes, it is still important to test whether particular respondent characteristics influenced the response rate; the probit model therefore included age, gender, doctor type and geographic location. Survey response bias was examined using a multinomial logit model of respondents (= 1) and the total population of doctors (= 0), with age, gender, geographic location and doctor type as independent variables. For item non-response, a comparison of the proportion of completed items was supplemented using generalised linear models that controlled for differences due to age, gender, geographic location and doctor type. Analysis of geographic location was based on the Australian Standard Geographical Classification (ASGC) Accessibility/Remoteness Index of Australia (ARIA).
The economic evaluation compared the costs of consumables (rental of AMPCo list, printing of surveys, letters, fax forms, further information fliers and reply-paid envelopes, mail-house processing costs, postage and data entry) across each mode of survey administration. The costs of researcher and staff time were the same for each mode, as each mode required the development of both the paper and web survey, and time liaising with AMPCo and the printers. These costs were therefore not included in the comparison of costs between modes. The expected costs for the main wave 1 survey were estimated based on sending out a survey to all doctors on the AMPCo database (n = 54,168) and using the response rates from the randomised trial to estimate the number of respondents. Data on the three primary outcome measures are presented alongside data on costs.
[Table: Group characteristics of the full sample, by doctor type, geographic location, and age group]
[Table: Response rates by mode of administration. Rows: b) useable responses (with at least one question answered); c) refusals (i.e. paper copy returned blank, declined); d) no contact (return to sender); e) no response; f) not eligible (i.e. retired, no longer in clinical practice). Response rate = b/(a-f); contact rate = (b+c+e)/(a-f), where a is the total number of surveys sent.]
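The response and contact rate definitions above are simple ratios over eligible doctors. A minimal sketch follows; the letters mirror the row labels, while the counts are invented for illustration and are not figures from the trial:

```python
def survey_rates(sent, useable, refusals, no_contact, no_response, ineligible):
    """Response rate = b/(a-f); contact rate = (b+c+e)/(a-f)."""
    eligible = sent - ineligible                 # a - f
    response_rate = useable / eligible           # b / (a - f)
    contact_rate = (useable + refusals + no_response) / eligible
    return response_rate, contact_rate

# Invented counts for one arm of 900 doctors
rr, cr = survey_rates(sent=900, useable=180, refusals=25,
                      no_contact=40, no_response=640, ineligible=15)
print(f"response rate {rr:.1%}, contact rate {cr:.1%}")
```

Note that non-contacts (d) reduce the contact rate but, under these definitions, still count as eligible in the denominator.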
[Table: Response rates by mode of administration and doctor type (%)]
[Table: The effect of mode on response rates (probit regression model; marginal effects with standard errors, including doctor type and age group covariates)]
[Table: Actual mode of response by allocated survey mode (% returning paper, % completing online, total number of surveys)]
[Table: Response bias for each mode (odds ratios and 95% CI): simultaneous and sequential mixed modes compared to the population and to the online mode]
[Table 7: Item response by mode and questionnaire section (%): average percentage of items completed, and percentage of respondents with 100% of items completed, for each mode]
[Table 8: Item non-response by mode (odds ratios, 95% CI): simultaneous and sequential mixed modes compared to online, for the average percentage of items completed and for whether 100% of items were completed]
Although the percentage of questions completed overall was 91.4%, only 2.9% of respondents completed every question. This proportion was lowest for the simultaneous mixed mode (1.1%), followed by the sequential mixed mode (2.2%), and highest for the online mode (6.9%) (Table 7). These differences were statistically significant (second half of Table 8), with odds ratios of 0.13 for the simultaneous mixed mode and 0.23 for the sequential mixed mode compared to online. The proportion of respondents completing all questions was similar across the modes for each section. Those using the simultaneous mode were significantly more likely to complete all questions in the 'Family' section compared to those in the online mode (Table 8).
[Table: Effect of mode on survey costs (2008 prices, in $AU): AMPCo rental of list, web survey costs, AMPCo handling and postage, printing of letters/further information/fax sheets/envelopes, and printing of surveys]
[Table 9: Incremental cost-effectiveness ratios compared to the online mode: estimated number of responses; changes in response rate, number of responses, and total cost; additional cost per 1% increase in response rate; and additional cost per additional response]
Table 9 shows incremental cost-effectiveness ratios with respect to changes in response rate and number of responses. Of the two mixed modes, the sequential mixed mode was the more cost-effective relative to online, at $6.07 per additional response and $AU3,290 per 1% increase in the response rate. Although the main outcomes were similar for the two mixed modes, the sequential mode was cheaper due to lower printing, mailing and data entry costs. Using the sequential mixed mode resulted in total costs that were 21% lower than for the simultaneous mode, with no detrimental impact on response rate, survey response bias, or item non-response.
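The incremental ratios above follow the standard definition: the difference in total cost between two modes divided by the difference in effect (here, number of responses). A minimal sketch with invented totals, not the study's actual cost figures:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical full mail-out totals (illustrative, not the study's figures):
cost_online, cost_seq = 40_000.0, 65_000.0   # total consumables cost, $AU
resp_online, resp_seq = 7_000, 11_000        # estimated number of responses

extra_cost = icer(cost_seq, cost_online, resp_seq, resp_online)
print(f"${extra_cost:.2f} per additional response")  # -> $6.25 per additional response
```

The same function applies to the cost per percentage-point gain by passing response rates instead of response counts as the effect.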
This study has compared response rates, survey response bias, item non-response, and costs across three modes of conducting a doctor survey. Mailing a letter inviting respondents to complete the questionnaire online, followed by a mailed reminder letter and paper copy of the survey, was the most cost-effective mode of administration. Although online modes were less costly, due to lower printing and data entry costs, and did not exhibit evidence of increased response bias, the response rates and item completion rates were lower than for the sequential and simultaneous mixed modes. The online mode had lower item completion rates in the sections on the DCE, workload, and personal and family characteristics. Although the sequential mode is the most cost-effective with respect to response rates, whether this is chosen would depend on the weight given to the existence of response bias when compared to the population of doctors.
We find no support for the hypothesis that offering a simultaneous choice of modes results in lower response rates than a sequenced choice of modes. Literature from non-doctor populations suggests that sequencing may be better than simultaneous choice [12, 13]. Although there is a small difference in response rates between the two mixed modes, it is not statistically significant.
Lower response rates in the online mode arguably reflect the population being surveyed, their familiarity with and trust in the internet, and the reliability of internet access, especially in remote regions of Australia. Most doctors chose to fill out a paper questionnaire, possibly suggesting that they are less comfortable completing a survey online or have concerns about sending confidential information over the internet. This is reflected in a higher rate of item non-response for most sections of the survey, especially for the more personal questions, despite assurances about confidentiality and the fact that information was sent over a secure internet connection. Doctors may also prefer the 'portability' of a paper copy, which they can fill out at the office, at home, or whilst travelling. Online modes are also becoming more portable (i.e. not confined to the desktop PC) with the use of laptops, touch-screen tablets and other mobile devices, so the preference for a paper copy may erode over time. A key issue in relation to survey response is the need to minimise the opportunity cost of survey completion for respondents. The need for internet access and the time it takes to log on must be balanced against filling out a paper survey that needs to be posted. A potential reason for the lower online response rate was the need for respondents to find the website and log in using the username and password provided in the letter. Once at the website, they had to go to a login page, enter their details, and were then directed to the beginning of the survey. Though this takes time compared to an email survey with an embedded website link, it provides a more secure process that may have increased respondents' confidence in the security of the website.
Response rates in all three arms could be regarded as low, an increasing issue for surveys of doctors [1–3]. It is noteworthy that our comparative analysis with the population of Australian doctors showed that the mode with the lowest response rate (online) was the most representative, confirming the point noted in the introduction, that response rate and response bias are separate issues and should both be explicitly analysed to ensure appropriate interpretation.
Our study used a diverse sample of doctors with respect to age, specialty and geographic location, increasing the generalisability of the results. Although the trial was not designed for sub-group analysis, specialists-in-training allocated to the online mode had a higher response rate (17.5%) than those allocated to the simultaneous mixed mode (13.9%), and a similar response rate to those in the sequential mixed mode (18.5%). For those conducting surveys of younger doctors and doctors in training, who are more likely to be familiar with and trusting of the internet, online surveys may be a more desirable option, though item non-response may be an issue. However, specialists had the highest response rate for the simultaneous mixed mode (26.2%, compared with 22.9% for the sequential mixed mode) and the lowest for the online mode (11.2%). The routine use of exclusively online surveys for the population of doctors may therefore be some time off, at least until the current older cohorts have been replaced by younger cohorts.
The unit costs of printing and survey administration are likely to vary across geographic locations and companies, though they are not likely to vary across modes within geographic locations, and so should not influence our findings. Printing costs vary greatly with volume, such that for the pilot online mode the unit cost per printed questionnaire (for those requesting a paper survey) was $AUD5.90. However, the unit cost of printing 54,169 paper questionnaires (for the ensuing main wave survey) was $AUD0.32. The relationship between unit costs and volume printed is not linear. The costs of establishing the online survey will also vary across settings, although there are now many low cost survey packages available that cover most needs, some of which can be re-programmed if necessary.
Our results are in line with other research showing that online surveys are likely to yield lower response rates than mailed surveys [5, 6]. Other studies have compared mail and online mixed modes in non-doctor samples [12, 13], but these studies have not examined costs. There are many different types of response mode, and different combinations of mixed modes, that can potentially be used in surveys of doctors. Further research is required in a number of areas. First, comparisons are needed of modes that offer choice compared to those that do not. Second, all comparisons need to include an examination of the changes in costs. Cost is mentioned frequently in the literature as a motivation for using online and mixed modes, but there is little evidence on how costs actually differ.
Our study is the first, in the context of a large national survey of doctors, to include an economic evaluation alongside a randomised trial using standardised methods. Of the alternatives compared in our study, the sequential mixed mode had the lowest cost per response compared to online. Decisions on the appropriate response mode will ultimately be a function of the study objectives and context, but for large national surveys of the doctor population that include doctors at different stages of their career, the sequential mixed mode seems to be the preferred option.
Acknowledgements and Funding
Funding was provided from a National Health and Medical Research Council Health Services Research Grant (454799) and the Commonwealth Department of Health and Ageing. None of the funders had a role in the data collection, analysis, interpretation or writing of this paper. The views in this paper are those of the authors alone. We thank the doctors who gave their valuable time to participate in MABEL, and the other members of the MABEL team for data cleaning and comments on drafts of this paper: Terence Cheng, Daniel Kuehnle, Matthew McGrail, Michelle McIsaac, Stefanie Schurer, Durga Shrestha and Peter Sivey. The study was approved by the University of Melbourne Faculty of Economics and Commerce Human Ethics Advisory Group (Ref. 0709559) and the Monash University Standing Committee on Ethics in Research Involving Humans (Ref. CF07/1102 - 2007000291). De-identified data from MABEL are available from http://www.mabel.org.au.
- Barclay S, Todd C, Finlay I, Grande G, Wyatt P: Not another questionnaire! Maximising the response rate, predicting non-response and assessing non-response bias in postal questionnaire studies of GPs. Family Practice. 2002, 19: 105-111. 10.1093/fampra/19.1.105.View ArticlePubMedGoogle Scholar
- Aitken C, Power R, Dwyer R: A very low response rate in an on-line survey of medical practitioners. Australian & New Zealand Journal of Public Health. 2008, 32: 288-289. 10.1111/j.1753-6405.2008.00232.x.View ArticleGoogle Scholar
- Grava-Gubins I, Scott S: Effects of various methodologic strategies Survey response rates among Canadian physicians and physicians-in-training. Canadian Family Physician. 2008, 54: 1424-1430.PubMedPubMed CentralGoogle Scholar
- VanGeest JB, Johnson TP, Welch VL: Methodologies for Improving Response Rates in Surveys of Physicians: A Systematic Review. Evaluation & the Health Professions. 2007, 30: 303-321. 10.1177/0163278707307899.View ArticleGoogle Scholar
- Shih TH, Fan XT: Comparing response rates from Web and mail surveys: A meta-analysis. Field Methods. 2008, 20: 249-271. 10.1177/1525822X08317085.View ArticleGoogle Scholar
- Manfreda KL, Bosniak M, Berzelak J, Haas I, Vehovar V: Web surveys versus other survey modes - A meta-analysis comparing response rates. International Journal of Market Research. 2008, 50: 79-104.Google Scholar
- Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews. 2009, 3
- Beebe TJ, Locke GR, Barnes SA, Davern ME, Anderson KJ: Mixing Web and Mail Methods in a Survey of Physicians. Health Services Research. 2007, 42: 1219-1234. 10.1111/j.1475-6773.2006.00652.x.View ArticlePubMedPubMed CentralGoogle Scholar
- Kroth PJ, McPherson L, Leverence R, Pace W, Daniels E, Rhyne RL, Williams RL, Consortium PN: Combining Web-Based and Mail Surveys Improves Response Rates: A PBRN Study From PRIME Net. Annals of Family Medicine. 2009, 7: 245-248. 10.1370/afm.944.View ArticlePubMedPubMed CentralGoogle Scholar
- Parsons JA, Warnecke RB, Czaja RF, Barnsley J, Kaluzny A: Factors associated with response rates in a national survey of primary care physicians. Evaluation Review. 1994, 18: 756-766. 10.1177/0193841X9401800607.
- McMahon SR, Iwamoto M, Massoudi MS, Yusuf HR, Stevenson JM, David F, et al: Comparison of e-mail, fax and postal surveys of pediatricians. Pediatrics. 2003, 111: e299-e303. 10.1542/peds.111.4.e299.
- Converse PD, Wolfe EW, Huang XT, Oswald FL: Response rates for mixed-mode surveys using mail and e-mail/Web. American Journal of Evaluation. 2008, 29: 99-107. 10.1177/1098214007313228.
- de Leeuw ED: To mix or not to mix data collection modes in surveys. Journal of Official Statistics. 2005, 21: 233-255.
- Millar M, O'Neill A, Dillman D: Are mode preferences real? 2009, Pullman: Washington State University
- Asch DA, Jedrziewski MK, Christakis NA: Response rates to mail surveys published in medical journals. Journal of Clinical Epidemiology. 1997, 50: 1129-1136. 10.1016/S0895-4356(97)00126-1.
- Schoenman JA, Berk ML, Feldman JJ, Singer A: Impact of differential response rates on the quality of data collected in the CTS physician survey. Evaluation & the Health Professions. 2003, 26: 23-42. 10.1177/0163278702250077.
- Groves RM, Peytcheva E: The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly. 2008, 72: 167-189. 10.1093/poq/nfn011.
- Lynn P: The problem of nonresponse. International handbook of survey methodology. Edited by: de Leeuw ED, Hox JJ, Dillman DA. 2008, New York: Lawrence Erlbaum Associates, 35-55.
- Schouten B, Cobben F, Bethlehem J: Indicators for the representativeness of survey response. Survey Methodology. 2009, 35: 101-113.
- Cummings SM, Savitz SM, Konrad TR: Reported response rates to mailed physician questionnaires. Health Services Research. 2001, 35: 1347-1355.
- Bosnjak M, Tuten TL, Wittmann WW: Unit (non)response in Web-based access panel surveys: An extended planned-behavior approach. Psychology & Marketing. 2005, 22: 489-505. 10.1002/mar.20070.
- Heerwegh D, Loosveldt G: Face-to-face versus web surveying in a high-internet-coverage population: Differences in response quality. Public Opinion Quarterly. 2008, 72: 836-846. 10.1093/poq/nfn045.
- Lusk C, Delclos GL, Burau K, Drawhorn DD, Aday LA: Mail versus Internet surveys - Determinants of method of response preferences among health professionals. Evaluation & the Health Professions. 2007, 30: 186-201. 10.1177/0163278707300634.
- Drummond M, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL: Methods for the economic evaluation of health care programmes. 2005, Oxford: Oxford University Press, Third edition
- Edwards P, Roberts I, Clarke M, Di Guiseppi C, Pratap S, Wentz R, Kwan I: Increasing response rates to postal questionnaires: systematic review. British Medical Journal. 2002, 324: 1183-1192. 10.1136/bmj.324.7347.1183.
- Larson PD, Chow G: Total cost/response rate trade-offs in mail survey research: impact of follow-up mailings and monetary incentives. Industrial Marketing Management. 2003, 32: 533-537. 10.1016/S0019-8501(02)00277-8.
- James KM, Ziegenfuss JY, Tilburt JC, Harris AM, Beebe TJ: Getting Physicians to Respond: The Impact of Incentive Type and Timing on Physician Survey Response Rates. Health Services Research. 2011, 46: 232-242. 10.1111/j.1475-6773.2010.01181.x.
- Baron G, De Wals P, Milord F: Cost-effectiveness of a lottery for increasing physicians' responses to a mail survey. Evaluation & the Health Professions. 2001, 24: 47-52. 10.1177/01632780122034777.
- Bjertnaes OA, Garratt A, Botten G: Nonresponse Bias and Cost-Effectiveness in a Norwegian Survey of Family Physicians. Evaluation & the Health Professions. 2008, 31: 65-80.
- Erdogan BZ, Baker MJ: Increasing mail survey response rates from an industrial population - A cost-effectiveness analysis of four follow-up techniques. Industrial Marketing Management. 2002, 31: 65-73. 10.1016/S0019-8501(00)00117-6.
- Shannon DM, Bradshaw CC: A comparison of response rate, response time, and costs of mail and electronic surveys. Journal of Experimental Education. 2002, 70: 179-192. 10.1080/00220970209599505.
- Joyce CM, Scott A, Jeon S-H, Humphreys J, Kalb G, Witt J, Leahy A: The "Medicine in Australia: Balancing Employment and Life (MABEL)" longitudinal survey - Protocol and baseline data for a prospective cohort study of Australian doctors' workforce participation. BMC Health Services Research. 2010, 10: 50-10.1186/1472-6963-10-50.
- Papke LE, Wooldridge JM: Econometric Methods for fractional response variables with an application to 401(k) plan participation rates. Journal of Applied Econometrics. 1996, 11: 619-632. 10.1002/(SICI)1099-1255(199611)11:6<619::AID-JAE418>3.0.CO;2-1.
- Australian Bureau of Statistics: ASGC remoteness classification: purpose and use. 2003, Canberra: ABS
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/11/126/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.