A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors

Background
Surveys of doctors are an important data collection method in health services research. Ways to improve response rates and to minimise response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors.

Methods
A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types (general practitioners, specialists, specialists-in-training, and hospital non-specialists) and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or sequential mixed mode (online first, followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost.

Results
The online mode had a response rate of 12.95%, followed by the simultaneous mixed mode with 19.7% and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate than the simultaneous mixed mode, and a 7.7 percentage point lower response rate than the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response than both mixed modes. The total cost of the online survey was 38% lower than the simultaneous mixed mode and 22% lower than the sequential mixed mode. The cost of the sequential mixed mode was 14% lower than the simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although it exhibited some evidence of response bias.

Conclusions
Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.


Background
Surveys of medical practitioners can provide important policy-relevant data and information that is often not captured by administrative data or registration databases. There is some suggestion that response rates for surveys of medical practitioners may be falling, with important implications for statistical inference, and for the extent to which results can be generalised and used to inform policy [1][2][3].
There is growing evidence in the literature about the most effective interventions to increase response rates in general population and doctor surveys. Interventions to improve response rates include incentive-based approaches (e.g. money, gifts, lottery and prize draws), design-based approaches (e.g. survey length, follow-up, content) and mode of administration (e.g. paper, internet, interview).
Three key factors that influence doctors' decisions to complete a survey are the opportunity cost of their time; their trust that the results will be used appropriately; and the perceived relevance of the survey [4].
Although the literature about factors influencing response rates is growing, there are three important gaps that this paper aims to address: (i) a lack of evidence on the use of mixed mode survey designs; (ii) a lack of evidence examining response bias and item non-response, in addition to response rate; and (iii) a lack of evidence on the cost-effectiveness of different strategies.
The use of online and web-based surveys is growing, including those where email is the method of contact. Web-based surveys may seem attractive as there are no printing or data-entry costs, but response bias may be an issue, particularly if older respondents are less likely to respond, and a lack of trust in the security of transmitting information over the internet may reduce response rates and increase item non-response [2]. For doctors, emailed surveys have resulted in lower response rates than mailed surveys [4]. The same pattern appears in non-doctor populations, where meta-analyses have found that web-based surveys, mostly using email contact, have a 10-11% lower response rate than other modes [5,6]. This is despite the fact that email surveys that include a weblink reduce the number of steps (and the time) needed to complete a survey. However, evidence also shows that email contact can be impersonal and reduce response rates [7]. For doctors, there is the issue of whether an email will reach the respondent or be read first by administrative staff who may not forward it, though this may also be an issue if mailed surveys are posted to a work address. Furthermore, the use of different types of mixed mode surveys for doctors has not yet been investigated thoroughly [4,8]. This matters if, for example, younger respondents are more likely to respond to an online survey whilst older respondents are more likely to respond to a mailed survey. Accounting for doctors' preferences about which survey mode to complete may be important. For example, in a survey of doctors in the United States, paper surveys were preferred to email surveys when doctors were given the choice [9], and family physicians preferred mail surveys more than surgeons did [10]. The ability of doctors to choose their preferred mode of response to fit with their busy schedules is likely to be important [4,11].
Evidence from non-doctor populations suggests that offering a choice of mode does not increase response rates, but that the sequencing or switching of modes (e.g. paper followed by online) may matter [12][13][14]. A paper examining this for US physicians showed that mail first, followed by a web survey, had a higher response rate than web followed by mail [8].
Different modes of administration may also influence survey response bias (whether those responding are representative of the population) and item non-response (the extent to which all questions have been completed), as well as overall survey response rates. Response rates are frequently regarded as sentinel indicators of methodological quality in general, and representativeness in particular [15]. Although response rates are often used as a 'conventional proxy' for response bias, there is in fact no necessary relationship between response rate and response bias [16][17][18][19]. Despite this, less than half (44%) of published surveys of doctors discussed response bias, and only 18% provided some analysis of it [20]. Item non-response is also an issue, with respondents less likely to answer sensitive questions and some skipping whole sections, depending on how the survey has been designed and administered [21]. Higher item non-response was found in a web survey than in a face-to-face survey of university students [22], whilst health professionals who were younger, male, and worked in hospitals were more likely to complete a web survey than a mailed survey [23].
There is also a lack of rigorous evidence on the cost-effectiveness of the many different approaches to improve response rates and reduce bias [24]. Email and web surveys may seem cheaper than mailed surveys, and the effects of mixed mode surveys on costs are less clear. Researchers often have limited resources, and adopting all possible measures to increase response rates is usually not possible due to cost constraints and ethical considerations, especially when the study population or sample is widely dispersed. For these reasons, researchers must make choices as to which method leads to the largest increase in response rate (or other outcome) for each dollar spent. For example, up-front financial incentives may be the most effective, but are also costly compared with other approaches [7,[25][26][27]. Baron et al. examined the effect of a lottery for GPs in Canada and found a 6.4% increase in the response rate at a cost of $CAD16 per additional returned survey [28]. Bjertnaes et al. examined the effects and costs of the number of reminders in a survey of Norwegian physicians, and found that costs per response increased dramatically with telephone follow-up [29]. Erdogan and Baker (2002) examined the costs and effects of different methods of follow-up in a sample of advertising agency executives, but compared average rather than incremental costs and effects [30]. A study that compared the costs of a mail and an email survey in a group of academics found the email survey's costs were lower but that mail had a 12% higher response rate [31].
The aim of this study is to conduct a randomised trial and economic evaluation of an online survey compared to two types of mixed mode. Our choice of modes reflects the importance to doctors of being able to choose which mode to fill out, and the importance of a personalised mailed letter sent to their preferred mailing address (rather than their work address) as the main mode of contact rather than email. In all three modes, our method of contact was a mailed personalised letter. Three response modes were compared (Figure 1): (i) Online mode: a mailed personal invitation letter asked doctors to log on to a secure website to fill out an online version of the questionnaire. Respondents could request a paper copy by phone/fax/email, or they could print out a paper questionnaire after they logged on to the website. They were sent a reminder letter around three weeks later that again included login details. (ii) Sequential mixed mode: as above, but a paper questionnaire and reply-paid envelope were included with the reminder letter three weeks later. (iii) Simultaneous mixed mode: a paper questionnaire and reply-paid envelope were sent out with the invitation letter, which also contained login details so that respondents could alternatively choose to fill out the survey online if they wished. A reminder letter was sent three weeks later with login details only and no paper survey. Primary outcome measures were response rate, survey response bias and item non-response. An economic evaluation comparing the costs of each mode of administration was also conducted by applying the results from the trial to the expected costs of the full main wave survey.
Our hypotheses are that: (1) the online mode will result in a lower response rate and higher item non-response than the two mixed modes; (2) the sequential mixed mode will have a higher response rate than the simultaneous mixed mode; (3) the costs of the online mode will be lower than those of the two mixed modes; and (4) the costs of the sequential mixed mode will be lower than those of the simultaneous mixed mode.

Methods
A randomised trial was conducted as part of the third and final pilot survey for Wave 1 of the Medicine in Australia: Balancing Employment and Life (MABEL) longitudinal cohort/panel study of the dynamics of the medical labour market in Australia, focusing on workforce participation and its determinants among Australian doctors [32]. The first wave of data collection, establishing the baseline cohort for the study, was undertaken in 2008. The questionnaire included eight sections: job satisfaction, attitudes to work and intentions to quit or change hours worked; a discrete choice experiment (DCE) examining preferences and trade-offs for different types of jobs; characteristics of work setting (public/private, hospital, private practice); workload (hours worked, on-call arrangements, number of patients seen, fees charged); finances (income, income sources, superannuation); geographic location; demographics (including specialty, qualifications, residency); and family circumstances (partner and children). There were four versions of the survey, each differing slightly in order to tailor them to the type of doctor: GPs, specialists, specialists-in-training, and hospital non-specialists. Although survey length also matters for response rates, the context of the survey and the research questions being tested required a long questionnaire, to ensure that sufficient data were collected to test the study hypotheses adequately [32]. The length ranged from 58 questions in an eight-page booklet (for specialists-in-training) to 87 questions in a 13-page booklet (for specialists).
In all modes, doctors in remote and rural areas, mainly GPs, were given a cheque for $100 enclosed with the invitation letter, to recognise both their importance from a policy perspective and the significant time pressures on these doctors. Remote and rural areas were defined as RRMA 6 (remote centre with population > 5,000) and RRMA 7 (other remote centre with population < 5,000) under the Rural, Remote and Metropolitan Area (RRMA) classification. The purpose was to support meaningful inferences about recruitment and retention in rural and remote areas. Pre-paid monetary incentives, not conditional on response, have been shown to double response rates [25]. The survey described in this paper was also the main pilot survey for the main wave of MABEL, so it was important to pilot the administration of these incentives. However, they did not influence the outcome of this trial, as randomisation ensured approximately equal numbers of cheques going out in each arm of the trial.
The process of logging in and completing the survey online was kept as simple as possible. Users were directed to the main web page (http://www.mabel.org.au), where they clicked on the 'Login' link, which took them to a login page where they entered their username and password. They were then directed to the first page of the survey. Respondents could save their responses and log out, then log in again to complete the survey, and they could skip questions. Once users logged in, a padlock icon was visible, indicating that the website was secure.
The primary outcomes of interest in the trial were response rates; survey response bias with respect to age, gender, doctor type and geographic location; and item response (the percentage of completed items). A three-arm parallel trial design was used with equal randomisation across arms. The sample size for the trial was calculated to detect a difference of 5 percentage points in the response rate at the 95% level of statistical significance and with a power of 80%. This indicated that a sample of 900 doctors would be required in each arm of the trial, 2,700 doctors in total. This represented just under 5% (2,700/54,160 = 0.04985) of all doctors undertaking clinical practice on the Australian Medical Publishing Company's (AMPCo) Medical Directory, which includes all doctors in all States and Territories of Australia and formed our sampling frame. This national database is used extensively for mailing purposes (e.g. for the Medical Journal of Australia). The Directory is updated regularly using a number of sources: AMPCo receives 58,000 updates to doctors' details per year through biannual telephone surveys, and checks medical registration board lists, Australian Medical Association membership lists and Medical Journal of Australia subscription lists to maintain accuracy. The Directory contains a number of key characteristics that can be used to check the representativeness of the sample and to adjust for any response bias in sample weighting. These characteristics include age, gender, location, and job description (used to group doctors into the four types).
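As a sketch of how the stated sample size can be reproduced: using the standard normal-approximation formula for comparing two proportions, and assuming baseline response rates of roughly 15% versus 20% (an assumption; the paper does not report the rates used in its calculation), detecting a 5 percentage-point difference at the 95% significance level with 80% power requires roughly 900 doctors per arm:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm for detecting p1 vs p2 with a two-sided
    two-proportion z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical baseline rates of 15% vs 20%: yields roughly 900 per arm.
print(n_per_arm(0.15, 0.20))
```

With these assumed rates the formula gives just over 900 per arm, consistent with the trial's target of 900 doctors in each of the three arms.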
Figure 1. Mailed materials by response mode (invitation letter, then reminder letter around three weeks later):
* Online mode. Invitation: personal letter with login details and the option to request a paper copy. Reminder: as above.
* Sequential mixed mode. Invitation: personal letter with login details and the option to request a paper copy. Reminder: personal letter with a paper copy and reply-paid envelope, and the option to complete online.
* Simultaneous mixed mode. Invitation: personal letter with a paper copy and reply-paid envelope, and the option to complete online. Reminder: personal letter with login details and the option to request a paper copy.

A 4.9% stratified random sample of doctors was therefore taken, with stratification by four doctor types (general practitioners (GPs), specialists, doctors enrolled in a specialist training program, and non-specialist hospital doctors (including interns and salaried medical officers)) and six rural/remoteness categories (Rural, Remote and Metropolitan Area (RRMA) classification). This produced a list of 2,702 doctors. Doctors in this sample were then randomly allocated to a response mode by AS using random numbers generated in STATA. The AMPCo unique identifiers for each of the three groups were sent to AMPCo, who conducted the mailing of invitation letters; survey materials were mailed in late February 2008. Survey invitation letters indicated the University of Melbourne and Monash University as responsible for the survey. AMPCo also provided individual-level data on the population of doctors so we could examine response bias. Doctors were aware of which mode they had been allocated to on receipt of the invitation letter. The survey manager (AL) recorded responses and organised data entry and was blinded to group allocation. SH analysed the data and was not blinded to group allocation. Analysis was by 'intention to treat'.
Analysis included comparisons of response rates; estimation of means and proportions of respondents by age, gender, doctor type, and geographic location compared to the doctor population; logistic regression of response bias; and comparisons of the proportion of missing values (item non-response). The statistical significance of the differences between the response rates of the three response modes was analysed using a probit model with response (0/1) as the dependent variable and two dummy variables for response mode, with online as the reference category. The difference between the sequential and simultaneous modes was tested using the restriction that their coefficients be equal. Although respondents were randomly allocated across modes, it is still important to test whether particular respondent characteristics influenced the response rate. The probit model therefore included age, gender, doctor type and geographic location. Survey response bias was examined using a multinomial logit model of respondents (= 1) and the total population of doctors (= 0), with age, gender, geographic location and doctor type as independent variables. For item non-response, a comparison of the proportion of completed items was supplemented using generalised linear models that controlled for differences due to age, gender, geographic location and doctor type [33]. Analysis of geographic location was based on the Australian Standard Geographical Classification (ASGC) Accessibility/Remoteness Index of Australia (ARIA) [34].
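As a simplified, self-contained illustration of the mode comparison (the trial itself used a probit model adjusting for age, gender, doctor type and location), an unadjusted two-proportion z-test can be applied to the reported online and simultaneous mixed-mode response rates; the respondent counts below are back-calculated from the reported rates and sample sizes, and are therefore approximate:

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(r1, n1, r2, n2):
    """Unadjusted two-proportion z-test: z statistic and two-sided p-value."""
    p1, p2 = r1 / n1, r2 / n2
    pooled = (r1 + r2) / (n1 + n2)                       # pooled response rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of difference
    z = (p2 - p1) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Approximate counts: online 12.95% of 902 ~ 117; simultaneous 19.7% of 900 ~ 177.
z, p = two_prop_z(117, 902, 177, 900)
print(round(z, 2), round(p, 5))
```

Even without covariate adjustment, the difference between the online and simultaneous mixed-mode rates is highly significant, in line with the adjusted probit results reported below.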
The economic evaluation compared the costs of consumables (rental of AMPCo list, printing of surveys, letters, fax forms, further information fliers and reply-paid envelopes, mail-house processing costs, postage and data entry) across each mode of survey administration. The costs of researcher and staff time were the same for each mode, as each mode required the development of both the paper and web survey, and time liaising with AMPCo and the printers. These costs were therefore not included in the comparison of costs between modes. The expected costs for the main wave 1 survey were estimated based on sending out a survey to all doctors on the AMPCo database (n = 54,168) and using the response rates from the randomised trial to estimate the number of respondents. Data on the three primary outcome measures are presented alongside data on costs.

Results
Responses were received between March and October 2008. The characteristics of doctors in the three groups for the study sample are shown in Table 1. Although there are some small differences of up to three percentage points, the three groups are broadly similar in terms of key characteristics. The comparison of response rates across modes is shown in Table 2. Response rates were between 6 and 7 percentage points higher for the two mixed modes, compared to online (Table 2). Table 3 shows response rates by mode and doctor type. Specialists had the highest overall response rates and GPs the lowest. Response rates for simultaneous and sequential mixed modes were between 2 and 6 percentage points higher for GPs, and between 10 and 15 percentage points higher for specialists. For hospital non-specialists, the response rate for simultaneous mixed mode was four percentage points higher than online, but four percentage points lower than sequential mixed mode. For specialists in training, the simultaneous mixed mode had the lowest response rate, with the sequential and online modes producing similar results.
The difference in response rates across modes was statistically significant (Table 4). The table reports the marginal effects of each response mode compared with the online mode, which can be interpreted as percentage-point differences. Controlling for other factors, the simultaneous mixed mode had a response rate 7 percentage points higher than online, and the sequential mixed mode was 7.7 percentage points higher than online. The effect of the sequential mixed mode was not significantly different from the simultaneous mixed mode (χ2 = 0.16, p = 0.69). Specialists were 16 and 13 percentage points more likely to respond to the simultaneous and sequential mixed modes respectively, than to the online mode. Differences for other types of doctor were not statistically significant. The probit model also controlled for the effects of age, gender, doctor type and geographic area on the response rate. Overall, females were less likely to respond, and the specialists' response rate was 5.3 percentage points higher than GPs'. GPs in outer regional and very remote areas were more likely to respond than those in major cities; this was partly due to the $100 financial incentive provided to doctors in these areas.
Doctors allocated to each mode were given the opportunity to complete the survey online or on paper. Table 5 shows that of those allocated to the simultaneous mixed mode, 21% chose to complete the survey online, whilst of those in the online group only 3% requested and filled out a paper survey. Doctors allocated to the sequential mixed mode group were more likely to fill out a paper survey (62%) than an online one (38%).
Response bias was examined for each mode by comparing the characteristics of respondents to each mode with the population of all doctors in Australia. This was undertaken using a multinomial logit model with four outcomes: simultaneous, sequential, online and population. Table 6 shows the odds ratios for the comparisons and factors that were statistically significant at the 95% level. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were the same as the population. Those who filled out the simultaneous mixed mode were twice as likely to be specialists when compared to the population, whilst those filling out the sequential mixed mode were more likely to be older and more likely to be in a non-metropolitan area when compared to the population. The results also show that specialists were 2.4 times more likely to complete the simultaneous mixed mode than the online mode, and that those aged over 60 and those in inner regional areas were more likely to complete the sequential mixed mode than the online mode. Item non-response was examined by calculating the average percentage of items completed, and the percentage of respondents who completed all relevant questions, i.e. whether the percentage of items completed = 100% (Table 7). If a question was 'not applicable', this was counted as a completed question. The order of the sections in the survey is reflected in these tables, with the job satisfaction section coming first. Note that the online survey allowed respondents to skip questions, as was the case in the paper survey. Overall, the online mode shows the lowest average percentage of items completed, with almost 89% of questions answered compared to around 92% for each of the other modes. This is the case for all sub-sections of the survey, with the section on finances, which includes income questions, having the lowest average percentage of items completed, at 80%.
This difference is statistically significant, as shown in the first half of Table 8, with odds ratios of 1.48 for paper and 1.53 for mixed mode compared to online. Table 8 also shows statistically significant differences for some sections of the survey. The sequential mixed mode was more likely to have a higher percentage of items completed than the online for the sections on DCE, workload, and location. The simultaneous mixed mode was more likely to have a higher percentage of items completed than the online in the 'About You' and 'Family' sections.
Although the percentage of questions completed overall was 91.4%, only 2.9% of respondents completed every question, and this was lowest for simultaneous mixed mode (1.1%), followed by sequential mixed mode (2.2%), and was highest for online mode (6.9%) ( Table 7). These differences were statistically significant (second half of Table 8), with odds ratios of 0.13 for paper mode and 0.23 for mixed mode compared to online. The proportion of respondents completing all questions was similar across the modes for each section. Those using simultaneous mode were significantly more likely to complete all questions in the 'Family' section compared to those in the online mode ( Table 8).
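The item-completion measure used above (counting 'not applicable' answers as completed, and averaging each respondent's percentage of completed items) can be sketched in a few lines; the data structure and item names below are hypothetical:

```python
def completion_stats(responses):
    """Per-respondent completion percentages and their mean.
    Each response maps item -> answer; None marks a missing item, and the
    string 'NA' (not applicable) counts as completed, as in the paper."""
    pcts = []
    for r in responses:
        done = sum(1 for v in r.values() if v is not None)  # 'NA' is not None, so it counts
        pcts.append(100.0 * done / len(r))
    return pcts, sum(pcts) / len(pcts)

# Toy data (hypothetical): three respondents, four items each.
toy = [
    {"q1": 5, "q2": "NA", "q3": 1, "q4": 2},    # all four items answered
    {"q1": 5, "q2": None, "q3": 1, "q4": None},  # two items missing
    {"q1": None, "q2": 3, "q3": 1, "q4": 2},     # one item missing
]
pcts, mean_pct = completion_stats(toy)
print(pcts, mean_pct)  # [100.0, 50.0, 75.0] 75.0
```

The second reported measure, the proportion of respondents completing every question, is simply the share of entries in `pcts` equal to 100.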
The costs of each mode were estimated for the first wave of the survey, which was to be sent out to the population of doctors in Australia (Table 9). The online mode has the lowest total cost, followed by the sequential mixed mode, with the simultaneous mixed mode having the highest cost. The total cost of the simultaneous mixed mode is 38% higher than online, and 21% higher than sequential mixed mode. The sequential mixed mode's total costs are 14% higher than the online mode's. The main sources of cost differences between modes relate to handling and postage of the mail-out, printing of surveys, and data entry for paper copies. Table 9 also shows incremental cost-effectiveness ratios with respect to changes in response rate and number of responses. Comparing the two mixed modes to online, the sequential mixed mode was the most cost-effective: costs were $6.07 per additional response, and $AUD3,290 per one percentage point increase in the response rate, compared to online. Although the main outcomes were similar for the two mixed modes, the sequential mode was cheaper due to lower printing, mailing and data entry costs. Using the sequential mixed mode resulted in total costs 21% lower than for the simultaneous mode, with no detrimental impact on response rate, survey response bias, or item non-response.
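The incremental cost-effectiveness ratios above are incremental cost divided by incremental effect. The mail-out size and response rates below are taken from the paper, but the incremental cost is a hypothetical figure chosen only for illustration (the paper reports ratios, not this intermediate total):

```python
def icer(extra_cost, extra_effect):
    """Incremental cost-effectiveness ratio: extra dollars per unit of extra effect."""
    return extra_cost / extra_effect

n = 54_168                                    # doctors in the sampling frame
rate_online, rate_sequential = 0.1295, 0.207  # trial response rates
extra_responses = n * (rate_sequential - rate_online)  # ~4,200 additional responses
extra_cost = 25_480                           # hypothetical incremental cost ($AUD)

print(round(icer(extra_cost, extra_responses), 2))                        # per additional response
print(round(icer(extra_cost, 100 * (rate_sequential - rate_online))))     # per percentage point
```

With this assumed incremental cost, the two ratios come out close to the reported $6.07 per additional response and roughly $3,290 per percentage-point gain in the response rate.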

Discussion
This study has compared response rates, survey response bias, item non-response, and costs across three modes of conducting a doctor survey. Mailing a letter inviting respondents to complete the questionnaire online, followed by a mailed reminder letter and paper copy of the survey, was the most cost-effective mode of administration. Although online modes were less costly, due to lower printing and data entry costs, and did not exhibit evidence of increased response bias, the response rates and item completion rates were lower than for the sequential and simultaneous mixed modes. The online mode had lower item completion rates in the sections on the DCE, workload, and personal and family characteristics. Although the sequential mode is the most cost-effective with respect to response rates, whether it is chosen will depend on the weight given to the existence of response bias when compared to the population of doctors. (Note to Table 7: the percentage of items completed is defined as (100/n) × Σi (qi/ti), the mean of the percentage of completed questions per respondent i, where q is the number of completed questions, t is the total number of questions, and n is the total number of respondents.) We find no support for the hypothesis that offering a simultaneous choice of modes results in lower response rates than a sequenced choice of mode. Literature from non-doctor populations suggests that sequencing may be better than simultaneous choice [12,13]. Though there is a small difference in response rates between the two mixed modes, it is not statistically significant.
Lower response rates in the online mode arguably reflect the population being surveyed, their familiarity with and trust in the internet, and the reliability of access to the internet, especially in remote regions of Australia. Most doctors chose to fill out a paper questionnaire, possibly suggesting that they are less comfortable with completing a survey online or have concerns about sending confidential information over the internet. This is reflected in a higher rate of item non-response for most sections of the survey, especially for the more personal questions. This finding occurred despite assurances about confidentiality and the fact that information was being sent over a secure internet connection. Doctors may also prefer the 'portability' of a paper copy, which they can fill out at the office, at home or whilst travelling. Online modes are also becoming more portable (i.e. not confined to the desktop PC) with the use of laptops, touch screen tablets and other mobile devices, so the preference for a paper copy may erode over time. A key issue in relation to survey response is the need to minimise the opportunity cost of survey completion for respondents. The need for internet access, and the time it takes to log on, need to be balanced against filling out a paper survey that needs to be posted. A potential reason for the lower online response rate was the need for respondents to find the website and log in using the username and password provided in the letter. Once at the website, they had to go to a login page, enter their details, and were then directed to the beginning of the survey. Though this takes time compared to an email survey with an embedded website link, it does provide a more secure process that may have increased respondents' confidence in the security of the website.
Response rates in all three arms could be regarded as low, an increasing issue for surveys of doctors [1][2][3]. It is noteworthy that our comparative analysis with the population of Australian doctors showed that the mode with the lowest response rate (online) was the most representative, confirming the point noted in the introduction, that response rate and response bias are separate issues and should both be explicitly analysed to ensure appropriate interpretation.
Our study used a diverse sample of doctors with respect to age, specialty and geographic location, increasing the generalisability of the results. Although the trial was not designed for sub-group analysis, specialists-in-training allocated to the online mode had a higher response rate (17.5%) than those allocated to the simultaneous mixed mode (13.89%), and a similar response rate to those in the sequential mixed mode (18.52%). For those conducting surveys of younger doctors and doctors in training, who are more likely to be familiar with and trusting of the internet, online surveys may be a more desirable option, though item non-response may be an issue. However, specialists had the highest response rate for the simultaneous version (26.2% compared with 22.9% for mixed) and the lowest response rate for the online mode (11.2%). The routine use of exclusively online surveys for the population of doctors may therefore be some time off, at least until the current older cohorts have been replaced by younger cohorts. The unit costs of printing and survey administration are likely to vary across geographic locations and companies, though they are not likely to vary across modes within geographic locations, and so should not influence our findings. Printing costs vary greatly with volume: for the pilot online mode, the unit cost per printed questionnaire (for those requesting a paper survey) was $AUD5.90, whereas the unit cost of printing 54,169 paper questionnaires (for the ensuing main wave survey) was $AUD0.32. The relationship between unit costs and volume printed is not linear. The costs of establishing the online survey will also vary across settings, although there are now many low cost survey packages available that cover most needs, some of which can be re-programmed if necessary.
Our results are in line with other research showing that online surveys are likely to yield lower response rates than mailed surveys [5,6]. Other studies have compared mail and online mixed modes in non-doctor samples [12,13], but these studies have not examined costs. There are many different types of response mode, and different combinations of mixed modes, that can potentially be used in surveys of doctors. Further research is required in a number of areas. First, comparisons are needed of modes that offer choice with those that do not [14]. Second, all comparisons need to include an examination of the changes in costs. Cost is mentioned frequently in the literature as a motivation for using online and mixed modes, but there is little evidence on the differences in costs.

Conclusion
Our study is the first, in the context of a large national survey of doctors, to include an economic evaluation alongside a randomised trial using standardised methods. Of the alternatives compared in our study, the sequential mixed mode had the lowest cost per response compared to online. Decisions on the appropriate response mode will ultimately be a function of the study objectives and context, but for large national surveys of the doctor population that include doctors at different stages of their career, the sequential mixed mode seems to be the preferred option.