This study investigated whether questionnaire length or the offer of an incentive affected the response to a cross-sectional postal questionnaire survey of GPs’ management of CKP. Neither questionnaire length nor the offer of a prize draw monetary incentive had a significant effect on response.
Response to the eight-page StQ (21.0%) in this study was only a third of the mean response among GP surveys identified in a recent review [3]; however, questionnaire lengths in that review were often unknown. The findings of the present study are consistent with previous GP surveys that used questionnaires of similar lengths [20–22]. However, response to the four-page AbQ was lower than might be expected if length is judged by the number of words within a questionnaire (~60%) [20], and lower than that achieved using a very short (i.e. one-page) questionnaire (49%) [21]. There is no standard optimum questionnaire length [14]. In part this is due to heterogeneity in the definition of length in empirical work (number of words, pages or items, or time taken for completion) [14], and because an ‘appropriate length’ is likely to differ according to the target population and topic. The lack of effect on response may indicate a non-linear relationship between response and length: there may be a threshold length beyond which GPs choose not to respond [20, 22]. If so, the lack of effect of length on response in our study could be explained by both questionnaires lying on the same side of such a cut-off and/or the difference between the two lengths being insufficient to elicit a change in response behaviour. This explanation could also account for similar findings from a questionnaire study of Canadian physicians (including family physicians), in which no difference in response was seen between those mailed a 12-page questionnaire and those mailed a six-page version (31.7% and 31.6% respectively) [23]. The idea of a threshold was supported by a study investigating the effect of the length of reminder questionnaires on response among previously non-responding GPs.
That study revealed that response among those sent a shorter four-page version (23 items) in the final reminder mailing was greater than among those sent only the full 12-page questionnaire containing 88 items (14.8% and 7.2% respectively) [22].
Our finding that offering a prize draw monetary incentive did not influence response is inconsistent with the findings of previous studies [14]. Empirical evidence suggests that a large monetary incentive should improve response compared with smaller and/or non-monetary incentives [14], and that prize draws for substantial monetary prizes may be as effective as guaranteed smaller monetary incentives [19]. A prize draw for a single large incentive was therefore offered to GPs in this study, as this type of incentive, used in this way, had previously had a significant impact on response [23]. In the current study, GPs were offered entry into a prize draw in which one winner would receive a gift voucher. The fact that the incentive was a prize draw, and that a gift voucher rather than cash was offered, may have reduced its effectiveness. Entry into a lottery is classed as a non-monetary incentive and, as such, can be less effective than a monetary incentive [14]. The incentive offered in this study was entry to a prize draw for which a winner was certain; although this differs from being given a lottery ticket or scratch card, it may have a reduced effect compared with a guaranteed incentive. Gift vouchers may be considered monetary, as they provide an explicit value of currency to spend, but Edwards et al. referred to vouchers as non-monetary when reviewing their impact in electronic questionnaires [14]. However, non-monetary and/or voucher incentives can improve response to surveys of health professionals and the general population [14, 24], so perception of whether the prize was monetary or non-monetary does not wholly explain the lack of difference in response identified in this study.
Other explanations for the lack of impact of the incentive in this study may be that the prize was of insufficient value in this relatively wealthy population, that GPs did not perceive the odds of winning to be high enough (as we did not communicate the size of the survey), or that entry to the prize draw was conditional on completion of the questionnaire [14, 24]. It is possible that an incentive consisting of an automatic smaller financial payment to all respondents may have had a greater effect. However, providing meaningful automatic remuneration to a large sample of GPs undertaking a questionnaire survey may render the research impractical. Provision of incentives may also introduce response bias and limit generalisability [15], although the relevance of this concern among relatively affluent GPs is unknown. A better understanding of what constitutes ‘appropriate’ remuneration, and what role this actually plays in determining GPs’ involvement in research, is needed to increase participation in future studies.
A key factor that may account for the lack of impact of questionnaire length and the offer of an incentive on GP survey response in this study may be time [13], particularly as requests to participate in research are common [25] and GPs have other non-clinical duties to undertake, such as continuing professional education [26]. Similar to other work [26], most (93%) GPs providing minimum data in this study cited ‘too little time’ as their reason for not participating in this survey. If lack of time is the key issue driving non-response among GPs, then simply offering incentives and shortening questionnaires may be insufficient to promote participation in research [12, 13]. GPs are not alone in working in time-pressured healthcare environments; however, similar surveys of other healthcare groups often elicit greater response. For example, surveys focussing on similar topics have reported responses of 63% versus 27% and 70% versus 52% from rheumatologists versus family physicians in the UK and Canada, respectively [27, 28], and 58% from UK physiotherapists [29]. The reason for the lower response among GPs is uncertain. It is possible that many GPs are not interested in, or do not prioritise, CKP compared with other conditions; GPs may therefore cite lack of time as a more socially desirable way to communicate a lack of interest in the topic, which is known to be a key influence on response [1, 14, 17]. GPs may also be subject to higher numbers of requests to participate in survey research because, as generalists, their expertise spans a very broad range of clinical conditions and, thus, research topics. Finally, cultural issues may be relevant; for example, the secondary care environment has a longer tradition of empirical research activity.
Empirical work examining the attitudes of German and UK GPs towards their involvement in research has revealed that even when GPs felt research was important, they were not necessarily keen to be involved, and some did not view it as part of their role [26, 30]. Distrust of, and negative attitudes towards, researchers were also highlighted [26, 30]; for example, GPs were concerned that the work was driven by the researchers’, not the patients’, best interests [30].
The strengths of this study include the investigation of different questionnaire lengths and the offer of an incentive on questionnaire response at the same point in time and using the same clinical topic. This is important given that response to questionnaires can be affected by the target population’s level of interest in the topic [1, 14, 17], which may vary over time. Few differences were found between the characteristics of responders and those providing minimum data only. However, no information is available about GPs who did not respond at all; the degree and likely influence of non-response bias is therefore incompletely ascertained. A potential confounder for the impact of questionnaire length on response was the use of a different question format in two items on the questionnaires: these used closed, multiple-choice questions in the StQ and open free-text questions in the AbQ. Although the use of open questions significantly reduces response compared with closed questions [14], the impact of question format among GPs is unknown. Further, this difference affected only 2/85 questions in the StQ and 2/36 questions in the AbQ, and these were positioned part way through the questionnaire; it is therefore unlikely that this difference significantly altered GPs’ decision to respond. The confounding effect of question format was balanced when assessing the impact of the incentive on response, as half of each of the groups receiving the StQ and AbQ were offered the incentive.
Despite much work investigating strategies to improve response, successful strategies for GP surveys remain elusive [3]. Even with newer technologies delivering surveys electronically, response remains unchanged in surveys of students and healthcare professionals [23, 31]. The results of this study suggest that shorter is not inevitably better and that use of a prize draw monetary voucher incentive does not influence response. We propose that, irrespective of questionnaire length or the incentive offered, many GPs feel they simply do not have the time to respond to questionnaires [9, 12]. Therefore, while solutions to this barrier are sought, future surveys of GPs need to oversample in anticipation of low response and take measures to estimate likely non-response bias [10].
The presence of a threshold length influencing response needs to be formally evaluated among GPs. In doing so, critical factors determining such a threshold must be considered; for example, a higher threshold may be identified for questionnaires investigating a topic of wider interest [1, 17], and definitions of length need to be explicit. Empirical work is needed to determine whether the impact of ‘length’ differs according to how it is defined, for example, physical length (number of words, pages or items) versus total time for completion. Qualitative work with GPs could be undertaken to establish the presence and nature of, and influences on, a threshold for response. In the absence of known ideal approaches to maximise response, alternative strategies for obtaining data from large samples of GPs should be considered and formally evaluated. These could include assessing the value of a) using very short questionnaires (i.e. one side of A4) containing the most pertinent questions for reminder mailings [17, 22], b) distributing questionnaires at mandatory training events to enable face-to-face sampling at a time already set aside for non-clinical work, and c) staged approaches, whereby further questions are sent on receipt of an initial response. This latter strategy could use postal, electronic or mobile phone text-based methods. Research is also needed to establish what constitutes an appropriate and meaningful incentive for GPs, particularly when time is an issue. Although UK GPs are not required to participate in research, they are expected to have a sound understanding of research methodologies, know how to appraise findings and apply results to their patients [32]. To sustain meaningful primary care research, work should be undertaken to establish the barriers to GPs engaging in research, and solutions to address these barriers should be sought.
Approaches to doing so could include: establishing what constitutes appropriate remuneration [33], acknowledging that this may depend on the clinical interest of the topic being investigated; developing mechanisms by which research requests are limited to manageable numbers, although this risks being influenced by political priorities; or including at least some level of research activity as a mandatory component of revalidation.