Does it matter whether the recipient of patient questionnaires in general practice is the general practitioner or an independent researcher? The REPLY randomised trial
© Desborough et al; licensee BioMed Central Ltd. 2008
Received: 15 May 2007
Accepted: 27 June 2008
Published: 27 June 2008
Self-administered questionnaires are becoming increasingly common in general practice. Much research has explored methods to increase response rates, but comparatively few studies have examined the effect of questionnaire administration on reported answers.
The aim of this study was to determine whether returning patient questionnaires to the respondents' medical practice or to an independent researcher affects responses to questions relating to adherence and satisfaction with a GP consultation. One medical practice in Waveney primary care trust, Suffolk, England participated in this randomised trial. Patients over 18 years who were initiated on a new long-term medication during a consultation with a GP were randomly allocated to receive a survey from their medical practice and return it to either their medical practice or an independent researcher. The main outcome measures were self-reported adherence and satisfaction with information about the newly prescribed medicine, the consultation and involvement in discussions.
274 (47%) patients responded to the questionnaire: 45% in the medical practice group and 48% in the independent researcher group (95% CI -5 to 11%, p = 0.46). The groups appeared demographically comparable, although the high level of non-response limits the ability to assess this. There were no significant differences between the groups with respect to total adherence or any of the satisfaction scales. Five (4%) patients reported altering doses of medication in the medical practice group compared with 18 (13%) in the researcher group (p = 0.009, Fisher's exact test). More patients in the medical practice group reported difficulties using their medication than in the researcher group (46 (35%) v 30 (21%); p = 0.015, Fisher's exact test).
Responses to postal satisfaction questionnaires do not appear to be affected by whether the questionnaires are returned to the patient's own medical practice or to an independent researcher. However, responses to postal questionnaires relating to detailed patient behaviours may be subject to response biases, and further work is needed to explore this phenomenon.
Patient surveys are becoming an increasingly common tool to measure quality and improve services. Although doubts remain over the validity of these surveys in assessing quality of care [2, 3], the current General Medical Services (GMS) contract provides additional financial remuneration for conducting patient surveys. Furthermore, it is proposed that in the future these surveys will require patients to provide even greater detail about their consultations.
While much research has investigated the administration of patient surveys, the majority has explored methods to increase response rates. Intense survey methods have been reported to yield greater response rates; however, postal questionnaires appear to remain the most cost-effective mode of administration [7, 8]. It has also been suggested that responses to postal questionnaires are likely to be more accurate than other methods when enquiring about sensitive issues such as health and behaviour [9–11].
It is unsurprising, therefore, that self-administered postal questionnaires have been widely adopted as the survey method of choice. However, it is not known how subtle differences within this method of administration affect elicited responses [8, 10]. Results from a small medical practice audit of patient self-reported adherence suggested that no patients deviated from the instructions given to them by their doctor when the self-administered questionnaire was returned to the patient's own medical practice. This is in sharp contrast to the literature, in which self-reported adherence to medication for chronic conditions is usually around 50%, and therefore led us to the hypothesis that patients returning questionnaires regarding potentially "deviant" behaviour (in this case non-adherence to medication) directly to their medical practice might be unwilling to report it.
Questions on adherence may be subject to additional response biases if administered by the patient's own medical practice. Fear of reprisals and reluctance to express aversion to medicines may increase bias of previously validated questionnaires despite assurances of confidentiality [12–14]. In addition, questionnaires which include questions regarding satisfaction with the doctor's consultation and the information provided may also elicit socially desirable responses, with higher levels of reported satisfaction, if returned directly to the patient's medical practice. Although a number of researchers have required patients to submit responses to independent researchers [15, 16], greater understanding of this administrative method is needed.
We therefore performed a randomised controlled trial with the primary aim of determining whether returning questionnaires about a GP consultation to the patient's medical practice or to an independent researcher affects patients' reports of their adherence. The secondary aim of this research, to explore the components of GP consultations that affect medication adherence, will be reported later.
The study was conducted in one medical practice in Suffolk and approved by the Norfolk research ethics committee prior to commencement of data collection. During the study period, the practice had a list size of 10,000 patients, with 10 GPs conducting consultations. Patients over 18 years, who had been initiated onto a long-term medication by a GP in a consultation at the practice, were invited to participate. The sample included only patients prescribed one de novo solid oral dosage form (or inhaler) medication with a specified daily dosage (excluding PRN medication) and did not include courses of medicines prescribed for the treatment of acute conditions. Patients were randomised to return their questionnaire to either the medical practice or an independent researcher by opening randomly allocated opaque envelopes in chronological order, with blocks of varying length, stratified by GP.
The study was performed between November 2005 and August 2006 as a randomised controlled trial. Potential participants were not aware of the randomisation and were only told the research was interested in their recent visit to the doctor. Those allocated to the medical practice group were sent a questionnaire with an introductory paragraph from the medical practice and a pre-paid reply envelope addressed to the medical practice. Those allocated to the researcher group were sent a questionnaire with an introductory paragraph from the researcher and a pre-paid reply envelope addressed to the researcher (see additional files 1 and 2). The questionnaires were otherwise identical and were sent to patients, with the consent form, seven days after the index consultation. The allocation was performed by a receptionist, and both GPs and patients were blinded to the allocation. Non-responders were sent a second posting of the questionnaire after two weeks.
The questionnaire was developed from existing tools available from the Medicines Partnership, designed to explore components important for achieving concordance. The Medication Adherence Report Scale (MARS) was used to measure adherence to the newly prescribed medication. This scale requires patients to report their frequency of five non-adherent behaviours: forgetting, missing, altering, taking less and stopping medication, where 5 = never, 4 = rarely, 3 = sometimes, 2 = often and 1 = always. The Satisfaction with Information about Medicines Scale (SIMS) is a validated questionnaire providing a profile of patients' satisfaction with the information they have received about their newly initiated medication. It consists of 17 items requesting patients to rate the amount of information received on each item as: 'too much', 'about right', 'too little', 'none received' and 'none needed'. The Perception of Involvement in Discussions (PID) scale requires patients to rate how much they agree with four statements relating to their involvement in discussions on a 5-point Likert scale ranging from 'strongly agree' to 'strongly disagree'. The Medication Interview Satisfaction Scale (MISS-21) asks patients to rate 21 statements about their satisfaction with their consultation on a 7-point Likert scale ranging from 'very strongly disagree' to 'very strongly agree'. The questionnaire additionally included a checklist of potential difficulties patients may have with using their medication, e.g. opening lids or swallowing tablets. Patient characteristics were extracted from practice medical records by a researcher on patient completion of the questionnaire. These included: gender, age, living status, number of prescribed daily medications, number of prescribed 'when required' medications, current active problems and details of the newly prescribed medications.
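To make the MARS scoring described above concrete, the sketch below computes a total score from the five item responses; the item keys and helper function are our own illustration, not the instrument's published scoring code.

```python
# MARS: five non-adherence behaviours, each self-rated from
# 1 (always) to 5 (never), giving a total between 5 and 25.
MARS_ITEMS = ("forgetting", "missing", "altering", "taking_less", "stopping")

def mars_total(responses: dict) -> int:
    """Sum the five item scores; 25 indicates no reported non-adherence."""
    return sum(responses[item] for item in MARS_ITEMS)

# A patient who 'rarely' forgets but otherwise never deviates:
patient = {"forgetting": 4, "missing": 5, "altering": 5,
           "taking_less": 5, "stopping": 5}
print(mars_total(patient))  # 24
```

A total of 25 (every item answered 'never') is the "perfect score" referred to in the sample size calculation below.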
Sample size calculation
Initial responses to a practice audit of adherence (a survey returned directly to the medical practice) demonstrated 100% of patients scoring 25 (a perfect score) using the MARS questionnaire. A previous study using the MARS questionnaire returned to a researcher reported 80% of patients scoring 25. It was therefore calculated that 274 patients (137 in each group) would be required to detect a conservative difference of 95% versus 85% of patients scoring 25 on the MARS questionnaire, with 80% power at a 5% significance level. There was a paucity of data on likely response rates to inform calculations of how many patients would need to be sent questionnaires to yield an achieved sample size of 274. In the absence of such data, identification and recruitment of study participants continued until 274 completed questionnaires had been received.
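The published figure of 137 per group can be reproduced with the standard two-proportion (normal approximation) sample size formula. The sketch below, using only the Python standard library, is our illustration of that calculation, not the authors' actual computation.

```python
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> float:
    """Sample size per group for comparing two proportions
    (normal approximation, unpooled variance)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Detecting 95% v 85% of patients scoring 25 on MARS:
n = n_per_group(0.95, 0.85)
print(round(n))  # 137 per group, i.e. 274 in total
```

Note that detecting a larger difference (e.g. 95% v 75%) would require far fewer patients, which is why the 95% v 85% assumption is described as conservative.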
Responders and non-responders were compared with respect to age and gender. The medical practice and researcher groups were compared according to patient characteristics and the newly prescribed medicine. MARS scores were dichotomised into either 'adherent' (score = 25) or 'partially adherent' (score < 25). This was further subdivided by dichotomising each of the non-adherent behaviours as not occurring (score = 5) or occurring (score < 5). The proportion of patients in each group who reported adherence was compared using Fisher's exact test, with confidence intervals calculated assuming the normal approximation to the binomial. The SIMS and MISS-21 total and subscale scores, and responses to each individual 'Perception of involvement in discussions' question, were compared between groups using the Mann-Whitney U test, as distributions were expected to be skewed. The proportion of patients reporting difficulties with medication in each group was compared using Fisher's exact test. Data analysis was carried out using SPSS version 14, with statistical significance set at 5%.
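For a 2×2 comparison such as dose altering (5 of 132 in the practice group v 18 of 142 in the researcher group), Fisher's exact test can be computed directly from the hypergeometric distribution. The stdlib-only sketch below is our illustration of the test named above, not the SPSS implementation the authors used.

```python
from math import comb

def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, row1)

    def p_table(x: int) -> float:
        # probability that the top-left cell equals x, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs * (1 + 1e-9))

# Dose altering: 5 of 132 (practice) v 18 of 142 (researcher)
p = fisher_exact_two_sided(5, 127, 18, 124)
print(f"p = {p:.3f}")
```

Fisher's exact test is appropriate here because some expected cell counts are small, where the chi-squared approximation would be unreliable.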
Table: Baseline comparison of medical practice (n = 132) and researcher (n = 142) group patients. Values are numbers (percentages) unless stated otherwise. Rows cover: median (Q1–Q3) age in years, daily drugs and PRN drugs; active problems (diseases of the circulatory system, ischaemic heart disease, other forms of heart disease, disorders of the thyroid gland, mood [affective] disorders, eye and adnexa, skin and subcutaneous tissue, musculoskeletal system and connective tissue); newly prescribed medication class (ACE inhibitors or ARAs, lipid-regulating drugs, central nervous system, obstetrics and gynaecology); formulation (tablets or capsules) and dosing frequency (less than daily, more than daily). [Table values omitted.]
Table: Satisfaction scale scores. For each scale (SIMS action and usage, SIMS potential problems of medication, and involvement in discussions), the possible score range and, for the medical practice (N = 132) and researcher (N = 142) groups, the mean (SD) score, number (%) satisfied* and number (%) of completed items**. [Table values omitted.]
Table: Perception of involvement in discussions. Values are numbers (percentages) unless otherwise stated. For the medical practice (N = 132) and researcher (N = 142) groups, the number agreeing with each item: 'The doctor gave me responsibility for deciding how to deal with my health problems'; 'The doctor asked me to choose a treatment for my health problem'; 'The doctor gave me enough information to make my own decision about treatment'; 'The doctor did not ask my opinion about my medicines'. [Table values omitted.]
Difficulties using medication
Table: Potential difficulties using medication in the medical practice (n = 132) and researcher (n = 142) groups. Values are numbers (percentages) unless stated otherwise. Rows: using blister packs, managing eye/ear drops, picking up tablets, using other devices, and any difficulties reported¹. [Table values omitted.]
This is the first randomised controlled study to investigate whether patients give different responses to questionnaires about detailed aspects of their consultation, including medication adherence, when these are returned to their own medical practice rather than to an independent researcher. The survey was conducted in one medical practice in Suffolk.
Summary of main findings
The study found no significant differences in the proportion of patients reporting being adherent, or in satisfaction with components of the consultation, between those submitting responses to the medical practice and those submitting to an independent researcher. Significant differences did exist in the secondary outcomes of reported reasons for non-adherence and difficulties using medication: fewer patients reported altering doses of medication, and more patients reported difficulties using their medication, in the medical practice group.
Comparison with existing literature
Consistent with previous data [23, 24], we found approximately 50% of patients reported non-adherence when classifying patients as non-adherent if they scored < 25 on MARS. However, the mean MARS scores were higher than previously reported, suggesting that patients were either only reporting one type of non-adherent behaviour or, more likely, had restricted opportunity to deviate because of the limited time between initial prescribing and survey completion. MARS and other self-reported adherence measures tend to overestimate adherence, but are highly accurate for patients who report non-adherence (minimal false positives) [26, 27]. This study may also have overestimated the absolute level of adherence: with a response rate below 50%, the more adherent participants may have been more likely to respond, although we believe this effect would be consistent in both groups. All satisfaction scale scores were similar to previous studies [20, 21, 26].
Comparisons of self-administered questionnaire techniques (email, postal, handing out questionnaires to patients) appear to find no differences in responses for health satisfaction surveys [28, 29], and this study suggests administration differences within the postal technique also do not affect reported satisfaction. Despite equal assurances of confidentiality, we believe the differences observed between the two groups in reported adherence and difficulties using medication may be a result of patients not wanting to appear responsible for their non-adherence, although it is possible that these secondary outcome differences arose by chance and they should be considered exploratory. Earlier studies have described how patients find it difficult to express an aversion to medicines directly to their GP [12, 13]; thus reporting forgetting rather than altering doses may be more socially desirable for surveys returned directly to the patient's medical practice. The increase in reported problems using medicines in the medical practice group also corresponds to socially desirable responding, as patients attempt to justify their non-adherence as unintentional rather than a conscious deviation from instructions.
There was no difference in response rate to the overall questionnaire or to individual items on the questionnaire, which contrasts with previous research comparing response rates between similar groups. Smith et al.'s study used a general health survey on heart disease to compare response rates between questionnaires introduced and returned to the patients' general practitioner and those introduced and returned to a doctor at a research unit. The overall response rate in the general practitioner group was 85% compared with 75% in the research unit arm. There are a number of possible explanations for why that study achieved a higher overall response rate and a between-group difference not observed in our study. Smith et al.'s study occurred more than 20 years ago, and since that time there has been a general decline in response rates to questionnaires. Furthermore, it targeted only patients aged between 40 and 59 years, with non-response higher in younger age groups. The content of the questionnaire (a general health survey) may explain the difference in response between the two groups, as the perceived importance of the questionnaire in the medical practice arm may have been increased, an effect not demonstrated with our questionnaire regarding a specific consultation.
Strengths and limitations of this study
The results of this study should be viewed in the context of several limitations. The study occurred in one medical practice with a list size of 10,000; although all 10 GPs were involved, the findings may not be generalisable to other practices, especially very small single-handed practices where patients consult the same GP. The results should therefore be replicated in a wider variety of settings. Furthermore, the poor response rate (47%) makes it impossible to be sure whether randomisation ensured comparability between the groups at baseline, though for the recorded baseline variables the two groups were comparable. The response rate and the age difference of non-responders were consistent with other similar postal questionnaires [32, 33]. It may, however, be necessary to explore other, more intensive modes of administration to see whether responses to questions requesting similar information differ significantly. Twenty statistical tests were performed on secondary outcomes; as the study was not powered for these analyses, one false positive result would be expected, and we made no correction for multiple testing. Our significant findings should therefore be considered exploratory. The patients' ability to identify the survey recipient may have been reduced by the ethical requirement to enclose a covering letter from the medical practice; this letter was on practice headed paper and asked patients to read the enclosed information regarding a research project. To determine the true extent of response differences, the ideal research design would have been a factorial design with four cells (sent by practice, returned to practice; sent by practice, returned to independent researcher; sent by independent researcher, returned to practice; sent by independent researcher, returned to independent researcher). However, this design is not possible under current ethical and data protection constraints, hence the pragmatic nature of our design.
Additional qualitative analysis would have helped to determine the significance of the covering letter and to explore reasons for specific responses. As a randomised controlled trial, it was hoped that this study would detect genuine differences in responses between respondents that were not a result of different settings, a general limitation of much of the literature on modes of questionnaire administration. However, the level of non-response makes it impossible to assess whether the groups were comparable at baseline.
This study found no difference in reported adherence or satisfaction between our two groups. Assuming the groups were comparable at baseline, it therefore supports the validity of returning existing patient satisfaction questionnaires used in the current GMS contract to the patients' medical practice. Our secondary findings led us to hypothesise that if questionnaires are extended to explore more detailed experiences of consultations and patient behaviours, responses may be subject to additional response bias. In our case we believe social desirability bias to be responsible for the differences observed, and any method able to reduce this phenomenon in behavioural questions would benefit future questionnaires.
In future, research should compare questionnaire responses with measurable outcomes where possible to help determine the accuracy of responses. While maximising response rates remains important, greater consideration should be given to the quality of responses for questionnaires which explore patient behaviours. Every aspect of survey administration should be carefully considered, as it may have important effects on responses.
We thank Sue Herring for her hard work in data collection and Gerard Whitfield for his continued encouragement to develop this study. We also thank the three reviewers for their useful comments, which have been incorporated in this final manuscript, and all the study participants. This study was funded by Waveney primary care trust.
- Cleary PD: The increasing importance of patient surveys. Br Med J. 1999, 319 (7212): 720-721.
- Rao M, Clarke A, Sanderson C, Hammersley R: Patients' own assessments of quality of primary care compared with objective records based measures of technical quality of care: cross sectional study. Br Med J. 2006, 333 (7557): 19. 10.1136/bmj.38874.499167.7C.
- Thompson AGH: Questioning practices in health care research: the contribution of social surveys to the creation of knowledge. Int J Qual Health Care. 2003, 15 (3): 187-188. 10.1093/intqhc/mzg040.
- Coulter A: Can patients assess the quality of health care?. Br Med J. 2006, 333 (7557): 1-2. 10.1136/bmj.333.7557.1.
- Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I: Increasing response rates to postal questionnaires: systematic review. Br Med J. 2002, 324 (7347): 1183. 10.1136/bmj.324.7347.1183.
- Cartwright A: Interviews or postal questionnaires? Comparisons of data about women's experiences with maternity services. Milbank Q. 1988, 66: 172-189. 10.2307/3349989.
- Kaplan CP, Hilton JF, Park-Tanjasiri S, Pérez-Stable EJ: The effect of data collection mode on smoking attitudes and behavior in young African American and Latina women: face-to-face interview versus self-administered questionnaires. Eval Rev. 2001, 25 (4): 454-473.
- McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, Thomas R, Harvey E, Garratt A, Bond J: Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technol Assess. 2001, 5 (31): 1-256.
- Bower P, Roland MO: Bias in patient assessments of general practice: General Practice Assessment Survey scores in surgery and postal responders. Br J Gen Pract. 2003, 53: 126-128.
- Bowling A: Mode of questionnaire administration can have serious effects on data quality. J Public Health. 2005, 27 (3): 281-291. 10.1093/pubmed/fdi031.
- Parker C, Dewey M: Assessing research outcomes by postal questionnaire with telephone follow-up. Int J Epidemiol. 2000, 29: 1065-1069. 10.1093/ije/29.6.1065.
- Britten N: Lay views of drugs and medicines: orthodox and unorthodox accounts. Modern Medicine: Lay Perspectives and Experiences. Edited by: Williams SJ, Calnan M. 1996, London, UCL Press, 48-73.
- Britten N, Stevenson F, Gafaranga J, Barry C, Bradley C: The expression of aversion to medicines in general practice consultations. Soc Sci Med. 2004, 59 (7): 1495-1503. 10.1016/j.socscimed.2004.01.019.
- Singer E, Hippler HJ, Schwarz N: Confidentiality assurances in surveys: reassurance or threat?. Int J Public Opin Res. 1992, 4 (3): 256-268. 10.1093/ijpor/4.3.256.
- Schers H, Webster S, Hoogen H, Avery A, Grol R, Bosch W: Continuity of care in general practice: a survey of patients' views. Br J Gen Pract. 2002, 52: 459-462.
- Vingerhoets E, Wensing M, Grol R: Feedback of patients' evaluations of general practice care: a randomised trial. Qual Saf Health Care. 2001, 10 (4): 224-228. 10.1136/qhc.0100224.
- Cox K, Mynors G: Medicines Partnership: from compliance to concordance. Project evaluation toolkit. Medicines Partnership. 2003.
- Horne R, Hankins M: The Medication Adherence Report Scale (MARS). Centre for Health Care Research. 1997, Brighton, University of Brighton.
- Horne R, Hankins M, Jenkins R: The Satisfaction with Information about Medicines Scale (SIMS): a new measurement tool for audit and research. Qual Saf Health Care. 2001, 10 (3): 135-140. 10.1136/qhc.0100135.
- Makoul G, Arntson P, Schofield T: Health promotion in primary care: physician-patient communication and decision making about prescription medications. Soc Sci Med. 1995, 41 (9): 1241-1254. 10.1016/0277-9536(95)00061-B.
- Meakin R, Weinman J: The 'Medical Interview Satisfaction Scale' (MISS-21) adapted for British general practice. Fam Pract. 2002, 19 (3): 257-263. 10.1093/fampra/19.3.257.
- Bhattacharya D: Pharmacy domiciliary visiting: derivation of a viable service model. PhD thesis. 2003, Bradford, University of Bradford, School of Pharmacy.
- Haynes RB: Helping patients follow prescribed treatment: clinical applications. JAMA. 2002, 288 (22): 2880-2883. 10.1001/jama.288.22.2880.
- Wright EC: Non-compliance--or how many aunts has Matilda?. Lancet. 1993, 342 (8876): 909-913. 10.1016/0140-6736(93)91951-H.
- Horne R, Weinman J: Self-regulation and self-management in asthma: exploring the role of illness perceptions and treatment beliefs in explaining non-adherence of preventer medication. Psychol Health. 2002, 17 (1): 17. 10.1080/08870440290001502.
- Oakley S, Walley T: A pilot study assessing the effectiveness of a decision aid on patient adherence with oral biphosphonate medication. Pharmaceutical Journal. 2006, 276: 536-538.
- Vitolins MZ, Rand CS, Rapp SR, Ribisl PM, Sevick MA: Measuring adherence to behavioural and medical interventions. Control Clin Trials. 2000, 21 (5, Suppl 1): S188-S194. 10.1016/S0197-2456(00)00077-5.
- Gasquet I, Falissard B, Ravaud P: Impact of reminders and method of questionnaire distribution on patient response to mail-back satisfaction survey. J Clin Epidemiol. 2001, 54 (11): 1174-1180. 10.1016/S0895-4356(01)00387-0.
- Harewood GC, Yacavone RF, Locke GR, Wiersema MJ: Prospective comparison of endoscopy patient satisfaction surveys: e-mail versus standard mail versus telephone. Am J Gastroenterol. 2001, 96 (12): 3312-3317. 10.1111/j.1572-0241.2001.05331.x.
- Smith W, Crombie I, Campion P, Knox J: Comparison of response rates to a postal questionnaire from a general practice and a research unit. Br Med J. 1985, 291: 1483-1485.
- Tolonen H, Helakorpi S, Talala K, Helasoja V, Martelin T, Prättälä R: 25-year trends and socio-demographic differences in response rates: Finnish Adult Health Behaviour Survey. Eur J Epidemiol. 2006, 21 (6): 409-415. 10.1007/s10654-006-9019-8.
- Asch DA, Jedrziewski MK, Christakis NA: Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997, 50 (10): 1129-1136. 10.1016/S0895-4356(97)00126-1.
- Cohen G, Forbes J, Garraway M: Can different patient satisfaction survey methods yield consistent results? Comparison of three surveys. Br Med J. 1996, 313 (7061): 841-844.
- Schwarz N, Oyserman D: Asking questions about behavior: cognition, communication, and questionnaire construction. Am J Eval. 2001, 22 (2): 127-160.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/8/42/prepub