
How do quantitative studies involving people with dementia report experiences of standardised data collection? A narrative synthesis of NIHR published studies



Abstract

Background

People with dementia are routinely included as research participants in trials and other quantitative studies in which they are invited to respond to standardised measures. This paper reviews the reporting of standardised data collection from people with dementia in reports published in the National Institute for Health and Care Research (NIHR) Journals Library. The aim was to understand how the administration of standardised, self-report measures with people with dementia is reported in NIHR monographs and what could be learnt from this about the feasibility and acceptability of data collection approaches for future studies.


Methods

This was a systematic review with narrative synthesis. Broad search terms (Dementia OR Alzheimer*) were used to search the NIHR Journals Library website in December 2021. All studies that used (or intended to use) standardised measures to collect research data directly from people with dementia were eligible for inclusion. Information was extracted (where reported) on the process of data collection, dementia severity, levels of missing data and the experiences and reflections of those involved.


Results

Searches returned 42 records, from which 17 reports were assessed as eligible for inclusion, containing 22 studies. Response rates from participants with dementia in these studies varied considerably and appeared to be related to dementia severity and place of residence. Little information was reported on the process of data collection or the reasons for missing data, and most studies did not report the experiences of participants or those administering the measures. However, there was an indication from two studies that standardised data collection could provoke emotional distress in some participants with dementia.


Conclusions

Through this review we identified both variation in levels of missing data and gaps in reporting which make it difficult to ascertain the reasons for this variation. We also identified potential risks to the well-being of participants with dementia which may be associated with the content of standardised measures and the context of data collection. Open reporting of, and reflection upon, data collection processes and the experiences of people involved is essential to ensure both the success of future data collection and the well-being of study participants.

Trial registration

Registered with Research on Research



Background

People living with dementia make up a significant proportion of the adult population using health and social care services [1], yet historically this group has often been excluded from research participation [2]. Over the past two decades, a growing literature has both argued for the importance of involving people with dementia as participants in research [3,4,5,6] and given practical advice about the best ways to achieve this [7,8,9,10]. However, the vast majority of the good practice literature focuses on qualitative methods, emphasising the importance of flexibility and foregrounding the voice of the person with dementia [11,12,13], whilst relatively little has been written about the practice of involving people with dementia as participants in trials or other quantitative research [14, 15]. Despite this, standardised measures have been developed to collect quantitative data specifically from this group [16,17,18,19,20], and a number of existing measures have been validated for use with people with dementia [21]. Detailed monographs set out the development and psychometric properties of these measures, and some come with scripted instructions for their administration [18, 22], but very little has been published examining the process of data collection or the experiences of the people involved.

In the absence of an abundant literature on good practice in quantitative research with people with dementia, this paper reviews the reporting of data collection in published National Institute for Health and Care Research (NIHR) reports where standardised measures were used with people with dementia for research purposes. The review was conducted as part of a doctoral research project aiming to better understand the process and experience of structured data collection in a large study of people with dementia (the DETERMIND programme) [23]. The overall research explores what factors influence the answers given by people living with dementia to standardised measures, how these might change over time as dementia symptoms progress, and what the implications are for research incorporating standardised measures and the people involved. The aim of the review was to consider how the administration of standardised, self-report measures with people with dementia is reported in NIHR monographs and what can be learnt from this about the feasibility and acceptability of data collection approaches for future studies. A greater focus on acceptability in quantitative dementia research should be of interest to trial and other quantitative researchers, and to all those interested in the ethics of dementia research. Key debates in dementia trials research ethics have tended to focus on capacity, consent and use of proxy data [24] but there may also be ethical considerations related to the experience of research participation that are as yet unidentified.

Standardised self-report measures for people with dementia

Questionnaires used in trials and cohort studies to measure outcomes, or to assess health or psychosocial traits, are often standardised (with set wording, ordering of questions and answer scales) in order to ensure that different scores reflect true differences between participants or time points rather than variation in the ways questions were asked [25]. When measures are administered face to face, it is expected that interviewers will introduce and read each question to participants in the same way and instruct them to provide an answer in the required format, in order to minimise the chances of interviewer bias [26, 27]. Since the early 2000s, research has indicated that people with dementia can (and should be enabled to) respond to such measures in this standardised way, appraising their own health and quality of life for research purposes [28,29,30]. A number of dementia-specific measures have been developed; the most commonly used in published health research are DEMQOL [18] and QOL-AD [17]. These and other similar measures are referred to in this paper as ‘self-report’ to distinguish them from informant ratings (family members’ or professionals’ ratings of the person’s quality of life), proxy questionnaires (which typically ask family carers or professionals to consider how they think the person with dementia would score their own quality of life [31]) and observational measures such as the QuIS [32].

A number of reviews of the relative merits of different dementia-specific and generic measures of quality of life have been published, but most compare only the psychometric properties of measures. Although some report rates of missing data and ‘feasibility’, there is rarely any mention of participant experience or any focus on participants’ ability to respond to the items contained in the measures [33,34,35,36]. Whilst acceptability and respondent burden are important attributes of any measure [37], these tend not to be examined in the literature to the same degree as validity and reliability [38], even in dementia research, where cognitive impairment and altered emotions may make them particularly relevant [12, 39]. Definitions of acceptability vary but tend to cover the degree to which participants find a measure difficult or distressing to complete; indicators can include refusal rates, response rates and administration time. As Fitzpatrick et al. note [38]:

‘Pragmatically, trialists using patient-based outcome measures are concerned with the end result; whether they obtain as complete data from patients as possible… However, we need to consider the different components of acceptability in turn to identify sources of missing data.’ (p40)

Krestar et al. [40] did examine the ability of people with dementia to respond to different types of structured questions, concluding that participants with greater cognitive impairment struggled more when presented with bidirectional response categories (which combine two distinct concepts, such as ‘strongly disagree, disagree, agree, or strongly agree’) than when presented with scales which varied along only one dimension (such as ‘not at all, just a little, a fair amount, or a great deal’). Those who struggled were permitted to use simpler, dichotomous (yes/no) response categories, but most standardised measures do not allow this option. More recently, Cohen et al. [41] found a relationship between participants’ self-reported cognitive abilities and response times to standardised questions: those with greater self-reported cognitive impairment took longer to respond to questions with more syllables, questions which contained abstract concepts, and questions which required a degree of evaluation (as opposed to simple recall of frequency, for example). However, participants in that study were recruited because they had one of five long-term neurological conditions that may be accompanied by dementia; acceptability for people with dementia in particular was not its primary focus.

The review

This paper presents a narrative synthesis of the reporting of standardised data collection from people with dementia in reports published in the English National Institute for Health and Care Research (NIHR) Journals Library [42]. The aim of the review was to explore the use of standardised, self-report measures with people with dementia as reported in the published monographs of research funded by the NIHR and available on the NIHR Journals Library website, focussing on the following indicators of experience, feasibility and acceptability:

  • The level of missing data, in terms of both response rates to full measures and item completeness within individual measures, where this was reported

  • The process of measure administration (how measures were used with people with dementia) and any reflections upon this process

  • The views and experiences of the people involved, including:

    ◦ Participants with dementia and their carers or other supporters

    ◦ The research team, including the report authors and the researchers collecting data

  • The impact of dementia severity on the experience of, and response rates for, standardised measures


Methods

This paper presents a narrative synthesis of NIHR funded dementia research. A narrative synthesis is ‘an approach to the systematic review and synthesis of findings from multiple studies that relies primarily on the use of words and text to summarise and explain the findings of the synthesis.’ ([43], p5). We conducted systematic searches, selection, and data extraction to ensure comprehensive coverage (within tight boundaries) but approached the collation and presentation of findings narratively to allow for clarification and insight. The decision to focus on NIHR funded research reports was made for two reasons. Firstly, the NIHR is internationally renowned as a leader in public and patient involvement in research, so it would be reasonable to expect studies funded by this body to exhibit good practice in data collection involving potentially vulnerable participants and those with additional communication needs. Secondly, a number of NIHR funding streams require the research to be published in detailed monographs that adhere to strict reporting guidelines and typically run to 50,000 words. These reports contain full details of study methods, as well as study findings and limitations, and thus offer sufficient space to detail any observations or learning about the use of study measures and the experiences of people involved.

All dementia focussed research reports published on the NIHR Journals Library website [42] that reported the use of standardised self-report measures with people with dementia for research purposes were targeted for review. Here ‘self-report’ does not necessarily mean that participants responded to a question or measure independently (for example, online or on a paper questionnaire), indeed it is more common for older people and people with dementia to be asked to answer questions verbally in a structured face-to-face interview [26]. Thus, the term ‘self-report’ here means specifically that questions were expected to be answered directly by the person with dementia rather than by a proxy or informant, and scores were not based primarily on the ratings or judgement of another person.

Search scope and dates

All final reports of studies listed in the NIHR Journals Library [42] involving standardised self-report data collection from people with dementia were in scope. Final searches were conducted on 17th December 2021 with no restrictions on date of publication. The Journals Library was established in 1997, initially covering only the journal Health Technology Assessment, but by the date the searches were conducted it comprised five NIHR open-access journals.

Search terms and screening

Broad search terms (Dementia OR Alzheimer*) were selected in order to ensure that all potentially relevant reports were identified. No other search terms were used. Abstracts and (where the abstracts were not sufficiently clear) the full texts of all returned records were screened for eligibility.

Inclusion criteria

  • The study used (or intended to use) standardised self-report measures to collect research data from people with dementia

Exclusion criteria

  • No standardised self-report measures were used, or intended to be used, with people with dementia

  • Study of carers only

  • Measure development only

  • Measures used for screening study population or routine clinical use only

  • Review paper only

Where a report included multiple studies, one or more of which might meet the criteria, each individual study was screened for eligibility. Where a study included standardised data-collection from a subset of people with dementia, this was included, so long as at least some of the data were to be self-reported by participants with dementia themselves.

Data extraction

Data were extracted from all included reports into an Excel spreadsheet under the following headings:

  • Element of study involving standardised data collection from people with dementia

  • Eligibility of participants with dementia

  • Numbers of participants with dementia

  • Severity and type of dementia

  • Standardised outcomes measures to be completed by people with dementia

  • Reporting of measure administration

  • Data completeness and response rates

  • Action to improve accessibility and acceptability for people with dementia

  • Process evaluation/participants’ views on data collection

  • Study teams’ comments/reflections on data collection with people with dementia

As this review formed part of a PhD study, the first author worked independently to select and review studies, with regular supervision by co-authors (YB and KB). After KG had completed data extraction, KB read and independently extracted data from two of the studies to cross check the data.


Results

A search of the NIHR Journals Library database using the terms Dementia OR Alzheimer* conducted on 17th December 2021 returned 42 reports out of a possible 2027. Figure 1 shows a PRISMA flow diagram with numbers of reports included and excluded, and reasons for exclusion. As some of the included reports contained more than one eligible study (for example, reports of programme grants), more studies were included (n = 22) than the total number of selected reports (n = 17).

Fig. 1

PRISMA flow diagram of assessment, exclusion and inclusion

Table 1 gives a full list of all self-report measures used with people with dementia in the 22 included studies and the primary outcome measure (where applicable). Some studies restricted participation to people with mild to moderate dementia, whereas others included people with all stages of dementia (including those with more severe symptoms). We found it useful to group studies that included participants with a similar level of dementia severity together, to enable response rates to be viewed in light of the mix of people involved. Table 1 groups studies under two headings:

  • Studies collecting data from people with mild to moderate dementia only

  • Studies collecting data from people with all stages of dementia

Table 1 Self-report measures used with people with dementia and response rates (studies arranged by dementia severity)

Eight of the studies (in seven reports [44,45,46,47,48,49,50]) collected data from people with mild to moderate dementia (based on professional/carer assessment or scoring on a standardised self-report measure such as the SMMSE). The remaining fourteen studies (in twelve reports [45, 49, 51,52,53,54,55,56,57,58,59,60]) collected data from participants with all stages of dementia, including those with more severe symptoms. Some reports explicitly stated that dementia severity was assessed at baseline (and showed changes over time) whereas others reported severity as a static quality of the sample. A wide range of measurement tools was used across the included studies, most commonly to measure quality of life, cognition and various psychological characteristics. (These tend to be referred to by acronyms, so a glossary of measures is included as a supplementary file to aid comprehension.) Eight of the studies (in seven reports [44, 47,48,49,50, 54, 57]) had a self-report measure, to be completed by participants with dementia, as a primary outcome measure (usually alongside other self-report and carer or professional rated measures); seven studies (in six reports [45, 46, 52, 54, 58, 60]) had a carer or professional rated measure as the primary outcome measure; two used ‘objective’ measures such as eye examinations or brain scans as the primary outcome [47, 53]; and five studies (in four reports [51, 55, 56, 59]) did not identify a primary outcome. If response rates were not explicitly reported, we calculated these (where possible) from data provided in tables and accompanying text in the reports. Some reports amended follow-up sample sizes to reflect withdrawals, resulting in response rates appearing higher at follow-up than in studies employing an intention-to-treat approach. Sample size (N) at each time point has been included in Table 1 (where this information was clearly available from reports).

Response rates, measure completeness and dementia progression

Table 1 illustrates that response rates (that is, the proportion of participants who completed each measure at each time point) varied considerably between studies, even where studies had similar designs. Studies with participants assessed as having mild to moderate dementia generally reported response rates of over 90% at baseline, but the degree to which this was maintained at follow-up (where reported) varied. Response rates for studies that included people with more severe dementia varied more widely at baseline, from 20.1% (for DEMQOL) in a longitudinal study of a toolkit for incontinence [56], to 100% (for all baseline measures) in a feasibility study of a falls intervention [51]. Overall, studies which included people with all stages of dementia were less likely to report high response rates at any time point than studies which restricted participation to those with mild to moderate symptoms.

Response rates, or information from which a response rate could be calculated, were not always reported clearly by measure and time point [45, 49, 50, 52, 57, 60]. In an observational study of dementia home support for people with ‘later stage dementia’ [45], for example, the report states that 389 out of the 518 participants (75.1%) were interviewed at both baseline and 6 month follow-up, but it is not clear what proportion of each self-reported measure this group responded to at each time point. This is important as it does not necessarily follow that all 389 participants interviewed at both time points responded to all three of the self-report measures each time. We know from other studies that response rates to different measures can differ even within a time point. In a study of life story work with people with dementia in care homes, for example, 64% of participants responded to QOL-AD at baseline, but only 31% of those same participants responded to DEMQOL at baseline [55].

Response rates appeared to be associated with setting (i.e., whether participants were recruited from community, residential care or inpatient settings), but it is difficult to separate this from dementia severity, which in theory could be higher in residential settings (reflecting the need for residential care) but in practice was not always measured. One study, for example, abandoned a measure (the IDEA questionnaire) after it transpired that it was ‘too cognitively complex’ ([57], p35) for most participants with dementia to respond to. The dementia severity of participants in this study (all of whom were recruited from inpatient or residential care settings) is not known because only 13 participants out of a sample of 332 completed the cognitive test. Similarly, dementia severity was not formally assessed in two studies by Gridley et al. [55], which recruited from inpatient and residential care settings and had response rates ranging from 0% to 64%. Another study ([58], Study 2), which recruited people with dementia from residential care home settings, had so much missing data that planned imputation was not conducted. People with all stages of dementia were included in this study, but nearly half (49.2%) were assessed as having severe dementia at baseline. Response rates were higher in a study by the same team which recruited only participants still living in the community: whilst this latter study also included people with all stages of dementia, less than 9% of the community cohort had severe dementia at baseline.

Another large study which recruited exclusively from residential care settings [60] excluded all data collected directly from people with dementia from the analysis because of high levels of missing data, as the authors explained in their limitations section:

‘Owing to the variability in the ability of care home residents with dementia to self-report on measures of BSC and QoL, the primary and secondary analyses were conducted using staff proxy-completed measures’ (p97).

By contrast, a study by Gathercole et al. [54] included people with all stages of dementia living in the community (not residential or inpatient settings), and reported higher response rates than the above studies, but lower than other studies which recruited from the community but restricted participation to people with mild to moderate dementia.

An RCT of individual cognitive stimulation therapy [48] reported very high response rates for multiple measures (typically close to 100%) which reduced only slightly over the 26-week follow-up period. In common with other large studies with high response rates, participants had mild to moderate dementia and were living in the community at baseline, and only people with capacity to consent and ‘no major co-morbidities affecting participation’ were eligible to take part. The authors note that tight eligibility criteria did restrict participation:

‘In total, 1340 people were considered for recruitment to the study. From these, 356 were randomised and together constituted the final sample for the study. …Losses in 22% of cases were attributable to people with dementia not meeting the clinical criteria, indicating that this factor was, to some extent, a barrier to study recruitment.’ (p 5)

An implementation study of group-based maintenance cognitive stimulation therapy (MCST) had similarly tight eligibility criteria [49], excluding people with severe dementia or any additional communication, physical or intellectual impairments, and specifying that participants must ‘have the ability to complete a cognitive and quality-of-life measure at three intervals over 1 year’ (p. 51). This study applied an intention-to-treat analysis, using all available information provided by participants with dementia at follow-up regardless of whether they completed the intervention programme, but it is not clear whether the reduction in available data over time (QOL-AD was reported for 89 participants at baseline, 62 at first follow-up and 56 at second follow-up) was the result of withdrawal from the study, other loss to follow-up or some participants declining (or finding it difficult) to respond to the measures. Response rates by measure for the other two eligible studies in this programme (an RCT of MCST and an RCT of a carer supporter programme and reminiscence intervention) were not clearly reported.

It was very rare for studies to report measure completeness, that is, what proportion of the items in individual measures were completed by participants. Allan et al. [51] did note that ‘All self-reported and proxy EQ-5D-5L questionnaires that were completed had no missing data for any of the domains.’ (p89) but this level of detail was very much the exception and perhaps only reported because theirs was a small feasibility study. Statements such as the following were more common, where authors set out how missing data were handled, without specifying how much there was or from which measures:

‘Complete-case data analysis was used initially to establish the results, followed by the analysis with imputations. When individual data points were missing within a scale, data were imputed by using scale/subscale means according to the validated rules for the measures. When an outcome measure total score was missing, it was imputed using a multiple imputation regression model ...’ ([49], p29)
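To make the first of these steps concrete, the following is a minimal sketch of scale-mean imputation of the kind described in the quotation. It is illustrative only, not the authors' exact procedure: the half-completion threshold and the simple summed scoring are assumptions chosen for the example, whereas real measures each have their own validated rules.

```python
# Illustrative sketch of within-scale mean imputation (assumed rules, for
# illustration only): a missing item is replaced by the participant's mean
# of their completed items on that scale, provided at least half of the
# items were answered; otherwise the scale score is left missing.
def impute_scale(items, min_fraction=0.5):
    """items: list of numeric answers, with None for missing items.

    Returns the (possibly imputed) summed scale score, or None if too
    many items are missing to impute under the assumed rule.
    """
    answered = [x for x in items if x is not None]
    if len(answered) < min_fraction * len(items):
        return None  # too much missing data: leave the scale score missing
    mean = sum(answered) / len(answered)
    return sum(x if x is not None else mean for x in items)

# Example: a 4-item scale with one missing item
print(impute_scale([3, None, 4, 5]))   # missing item imputed as the mean, 4.0
print(impute_scale([None, None, None, 4]))  # only 1 of 4 items answered
```

The threshold matters: a rule like this only yields a score when enough items were answered, which is one reason response rates by measure (rather than by interview) are needed to interpret the resulting data.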

Without clear information about response rates by measure, measure completion, and the reasons behind missing data, it is difficult to compare approaches to data collection in different studies or ascertain possible explanations for problems encountered.

The process of measure administration

Table 2 presents information reported from included studies on the process of measure administration: this included the views of participants on their experiences of taking part in the research (for example, from embedded process evaluations) and reflections by study teams on the data collection process or measures used. Ten studies (in nine reports) collected data from people with dementia face to face in participants’ own homes or another place convenient to them [44, 45, 47,48,49,50,51, 53, 58], six studies (in five reports) collected data from participants in care homes or hospital settings [55, 57,58,59,60], and the remaining six studies (in five reports) did not state where data were collected [46, 49, 52, 54, 56].

Table 2 Measure administration, participant views and author reflections

Overall, very little information was given about the circumstances or activities that took place during data collection encounters. Typically, reports featured a statement such as ‘outcomes were obtained during a face-to-face assessment by a researcher….’ ([47], p15). Occasionally, a little more detail was offered, as in this example:

‘The questionnaire measures were arranged into booklets, which facilitated their ease of delivery during the interviews. If a participant became tired, or if it was requested by participants or deemed appropriate by the researcher, an interview was occasionally broken off part-way through and then continued on another day.’ ([58], Study 4, p120)

Orgeta et al. and Woods et al. [48, 50] used very similar wording to explain that assessors occasionally arranged to return ‘to complete assessments where an interviewee became tired, or where it was otherwise requested by participants or deemed appropriate by the assessor’ ([50], p14). No further information was given in these reports about how often participants requested that an interview be paused and completed later, or why this might be ‘deemed appropriate’ by the researcher/assessor. The participants in these two latter studies had mild to moderate dementia. While a number of other studies used more measures and/or included participants with more severe dementia, they made no reference to breaking data collection sessions into more manageable chunks. It is unclear here whether such adjustments were not made, or just not reported.

Most studies did not report intervening to improve the accessibility or acceptability of data collection tools or processes for participants with dementia, other than to collect data face to face at a location acceptable to participants and employing trained research workers. Allan et al. [51] did reduce the number of questions in their health utilisation questionnaire and Orgeta et al. [48] used show cards (which typically present the answer scales visually) to support people with dementia to respond to the measures, with accompanying reports of high response rates. Surr et al. [60], on the other hand, used an adapted version of QOL-AD developed specifically for use in care homes which has ‘simple language’ and a four-response answer scale that is consistent across all questions, but still did not collect enough data directly from residents with dementia to enable their data to be used in the analysis. Similarly, Kinderman et al. [57] reported that people with dementia on hospital wards and in care homes received ‘assistance from skilled clinicians’ (p53) to answer QOL-AD, but most participants still did not complete this measure.

The views of participants and reflections of study teams

Occasionally a report included a few lines about why participants did not attempt to complete assessments. Gathercole et al. [54] for instance note ‘this could have been for several reasons, including disagreement with allocation, burden of assessments and delays in assessments being completed.’ (p40) Unusually, the Bowen et al. [53] report sets out in some detail the reasons for missing scores for a specific measure (SMMSE) for 54 participants:

‘These participants mainly comprised those for whom no coherent responses were obtained when attempting the test, and so could not be assessed using the SMMSE, and a small number who were unavailable, asleep or uncooperative on the day of recruitment, and so the test was not carried out.’ (p37)

Kinderman et al. [57] used ADAS-Cog in its standard form, but on reflection attributed the very low response rates achieved to the length of the measure, suggesting that in future more attention should be paid to the trade-off between the value of potential data from a measure, and the likelihood of obtaining enough data to be valuable:

‘…the ADAS-Cog is often used in clinical trials because it can determine incremental improvements or declines in cognitive functioning. Despite this, it is a time-consuming assessment to complete (up to 45 minutes per person) and in reality the majority of participants refused to complete it.’ (p59)

Most of the reports did not include such candid reflections on the merits or otherwise of the measures selected for use. Nor did many include reflection on the data collection processes or the experiences of research workers or research participants. Process evaluations tended to focus on the process of implementing the studied intervention or recruiting participants, not data collection per se. However, five of the 17 reports did include some form of process evaluation or embedded study that touched on the process of data collection from people with dementia [45, 51, 55, 56, 59]. EVIDEM-C [56] included interviews with carer participants about their experiences of data collection, but people with dementia were not interviewed. Allan et al. [51] interviewed participants with dementia, carers and research staff. The most common concern gleaned from their combined responses was that the baseline and follow-up assessments took too long to complete. O’Brien et al. [59] collected feedback from people with dementia as well as carers and clinicians on the measures to be included in their assessment toolkit and reported that ‘… patients and carers highlighted some issues with question wording’ (p39). They noted ‘tensions between research paradigms’ (p44), in particular the value ascribed to validated questions versus qualitative feedback from participants, but offered no further details.

With reference to field notes, Gridley et al. [55] identified a number of challenges inherent in collecting data from participants with dementia, including the capacity and frailty of the participants; the context within which data collection took place (e.g. care homes where staff had other priorities); and the geographic location of the research settings relative to that of the research team (given that data collection with people with dementia can be time-consuming and require multiple visits). However, they also identified the closed-question format of the standardised measures as a key reason for low response rates. Clinical trial assistants (CTAs) interviewed for the Allan et al. [51] process evaluation reported concerns that the wording of some measures was difficult for participants with more advanced dementia to understand, for example because they contained double negatives. They also felt that some participants with dementia found the questions ambiguous and needed further explanation, which they had been trained not to give as this could impair the standardisation of the measure. The authors concluded that research workers like the CTAs require better training in the administration of standardised measures to ensure a consistent approach.

Clarkson et al. [45] had the most to say about the data collection context and process, and the influence of these on the data collected, producing an accompanying paper dedicated to reflecting on the research encounter. This paper was based on findings from an embedded qualitative study in which researchers audio-recorded the data collection process, revealing the dialogue surrounding the answers given to closed questions [14]. They noted that even people in the early stages of dementia ‘struggled with the structured and standardised nature of the research interviews, finding them a linguistic and cognitive challenge’ ([45], p15). They also noted the work that researchers had to undertake to determine whose perspective was being addressed, when family carers were present during data collection sessions with people with dementia.

Emotional distress

The potential for standardised data collection to cause emotional distress in participants with dementia was explicitly identified as a risk in two of the reports [45, 55] and implied in a third [51]. While most of the reports made no mention of question content, Gridley et al. [55] noted the potential impact of sensitive or negative questions on participant wellbeing:

‘…we found that, for example, asking people in quick succession whether they had lately felt sad (question 7), lonely (question 8) and then distressed (question 9) could trigger sadness. On one occasion (plus on two occasions in hospital wards) DEMQOL was abandoned specifically for this reason.’ ([60], p69)

Clarkson et al. [45] similarly noted that the measures they used addressed sensitive topics ‘that could be distressing for people with dementia and their carers and difficult for interviewers to manage.’ (p15). Their accompanying paper identified that some standardised questions could be ‘very direct in probing potentially emotionally difficult aspects of life, particularly in the context of older age and deteriorating cognition’ ([14], p2742).

Interviews with carers for the process evaluation by Gridley et al. [55] also suggested that some people could find the experience of being questioned worrying in itself. Again, this concurs with the account of Abendstern et al. [14], suggesting that, for some, the structured interview as a whole appeared to cause anxiety:

‘This was indicated in several ways including misunderstanding questions and showing uncertainty about how to reply, giving answers that they seemed to think the interviewer wanted, conveying feeling pressured to say the right thing, and forgetting things during the memory ‘test’…. Some participants expressed distress at the prospect of the interview itself, commenting that they were unsure about what to expect…’ (p2741)

The most common concern of those interviewed in the Allan et al. [51] process evaluation was the length of time it took to complete the baseline and follow-up assessments. However, through their illustration of this challenge it is evident that the data collection process in this study was also associated with, or may even have caused, emotional distress in some participants:

‘…for the patients, it was a bit too much when you’re sat in the house. We only had, like, 90 minutes but I couldn’t do the first one in less than 2 hours because he kept getting upset and crying, it was very difficult.’

Professional 145, CTA (interview)

([51], p93)

Such issues are generally not reported (or even recorded) in trials involving people with dementia. Together with the practice noted above of pausing data collection part way through (either when requested by participants or when deemed appropriate by researchers), the issues highlighted by these three reports raise questions about participants’ experiences during data collection and the degree to which not only fatigue but also emotional distress may be features of the data collection process worthy of further investigation.


In this paper we presented a narrative synthesis of the reported use of standardised, self-report measures with people with dementia in 22 NIHR-funded studies selected systematically from the NIHR Journals Library website. Response rates (where these could be ascertained) varied considerably and appeared to be related to dementia severity and place of residence, whilst measure completeness and patterns of item non-response were rarely reported. Overall, we found little reported information about the process of data collection from people with dementia (over and above basic setting and mode) or the reasons for missing data. There was also very little information about the experiences of participants with dementia or those administering the measures. However, from the few instances where experiences were reported, it seems that there may be risks to participants’ well-being associated with both the content of measures and the context of data collection that are worthy of further consideration.

Despite some discrepancies in reporting, it was clear from the review that measures were not always completed in full at all time points and that some measures were not completed at all by some participants, even those still included in study samples. Such gaps are common in research; 100% response rates are rare [62] and missing data has been identified as a particular problem in research on ageing [63]. Some of the response rates reported in this review, however, seem to be considerably lower than would be expected for the general population (less than 50% in several cases), whilst in other cases they were close to 100%. Some of the apparent variation in response rates may be artefacts of reporting (for example, some studies amended sample sizes at follow-up in response to withdrawal, whilst others calculated response rates at follow-up using original sample sizes). However, it is clear that some studies faced real challenges in their attempts to obtain self-reported data from participants with dementia, which other studies appeared to avoid. The lack of detail on measure administration and participant experience means it was not always possible to determine the reasons underpinning these differences.

Studies that included people with more severe dementia tended to report more problems obtaining consistent response rates, supporting previous research in which cognitive impairment has been shown to predict item non-response [15] or recourse to dichotomous (yes/no) answers [40]. Those which only included participants with milder cognitive impairment reported fewer problems and, where response rates were clearly reported, these were generally over 90% at baseline. In contrast, studies which included participants with all stages of dementia (i.e. including people with severe dementia) reported response rates ranging from 0 to 100%, with many under 75% or not reported. One approach used by some teams to minimise missing data was to apply tight eligibility criteria. However, while high response rates are desirable from a statistical perspective, restricting the eligibility criteria creates a trade-off with generalisability, as the outcomes and perspectives of a group of people who could potentially be affected by the intervention under evaluation may not be included in the results [64, 65].

Little attention was paid in most reports to the potential risk to participants of emotional distress, despite previous flagging of this in the published literature, particularly in the context of qualitative research:

‘All too often the person with dementia can be left with the feeling of not being able to do, not being able to remember or not reaching the right score, so they can feel excluded and a failure.’ ([66], p817)

Such risks have also been noted in quantitative data collection [26, 67,68,69] and there is some indication that standardised tests of cognitive impairment can be particularly problematic [21, 39, 70]. The bulk of literature highlighting the tension between the requirements of standardisation and the wellbeing of participants has focussed on data collection in clinical settings [71,72,73]. However, research participants may also experience feelings of anxiety and people with dementia may be particularly susceptible to emotional distress or agitation brought on or exacerbated by the research encounter [74,75,76].

Failing to understand the reasons behind missing data has implications for future successful data collection and appraisal of the appropriateness of measures. For example, if data are consistently missing for an item or measure because participants find it distressing to answer, the implications and possible remedies will differ from scenarios where items are skipped or measures dropped because they were found to be too cognitively challenging. The issues related to entire measures going unanswered by particular participants may also be different from the possible reasons behind individual missing items [62]. Patterns in item non-response may reflect problems with particular item wording specific to the communication and cognitive function of people with dementia which remain unaddressed without systematic examination [15]. Alternatively, missing data may be unrelated to the content or structure of the measures, but instead be the result of contextual factors such as care home practices: perhaps participants were not available at the allotted interview times [77], or carers were not available to provide support on the day. Certainly, in this review, studies attempting to collect data from participants with dementia residing in care homes or hospital wards appeared to achieve lower response rates than those collecting data from people residing in the community, perhaps reflecting known barriers to the undertaking of research in residential settings [78, 79].

A common solution proposed to the challenges of collecting research data directly from people with dementia is to use data from proxy measures alongside, or even instead of, self-reported data. However, in addition to the ethical issues of reliance on another person’s views in place of the person with dementia’s [69], some of the reports in this review flagged methodological issues inherent in this approach, such as the various relationships of proxies to participants [49], and the tendency for proxies to rate quality of life lower than people with dementia do themselves [57, 58]. This fits with the findings of multiple previous studies [30, 80, 81] and calls into question the ability of proxy measures to validly represent the views of people with dementia. At the very least, proxies may be reporting something conceptually different from the thing people with dementia themselves are reporting when asked about their ‘quality of life’ [82, 83].

An alternative solution would be to design methods or select measures more likely to be comprehensible, manageable and meaningful to people with dementia [84], that is, measures that are a better ‘fit’ for the people affected by the intervention so that they can answer for themselves [85]. Dementia-specific quality of life measures like DEMQOL and QOL-AD were designed to do this, but the results of this review call into question the appropriateness of using even current dementia-specific measures without additional support for some participants with dementia. Indeed, DEMQOL was only validated using data from people with mild to moderate dementia (data from people with an MMSE score of less than 10 were excluded from the analysis [18]), and whilst it is commonly quoted that QOL-AD is suitable for use with people with an MMSE score as low as 3 (based on a 2003 study by Thorgrimsen et al. [86]), Kinderman et al. [57] struggled to use this measure with people with severe dementia in residential settings:

‘Although it has been suggested that the QOL-AD can be usefully completed with some people with a MMSE score of as low as 3 (although it was originally suggested to be valid for use with people with MMSE scores of > 10), it quickly became obvious that the majority of people living with dementia in the care homes and wards visited were unable to complete the measure, even with assistance from skilled clinicians.’ (p53)

Without more detailed descriptions of what happens when researchers attempt to administer measures, and the individual items within those measures, it is hard to ascertain exactly which elements of measures, or research context, may be problematic and require attention.

Accounts from clinical settings have highlighted the often marked difference between the standardised conditions envisaged by those who design measures [87] and the realities of measure administration in practice. As Krohne explains:

‘…test administrators must deal with interruptions, such as test-takers falling asleep, being in pain, not understanding the question, or consciously choosing not to respond to the question.’ ([88], p29).

Conventions in standardised interviewing [89, 90], along with the specific instructions for some measures (such as DEMQOL [18]), preclude the giving of support or explanations that are not in the script, even for participants with cognitive impairment. Yet there is evidence that standardisation in practice is difficult to achieve, even with participants with no cognitive impairment [91,92,93,94]. As Gobo and Mauceri note [94], the standardised interview is ‘an interaction that takes place in a social situation’ (pXVII). Responses to accounts of researchers having difficulties adhering to standardisation tend to take the form of calls for better training (such as that in Allan et al. [51]), but whilst training researchers to apply greater consistency in handling participants’ queries may reduce the chances of interviewer bias, this would not necessarily address participants’ anxieties, or any underlying problems with the measures themselves.

It is necessary to find a way through such dilemmas, as simply excluding specific groups from research participation because of their perceived vulnerability is no longer considered acceptable [95] and is certainly unacceptable to an increasingly politically conscious population of people living with dementia. Several key documents have been published in collaboration with people with dementia in recent years setting out, amongst other things, their right to be involved in research that concerns them [5, 6, 96]. Dementia care theorists advocate a personalised approach to working with people with dementia [97, 98], and a growing literature from the qualitative traditions has argued for greater flexibility in data collection, as Phillipson and Hammond note:

‘[Participants with dementia often have] difficulties with linguistic, behavioural and cognitive functioning. Researchers therefore need to be creative and adapt their methods of data collection in order to address the individual needs of someone who is living with dementia.’ ([13], p2)

The principles of creativity and adaptability to the needs of the individual do not, however, sit well with the fundamentals of quantitative measurement, which rely on standardisation: essentially, inflexibility. This raises the question of whether quantitative data collection can feasibly be reconciled with the principles of best practice in dementia research. Evans et al. [99] looked at the relationship between reported quality of life scores and interviewer continuity and concluded that having a familiar person visit to collect follow-up data might influence results. The suggestion here is of a conflict between more person-centred approaches and data integrity, as opportunities to build rapport and put participants at ease might also lead to interviewer bias. However, those findings were based on exploratory secondary analysis of a completed study and the authors noted that ‘characteristics, such as age, training, experience, warmth and ability to establish rapport, were not taken into account (given the lack of data)’ ([99], p7). More attention must be paid to these factors, and the experiences of the people involved, to fully understand what influences scores and/or leads to missing data.

If compromises between standardisation and personalisation must be made, one solution, proposed by Phillipson et al. [100], is to offer incremental levels of support, including physical and emotional support in addition to ‘easy read’ documentation, where this might facilitate the inclusion of people with a greater degree of impairment. Whilst this involves more flexibility than is usually permitted in standardised data collection, Phillipson et al. demonstrate that the provision of such tailored support enabled them to collect data from people who would otherwise not have been able to participate, or would have had large quantities of missing data, and that their findings were richer because of this. However, that study used only one measure, the ASCOT, whereas some of the studies included in this review used multiple (up to 10 different) self-report measures with each person at each time point. There is likely to be a trade-off between the amount of tailoring of support practically achievable in a study and the number of measures used, with implications for the time and other resources required. It remains to be seen how applicable such an approach could be to a large trial or cohort study with multiple measures.


This review was conducted as part of a doctoral research project and as such is based primarily on the independent work of one researcher. However, two supervisors (both experienced senior health and social care researchers) were closely involved throughout and are co-authors on the paper. Moreover, the findings have been discussed more widely with colleagues working in the health and social care research field, including with those specialising in dementia research, with feedback integrated into our interpretation of findings. Nevertheless, it is recognised that working relatively independently, whilst necessary for doctoral studies, is a limitation in any review and the results should be read with this in mind.

The review only covered research published in a single UK database, and there may be learning from other types of research and research reporting not covered here. However, the NIHR is Britain’s largest funder of clinical, public health and social care research, and its Journals Library contains comprehensive, open-access accounts of final, peer-reviewed reports, including methods and a full description of the results [101]. As such, the review offers a useful insight into the reporting of high-status, government-funded dementia research and poses questions of relevance to the wider field.


In this narrative synthesis we explored the use of standardised, self-report measures to collect data from people with dementia in NIHR-funded dementia research and identified an important gap in reporting on the process of data collection and the experiences of participants. It seems that some studies, particularly those that recruited from residential care settings and/or included participants with more advanced dementia, were missing sizeable quantities of data, but without clear reporting it is difficult to ascertain the full range of reasons for this or the specific links between dementia severity and responses to standardised measures. As noted by Hardy et al. [63], it is essential that authors are open in their reporting about the reasons for missing data so that we can both understand the implications and build upon learning to improve future research practice. In addition to potentially influencing the quality and quantity of data collected, learning from the few studies that did reflect openly on data collection processes indicated that the context and content of data collection could also influence the wellbeing of participants. It is imperative, therefore, that more attention be paid to the experiences of all those involved in quantitative data collection.

Availability of data and materials

The datasets analysed during the current study are available in the NIHR Journals Library.


  1. Prince M, Albanese E, Guerchet M, Prina M. World Alzheimer Report 2014. Dementia and risk reduction: an analysis of protective and modifiable risk factors. London: Alzheimer’s Disease International; 2014.

  2. Taylor JS, DeMers SM, Vig EK, Borson S. The disappearing subject: exclusion of people with cognitive impairment and dementia from geriatrics research. J Am Geriatr Soc. 2012;60(3):413–9.

  3. Bond J, Corner L. Researching dementia: are there unique methodological challenges for health services research? Ageing Soc. 2001;21(1):95–116.

  4. Dewing J. Participatory research: a method for process consent with persons who have dementia. Dementia. 2007;6(1):11–25.

  5. National Dementia Action Alliance. Review of the dementia statements: companion paper. 2017. p. 1–34.

  6. Flemington B, Houston A, Jackson E, Latta A, Malone B, McAdam N, et al. Core principles for involving people with dementia in research: innovative practice. Dementia. 2014;13(5):680–5.

  7. Brooks J, Savitch N, Gridley K. Removing the ‘gag’: involving people with dementia in research as advisers and participants. Soc Res Pract. 2017;Winter 201(3):3–14.

  8. Digby R, Lee S, Williams A. Interviewing people with dementia in hospital: recommendations for researchers. J Clin Nurs. 2016;25(7–8):1156–65.

  9. Samsi K, Manthorpe J. Interviewing people living with dementia in social care research. NIHR School for Social Care Research Methods Review. 2020.

  10. Wilkinson H. The perspectives of people with dementia: research methods and motivations. United Kingdom: Jessica Kingsley Publishers; 2002.

  11. Pratt R. ‘Nobody’s ever asked how I felt’. In: Wilkinson H, editor. The perspectives of people with dementia: research methods and motivations. United Kingdom: Jessica Kingsley Publishers; 2001. p. 165–82.

  12. Keady J, Hydén L-C, Johnson A, Swarbrick C. Social research methods in dementia studies: inclusion and innovation. United Kingdom: Taylor and Francis; 2017. (Routledge Advances in Research Methods; vol. 1).

  13. Phillipson L, Hammond A. More than talking: a scoping review of innovative approaches to qualitative research involving people with dementia. Int J Qual Methods. 2018;17(1):1–13.

  14. Abendstern M, Davies K, Poland F, Chester H, Clarkson P, Hughes J, et al. Reflecting on the research encounter for people in the early stages of dementia: lessons from an embedded qualitative study. Dementia. 2020;19(8):2732–49.

  15. Kutschar P, Weichbold M, Osterbrink J. Effects of age and cognitive function on data quality of standardized surveys in nursing home populations. BMC Geriatr. 2019;19(1):1–10.

  16. Gerolimatos LA, Ciliberti CM, Gregg JJ, Nazem S, Bamonti PM, Cavanagh CE, et al. Development and preliminary evaluation of the anxiety in cognitive impairment and dementia (ACID) scales. Int Psychogeriatr. 2015;27(11):1825–38.

  17. Logsdon RG, Gibbons LE, McCurry SM, Teri L. Assessing quality of life in older adults with cognitive impairment. Psychosom Med. 2002;64(3):510–9.

  18. Smith SC, Lamping DL, Banerjee S, Harwood R, Foley B, Smith P, et al. Measurement of health-related quality of life for people with dementia: development of a new instrument (DEMQOL) and an evaluation of current methodology. Health Technol Assess (Rockv). 2005;9(10):1+.

  19. Snow AL, Huddleston C, Robinson C, Kunik ME, Bush AL, Wilson N, et al. Psychometric properties of a structured interview guide for the rating for anxiety in dementia. Aging Ment Health. 2012;16(5):592–602.

  20. Trigg R, Skevington SM, Jones RW. How can we best assess the quality of life of people with dementia? The Bath Assessment of Subjective Quality of Life in Dementia (BASQID). Gerontologist. 2007;47(6):789–97.

  21. Webster L, Groskreutz D, Grinbergs-Saull A, Howard R, O’Brien JT, Mountain G, et al. Core outcome measures for interventions to prevent or slow the progress of dementia for people living with mild to moderate dementia: systematic review and consensus recommendations. PLoS One. 2017;12(6):e0179521.

  22. Official QOL-AD distributed by Mapi Research Trust | ePROVIDE. Available from: Cited 2023 Feb 20.

  23. Farina N, Hicks B, Baxter K, Birks Y, Brayne C, Dangoor M, et al. DETERMinants of quality of life, care and costs, and consequences of INequalities in people with Dementia and their carers (DETERMIND): a protocol paper. Int J Geriatr Psychiatry. 2020;35(3):290–301.

  24. Götzelmann TG, Strech D, Kahrass H. The full spectrum of ethical issues in dementia research: findings of a systematic qualitative review. BMC Med Ethics. 2021;22(1):1–11.

  25. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Front Public Health. 2018;6(June):1–18.

  26. De Vries K, Leppa CJ, Sandford R, Vydelingum V. Administering questionnaires to older people: rigid adherence to protocol may deny and disacknowledge emotional expression. J Aging Stud. 2014;31:132–8.

  27. Fowler F, Mangione T. Standardized interviewing techniques. In: Fowler F, Mangione T, editors. Standardized survey interviewing. Thousand Oaks: Sage; 2011. p. 33–54.

  28. Bond J. Quality of life for people with dementia: approaches to the challenge of measurement. Ageing Soc. 1999;19(5):561–79.

  29. Mozley CG. ‘Not knowing where I am doesn’t mean I don’t know what I like’: cognitive impairment and quality of life. Int J Geriatr Psychiatry. 1999;14:776–83.

  30. Cahill S, Begley E, Topo P, Saarikalle K, Macijauskiene J, Budraitiene A, et al. ‘I know where this is going and I know it won’t go back’: hearing the individual’s voice in dementia quality of life assessments. Dementia. 2004;3(3):313–30.

  31. Martyr A, Nelis SM, Quinn C, Wu YT, Lamont RA, Henderson C, et al. Living well with dementia: a systematic review and correlational meta-analysis of factors associated with quality of life, well-being and life satisfaction in people with dementia. Psychol Med. 2018;48(13):2130–9.

  32. McLean C, Griffiths P, Mesa-Eguiagaray I, Pickering RM, Bridges J. Reliability, feasibility, and validity of the quality of interactions schedule (QuIS) in acute hospital care: an observational study. BMC Health Serv Res. 2017;17(1):1–10.

  33. Bowling A, Rowe G, Adams S, Sands P, Samsi K, Crane M, et al. Quality of life in dementia: a systematically conducted narrative review of dementia-specific measurement scales. Aging Ment Health. 2015;19:13–31.

  34. Moniz-Cook E, Vernooij-Dassen M, Woods R, Verhey F, Chattat R, De Vugt M, et al. A European consensus on outcome measures for psychosocial intervention research in dementia care. Aging Ment Health. 2008;12(1):14–29.

  35. Yang F, Dawes P, Leroi I, Gannon B. Measurement tools of resource use and quality of life in clinical trials for dementia or cognitive impairment interventions: a systematically conducted narrative review. Int J Geriatr Psychiatry. 2018;33(2):E166–76.

  36. Li L, Nguyen KH, Comans T, Scuffham P. Utility-based instruments for people with dementia: a systematic review and meta-regression analysis. Value Health. 2018;21(4):471–81.

  37. Lohr KN. Assessing health status and quality-of-life instruments: attributes and review criteria. Qual Life Res. 2002;11(3):193–205.

  38. Fitzpatrick R, Davey C, Buxton MJ, Jones DR. Evaluating patient-based outcome measures for use in clinical trials. Health Technol Assess (Rockv). 1998;2(14):i–iv, 1-74.

  39. Swallow J, Hillman A. Fear and anxiety: affects, emotions and care practices in the memory clinic. Soc Stud Sci. 2019;49(2):227–44.

  40. Krestar ML, Looman W, Powers S, Dawson N, Judge KS. Including individuals with memory impairment in the research process: the importance of scales and response categories used in surveys. J Empir Res Hum Res Ethics. 2012;7(2):70–9.

  41. Cohen ML, Boulton AJ, Lanzi AM, Sutherland E, Hunting PR. Psycholinguistic features, design attributes, and respondent-reported cognition predict response time to patient-reported outcome measure items. Qual Life Res. 2021;30(6):1693–704.

  42. National Institute for Health Research. NIHR Journals Library. 2021. Available from: Cited 2023 Mar 14.

  43. Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, Britten N, Roen K, Duffy S. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC Methods Programme. Version 1; 2006.

  44. Clare L, Kudlicka A, Oyebode JR, Jones RW, Bayer A, Leroi I, et al. Goal-oriented cognitive rehabilitation for early-stage Alzheimer’s and related dementias: the GREAT RCT. Health Technol Assess (Rockv). 2019;23(10):1–244.

  45. Clarkson P, Challis D, Hughes J, Roe B, Davies L, Russell I, et al. Components, impacts and costs of dementia home support: a research programme including the DESCANT RCT. Program Grants Appl Res. 2021;9(6):1–132.

  46. Howard R, Zubko O, Gray R, Bradley R, Harper E, Kelly L, et al. Minocycline 200 mg or 400 mg versus placebo for mild Alzheimer’s disease: the MADE Phase II, three-arm RCT. Effic Mech Eval. 2020;7(2):1–62.

  47. Kehoe PG, Turner N, Howden B, Jarutyt L, Clegg SL, Malone IB, et al. Losartan to slow the progression of mild-to-moderate Alzheimer’s disease through angiotensin targeting: the RADAR RCT. Effic Mech Eval. 2021;8(19):1–72.

  48. Orgeta V, Leung P, Yates L, Kang S, Hoare Z, Henderson C, et al. Individual cognitive stimulation therapy for dementia: a clinical effectiveness and cost-effectiveness pragmatic, multicentre, randomised controlled trial. Health Technol Assess (Rockv). 2015;19(64):7–73.

  49. Orrell M, Hoe J, Charlesworth G, Russell I, Challis D, Moniz-Cook E, et al. Support at Home: Interventions to Enhance Life in Dementia (SHIELD) – evidence, development and evaluation of complex interventions. Program Grants Appl Res. 2017;5(5):1–184.

  50. Woods R, Bruce E, Edwards R, Elvish R, Hoare Z, Hounsome B, et al. REMCARE: reminiscence groups for people with dementia and their family caregivers - effectiveness and cost-effectiveness pragmatic multicentre randomised trial. Health Technol Assess (Rockv). 2012;16(48):v–116.

  51. Allan LM, Wheatley A, Smith A, Flynn E, Homer T, Robalino S, et al. An intervention to improve outcomes of falls in dementia: the DIFRID mixed-methods feasibility study. Health Technol Assess (Rockv). 2019;23(59):1–207.

  52. Banerjee S, Hellier J, Romeo R, Dewey M, Knapp M, Ballard C, et al. Study of the use of antidepressants for depression in dementia: the HTA-SADD trial - a multicentre, randomised, double-blind, placebo-controlled trial of the clinical effectiveness and cost-effectiveness of sertraline and mirtazapine. Health Technol Assess (Rockv). 2013;17(7):1–43.

  53. Bowen M, Edgar DF, Hancock B, Haque S, Shah R, Buchanan S, et al. The Prevalence of Visual Impairment in People with Dementia (the PrOVIDe study): a cross-sectional study of people aged 60–89 years with dementia and qualitative exploration of individual, carer and professional perspectives. Health Serv Deliv Res. 2016;4(21):1–200.

  54. Gathercole R, Bradley R, Harper E, Davies L, Pank L, Lam N, et al. Assistive technology and telecare to maintain independent living at home for people with dementia: the ATTILA RCT. Health Technol Assess (Rockv). 2021;25(19):1–156.

  55. Gridley K, Brooks J, Birks Y, Baxter K, Parker G. Improving care for people with dementia: development and initial feasibility study for evaluation of life story work in dementia care. Health Serv Deliv Res. 2016;4(23):1–298.

  56. Iliffe S, Wilcock J, Drennan V, Goodman C, Griffin M, Knapp M, et al. Changing practice in dementia care in the community: developing and testing evidence-based interventions, from timely diagnosis to end of life (EVIDEM). Program Grants Appl Res. 2015;3(3):1–596.

  57. Kinderman P, Butchard S, Bruen AJ, Wall A, Goulden N, Hoare Z, et al. A randomised controlled trial to evaluate the impact of a human rights based approach to dementia care in inpatient ward and care home settings. Health Serv Deliv Res. 2018;6(13):1–134.

  58. Moniz-Cook E, Hart C, Woods B, Whitaker C, James I, Russell I, et al. Challenge Demcare: management of challenging behaviour in dementia at home and in care homes – development, evaluation and implementation of an online individualised intervention for care homes; and a cohort study of specialist community mental health care. Program Grants Appl Res. 2017;5(15):1–290.

  59. O’Brien JT, Taylor J-P, Thomas A, Bamford C, Vale L, Hill S, et al. Improving the diagnosis and management of Lewy body dementia: the DIAMOND-Lewy research programme including pilot cluster RCT. Program Grants Appl Res. 2021;9(7):1–120.

  60. Surr CA, Holloway I, Walwyn REA, Griffiths AW, Meads D, Kelley R, et al. Dementia Care Mapping™ to reduce agitation in care home residents with dementia: the EPIC cluster RCT. Health Technol Assess (Rockv). 2020;24(16):1–174.

  61. Abendstern M, Davies K, Chester H, Clarkson P, Hughes J, Sutcliffe C, Poland F, Challis D. Applying a new concept of embedding qualitative research: an example from a quantitative study of carers of people in later stage dementia. BMC Geriatr. 2019;19(1):1–3.

  62. Leurent B, Gomes M, Carpenter JR. Missing data in trial-based cost-effectiveness analysis: an incomplete journey. Health Econ. 2018;27(6):1024–40.

  63. Hardy SE, Allore H, Studenski SA. Missing data: a special challenge in aging research. J Am Geriatr Soc. 2009;57(4):722–9.

  64. Jongsma KR, van Bruchem-Visser RL, van de Vathorst S, Mattace-Raso FUS. Has dementia research lost its sense of reality? A descriptive analysis of eligibility criteria of Dutch dementia research protocols. Neth J Med. 2016;74(5):201–9.

  65. Shepherd V. An under-represented and underserved population in trials: methodological, structural, and systemic barriers to the inclusion of adults lacking capacity to consent. Trials. 2020;21(1):445.

  66. Murphy K, Jordan F, Hunter A, Cooney A, Casey D. Articulating the strategies for maximising the inclusion of people with dementia in qualitative research studies. Dementia. 2015;14(6):800–24.

  67. Hoy J. The space between: making room for the unique voices of mental health consumers within a standardized measure of mental health recovery. Adm Policy Ment Health Ment Health Serv Res. 2014;41(2):158–76.

  68. Novek S, Wilkinson H. Safe and inclusive research practices for qualitative research involving people with dementia: a review of key issues and strategies. Dementia. 2019;18(3):1042–59.

  69. Perfect D, Griffiths AW, Vasconcelos Da Silva M, Lemos Dekker N, McDermid J, Surr CA. Collecting self-report research data with people with dementia within care home clinical trials: Benefits, challenges and best practice. Dementia. 2021;20(1):148–60.

  70. Hellström I, Nolan M, Nordenfelt L, Lundh U. Ethical and methodological issues in interviewing persons with dementia. Nurs Ethics. 2007;14(5):608–19.

  71. Hasek W. An interactional analysis of adult cognitive assessment. Electron Theses Diss. 2015.

  72. Jones D, Wilkinson R, Jackson C, Drew P. Variation and interactional non-standardization in neuropsychological tests: the case of the Addenbrooke’s Cognitive Examination. Qual Health Res. 2019.

  73. Krohne K, Torres S, Slettebø Å, Bergland A. Individualizing standardized tests: physiotherapists’ and occupational therapists’ test practices in a geriatric setting. Qual Health Res. 2013;23(9):1168–78.

  74. Hubbard G, Downs MG, Tester S. Including older people with dementia in research: challenges and strategies. Aging Ment Health. 2003;7(5):351–62.

  75. Kuring JK, Mathias JL, Ward L. Prevalence of depression, anxiety and PTSD in people with dementia: a systematic review and meta-analysis. Neuropsychol Rev. 2018;28(4):393–416.

  76. Ward A, Jensen AM, Ottesen AC, Thoft DS. Observations on strategies used by people with dementia to manage being assessed using validated measures: A pilot qualitative video analysis. Health Expect. 2023;26(2):931–9.

  77. Brooks J, Gridley K, Parker G. Doing research in care homes: the experiences of researchers and participants. Soc Res Pract. 2019;8(Autumn):19–27.

  78. NIHR Dissemination Centre. Themed review: Advancing Care - research with care homes; 2017.

  79. Towers AM, Smith N, Allan S, et al. Care home residents’ quality of life and its association with CQC ratings and workforce issues: the MiCareHQ mixed-methods study. Southampton (UK): NIHR Journals Library; 2021.

  80. Griffiths AW, Smith SJ, Martin A, Meads D, Kelley R, Surr CA. Exploring self-report and proxy-report quality-of-life measures for people living with dementia in care homes. Qual Life Res. 2019.

  81. Moyle W, Murfield JE, Griffiths SG, Venturato L. Assessing quality of life of older people with dementia: a comparison of quantitative self-report and proxy accounts. J Adv Nurs. 2012;68(10):2237–46.

  82. Arons AMM, Krabbe PFM, Schölzel-Dorenbos CJM, van der Wilt GJ, Olde Rikkert MGM. Quality of life in dementia: a study on proxy bias. BMC Med Res Methodol. 2013;13:110.

  83. Trigg R, Watts S, Jones R, Tod A. Predictors of quality of life ratings from persons with dementia: the role of insight. Int J Geriatr Psychiatry. 2011;26(1):83–91.

  84. Phillipson L, Smith L, Caiels J, Towers AM, Jenkins S. A cohesive research approach to assess care-related quality of life: lessons learned from adapting an easy read survey with older service users with cognitive impairment. Int J Qual Methods. 2019;18:1–13.

  85. Webb J, Williams V, Gall M, Dowling S. Misfitting the research process: shaping qualitative research “in the field” to fit people living with dementia. Int J Qual Methods. 2020;19:1–11.

  86. Thorgrimsen L, Selwood A, Spector A, Royan L, de Madariaga Lopez M, Woods RT, Orrell M. Whose quality of life is it, anyway? Alzheimer Dis Assoc Disord. 2003;17(4):201–8.

  87. Lezak MD. Neuropsychological assessment. Oxford University Press; 2012.

  88. Krohne K. The test encounter: a qualitative study of standardized testing in a geriatric setting. University of Oslo; 2014.

  89. Fowler Jr FJ, Mangione TW. Standardized survey interviewing: Minimizing interviewer-related error. Sage; 1990.

  90. Juniper EF. Validated questionnaires should not be modified. Eur Respir J. 2009;34(5):1015–7.

  91. Antaki C. Interviewing persons with a learning disability: how setting lower standards may inflate well-being scores. Qual Health Res. 1999;9(4):437–54.

  92. Antaki C, Houtkoop-Steenstra H, Rapley M. “Brilliant. Next question...”: high-grade assessment sequences in the completion of interactional units. Res Lang Soc Interact. 2000;33(3):235–62.

  93. Houtkoop-Steenstra H, Antaki C. Creating happy people by asking yes-no questions. Res Lang Soc Interact. 1997;30(4):285–313.

  94. Gobo G, Mauceri S. Rescuing the survey from the surveyists. In: Gobo G, Mauceri S, editors. Constructing survey data. London: SAGE Publications; 2013.

  95. Shepherd V, Wood F, Griffith R, Sheehan M, Hood K. Protection by exclusion? The (lack of) inclusion of adults who lack capacity to consent to research in clinical trials in the UK. 2019.

  96. The DEEP-Ethics Gold Standards for Dementia Research. 2020. Accessed 28 Nov 2023.

  97. Kitwood T. Rethinking dementia. Open University Press; 1997.

  98. Brooker D, Latham I. Person-centred dementia care: making services better with the VIPS framework. London; Philadelphia: Jessica Kingsley Publishers; 2016.

  99. Evans R, Brocklehurst P, Ryan J, Hoare Z. The impact of different researchers to capture quality of life measures in a dementia randomised controlled trial. Trials. 2023;24(1):1–9.

  100. Phillipson L, Towers AM, Caiels J, Smith L. Supporting the involvement of older adults with complex needs in evaluation of outcomes in long‐term care at home programmes. Health Expect. 2022;25(4):1453–63.

  101. NIHR Journals Library/Information for Authors. Accessed 28 Nov 2023.


This PhD is funded by an NIHR School for Social Care Research Individual Career Development Award. It is attached to the DETERMIND programme of dementia research and hosted by the University of York School for Business and Society.

For the purpose of open access a Creative Commons Attribution (CC BY) licence is applied to any Author Accepted Manuscript version arising from this submission.


This paper reports the findings of a review undertaken as part of a PhD funded by an NIHR School for Social Care Research Individual Career Development Award. The views expressed are those of the authors and not necessarily those of the NIHR SSCR, NIHR or Department of Health and Social Care.

Author information

KG designed the review, conducted the searches, selected included reports, extracted data and drafted the paper. KB and YB supervised the review and edited drafts of the paper. KB read and reviewed two studies to quality check data extraction. All authors read and approved the final manuscript.

Authors’ information

KG is a research fellow at the University of York specialising in dementia research methods. She is currently undertaking a PhD, funded by an NIHR School for Social Care Research Career Development Award, looking at the process of standardised data collection from people with dementia with a view to improving future research practice.

Corresponding author

Correspondence to Kate Gridley.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Glossary of Measures.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Gridley, K., Baxter, K. & Birks, Y. How do quantitative studies involving people with dementia report experiences of standardised data collection? A narrative synthesis of NIHR published studies. BMC Med Res Methodol 24, 43 (2024).
