Table 2 Measure administration, participant views and author reflections

From: How do quantitative studies involving people with dementia report experiences of standardised data collection? A narrative synthesis of NIHR published studies

Author (year) and study

Measure administration

Participants’ views on data collection

Author reflections on data collection processes and tools

[51] Allan (2019)

Feasibility study of a falls intervention

Data were collected during a home visit by one of three clinical trials assistants

Process evaluation involving people with dementia, carers and research workers looked at data collection processes. Research workers expressed concern about (a) the duration of assessments and (b) the wording of measures (use of double negatives in the MFES; lack of clarity in the QOL-AD)

The authors noted that, even though measures had good response rates, the wording was sometimes complex and difficult to explain to people with dementia.

They stressed the importance of additional training for research workers to ensure a consistent approach and minimise missing data.

[52] Banerjee (2013)

Multicentre RCT of two antidepressants for people with dementia

No information

Not reported

The authors comment that: ‘…measurement error caused by the effect of cognitive impairment on domains such as memory, language and reasoning is a potential limitation. However, the study included only those measures best validated for use in dementia.’ (p35)

[53] Bowen (2016)

A cross-sectional study of visual impairment in people with dementia

The study recruiter carried out SMMSE on the day that the participant was consented into the study

Eye examinations were performed by an optometrist in participants’ homes and care homes

Not reported

The authors note that a considerable proportion of SMMSE assessments were not available ‘owing to a range of factors, notably poor participant co-operation’ (p114) and highlight that unavailable SMMSE assessments were likely to come from patients with greater cognitive impairment.

[44] Clare (2019)

RCT of goal-oriented cognitive rehabilitation

Assessments were completed by 15 trial researchers, all with backgrounds in psychology, nursing or clinical research. They received training in administering all outcome measures

The researchers recorded assessment data manually during the participant (home) visits

As part of the process evaluation, an independent researcher interviewed participants in 3 sites about their experiences of the therapy sessions. The interviews did not appear to cover data collection processes directly, but data were collected during those therapy sessions. Overall, the therapy was received positively by both carers and people with dementia.

No specific reflections on the process of data collection.

Table 57 lists all missing data in descending order of % missing. This includes ‘participants who withdrew counted as missing data’ so combines overall study participation with response to particular measures. Percentages missing for self-report measures ranged from 0.4% at baseline for HADS and DEMQOL to 21.9% for the TEA distractor task at 9 months. The authors do not reflect on this in the full text.

[45] Clarkson (2021)

Pragmatic randomised trial of dementia home support

Participants were interviewed face to face at home, often with carers present, by interviewers who had ‘received online training about administering the standardised measures in a consistent and objective manner.’ ([14]; p2735)

Embedded qualitative study [14] collected incidental comments from people with dementia about the experience of responding to standardised measures. Some participants felt anxious during the interview or were confused by questions and uncertain about how to reply.

The authors noted that the embedded study raised issues about the use of standardised measures that may be cognitively demanding for participants with dementia and said ‘the research interview is not a neutral encounter.’ (p29)

[45] Clarkson (2021)

Prospective observational study of dementia home support

Participants were interviewed at home by research staff from participating trusts. Participants met their interviewer for the first time at this point.

Embedded qualitative study [61] focussed on carers’ incidental comments during data collection, supplemented by a focus group of professionals (no one with dementia).

No specific reflections relating to data collection from people with dementia (other than those noted above).

[54] Gathercole (2021)

Pragmatic RCT of assistive technology and telecare

No information

There was an embedded ethnographic study, but this focussed on the use of assistive technology, not the research process or data collection.

The ‘burden of assessments’ is mentioned as one possible reason for some dyads not responding to measures despite continuing to participate in the trial, but there is no further explanation or discussion of this.

The authors advise caution regarding generalisability as 8% of the sample at baseline (and 16–19% of the sample still included at follow-ups) did not participate in any interviews.

[55] Gridley (2016)

Feasibility study of life story work with people with dementia (care homes)

Face-to-face data collection in care homes. Two researchers in the project team, supported by a research assistant, collected data at all time points.

Process evaluation involved qualitative interviews with 9 participants with dementia living in care homes, as well as staff and carers, about both the implementation of life story work and the acceptability of the research. Participants with dementia said they ‘didn’t mind’ answering questions, but some carers were concerned the person they cared for might have felt some anxiety when being questioned.

‘Completion of outcome measures by people with dementia was challenging for a number of reasons, including: the capacity and frailty of the participants; the context in which data collection took place (care homes getting on with their daily routines)’ (p69)

‘The measures chosen were all designed to be completed by people with dementia but response rates leading to usable data were low and varied between the measures … The main reason for participants not completing a measure was that they were not able to understand and/or respond to the questions.’ (p69)

[55] Gridley (2016)

Feasibility study of life story work with people with dementia (mental health inpatient assessment units)

Face-to-face data collection in inpatient units. Two researchers in the project team collected data at all time points.

One participant with dementia residing in a mental health inpatient assessment unit was interviewed for the process evaluation (see above)

As above

[46] Howard (2020)

RCT of Minocycline for mild Alzheimer’s Disease

No information

Not reported

Authors suggest that the low completion of SMMSE at follow-up was due to people on the higher dose withdrawing from the treatment and: ‘Although the trial protocol specified that outcome assessments should be obtained irrespective of treatment compliance, this could not always be achieved despite the vigorous efforts of the trial team.’ (p22)

[56] Iliffe (2015)

EVIDEM-C: mixed-method longitudinal study looking at incontinence

No information

A feasibility study included interviews of carer participants about their experiences of data collection, but people with dementia were not interviewed.

There is no reflection on the small numbers of completed self-report measures in the discussion or limitations sections.

[47] Kehoe et al. (2021)

RCT to study the effects of the antihypertensive drug losartan, in addition to normal care, compared with a placebo

Self-reported data were obtained during a face-to-face assessment by a researcher who, wherever possible, arranged to meet the participant where they felt most comfortable (e.g. at home or at the clinical research centre)

Embedded qualitative study looked at recruitment but not data collection

Authors note that, overall, ‘approximately 19% of the data or data sets were missing or incomplete, the majority of which related to the data collected from the various assessment tools used to collect some of the secondary outcomes’, and that older participants were more likely to have missing data.

[57] Kinderman (2018)

RCT to evaluate the impact of a human rights-based approach to dementia care in inpatient ward and care home settings

Limited information, but the authors note in the discussion of the QOL-AD that ‘…it quickly became obvious that the majority of people living with dementia in the care homes and wards visited were unable to complete the measure, even with assistance from skilled clinicians.’ (p53)

Not reported

QOL-AD: Authors reflect that low QOL-AD response rates and heavy reliance on proxy measures (which consistently rated QoL lower than self-reports) call into question whether this was an appropriate measure to use in this context.

IDEA Q: Authors note that, despite being developed collaboratively with people living with (the early stages of) dementia, staff and carers, the IDEA questionnaire was not an effective tool as it tended towards a floor effect, and was too complex for people with later stage dementia.

ADAS-Cog: Most people refused this because of its length: ‘On reflection, the use of a briefer screening assessment … might have yielded more useful results. Although these measures are less detailed than the ADAS-Cog, there is a greater chance that people would have engaged with them…’ (p59)

[58] Moniz-Cook (2017)

Cluster randomised trial of online training for care home staff to deliver interventions for challenging behaviour in dementia (Study 2)

In most care homes, two researchers interviewed residents and care staff concurrently in separate rooms. In some instances, additional visits were arranged to complete interviews if, for example, participants became tired.

Not reported. Process evaluation included data from interviews with care home staff and a focus group with ‘stakeholders’ (not including anyone with dementia). The focus of both was the implementation of the intervention, not data collection processes.

A section of the report focusses on missing data but says little about the causes of missing data other than ‘The researchers endeavoured to collect as many data as they could. However, two types of missing data were inevitable: missing items within a measure and missing time points. Missing items were attributable to researcher error or participants declining to answer individual questions. When questionnaires had recommended rules for managing such missing items, these were applied.’ (p40)

[58] Moniz-Cook (2017)

Observational cohort study of people with dementia and challenging behaviour living at home and their carers (Study 4)

All interviews were conducted in the person’s home unless they requested an alternative location. Occasionally interviews were broken into chunks (either at the participant’s request or if the researcher deemed it appropriate)

Not reported. One person with dementia attended the stakeholder consultations (1/39 participants). The focus of discussion was the intervention and wider access to services. No mention of study methods or experiences of data collection.

As in study 2, the authors comment on the discrepancy between self-report and proxy QoL measures but do not reflect on the implications of this or low self-report response rates in the results, discussion, limitations or conclusion sections.

[59] O’Brien et al. (2021)

Pilot RCT of a management toolkit for Lewy Body Dementia

The setting for the study was secondary care memory assessment and movement disorder services in England. All assessments were undertaken by members of the National Institute for Health Research Clinical Research Network

As part of an embedded qualitative study it is noted that ‘… patients and carers highlighted some issues with question wording, typically with the same questions identified as problematic by clinicians’ (p39). No further information about the nature of the feedback or which questions were referred to.

In the limitations section the authors note: ‘There were occasionally some tensions between research paradigms, in particular in relation to managing qualitative feedback on question wording in the assessment toolkits, with the value given to ‘validated’ questions derived from clinical research.’ (p44). No further explanation is given.

[48] Orgeta (2015)

RCT of individual cognitive stimulation therapy for dementia

All research activities took place in participants’ homes. Showcards were used to support participants with dementia to respond to the measures. If participants felt uncomfortable with the assessment this was discontinued (and rescheduled where appropriate).

An embedded qualitative study explored experiences of 22 participants with dementia, but the focus was the intervention. No mention of participants’ experiences of data collection.

The authors reflect that the clinical criteria for inclusion of people with dementia were a barrier to recruitment.

[49] Orrell (2017)

RCT of Maintenance Cognitive Stimulation Therapy (MCST)

No information (page 25 explains that: ‘Half of the sample was recruited from care homes and half was recruited from community settings’, but the report does not specify the context in which data collection took place).

Not reported

Commented that, in their analysis, ‘DEMQOL seemed to be a more sensitive instrument than the QOL-AD for measuring change in quality of life in dementia.’ Alternatively, ‘the two measures may be measuring different aspects of quality of life.’ (p37) The authors called for more research to explore the differences between these two measures further.

[49] Orrell (2017)

Maintenance Cognitive Stimulation Therapy implementation study (observational)

Interviews with people with dementia were carried out by a researcher or staff member who was trained to undertake the assessment and had training in Good Clinical Practice and taking informed consent.

Three focus groups were conducted with 10 people with dementia and 5 staff members looking at the experience and effect of maintenance cognitive stimulation therapy. No mention of participants’ experiences of data collection.

No reflection on data collection, response rates or experiences of participants with dementia providing data.

[49] Orrell (2017)

RCT of a Carer Supporter Programme and reminiscence intervention

Face-to-face interviews were held at ‘times and venues organised to accommodate the carer’s needs and preferences.’ (p78) The questionnaire for the person with dementia was always completed with the researcher.

Not reported. Participants are generally referred to as carers, although self-report data were collected from people with dementia.

No reflection on data collection, response rates or experiences of participants with dementia providing data.

[60] Surr (2020)

Cluster RCT of dementia care mapping in residential care settings (with a cross-sectional element added at 16 months)

The research took place in care homes. Little information is given about the data collection context other than to note that data were collected by ‘researcher interview’

Process evaluation focussed on implementation of the intervention, not data collection

The response rate was recognised as a problem, and data collected directly from people with dementia were excluded from the analysis. This was noted as a limitation of the study: ‘Owing to the variability in the ability of care home residents with dementia to self-report on measures of BSC and QoL, the primary and secondary analyses were conducted using staff proxy-completed measures’

[50] Woods (2012)

RCT of group reminiscence for people with dementia and carers

Face-to-face interviews were conducted in participants’ homes.

Measures were arranged in a number of booklets. ‘A second visit was sometimes made to complete assessments where an interviewee became tired, or where it was otherwise requested by participants or deemed appropriate by the assessor.’ (p14)

No process evaluation, and this is identified as a limitation in the discussion

From the embedded study of EQ-5D the authors concluded: ‘Participants with dementia were able to complete the EQ-5D in a face-to-face interview, in line with evidence on suitability of this health-related quality-of-life instrument in this patient group.’ (p49)