
Table 1 Scoring criteria for assessment of measurement scales (Adapted from Frances et al. [21])

From: A systematic review and psychometric evaluation of resilience measurement scales for people living with dementia and their carers

Checklist Item

Score

Notes

Domain 1: Conceptual Model

The reasoning for, and a description of, the concept(s) and the population(s) a measure is intended to evaluate should be specified. Assessment in this domain assists in ascertaining whether the measure is likely to capture the intended effect in the population of interest

  

1. Construct to be measured has been defined

1 = Yes

0 = No

As per original checklist

2. The intended respondent population has been described

1 = Yes for ‘a’ and ‘b’ and/or ‘c’

0.5 = Yes for ‘a’ and No for ‘b’ and ‘c’

0 = No for ‘a’ and/or ‘b’ and ‘c’

Checklist item broken down into 3 parts:

a) Study population

b) Original measure population

c) Authors discuss if measure suitable for their study population

3. Conceptual model addresses whether a single construct / scale or multiple subscales are expected

1 = Yes

0 = No

As per original checklist. Must be explicitly stated

Domain 2: Content Validity

The extent to which the questions and sub-scales of a measure are relevant and appropriate for the target population and suitably reflect the concept of interest

  

4. There is evidence that members of the intended respondent population were involved in the PRO measure’s development

1 = Yes for ‘a’

0.5 = No for ‘a’ and Yes for ‘b’ and/or ‘c’

0 = No for ‘a’ and ‘b’ and ‘c’

Checklist item broken down into 3 parts:

a) Related to study

b) Related to original measure or previously validated adaptation

c) Discuss if original/adaptation involvement is suitable for study population

5. There is evidence that experts in the construct of interest were involved in the PRO measure’s development

1 = Yes for ‘a’

0.5 = No for ‘a’ and Yes for ‘b’ and/or ‘c’

0 = No for ‘a’ and ‘b’ and ‘c’

Checklist item broken down into 3 parts:

a) Related to study

b) Related to original measure or previously validated adaptation

c) Discuss if original/adaptation experts are suitable for study population

6. There is a description of the methodology for developing the items/questionnaires (e.g. noting how the respondent population and content experts were accessed and how this process generated the questions in the outcome measure)

1 = Yes for ‘a’

0.5 = No for ‘a’ and Yes for ‘b’ and/or ‘c’

0 = No for ‘a’ and ‘b’ and ‘c’

Checklist item broken down into 3 parts:

a) Related to study

b) Related to original measure or previously validated adaptation

c) Discuss if original/adaptation methodology is suitable for study population

Domain 3: Reliability

The level of consistency of an outcome measure, reflected by correlations between the items at a single time point or over time to ascertain whether items or sub-scales are statistically related

  

7. There is evidence that the PRO measure’s reliability was tested (e.g. internal consistency, test–retest)

1 = Yes

0 = No

As per original checklist. Must relate to study not original measure or previous studies

8. The reported indices of reliability are adequate and/or justified (e.g. ideal r ≥ 0.80; adequate r ≥ 0.70; lower if justified)

1 = Yes

0 = No

As per original checklist. Must relate to study not original measure or previous studies

Domain 4: Construct validity

The extent to which an outcome measure assesses the concept or construct it was designed to reflect

  

9. There is reported quantitative justification that single scale or multiple subscales exist in the PRO measure (e.g. factor analysis, item response theory)

1 = Yes

0 = No

Could be related to either current study or original measure

10. There are findings supporting expected (hypothesised) associations with other existing outcome measures or demographic data

1 = Yes for ‘ai’, and ‘b’ and ‘c’

0.5 = No for ‘ai’ and Yes for ‘aii’, and ‘b’, and ‘c’

OR

0.5 = Yes for ‘ai’ or ‘aii’, and ‘b’ and ‘c’ for some but not all associations in ‘ai’ and/or hypotheses in ‘aii’

0 = No for ‘a’ (i or ii) and/or ‘b’ and/or ‘c’

Checklist item broken down into 3 parts:

a)

i) Known associations between other measures and resilience reported

ii) A priori hypotheses of relationship between other measures and resilience

b) Results relating to resilience measure reported

c) Results match a priori hypotheses and/or known associations (i.e. does ‘a’ match ‘b’)

11. There are findings supporting expected (hypothesised) differences in scores between relevant groups

1 = Yes for ‘ai’, and ‘b’ and ‘c’

0.5 = No for ‘ai’ and Yes for ‘aii’, and ‘b’, and ‘c’

OR

0.5 = Yes for ‘ai’ or ‘aii’, and ‘b’ and ‘c’ for some but not all associations in ‘ai’ and/or hypotheses in ‘aii’

0 = No for ‘a’ (i or ii) and/or ‘b’ and/or ‘c’

Checklist item broken down into 3 parts:

a)

i) Known differences in resilience by group

ii) A priori hypotheses relating to expected differences in resilience by group

b) Results relating to resilience measure reported

c) Results match a priori hypotheses and/or known associations (i.e. does ‘a’ match ‘b’)

12. The measure is intended/designed to measure change over time

1 = Yes, there is evidence of both test–retest reliability and responsiveness to change

OR

1 = There is an explicit statement that the PRO measure is not intended to measure change over time

0 = No

As per original

Domain 5: Scoring and interpretation

A clear description of how scores on the individual items are calculated to derive the final measure, and an explanation of how differences in scores on a measure are understood

  

13. There is documentation of how to score the PRO measure

1 = Yes for ‘a’ and ‘b’

0.5 = Yes for ‘a’ and No for ‘b’

0 = No for ‘a’

Question broken down into 2 parts:

a) Document how measure scored in study

b) Measure scored in the same way as originally intended OR discusses why different

14. A plan for managing and/or interpreting missing responses has been described

1 = Yes

0 = No

As per original

15. Information is provided about how to interpret the PRO scores

1 = Yes for ‘a’ and ‘b’

0.5 = Yes for ‘a’ and No for ‘b’

0 = No for ‘a’ and ‘b’

Checklist item broken down into 2 parts:

a) Information on how to interpret score in study provided

b) Interpret score in same way as originally intended OR discusses why different

Domain 6: Respondent burden and presentation

The time and effort involved in administering and completing a measure. The literacy level required to complete the measure is suggested to be a sixth-grade reading level or lower, or the literacy level is adapted for the target population

  

16. The time to complete is reported and reasonable

1 = Yes

0 = No

Where time to complete was not reported, no assessment of the appropriateness of the number of questions was made, as recommended in the original checklist, because of the variability across studies in terms of populations and intended application

17. There is a description of the literacy level of the PRO measure

1 = Yes

0 = No

As per original

18. The entire measure is available for public viewing

1 = Yes

0 = No

As per original