Theme | Examples | Authors and EMA type |
---|---|---
Facilitators | |
Accepted technology | Ease of use of the interface, familiarity with the mobile device, and brevity of the survey allowed for easy completion. Participants did not take long to habituate to a monitoring device (i.e. the E4). | Price et al. [45] - random Carlozzi et al. [47] - interval
Clinician-monitor | Ability for results to be shared with the treating clinician. | Forster et al. [20] - random |
Compliance | The relatively short questionnaire allowed participants to remain committed. A schedule feature resulted in high compliance and satisfaction. | Gonzalez-Borato et al. [41] - event Juengst et al. [22] - interval
Ecological validity | Higher-frequency repeated symptom assessment in a natural environment over a short period could provide a more valid measure of emotional symptoms and a better indicator of clinically meaningful change at the individual level. The data collection method had little influence on mood and feelings, supporting ecological validity. | Juengst et al. [23] - interval Lenaert et al. [24] - random
Feasibility | Simple technology and text messaging allowed participants to complete assessments easily. Text messages are an efficient method of implementing a “watchful waiting” program after a traumatic event. | Parcella et al. [42] - interval Price et al. [44] - interval
Methodological quality | Combination of objective and subjective measures reduced problems related to overlapping method variance. Mobile ecological momentary assessments (mEMA) better predicted recovery time than the Post-Concussion Symptom Scale (PCSS). | Kratz et al. [36] - interval and event Sufrinko et al. [30] - random
Personal device use reduced costs | Using an individual’s personal device significantly reduced the cost of conducting studies or delivering an intervention. | Price et al. [45] - random
Reliability | Use of EMA may reduce misidentification of individuals with clinically significant symptoms. Data collection minimises recall bias. | Juengst et al. [23] - interval Carlozzi et al. [47] - interval |
Self-monitor | Patients were able to assess and monitor their own symptoms. Users were able to monitor their own progress and rehabilitation. | Forster et al. [20] - random Gonzalez-Borato et al. [41] - event
Temporal relationships | Ability to examine temporal relationships among symptoms. mEMA data across recovery predicted recovery duration better than the PCSS score at any clinic visit, and illustrated symptom patterns that may further inform clinical profiles and guide treatment recommendations. | Carlozzi et al. [47] - interval Sufrinko et al. [30] - random
Barriers | |
Capture full experience | Pain in spinal cord injury (SCI) is multifaceted, and thus ratings of pain intensity do not capture the full breadth of the pain experience in individuals with SCI. Single daily assessments may not be a valid representation of symptoms/progression. | Carlozzi et al. [34] - interval Juengst et al. [23] - interval
Complex technology | EMA requires more sophisticated analytical hardware for monitoring and data capture. User error or internet connection difficulties interfered with completion of assessments. | Carlozzi et al. [34] - interval Kratz et al. [37] - interval and event
Compliance decline | Compliance decreased with every passing testing day. Participants were less likely to respond as days since injury increased. | Forster et al. [20] - random Sufrinko et al. [30] - random |
Response consistency | Difficult to gauge how reliable and realistic the patients’ answers were. The method of self-reporting may contribute to variability. | Forster et al. [20] - random Juengst et al. [23] - interval
Floor/ceiling effects | The presence of a floor effect for EMA pain interference presented an analytical challenge in the data. | Carlozzi et al. [34] - interval
Potential biases | Potential for individuals to change behaviour when they are being monitored. The use of EMA might also be limited by other psychological factors such as social desirability and patients’ individual differences. | Carlozzi et al. [47] - interval Forster et al. [20] - random |
Personalised feedback | Participants requested personalised questions and personalised feedback for a better experience. Participants indicated that they did not believe the data they were providing were doing any good, because they could not see any effect on their treatment. On a few occasions participants temporarily ceased participating in EMA data collection after researchers and clinicians failed to respond to a stress or crisis event recorded in the EMA data. | Price et al. [45] - random Smith et al. [28] - interval, event, and random
Participants negatively affected by prompt frequency | Repeatedly asking participants to think about their pain levels may have exacerbated pain. Participant mood and emotion may have been influenced by the high frequency of prompts. | Todd et al. [39] - random Forster et al. [20] - random
Poor technology | The PDAs were viewed as clunky and out of date compared with smartphones. | Smith et al. [28] - interval, event, and random
Time-burden | EMA is a relatively intense data collection procedure that can be burdensome on the participant. It was more difficult for subjects to enter ratings during busy wake and bedtime routines, which, for people with SCI, often involve lengthy and assisted self-care routines (e.g. bowel and bladder care). | Carlozzi et al. [47] - interval Kratz et al. [37] - interval and event