| Challenges | Ability of the models to control for bias |
|---|---|
| (1) Non-randomised design | Limited. Stratification helps to control confounding to some extent, but covariate adjustment in linear mixed effects models provides better adjustment; residual confounding may nonetheless remain.<br>For the matched cohort analysis, matching gave excellent control of confounding, but there may still have been residual confounding by underlying variables not used in the matching. |
| (2) White coat effect | Depends on the validity of the assumed difference due to the 'white coat effect'.<br>The matched cohort analysis results were found to be fairly sensitive to this assumption.<br>In the random coefficients model, this was fully adjusted for by adjusting for intervention/control group at baseline, under the reasonable assumption that the degree of 'white coat effect' did not change over time. |
| (3) High variability in the frequency of readings | Partially. Standardisation equalised the frequency of readings between groups, but the subgroup selection required to achieve this may have produced a biased subgroup. Regression adjustment for the propensity score, or matching, may have only partially addressed this bias by controlling for confounders between groups.<br>Mixed effects models assume that missing data are missing at random. If this assumption holds when estimating the change in BP over time, the difference in frequency of readings would have no effect on the estimated treatment effect, because the change in BP would be correctly modelled in each group. However, if the reason for missing data (or differing frequencies) was more informative in one group than in the other (e.g. indicating low BP in comparator patients), this could have biased the results. |
| (4) Contamination of readings | Not an issue. All methods compared telemonitored BP with surgery-measured BP from comparator patients. |
| (5) Regression to the mean | At least partially. Comparison with the comparator group and matching control this to some extent, but the strength of regression to the mean may differ between treatment and comparator groups. For example, between-group differences in the inclusion probabilities of patients prone to stronger regression to the mean (e.g. those with intermittently high or unstable BP) could contribute to confounding bias. |
| (6) Measurement error | Not addressed by any of the analysis methods based on the standardised data, so the intervention effect may have been attenuated towards zero. The random coefficients model did not fully address this either, although it used all of the multiple BP measurements per patient, which would have improved estimation of within-patient variability and of the underlying true change in BP within each patient. |
| (7) End digit preference | End digit preference compounds measurement error, causing observed values to deviate further from their true values. The analyses were limited in their ability to deal with end digit preference in the same way as for measurement error.<br>If end digit preference, or specific value preference, changed differentially over time between the groups, it may have caused confounding bias. For the matched analysis, patients may not have been matched correctly on systolic BP because of differential end digit bias between groups. Again, we relied on the validity of the assumption about the true BP in each group. Adjustment for group at baseline in a random coefficients model should in theory have adjusted for differences in the strength of digit preference. |
| (8) Withdrawal bias | For the analyses based on the standardised dataset, subgroup selection retained only patients with at least two readings at both baseline and follow-up. Patients who withdrew from the telemonitoring arm, or comparator patients whose BP was measured less frequently, were more likely to be excluded from the analysis, so this problem reduces to the problem of incomparable groups and residual confounding (challenge (1)).<br>The random coefficients model assumes that missing data are missing at random conditional on the covariates used for adjustment. If the missing data mechanisms differed by treatment group and were not accounted for in the statistical model, the results may have been biased. |
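The random coefficients model described for challenges (2) and (8) can be sketched as follows. This is a minimal illustration on simulated data, not the study's actual analysis: the software (Python's `statsmodels`), the simulation parameters (a +5 mmHg 'white coat' shift at baseline, differing BP trajectories by group), and the variable names are all assumptions made for the example. The `group` main effect absorbs the baseline difference between arms, while `group:time` estimates the difference in BP change over time.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_per_arm, n_visits = 50, 6  # illustrative sizes, not the study's

rows = []
for pid in range(2 * n_per_arm):
    group = 1 if pid < n_per_arm else 0
    # Assumed +5 mmHg 'white coat' shift at baseline in the intervention arm
    intercept = 150 + 5 * group + rng.normal(0, 8)
    # Assumed steeper BP decline in the intervention arm
    slope = (-2.0 if group else -0.5) + rng.normal(0, 1)
    for t in range(n_visits):
        rows.append({"patient": pid, "group": group, "time": t,
                     "sbp": intercept + slope * t + rng.normal(0, 5)})
data = pd.DataFrame(rows)

# Random coefficients model: random intercept and slope per patient.
# Adjusting for 'group' at baseline handles the white coat difference,
# assuming its magnitude does not change over time.
model = smf.mixedlm("sbp ~ group + time + group:time", data,
                    groups=data["patient"], re_formula="~time")
result = model.fit()
print(result.params[["group", "time", "group:time"]])
```

The model is fitted by restricted maximum likelihood, so the missing-at-random assumption discussed under challenges (3) and (8) applies: patients with fewer readings still contribute, but informatively missing readings would bias the slope estimates.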
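The propensity score adjustment mentioned under challenge (3) can be illustrated with a simple sketch. This is a generic greedy 1:1 nearest-neighbour matching on an estimated propensity score, under assumed simulated data with a single confounder; the study's actual matching procedure, covariates, and software are not specified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 500
x = rng.normal(0, 1, n)  # hypothetical confounder (e.g. standardised baseline BP)
# Assumed selection mechanism: patients with higher x more often telemonitored
treated = rng.binomial(1, 1 / (1 + np.exp(-(x - 1))))

# Estimate the propensity score P(treated | x) by logistic regression
ps = LogisticRegression().fit(x.reshape(-1, 1), treated).predict_proba(
    x.reshape(-1, 1))[:, 1]

# Greedy 1:1 nearest-neighbour matching on the propensity score,
# without replacement: each treated patient takes the closest unused control
treated_idx = np.where(treated == 1)[0]
control_idx = list(np.where(treated == 0)[0])
pairs = []
for i in treated_idx:
    j = min(control_idx, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    control_idx.remove(j)

imbalance_before = abs(x[treated == 1].mean() - x[treated == 0].mean())
matched_t, matched_c = zip(*pairs)
imbalance_after = abs(x[list(matched_t)].mean() - x[list(matched_c)].mean())
print(f"Covariate imbalance: {imbalance_before:.2f} -> {imbalance_after:.2f}")
```

Matching balances only the covariates that enter the propensity model, which is why residual confounding by unmeasured variables (challenge (1)) remains possible after matching.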
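Regression to the mean (challenge (5)) is easy to demonstrate by simulation: if patients are selected on a high baseline reading, their follow-up readings fall on average even with no intervention at all. The parameter values below (true BP distribution, measurement error, selection threshold) are illustrative assumptions, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_bp = rng.normal(140, 10, n)           # each patient's underlying true systolic BP
baseline = true_bp + rng.normal(0, 8, n)   # baseline reading = truth + measurement error
followup = true_bp + rng.normal(0, 8, n)   # follow-up reading; no treatment applied

# Select on a high baseline reading, as an uncontrolled before/after comparison would
high = baseline > 150
apparent_fall = (baseline - followup)[high].mean()
print(f"Apparent fall with no treatment: {apparent_fall:.1f} mmHg")
```

A comparator group selected the same way experiences the same artefactual fall, which is why the between-group comparison controls this at least partially; the residual concern in the table is that the strength of the effect may differ between arms.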