The ICC is a nuisance parameter that must be specified *a priori* when planning a cluster randomized trial. The magnitude of this coefficient has a major impact on power, particularly when the number of randomized clusters is small. Our results were derived for a continuous outcome, but in their simulation study, Donner and Klar [24] showed that power never differs by more than one percentage point between continuous and binary outcomes. Moreover, we did not take into account any potential variability in cluster size, which is known to reduce power [25]. When planning cluster randomized trials, variability in cluster size is rarely taken into account, and the cluster size *m* is generally replaced by the mean cluster size; an underestimation of the ICC may therefore be expected to have similar consequences when cluster size varies as when it is constant. In the end, an underestimation of the ICC during planning could lead to a severely underpowered study and thus questionable results.
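The sensitivity of power to the postulated ICC can be illustrated with the standard design-effect approximation for a two-arm trial with a continuous outcome; the effect size, cluster size, and ICC values below are illustrative assumptions, not figures from the study:

```python
import math

def crt_power(delta, sigma, m, k, rho, z_alpha=1.959964):
    """Approximate power of a two-arm cluster randomized trial with
    k clusters of size m per arm, via the design effect
    DE = 1 + (m - 1) * rho (normal approximation, continuous outcome)."""
    de = 1 + (m - 1) * rho               # design effect (variance inflation)
    n = m * k                            # subjects per arm
    se = sigma * math.sqrt(2 * de / n)   # SE of the difference in means
    z = delta / se
    # standard normal CDF via the error function (one-sided power term)
    return 0.5 * (1 + math.erf((z - z_alpha) / math.sqrt(2)))

# Planning with rho = 0.01 suggests roughly 90% power ...
planned = crt_power(delta=0.3, sigma=1.0, m=30, k=10, rho=0.01)
# ... but if the true ICC is 0.05, the same design is clearly underpowered.
actual = crt_power(delta=0.3, sigma=1.0, m=30, k=10, rho=0.05)
print(f"planned: {planned:.2f}, actual: {actual:.2f}")
```

With only 10 clusters of 30 subjects per arm, underestimating the ICC by a few hundredths costs tens of percentage points of power.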

In cluster randomized trials, it is well known that, for a fixed total number of subjects, the higher the number of clusters (and thus the smaller the average cluster size), the higher the power [2, 4, 5, 14, 24, 26, 27]. In the extreme case of clusters of size one, individuals themselves are randomized and no power is lost to correlation between subjects. It has also been shown that increasing cluster size improves power only up to a threshold that depends on the value of the ICC [24, 27]. Therefore, when planning a cluster randomized trial, the optimal strategy is to randomize a large number of clusters [1, 2, 12, 29]. Such a strategy first decreases the total sample size required for a pre-specified power and second, as our results show, protects against the loss of power induced by underestimating the ICC at the planning stage. However, logistical constraints may limit the number of randomized clusters: the review by Eldridge *et al* [6] noted that half of the cluster randomized trials analyzed had fewer than 29 clusters in each arm. For most cluster randomized trials, therefore, the *a priori* postulated value of the ICC has a great impact on power.
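The trade-off between the number of clusters and cluster size at a fixed total sample size can be sketched with the same design-effect approximation (the layouts and effect size below are illustrative assumptions):

```python
import math

def crt_power(delta, sigma, m, k, rho, z_alpha=1.959964):
    """Design-effect power approximation: k clusters of size m per arm."""
    de = 1 + (m - 1) * rho
    se = sigma * math.sqrt(2 * de / (m * k))
    return 0.5 * (1 + math.erf((delta / se - z_alpha) / math.sqrt(2)))

# 300 subjects per arm and ICC = 0.05, split into different layouts:
# (k clusters, m subjects per cluster); k * m = 300 throughout.
layouts = [(10, 30), (30, 10), (100, 3), (300, 1)]
powers = [crt_power(0.3, 1.0, m, k, 0.05) for k, m in layouts]
for (k, m), p in zip(layouts, powers):
    print(f"k={k:3d}, m={m:2d}: power = {p:.2f}")
```

Power increases monotonically with the number of clusters, and the layout with clusters of size one (individual randomization) gives the highest power, consistent with the references above.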

When planning trials, the *a priori* postulated ICC will rarely be very reliable. During the study, an interim estimate of the ICC can be obtained, allowing the sample size to be adjusted [11]. But this interim estimate is itself subject to error, as shown in the study by Moore et al [28], in which the interim ICC was 0.012 and the final one 0.031. A sensitivity analysis must therefore be undertaken at the planning stage to account for uncertainty in the ICC. In extreme situations, when very few clusters can be randomized, such a sensitivity analysis may reveal so high a risk of an underpowered study that it provides arguments for not performing the study at all.
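Such a sensitivity analysis can be as simple as a grid of plausible ICC values; with very few clusters, power degrades quickly across the grid (the design and ICC grid below are illustrative assumptions, using the same normal-approximation power formula):

```python
import math

def crt_power(delta, sigma, m, k, rho, z_alpha=1.959964):
    """Design-effect power approximation: k clusters of size m per arm."""
    de = 1 + (m - 1) * rho
    se = sigma * math.sqrt(2 * de / (m * k))
    return 0.5 * (1 + math.erf((delta / se - z_alpha) / math.sqrt(2)))

# A few-cluster design (k = 6 clusters of m = 50 per arm):
# tabulate power over a range of plausible ICC values.
for rho in (0.01, 0.02, 0.05, 0.10):
    print(f"ICC = {rho:.2f}: power = {crt_power(0.4, 1.0, 50, 6, rho):.2f}")
```

If such a table shows power collapsing over the plausible ICC range, that is an argument for adding clusters, or for not running the trial.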

When reporting study results, investigators should publish both the ICC used during planning and the *a posteriori* estimated one, as recommended initially by some authors and recently by the extension of the CONSORT statement for cluster randomized trials [27, 29–31]. However, such information is rarely available. We studied cluster randomized trials published between January 2003 and December 2004 in the *British Medical Journal*, "which contains more such reports than any other journal" [7], against the published extension of the CONSORT statement [30]. Of 16 published studies, 5 (31.2%) did not report an *a priori* postulated ICC, and 2 reported no sample size calculation. Only 5 (31.2%) reports provided *a posteriori* estimated ICCs (none with confidence intervals). Such under-reporting precludes assessing the discrepancy between the *a priori* postulated ICC and the *a posteriori* estimated one. Reporting both ICCs would help readers "assess the appropriateness of the original sample size calculations as well as the magnitude of the clustering for each outcome" [30] and help investigators design future trials [1, 27, 31]. It would also help readers interpret trial results, particularly negative ones: a study may prove negative simply through a loss of power induced by an *a priori* underestimation of the ICC. Regarding format, the published *a posteriori* estimated ICC should follow the recommendations of Campbell et al., who advocate providing a description of the data set, information on the method used to estimate the ICC, and the precision of the estimate [32].

In conclusion, our study supports changes in investigators' practices when planning trials and reporting results: taking the uncertainty of the ICC into account by favoring a high number of clusters, and publishing this parameter. For readers, an objective appraisal of trial results, particularly negative ones, requires knowledge of both the *a priori* postulated and the *a posteriori* estimated ICCs.