While simple randomization (*i.e.,* randomization based on a single sequence) may be trusted in large trials with few groups (*e.g.,* 1 intervention group, 1 control group), in a trial like RIPPLE (projected n=200 and 5 total groups), it is unlikely that an equal number of participants will be assigned to each group. Why do we care? If we’ve recruited 200 participants, it’s completely possible that an uneven assignment (*e.g.,* group A [n=45], B [n=40], C [n=20], D [n=35], E [n=60]) would reduce our statistical power, and therefore compromise our analysis.

A better option in our case is blocked randomization, in which participants are randomly assigned in blocks. For example, with five groups and a block size of five, for every five enrolled participants, one will be randomly assigned to each group, thus ensuring a balanced sample size across groups over time.
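To make this concrete, here is a minimal sketch of blocked randomization in Python. The group labels, sample size, and block size are placeholders matching the example above, not RIPPLE's actual randomization code:

```python
import random

def blocked_randomization(n_participants, groups, block_size):
    """Assign participants to groups in randomly permuted blocks.

    Each block contains block_size // len(groups) slots per group,
    shuffled independently, so group sizes stay balanced after
    every completed block.
    """
    if block_size % len(groups) != 0:
        raise ValueError("block size must be a multiple of the number of groups")
    assignments = []
    while len(assignments) < n_participants:
        block = groups * (block_size // len(groups))  # equal slots per group
        random.shuffle(block)                         # randomize order within the block
        assignments.extend(block)
    return assignments[:n_participants]

# Hypothetical example: 5 groups, block size 5, 200 participants
groups = ["A", "B", "C", "D", "E"]
schedule = blocked_randomization(200, groups, block_size=5)
print({g: schedule.count(g) for g in groups})  # exactly 40 per group
```

Because 200 is a multiple of the block size, every group ends up with exactly 40 participants; simple randomization offers no such guarantee.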

How is block size determined?

First, block size must be a multiple of the total number of groups (in our case, 4 intervention groups and 1 control group, so a block size of 5, 10, 15…). There are pros and cons to smaller (e.g., 5) vs. larger (e.g., 40) blocks. Smaller blocks carry a higher risk of bias: even though our trial is double-blinded, if I am unblinded and see one or two participants’ assignments within a block, I have an increased likelihood of guessing the remaining participants’ assignments in that block. The pro to a smaller block size is that it’s less complicated mathematically (*i.e.,* with a block size of 5 there are only 120 possible orderings of a block, but with a block size of 10, two participants per group, there are 113 400!). On the other hand, with a larger block, even if I’m unblinded to a handful of participants’ assignments, it is very unlikely that I could predict future participants’ assignments. While a larger block size is more rigorous, it turns out that programming blocks at all is logistically difficult for our intervention development company, and at the end of the day *any* block is better than no block at all!
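Where do those counts come from? The number of distinct orderings of one block is a multinomial coefficient: block_size! divided by (slots per group)! for each group. A quick sketch (the function name is mine, for illustration):

```python
from math import factorial

def distinct_blocks(block_size, n_groups):
    """Count the distinct orderings of one permuted block:
    block_size! / (slots_per_group!)**n_groups (a multinomial coefficient)."""
    slots = block_size // n_groups  # participants per group within the block
    return factorial(block_size) // (factorial(slots) ** n_groups)

print(distinct_blocks(5, 5))   # 120
print(distinct_blocks(10, 5))  # 113400
```

With one participant per group (block size 5), this reduces to 5! = 120; with two per group (block size 10), the repeated labels cut 10! = 3 628 800 down to 113 400 distinct blocks.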
