In a group or cluster regression discontinuity design (GRDD), a threshold or cutoff value of an assignment score variable measured at a group or cluster level is used to assign groups or clusters to the intervention or control conditions ( ; ). As such, the GRDD is a group- or cluster-level analog to the more familiar individual-level regression discontinuity design (RDD) ( ; ; ; ).

Special methods are needed for analysis and sample size estimation for GRDDs. This page provides guidance for these designs, as detailed below and in the GRDD sample size calculator. A variety of issues are discussed, and many references based on work done for RDDs are also relevant for GRDDs, including information on sample size estimation.
Features and Uses
Assignment to Conditions based on a Threshold Value
The signature trait of RDDs is assignment to conditions based on a threshold or cutoff value of a score, also referred to as a running variable, forcing variable, or index. This approach can work because individuals or groups close to the cutoff value are likely to be similar on most other characteristics, so any estimated difference on either side of the cutoff is likely due to the assignment rule ( ; ; ). If an intervention effect is present, then a scatterplot of the outcome by assignment score will reveal a “jump” or discontinuity at the cutoff value, or a change in slope beginning at the cutoff value. The difference in estimated outcome values or in slopes at the cutoff score is the estimated RDD intervention effect.
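As a concrete illustration of this estimation approach, the minimal sketch below simulates individual-level data with a known discontinuity and regresses the outcome on a treatment indicator, the centered assignment score, and their interaction, so that the coefficient on the treatment indicator estimates the jump at the cutoff. All variable names and parameter values are hypothetical and are not drawn from the studies cited on this page.

```python
# Minimal sketch of a sharp RDD analysis; all data and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n, cutoff, true_effect = 1000, 50.0, 5.0

score = rng.uniform(0, 100, n)               # assignment (running) variable
treat = (score >= cutoff).astype(int)        # intervention assigned at or above the cutoff
y = 10 + 0.3 * score + true_effect * treat + rng.normal(0, 4, n)

df = pd.DataFrame({"y": y, "treat": treat, "score_c": score - cutoff})

# Regress the outcome on the treatment indicator, the centered score, and their
# interaction; the coefficient on `treat` estimates the discontinuity at the cutoff.
fit = smf.ols("y ~ treat + score_c + treat:score_c", data=df).fit()
print(fit.params["treat"], fit.bse["treat"])
```

In practice this regression is usually restricted to observations near the cutoff, as discussed under the bias-variance trade-off below.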
The RDD has been used to examine the effect of immediate versus deferred antiretroviral therapy (ART) on retention in HIV care, where ART was provided if the CD4 count was below 350 cells per µl ( ). The design has also been used to assess the association between screening for prostate cancer and mortality, where biopsy-based screening was provided if a participant’s prostate-specific antigen level was at least 4.0 µg/l ( ).

Most methods development for the RDD has occurred in education and econometrics, but use of RDDs in public health, epidemiology, and health care research has been suggested because observational data are commonplace in these settings ( ; ; ; ).

Nested or Hierarchical Design
GRDDs in which groups or clusters are assigned to conditions based on a group-level summary of a variable at pretest have a hierarchical structure similar to group- or cluster-randomized trials (GRTs). However, all other things being equal, the number of groups required for a GRDD can be two to three times greater than that of a GRT ( ; ). Even so, when randomization is not possible, the GRDD can be a good alternative that supports strong causal inference.

In GRDDs, participants are nested within groups and measurements are nested within members. If the assignment score is based on group-level summaries of the outcome at pretest, then repeated observations on the outcome are possible ( ). In cohort designs the same participants are measured at pretest and post-test, while in cross-sectional designs different participants are observed at each measurement occasion.

Appropriate Use
GRDDs can be employed in a wide variety of settings and populations to address a wide variety of research questions. They are an appropriate design if group randomization is not possible and the investigator wants to evaluate an intervention for which:
- groups are assigned to conditions based on a threshold value of a score variable,
- groups close to the threshold are not expected to differ in the absence of the intervention, and
- the score variable is the only source of the discontinuity.
Bias-Variance Trade-Off
RDD analysis is valid for individuals or groups close to the cutoff value. However, the number of observations in a narrow band surrounding this value is usually limited, yielding an estimate of the intervention effect with a large variance. More observations can be included by increasing the width of the band surrounding the cutoff value, but increasing this bandwidth may yield biased estimates because assumptions about the trend on either side of the cutoff may not hold ( ). To address this bias-variance trade-off, an optimal bandwidth can be chosen on the basis of a criterion that minimizes the mean square error of the intervention effect estimate ( ).
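To see the trade-off numerically, the minimal sketch below re-fits the same local regression within several candidate bandwidths around the cutoff on simulated data whose true trend is curved; narrow bandwidths give noisy estimates and wide bandwidths give biased ones. The data, bandwidth values, and variable names are hypothetical, and in practice a formal mean-square-error-based bandwidth selector would replace this informal comparison.

```python
# Illustrative sketch of the bandwidth (bias-variance) trade-off; hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n, cutoff = 2000, 50.0
score = rng.uniform(0, 100, n)
treat = (score >= cutoff).astype(int)
# The true trend is curved, so a linear fit far from the cutoff is misspecified (bias).
y = 0.002 * (score - 50) ** 2 + 3.0 * treat + rng.normal(0, 2, n)
df = pd.DataFrame({"y": y, "treat": treat, "score_c": score - cutoff})

for h in (2, 5, 10, 25, 50):                          # candidate bandwidths
    local = df[df["score_c"].abs() <= h]              # keep observations within +/- h
    fit = smf.ols("y ~ treat + score_c + treat:score_c", data=local).fit()
    print(f"h={h:>2}: effect={fit.params['treat']:.2f}, "
          f"SE={fit.bse['treat']:.2f}, n={len(local)}")
```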
Intervention Assignment

RDDs are a form of observational study because assignment to conditions is not random. As such, one serious concern is the potential for participants or groups to manipulate their assignment score so as to obtain the intervention ( ; ). However, an RDD can provide strong evidence for causal inference for individuals or groups near the cutoff if there is no expectation for outcomes to differ in the absence of the intervention ( ).

Treatment Compliance
If there is perfect compliance with the assigned condition, the design is said to be a “sharp” RDD. Designs in which perfect compliance is not achieved are said to be “fuzzy” RDDs.
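One common analytic approach for a fuzzy RDD, sketched below on hypothetical data and not necessarily the approach used in the sources cited here, scales the discontinuity in the outcome by the discontinuity in the probability of actually receiving the intervention (a local Wald-type estimator).

```python
# Illustrative sketch of a fuzzy-RDD (local Wald) estimator; hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n, cutoff = 2000, 0.0
score = rng.normal(0, 1, n)
assigned = (score >= cutoff).astype(int)
# Imperfect compliance: crossing the cutoff raises the chance of receiving the
# intervention from 20% to 80% rather than from 0% to 100%.
received = rng.binomial(1, np.where(assigned == 1, 0.8, 0.2))
y = 1.0 + 0.5 * score + 4.0 * received + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "assigned": assigned, "received": received, "score": score})

rhs = "assigned + score + assigned:score"
jump_y = smf.ols(f"y ~ {rhs}", data=df).fit().params["assigned"]         # outcome jump
jump_d = smf.ols(f"received ~ {rhs}", data=df).fit().params["assigned"]  # compliance jump
print("estimated effect near the cutoff:", jump_y / jump_d)
```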
Multiple interpretations of the intervention effect are possible with RDDs. Cappelleri and Trochim ( ) indicate that the distinction between sharp and fuzzy RDDs is analogous to the distinction between “intention-to-treat” and “treatment-on-the-treated” analyses in randomized settings, respectively. Cattaneo et al. ( ) describe similar interpretations, but add the effect of assigning the intervention to all participants as a possibility for fuzzy RDDs.

Intraclass Correlation
One challenging feature of GRDDs is that members of the same group usually share some physical, geographic, social, or other connection. Those connections create the expectation for a positive intraclass correlation (ICC) among observations taken on members of the same group, as members of the same group tend to be more like one another than like members of other groups. Positive ICC reduces the variation among members of the same group but increases variation among the groups, which in turn increases the variance of group-level statistics. Complicating matters further, the degrees of freedom (df) available to conduct inference on the intervention effect are based on the number of groups and so are often limited. As with GRTs, any GRDD analysis that ignores the extra variation (or positive ICC) or the limited df will have an inflated type I error rate.
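As a rough numerical illustration of why this matters, the sketch below computes the familiar variance inflation factor 1 + (m − 1) × ICC for a group mean and a degrees-of-freedom count based on the number of groups; the values are hypothetical, and the df convention shown (total groups minus 2) is only one common choice.

```python
# Illustrative sketch: how positive ICC inflates the variance of group-level means.
# All numbers are hypothetical; df shown uses one common convention (total groups - 2).

def variance_inflation(m, icc):
    """Design effect for a group mean with m members and intraclass correlation icc."""
    return 1 + (m - 1) * icc

members_per_group = 50
groups_per_condition = 10
icc = 0.02  # even small ICCs matter when groups are large

vif = variance_inflation(members_per_group, icc)
df = 2 * groups_per_condition - 2  # df for the intervention test based on groups

print(f"variance inflation factor: {vif:.2f}")   # ~1.98: nearly double the variance
print(f"degrees of freedom for the intervention test: {df}")
```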
Solutions
The recommended solutions to these challenges are to
- reflect the hierarchical structure of the design in the analytic plan (see the sketch after this list),
- assess RDD assumptions using established tests, and
- estimate the sample size for the GRDD based on realistic and data-based estimates of the ICC and the other parameters indicated by the analytic plan.
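As one way to reflect the hierarchical structure in the analytic plan, the minimal sketch below fits a linear mixed model with a random intercept for group, entering the intervention indicator, the centered group-level assignment score, and their interaction as fixed effects. It is an illustration on simulated data with hypothetical names and values, not the specific model recommended by the sources cited on this page.

```python
# Minimal sketch of a GRDD analysis that reflects the hierarchical structure:
# members nested within groups, with a random intercept for group.
# Hypothetical simulated data; not the specific model recommended in the cited sources.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_groups, members_per_group, cutoff, true_effect = 30, 40, 0.0, 2.0

group_score = rng.normal(0, 1, n_groups)          # group-level assignment score
treat = (group_score >= cutoff).astype(int)       # groups at or above cutoff get intervention
group_effect = rng.normal(0, 0.5, n_groups)       # random group effect (positive ICC)

rows = []
for g in range(n_groups):
    y = (5 + 1.0 * group_score[g] + true_effect * treat[g]
         + group_effect[g] + rng.normal(0, 2, members_per_group))
    rows.append(pd.DataFrame({
        "y": y,
        "group": g,
        "treat": treat[g],
        "score_c": group_score[g] - cutoff,
    }))
df = pd.concat(rows, ignore_index=True)

# Random intercept for group; the coefficient on `treat` estimates the discontinuity.
fit = smf.mixedlm("y ~ treat + score_c + treat:score_c", data=df, groups=df["group"]).fit()
print(fit.summary())
```

Note that MixedLM reports large-sample (z-based) tests for the fixed effects; for a GRDD with a limited number of groups, inference on the intervention effect should instead use degrees of freedom based on the number of groups.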