The Concept of Hybrid Designs

Much attention has been paid to the concept of so-called Hybrid Designs.[1] These are:

  1. Hybrid 1: effectiveness studies – intervention effects on clinical outcomes.
  2. Hybrid 2: studies of both the uptake and the clinical outcomes of interventions.
  3. Hybrid 3: implementation studies – uptake of the intervention, but not clinical outcomes.

These are important distinctions. However, as Curran’s more recent reflections make clear,[2] the term ‘designs’ is misleading: design properly refers to whether a study is randomised, quasi-randomised, cluster, individual, and so on. The hybrid distinction relates instead to the outcomes/end points of a study. In a Hybrid 2 study, for example, one wishes to observe both implementation and outcome variables.

Another way of thinking about these matters aligns with modern causal thinking (see Judea Pearl).[3] This is to consider outcomes along a causal chain:

Policy → high-level management → targeted management → clinical process → outcome.

For example:

Work-force training → provision of Continuing Professional Development → training on a specific process → implementation of that process → effect on the patient.

The theoretical basis for this point, including more detailed descriptions of the levels (links) in the chain and the role of context, is laid out in a BMJ article from 2010.[4] That article provides a framework for thinking about the precision necessary at the different levels and the ‘inconvenient truth’ that effects measurable upstream often cannot be detected downstream with useful levels of precision.
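To make this dilution argument concrete, the following minimal sketch (not from the cited articles; the effect size, transmission fractions, and power calculation are illustrative assumptions) shows how a standardised effect shrinks multiplicatively along the chain, and how the sample size needed to detect it grows with the square of that shrinkage:

```python
# Illustrative sketch only: effect attenuation along a causal chain.
# The upstream effect and per-link transmission fractions are
# hypothetical assumptions, not figures from the cited articles.

upstream_effect = 0.5            # standardised effect (SD units) at the first link
transmission = [0.6, 0.5, 0.4]   # hypothetical fraction of effect passed down each link

effect = upstream_effect
print(f"link 1: effect = {effect:.2f} SD (upstream, e.g. training uptake)")
for i, frac in enumerate(transmission, start=2):
    effect *= frac
    # Per-arm sample size for a two-sample comparison of means,
    # 80% power, two-sided alpha = 0.05:
    # n = 2 * (z_0.975 + z_0.80)^2 / d^2, with z values 1.96 and 0.84.
    n_per_arm = 2 * (1.96 + 0.84) ** 2 / effect ** 2
    print(f"link {i}: effect = {effect:.3f} SD, n per arm = {n_per_arm:,.0f}")
```

Under these hypothetical numbers, a study powered on the first downstream link needs roughly 170 participants per arm, while one powered on the patient-level outcome at the end of the chain needs over 4,000: the same intervention, measured three links further down.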

Richard Lilford, ARC West Midlands Director


References:

  1. Curran GM, Landes SJ, McBain SA, et al. Reflections on 10 years of effectiveness-implementation hybrid studies. Front Health Serv. 2022; 2: 1053496.
  2. Asada Y, Kroll-Desrosiers A, Chriqui JF, Curran GM, Emmons KM, Haire-Joshu D, Brownson RC. Applying hybrid effectiveness-implementation studies in equity-centered policy implementation science. Front Health Serv. 2023; 3: 1220629.
  3. Pearl J, Glymour M, Jewell NP. Causal Inference in Statistics: A Primer. Chichester: Wiley; 2016.
  4. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.