
Reply

Published: March 24, 2014. DOI: https://doi.org/10.1016/j.ajog.2014.03.046
The intent and purpose of the comparability scoring system that we proposed was to address simultaneously both selection bias and bias due to measured and unmeasured (but recognizable) confounding in observational studies [1].
      The letter by Bannister-Tyrrell et al, which we appreciate, indicates that they may have misunderstood the comparability scoring system. Despite their incomplete understanding of our proposal, Bannister-Tyrrell et al proceeded to make 3 assertions.
The first assertion is that selection bias is unlikely to occur in population studies when the comparison groups are drawn from the same unselected population. However, even when the comparison groups are drawn from the same source population, both selection and confounding biases may still persist with respect to our comparability criteria (geographic setting, healthcare setting, types of healthcare providers, confounding interventions, studied time intervals, and impact of consensus statements). Importantly, these variables are rarely addressed well in population-based data sets, which limits the ability of any standard analysis to capture their impact accurately.
Their second assertion is that the clinical and statistical relevance of the different circumstances of care may be better assessed by the use of multilevel modeling to adjust for unmeasured characteristics. However, multilevel modeling cannot adjust for differences in the timing of medical management or of consensus statements, nor can it untangle the impact of other confounding interventions. Multilevel modeling also has its own drawbacks. These models have a purpose and a place for their application, but the complexity of fitting them comes at a cost: loss of transparency. Moreover, parameters in multilevel models carry a subject-specific interpretation (as opposed to the population-averaged interpretation of simpler, nonnested models). Importantly, these models are prone to break down in smaller studies with multiple levels of nested data.
The third assertion is that studies with small sample sizes from single institutions will yield false-positive findings if they use our proposed comparability score. We disagree; the effect of our proposed comparability scoring system should be exactly the opposite. In fact, in our article [1], we provided 2 examples from previously published single-institution studies [2,3] in which the presupposed positive statistical associations disappeared once the analysis was adjusted for our comparability score, thereby decreasing, not increasing, false-positive results.

      References

1. Vintzileos AM, Ananth CV, Smulian JC. The use of a comparability scoring system in reporting observational studies. Am J Obstet Gynecol. 2014;210:112-116.
2. McPherson JA, Harper LM, Odibo AO, Roehl KA, Cahill AG. Maternal seizure disorder and risk of adverse pregnancy outcomes. Am J Obstet Gynecol. 2013;208:378.e1-378.e5.
3. Baud D, Lausman A, Alfaraj MA, et al. Expectant management compared with elective delivery at 37 weeks for gastroschisis. Obstet Gynecol. 2013;121:990-998.

Linked Article

Utility of a comparability score for reporting studies using whole population data. American Journal of Obstetrics & Gynecology. Vol. 211, Issue 2.