The Integration of Clinical Trials With the Practice of Medicine: Repairing a House Divided | Health Care Quality | JAMA
Figure. Integration of Clinical Trials and Clinical Practice

Advancing health care at the fastest rate possible (upper panel) requires a continual evaluation cycle of which interventions (both existing and new) are superior in which clinical setting, with adoption of superior care and deadoption of inferior care. This continual evaluation cycle requires integration of clinical trials (unless an equally robust measure of effect can be generated from an alternative design) and clinical practice. Any delay or lack of integration (lower panel) delays advancement and allows potentially inferior care to persist longer than necessary, resulting in unnecessary costs, unwanted variation, and worse outcomes than otherwise possible. These unwanted features represent loss to society.

Table 1. Consequences of the Lack of Integration of Clinical Trials and Clinical Practice
Table 2. Examples of Approaches to Promote Greater Integration of Clinical Trials and Clinical Practice
1.
Cruess SR, Cruess RL. Professionalism and medicine’s social contract with society. Virtual Mentor. 2004;6(4). doi:
2.
The Institute for Healthcare Improvement. Triple aim and population health. Accessed January 21, 2024.
3.
The Pew Research Center. Living to 120 and beyond. August 6, 2013. Accessed February 23, 2024.
4.
Remes J, Linzer K, Singhal S, et al. Prioritizing Health: a Prescription for Prosperity. McKinsey Global Institute; 2020. Accessed May 14, 2024.
5.
Nundy S, Cooper LA, Mate KS. The quintuple aim for health care improvement: a new imperative to advance health equity. JAMA. 2022;327(6):521-522. doi:
6.
The Good Clinical Trials Collaborative. Guidance for Good Randomized Clinical Trials. May 2022. Accessed February 19, 2024.
7.
Collins R, Bowman L, Landray M, Peto R. The magic of randomization versus the myth of real-world evidence. N Engl J Med. 2020;382(7):674-678. doi:
8.
Sanchez P, Voisey JP, Xia T, Watson HI, O’Neil AQ, Tsaftaris SA. Causal machine learning for healthcare and precision medicine. R Soc Open Sci. 2022;9(8):220638. doi:
9.
Gallifant J, Zhang J, Whebell S, et al. A new tool for evaluating health equity in academic journals; the Diversity Factor. PLOS Glob Public Health. 2023;3(8):e0002252. doi:
10.
Cochrane AL. Effectiveness and Efficiency: Random Reflections on Health Services. The Rock Carling Fellowship, Nuffield Provincial Hospitals Trust; 1972.
11.
The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research; 1978:20.
12.
McGuire TG. Physician agency. In: Culyer AJ, Newhouse JP, eds. Handbook of Health Economics. Elsevier; 2000:461-536.
13.
Wennberg JE. Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ. 2002;325(7370):961-964. doi:
14.
McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645. doi:
15.
Shrank WH, Rogstad TL, Parekh N. Waste in the US health care system: estimated costs and potential for savings. JAMA. 2019;322(15):1501-1509. doi:
16.
Berwick DM. Salve lucrum: the existential threat of greed in US health care. JAMA. 2023;329(8):629-630. doi:
17.
US government. Title 21 Part 50 Protection of Human Subjects. In: National Archives and Records Administration. 21 CFR Part 50. Federal Register; 1981. Accessed February 23, 2024.
18.
US government. Title 21 Part 56 Institutional Review Boards. In: National Archives and Records Administration, editor. 21 CFR Part 56. Federal Register; 1981. Accessed February 23, 2024.
19.
US government. Title 45 Part 46 - Protection of Human Subjects. In: National Archives and Records Administration, editor. 45 CRF Part 46. Federal Register; 1981. Accessed February 23, 2024.
20.
US Food and Drug Administration. FDA Policy for the Protection of Human Subjects. Accessed February 23, 2024.
21.
ClinicalTrials.gov. Accessed January 21, 2024.
22.
World Health Organization. WHO Clinical Trials Registry Platform. Accessed February 19, 2024.
23.
ISRCTN. ISRCTN registry. Accessed February 19, 2024.
24.
Grand View Research. Clinical Trials Market Size, Share & Trends Analysis Report By Phase (Phase I, Phase II, Phase III, Phase IV), By Study Design, By Indication, Indication By Study Design, By Sponsors, By Service Type, By Region and Segment Forecasts, 2024 - 2030. GVR-1-68038-975-3 ed. Grand View Research; 2023. Accessed January 21, 2024.
25.
Wizemann T, Wagner Gee AS. Envisioning a Transformed Clinical Trials Enterprise for 2030: Proceedings of a Workshop. National Academies Press; 2022.
26.
US Preventive Services Task Force Recommendation Topics. US Preventive Services Task Force. Accessed February 23, 2024.
27.
ClinicalTrials.gov. Sepsis. Accessed January 21, 2024.
28.
Evans L, Rhodes A, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock 2021. Crit Care Med. 2021;49(11):e1063-e1143. doi:
29.
Fanaroff AC, Califf RM, Windecker S, Smith SC Jr, Lopes RD. Levels of evidence supporting American College of Cardiology/American Heart Association and European Society of Cardiology Guidelines, 2008-2018. JAMA. 2019;321(11):1069-1080. doi:
30.
McKenzie MS, Auriemma CL, Olenik J, Cooney E, Gabler NB, Halpern SD. An observational study of decision making by medical intensivists. Crit Care Med. 2015;43(8):1660-1668. doi:
31.
Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5(4):339-342. doi:
32.
Huang AC, Zappasodi R. A decade of checkpoint blockade immunotherapy in melanoma: understanding the molecular basis for immune sensitivity and resistance. Nat Immunol. 2022;23(5):660-670. doi:
33.
Meurer WJ, Lewis RJ. Cluster randomized trials: evaluating treatments applied to groups. JAMA. 2015;313(20):2068-2069. doi:
34.
Ellenberg SS. The stepped-wedge clinical trial: evaluation by rolling deployment. JAMA. 2018;319(6):607-608. doi:
35.
Kasza J, Bowden R, Hooper R, Forbes AB. The batched stepped wedge design: a design robust to delays in cluster recruitment. Stat Med. 2022;41(18):3627-3641. doi:
36.
Kent DM, Hayward RA. Limitations of applying summary results of clinical trials to individual patients: the need for risk stratification. JAMA. 2007;298(10):1209-1212. doi:
37.
Angus DC, Chang CH. Heterogeneity of treatment effect: estimating how the effects of interventions vary across individuals. JAMA. 2021;326(22):2312-2313. doi:
38.
Angus DC. Your mileage may vary: toward personalized oxygen supplementation. JAMA. 2024;331(14):1179-1180. doi:
39.
Freedman DH. Clinical trials have the best medicine but do not enroll the patients who need it. Scientific American. January 1, 2019. Accessed May 7, 2024.
40.
Bradley CK, Wang TY, Li S, et al. Patient-reported reasons for declining or discontinuing statin therapy: insights from the PALM Registry. J Am Heart Assoc. 2019;8(7):e011765. doi:
41.
Gülmezoglu AM, Duley L. Use of anticonvulsants in eclampsia and pre-eclampsia: survey of obstetricians in the United Kingdom and Republic of Ireland. BMJ. 1998;316(7136):975-976. doi:
42.
Altman D, Carroli G, Duley L, et al; Magpie Trial Collaboration Group. Do women with pre-eclampsia, and their babies, benefit from magnesium sulphate? The Magpie Trial: a randomised placebo-controlled trial. Lancet. 2002;359(9321):1877-1890. doi:
43.
HPS2-THRIVE Collaborative Group. Effects of extended-release niacin with laropiprant in high-risk patients. N Engl J Med. 2014;371(3):203-212. doi:
44.
Jackevicius CA, Tu JV, Ko DT, de Leon N, Krumholz HM. Use of niacin in the United States and Canada. JAMA Intern Med. 2013;173(14):1379-1381. doi:
45.
Roberts I, Yates D, Sandercock P, et al; CRASH trial collaborators. Effect of intravenous corticosteroids on death within 14 days in 10008 adults with clinically significant head injury (MRC CRASH trial): randomised placebo-controlled trial. Lancet. 2004;364(9442):1321-1328. doi:
46.
Hoshide R, Cheung V, Marshall L, Kasper E, Chen CC. Do corticosteroids play a role in the management of traumatic brain injury? Surg Neurol Int. 2016;7:84. doi:
47.
Brower RG, Matthay MA, Morris A, Schoenfeld D, Thompson BT, Wheeler A; Acute Respiratory Distress Syndrome Network. Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. N Engl J Med. 2000;342(18):1301-1308. doi:
48.
Bellani G, Laffey JG, Pham T, et al; LUNG SAFE Investigators; ESICM Trials Group. Epidemiology, patterns of care, and mortality for patients with acute respiratory distress syndrome in intensive care units in 50 countries. JAMA. 2016;315(8):788-800. doi:
49.
Bugin K, Woodcock J. Trends in COVID-19 therapeutic clinical trials. Nat Rev Drug Discov. 2021;20(4):254-255. doi:
50.
Angus DC, Gordon AC, Bauchner H. Emerging lessons from COVID-19 for the US Clinical Research Enterprise. JAMA. 2021;325(12):1159-1161. doi:
51.
Janiaud P, Hemkens LG, Ioannidis JPA. Challenges and lessons learned from COVID-19 trials: should we be doing clinical trials differently? Can J Cardiol. 2021;37(9):1353-1364. doi:
52.
Huang DT, McVerry BJ, Horvat C, et al; UPMC REMAP-COVID Group, on behalf of the REMAP-CAP Investigators. Implementation of the Randomized Embedded Multifactorial Adaptive Platform for COVID-19 (REMAP-COVID) trial in a US health system-lessons learned and recommendations. Trials. 2021;22(1):100. doi:
53.
Närhi F, Moonesinghe SR, Shenkin SD, et al; ISARIC4C investigators. Implementation of corticosteroids in treatment of COVID-19 in the ISARIC WHO Clinical Characterisation Protocol UK: prospective, cohort study. Lancet Digit Health. 2022;4(4):e220-e234. doi:
54.
Horby P, Lim WS, Emberson JR, et al; RECOVERY Collaborative Group. Dexamethasone in hospitalized patients with Covid-19. N Engl J Med. 2021;384(8):693-704. doi:
55.
Angus DC, Derde L, Al-Beidh F, et al; Writing Committee for the REMAP-CAP Investigators. Effect of hydrocortisone on mortality and organ support in patients with severe COVID-19: the REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial. JAMA. 2020;324(13):1317-1329. doi:
56.
WHO Solidarity Trial Consortium. Remdesivir and three other drugs for hospitalised patients with COVID-19: final results of the WHO Solidarity randomised trial and updated meta-analyses. Lancet. 2022;399(10339):1941-1953. doi:
57.
Angus DC. Fusing randomized trials with big data: the key to self-learning health care systems? JAMA. 2015;314(8):767-768. doi:
58.
Kass NE, Faden RR. Ethics and learning health care: the essential roles of engagement, transparency, and accountability. Learn Health Syst. 2018;2(4):e10066. doi:
59.
Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. 2013;(Spec No):S16-S27. doi:
60.
Whicher D, Kass N, Saghai Y, Faden R, Tunis S, Pronovost P. The views of quality improvement professionals and comparative effectiveness researchers on ethics, IRBs, and oversight. J Empir Res Hum Res Ethics. 2015;10(2):132-144. doi:
61.
Faden RR, Beauchamp TL, Kass NE. Informed consent, comparative effectiveness, and learning health care. N Engl J Med. 2014;370(8):766-768. doi:
62.
Kass NE, Faden RR, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. The research-treatment distinction: a problematic approach for determining which activities should have ethical oversight. Hastings Cent Rep. 2013;(Spec No):S4-S15. doi:
63.
Park JJH, Detry MA, Murthy S, Guyatt G, Mills EJ. How to use and interpret the results of a platform trial: users’ guide to the medical literature. JAMA. 2022;327(1):67-74. doi:
64.
Angus DC, Berry S, Lewis RJ, et al. The REMAP-CAP (Randomized Embedded Multifactorial Adaptive Platform for Community-acquired Pneumonia) Study: rationale and design. Ann Am Thorac Soc. 2020;17(7):879-891. doi:
65.
Angus DC, Alexander BM, Berry S, et al; Adaptive Platform Trials Coalition. Adaptive platform trials: definition, design, conduct and reporting considerations. Nat Rev Drug Discov. 2019;18(10):797-807. doi:
66.
Gallo A. A refresher on A/B testing. Harvard Business Review. June 28, 2017. Accessed January 21, 2024.
67.
Sawant NN, et al. Contextual Multi-Armed Bandits for Causal Marketing. Accessed May 7, 2024.
68.
The Observational Medical Outcomes Partnership Common Data Model Working Group. OMOP Common Data Model. The Observational Medical Outcomes Partnership. Accessed February 23, 2024.
69.
The Office of the National Coordinator for Healthcare Information Technology (ONC). HealthIT.gov. US Dept of Health and Human Services. Accessed February 23, 2024.
70.
HL7 FHIR Release 5. HL7 International. Accessed February 23, 2024.
71.
The Clinical Data Interchange Standards Consortium (CDISC). Study Data Tabulation Model (SDTM). CDISC. Accessed March 20, 2024.
72.
Bönisch C, Kesztyüs D, Kesztyüs T. Harvesting metadata in clinical care: a crosswalk between FHIR, OMOP, CDISC and openEHR metadata. Sci Data. 2022;9(1):659. doi:
73.
US Dept of Health and Human Services. CMS Blue Button 2.0. Accessed February 23, 2024.
74.
Qian ET, Casey JD, Wright A, et al; Vanderbilt Center for Learning Healthcare and the Pragmatic Critical Care Research Group. Cefepime vs piperacillin-tazobactam in adults hospitalized with acute infection: the ACORN randomized clinical trial. JAMA. 2023;330(16):1557-1567. doi:
75.
Jones WS, Mulder H, Wruck LM, et al; ADAPTABLE Team. Comparative effectiveness of aspirin dosing in cardiovascular disease. N Engl J Med. 2021;384(21):1981-1990. doi:
76.
Inan OT, Tenaerts P, Prindiville SA, et al. Digitizing clinical trials. NPJ Digit Med. 2020;3(1):101. doi:
77.
Fröbert O, Lagerqvist B, Olivecrona GK, et al; TASTE Trial. Thrombus aspiration during ST-segment elevation myocardial infarction. N Engl J Med. 2013;369(17):1587-1597. doi:
78.
Weinfurt KP, Hernandez AF, Coronado GD, et al. Pragmatic clinical trials embedded in healthcare systems: generalizable lessons from the NIH Collaboratory. BMC Med Res Methodol. 2017;17(1):144. doi:
79.
PCORI. The PCORI Strategic Plan. Accessed May 7, 2024.
80.
The United Kingdom National Institute for Health and Care Research. Our Research Performance. NIHR. Accessed January 21, 2024.
81.
Ozdemir BA, Karthikesalingam A, Sinha S, et al. Research activity and the association with mortality. PLoS One. 2015;10(2):e0118253. doi:
82.
Commercial clinical trials in the UK: the Lord O’Shaughnessy review: final report. 2023. Accessed May 26, 2023.
83.
Atkins VMP, Morgan E, Matheson M. Full government response to the Lord O’Shaughnessy review into commercial clinical trials. 2023. Accessed December 8, 2023.
Special Communication
Integrating Clinical Trials and Practice
June 3, 2024

The Integration of Clinical Trials With the Practice of Medicine: Repairing a House Divided

Author Affiliations
  • 1JAMA, Chicago, Illinois
  • 2University of Pittsburgh Schools of the Health Sciences, Pittsburgh, Pennsylvania
  • 3University of California, San Francisco
  • 4David Geffen School of Medicine at UCLA, Los Angeles, California
  • 5Verily Life Sciences, San Francisco, California
  • 6Now with Highlander Health, Dallas, Texas
  • 7US Food and Drug Administration, Washington, DC
  • 8Nuffield Department of Population Health, University of Oxford, Oxford, United Kingdom
  • 9Protas, Manchester, United Kingdom
  • 10Johns Hopkins University, Baltimore, Maryland
JAMA. Published online June 3, 2024. doi:10.1001/jama.2024.4088
Abstract

Importance Optimal health care delivery, both now and in the future, requires a continuous loop of knowledge generation, dissemination, and uptake on how best to provide care, not just determining what interventions work but also how best to ensure they are provided to those who need them. The randomized clinical trial (RCT) is the most rigorous instrument to determine what works in health care. However, major issues with both the clinical trials enterprise and the lack of integration of clinical trials with health care delivery compromise medicine’s ability to best serve society.

Observations In most resource-rich countries, the clinical trials and health care delivery enterprises function as separate entities, with siloed goals, infrastructure, and incentives. Consequently, RCTs are often poorly relevant and responsive to the needs of patients and those responsible for care delivery. At the same time, health care delivery systems are often disengaged from clinical trials and fail to rapidly incorporate knowledge generated from RCTs into practice. Though longstanding, these issues are more pressing given the lessons learned from the COVID-19 pandemic, heightened awareness of the disproportionate impact of poor access to optimal care on vulnerable populations, and the unprecedented opportunity for improvement offered by the digital revolution in health care. Four major areas must be improved. First, especially in the US, greater clarity is required to ensure appropriate regulation and oversight of implementation science, quality improvement, embedded clinical trials, and learning health systems. Second, greater adoption is required of study designs that improve statistical and logistical efficiency and lower the burden on participants and clinicians, allowing trials to be smarter, safer, and faster. Third, RCTs could be considerably more responsive and efficient if they were better integrated with electronic health records. However, this advance first requires greater adoption of standards and processes designed to ensure health data are adequately reliable and accurate and capable of being transferred responsibly and efficiently across platforms and organizations. Fourth, tackling the problems described above requires alignment of stakeholders in the clinical trials and health care delivery enterprises through financial and nonfinancial incentives, which could be enabled by new legislation. Solutions exist for each of these problems, and there are examples of success for each, but there is a failure to implement at adequate scale.

Conclusions and Relevance The gulf between current care and that which could be delivered has arguably never been wider. A key contributor is that the 2 limbs of knowledge generation and implementation—the clinical trials and health care delivery enterprises—operate as a house divided. Better integration of these 2 worlds is key to accelerated improvement in health care delivery.

Society’s Broad Goals for Medicine

Modern society expects not only the best possible care now, but better care in the future.1-5 These goals assume integration of the practice and science of medicine. Optimal care today requires scientific evidence for the entire spectrum of care (from studies of new diagnostic and therapeutic strategies to those on how to organize and deliver care) and mechanisms to ensure such evidence is rapidly incorporated. Simultaneously, delivery of the best possible care tomorrow requires maximal effort today to evaluate what approaches solve inadequacies in current care. The randomized clinical trial (RCT) is the criterion standard for evidence that something works in health care (see the Box).6-9 Thus, it should serve as the link between science and practice: promising approaches should, in most instances, be tested in an RCT and, when results are positive, practice should change accordingly. Yet, the clinical trials and health care delivery enterprises largely ignore each other: RCTs frequently fail to generate knowledge relevant to practice, while practice patterns are frequently unsupported by, or fail to change with, RCT evidence.

Box Section Ref ID
Box.

The Importance of Randomization

Why is the randomized clinical trial (RCT) so special?6,7

To determine whether a health care intervention improves outcomes, one typically compares the outcomes of patients exposed to the intervention of interest with the outcomes of those not exposed. If those receiving the intervention fared better, one might conclude the improvement was due to the intervention. The problem is that those receiving the intervention might always have been destined to fare better because their circumstances differed from those not receiving the intervention (eg, they may not have been as sick, may have had access to other helpful treatments, may be more likely to adhere to therapy). Although there are approaches to adjust for known differences between groups that could affect their outcome, there may be unknown, unmeasured, or inaccurately measured factors confounding the comparison.

In contrast, by dividing patients into groups randomly, one can generate groups expected to have balanced distributions of preexisting characteristics (and hence balanced prognosis), regardless of whether these characteristics are known or unknown. If the randomly divided groups are then assigned to receive different treatments, differences in outcome can be assumed to be representative of the likely effects of the treatment, unconfounded by baseline differences among groups. This feature of randomization sets apart the RCT as the approach most capable of rigorously determining whether a health care intervention actually caused an improvement in outcomes.
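The effect of an unmeasured confounder, and the way random assignment neutralizes it, can be illustrated with a small simulation. This is a hypothetical sketch: the cohort size, illness-severity distribution, treatment-assignment probabilities, and death rates below are invented for illustration, and the simulated drug has no true effect.

```python
import random

random.seed(0)

def simulate(randomize, n=100_000):
    """Return the risk difference (treated minus untreated death rate)
    when treatment is driven by unmeasured severity vs assigned at random.
    The drug has no true effect, so any nonzero difference is confounding."""
    treated_deaths = treated = control_deaths = control = 0
    for _ in range(n):
        severe = random.random() < 0.5            # unmeasured confounder
        if randomize:
            gets_drug = random.random() < 0.5     # coin-flip assignment
        else:
            # Sicker patients are more likely to receive the drug
            gets_drug = random.random() < (0.8 if severe else 0.2)
        dies = random.random() < (0.40 if severe else 0.10)  # drug plays no role
        if gets_drug:
            treated += 1
            treated_deaths += dies
        else:
            control += 1
            control_deaths += dies
    return treated_deaths / treated - control_deaths / control

print(f"Observational risk difference: {simulate(randomize=False):+.3f}")  # ~ +0.18
print(f"Randomized risk difference:    {simulate(randomize=True):+.3f}")   # ~ 0.000
```

Without randomization, the treated group is enriched with severe cases and the harmless drug appears to raise mortality by roughly 18 percentage points; with randomization, the apparent effect collapses toward zero.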

Why not just use artificial intelligence (AI) to analyze existing real-world data?

The AI boom offers many complementary tools to the RCT. AI tools, for example, can help screen and identify patients for enrollment in trials, particularly in precision trials that require complex analysis of imaging or biopsy data. Machine learning (ML) models can also be used to examine heterogeneity of treatment effect within RCTs. However, using ML to analyze existing observational data instead of conducting an RCT is much more controversial. Although ML models can be very sophisticated, they generally still only estimate associations, not effects. A subset of AI, known as causal AI, uses ML nested within causal inference approaches, such as structural causal models or Rubin potential outcomes.8 This approach estimates actionable effects, but only under certain assumptions and only in limited settings; randomization will likely remain essential for causal estimation in many, if not most, settings.

How Did We Get Here?

In the 1970s, Archie Cochrane lamented that care was riddled by physicians “making things up,” and urged that all clinical practice should be based on proof of benefit in an RCT.10 At the same time, to ensure RCTs were conducted ethically, the Belmont Report outlined principles and instruments for human subjects protection, such as proper informed consent, favorable risk/benefit assessment, and fair selection of participants.11 The report distinguished research from practice, stating that practice did not need such protections because the only motivation was to serve the patient’s interest and the only actions taken were expected to benefit the patient. Unfortunately, this view oversimplifies physician motivation12 and ignores the many factors that drive the massive variation in clinical practice unexplained by patient preference or clinical indication.13-16 Thus, the Belmont Report greatly improved protections for participants in RCTs but failed to address protection from the inherent uncertainty, ulterior motives, and variation in practice. The report, adopted in federal regulations for research conduct,17-20 likely further cleaved research from practice as institutions created separate processes and infrastructure to ensure compliance.

Over the past 50 years, the RCT grew dramatically in importance. Many countries passed laws requiring regulatory authorities to collect evidence from adequately sized RCTs before approval of nearly all drugs and many other health care interventions. Trial registration is standard, with approximately 40 000 new RCTs registered annually on ClinicalTrials.gov and many more registered elsewhere.21-23 The design, conduct, oversight, and reporting of RCTs are better and more sophisticated, and the annual global budget is now $80 billion.24 However, as highlighted by the National Academy of Medicine,25 major problems exist with the design and conduct of RCTs and their lack of integration with clinical practice.

Problems With RCTs
Few High-Quality Clinically Relevant RCTs

With so many trials, one might assume every question is addressed. However, many trials are duplicative, badly designed (providing no useful information), or never reach completion. Consequently, massive areas of clinical uncertainty are uninformed by any RCT. Treatment guidelines rely heavily on expert opinion, observational studies, or tangential extrapolation from partially relevant RCTs. For example, since 2021, the US Preventive Services Task Force has published 40 guidelines, of which only 6 (15%) were based on strong evidence.26 Similarly, only 11% of the Surviving Sepsis Guidelines and 8.5% of the American College of Cardiology/American Heart Association recommendations are based on high-grade RCT evidence.27-29 In both inpatient and outpatient settings, clinical teams make approximately 100 decisions per day, yet as few as 3% are supported by any scientific evidence.30,31

Much of the problem arises because of the mismatch between narrowly defined RCTs and the broader context of clinical practice. Even when therapies are used in the same populations studied in registration trials, new questions arise for which RCTs are lacking. For example, immune checkpoint inhibitors dramatically arrest tumor progression in metastatic melanoma, but fatal adverse effects can occur with prolonged therapy.32 Yet oncologists and their patients have no RCT guidance on when to stop therapy. Importantly, bedside decisions are not the only ones that require rigorous scientific evidence. Patient and population health is highly dependent on decisions regarding health care financing and delivery. Such organizational decisions could be tested, perhaps with batched or stepped designs, but rarely are.33-35

RCTs Often Produce Answers That Are Too Narrow

RCTs can be too narrow in several ways. First, the population studied may be unrepresentative of the broader population for whom the intervention is intended. Second, aspects of background care necessary for the intervention’s success may be provided in the RCT but not be widely available. These problems are more acute in the context of precision medicine, with narrow populations and studies often restricted to academic medical centers. Third, the outcome may be defined too narrowly, testing the effect of an intervention on intermediate biological or clinical outcomes without capturing a broader set of outcomes important to patients and society.

RCTs Often Produce Answers That Are Too Broad

RCTs provide an estimate of the average effect if a group of patients were to receive a particular intervention. Though few individuals experience the average effect,36-38 the average often suffices. For example, patients respond variably to antihypertensive medications, but clinicians can start with a low dose and adjust as needed. Other instances are more problematic. Consider the decision to prescribe an immunomodulatory drug for sepsis, administered as a single bolus, that was reported to lower sepsis-related deaths by 7% but also cause fatal adverse effects in 1%, yielding a mean net benefit of 6%. A clinician does not know if the patient will be saved, fatally harmed, or unaffected. Additional RCTs or meta-analyses may improve prediction of who is most likely to benefit or be harmed, but patients and their clinicians are often left asking “This drug may work on average, but will it work for me or my patient?”
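The arithmetic behind the hypothetical sepsis example above is simple, which is precisely the point: the average effect is easy to compute, yet it says nothing about whether a given patient will be saved, harmed, or unaffected. The figures below are the invented ones from the example, not data from any trial.

```python
# Hypothetical immunomodulatory drug for sepsis (numbers from the example):
# a 7% absolute reduction in sepsis-related deaths, offset by a 1% rate
# of fatal adverse effects, for a 6% mean net benefit.
absolute_risk_reduction = 0.07   # sepsis deaths averted per patient treated
fatal_harm_rate = 0.01           # treatment-caused deaths per patient treated

net_benefit = absolute_risk_reduction - fatal_harm_rate
number_needed_to_treat = 1 / net_benefit  # patients treated per net life saved

print(f"Mean net benefit: {net_benefit:.0%}")                    # 6%
print(f"Number needed to treat: {number_needed_to_treat:.1f}")   # 16.7
```

On average, treating about 17 patients saves one net life, but the trial result alone cannot identify which patient that will be.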

RCTs Are Often Challenging and Resource-Intensive to Implement

The period before launch is often arduous as researchers design the study, seek funding, obtain regulatory and ethical approval, recruit and contract with clinical sites, and develop the infrastructure required for study procedures (activities that must be repeated with each trial). After launch, enrollment can be difficult, leading to delay and cost overruns. Once enrollment is completed, data collection, analysis, and publication take many more months. These challenges have had corrosive effects on the clinical trials enterprise. RCTs are often woefully undersized, with little chance of demonstrating an intervention’s benefit overall, let alone in any important subgroup, largely because investigators cannot imagine how they might reasonably fund or execute a properly sized study in a timely manner. Logistical burdens lead investigators to restrict trials to sites with existing research infrastructure, exacerbating underrepresentation of populations with limited access to such settings.

RCTs are also challenging for patients. Patients may participate in an RCT because it provides the only opportunity to receive a particular therapy. RCTs may also provide other aspects of care in a manner superior to usual care. However, RCTs often rely on a patient’s altruism: there is no guarantee of either receiving the intervention under study or that it will work, participation may be burdensome, and there may be risks associated with experimental therapies and study procedures. Asking for such altruism, especially in a patient’s hour of need, asks a lot. RCTs can also impose on the clinical team caring for a patient. The team is often swamped, and explaining the trial to patients or providing referrals can be perceived as detracting from their primary duty to care.39

The Overarching Problem of a House Divided

Similar to RCTs, current practice also has considerable problems. Even when strong RCT evidence exists, it is frequently not followed in practice, leading to poor quality of care rife with both overuse and underuse.13-15 Physicians frequently fail to prescribe therapy when indicated (eg, statins for hyperlipidemia, angiotensin-converting enzyme inhibitors for congestive heart failure, or magnesium for preeclampsia) and continue to prescribe therapy known to have no benefit or cause harm (eg, niacin for cardiovascular disease, dexamethasone for head injury, and large tidal volumes for acute respiratory distress syndrome).14,40-48

Meanwhile, the clinical trials and health care delivery enterprises are supported by separate people, policies, institutions, and funding, leading to siloed incentives and goals. Keeping clinical research so separate from clinical care exacerbates the problems for both domains: RCTs cannot provide better answers on how to care for patients without better integration into clinical care, and clinical care cannot improve without both better RCT evidence on what care to provide and how best to provide it and a mechanism to seamlessly incorporate that evidence into practice. This disconnect slows the pace at which health care advances, a failure of medicine’s contract with patients, research participants, and society. Yet no one is accountable (Table 1).

This problem of a house divided is long-standing, but there are 3 reasons for urgency. First, the COVID-19 pandemic put in stark terms the challenge of caring for patients with no evidence on how best to provide care. Although this dilemma exists for most diseases, COVID-19 presented such an existential and visible threat that it became a major focus across the globe. Much of the response to COVID-19 was disappointing: many clinicians adopted scantily justified treatments later shown to be futile or harmful and many researchers launched RCTs that were poorly designed, underpowered, duplicative, and never completed.49 But, on a positive note, others conducted RCTs in a manner hitherto considered impossible, generating a large, rich evidence base at record pace followed by equally rapid incorporation into practice.50-56 The question now is how to extend the same efficiencies to other areas of medicine. The second reason is increased awareness of the longstanding, and widening, inequity in health care. Many adverse consequences of this fractured system of science and practice are borne disproportionately by vulnerable, lower-income, and racial and ethnic minoritized populations.5 A health care delivery system equitable and accessible to all must be built on knowledge that is based on the experiences of, and relevance to, all. Third, the digital revolution in health care provides a data infrastructure on which practice and trials could be integrated.

What Needs to Be Fixed?

To meet societal goals, RCTs must be relevant and responsive to those receiving care, delivering care, or organizing the delivery of care (Figure). To be relevant and responsive, RCTs must address the correct questions and be timely, safe, and efficient. They must also be embedded in care in ways that ensure care changes in response to meaningful trial results.57,58 Four areas require improvement: ethical and regulatory oversight, study design, data infrastructure, and alignment of incentives across the clinical trials and health care delivery enterprises (Table 2).

Ethical and Regulatory Oversight

The current ethical and regulatory oversight system was designed primarily for the evaluation of unapproved products in preregistration studies. It generally works in this regard, though efforts to reduce unnecessary bureaucracy and adopt common standards across countries would be helpful. The larger problem relates to evaluations of how best to deliver care, such as studies of approved products for existing and new indications, comparative effectiveness, prospective quality improvement, and implementation science.59 Although some countries have implemented a single national ethics review process and provided new guidance for these types of studies, ethical review is decentralized in the US and current regulations lack clarity, especially with respect to the definition of practice vis-à-vis research, the determination of minimal risk, and requirements for informed consent in different study contexts. Consequently, institutional review boards (IRBs), institutions, researchers, and clinicians inconsistently determine what activities meet criteria for oversight and apply inconsistent oversight of activities deemed research. Many activities designed to improve practice are labeled as practice or quality improvement, rather than research, and thus are exempt from human participants research protection, even though they are conducted systematically under conditions of uncertainty and intended to generate generalizable knowledge. In contrast, similar activities, when labeled as research, may be required to solicit lengthy informed consent even though the options being studied represent different ways in which the patient might well have received usual care.60 In other words, the current lack of clarity generates both overprotection and underprotection of patients.

One solution is to broaden what counts as human participant research, capturing all activities intended to generate knowledge under uncertainty but, at the same time, make protection requirements fit for purpose, ensuring that each specific activity is not unduly burdened by requirements that are neither necessary nor helpful. Four interdependent steps would be helpful. First, abandon the binary definition of research vs practice, recognizing that practice is variable and uncertain, with both risk to patients and a near-constant requirement to learn and improve. Label all systematic efforts to learn and improve as “research,” socializing the idea that research is part of best practice. The consequence for patients is greater transparency and a reduced likelihood of underprotection.

Second, move research oversight toward an evaluation of risks, burdens, and impact on meaningful patient choice, relative to what would have happened in usual care for these patients.6 Meaningful choice implies the decision is one patients care about or are asked about in normal practice.61 Many aspects of usual care are assigned without informing patients or soliciting their preferences. Examples include individual care decisions, such as the choice of intravenous fluid administered during a surgical procedure, and care organization decisions, such as nurse-patient staffing ratios. If a health care system provided such aspects of care in 2 or more ways and decided to randomize the assignment to generate evidence, it does not necessarily hold that the risks exceed those of usual care nor that patients ordinarily would be involved in or care about the decision, raising questions of whether consent should be obtained.

Third, seek greater precision in the definition of minimal risk, especially in the context of postmarket research. The current definition of minimal risk states risks, burdens, and harms should not exceed those encountered in normal daily life, but US regulations do not clarify whether that would include the risks, burdens, and harms of usual care incurred during the “normal daily life” of someone who is sick or receiving treatment.62 Typically, research involving provision of drugs or devices, even if they are already approved and even if they were previously prescribed to the patient, is not categorized as minimal risk. Adopting a standard of “relative to what that participant would have experienced in usual care” would add clarity and provide criteria for instances appropriate for expedited review and waiver or streamlining of consent.6 Fourth, adopt evidence-based processes for obtaining informed consent and expand transparency and communication requirements even where consent is not required.58

Study Design

If RCTs are to become more relevant and responsive to patients’ needs, patients and communities must be engaged in trial selection, framing, and design. Such engagement has received increasing attention, and is required by some funders, but is still in its infancy.27 Although many trialists are clinicians, their views on which study questions should be prioritized or how studies can be designed to minimize clinician burden may not represent the broader clinical community. Other stakeholders may not even think their questions could be answered by an RCT. The typical hospital president faces a continuous stream of care delivery decisions where the optimal choice is unclear, but wants answers within the current budget cycle. In other industries, business decisions can be evaluated rapidly with so-called A/B testing, a form of randomized experiment. But executing a reliable, safe, and inexpensive RCT in weeks or months, rather than years, requires redesign of every stage: planning, start-up (contracting and approvals), recruitment, outcomes ascertainment, time to first analysis, time to conduct and report analyses, and dissemination.
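The A/B testing analogy can be made concrete with a small simulation: randomize participants 1:1 to 2 versions of a care process and compare outcome proportions. This is only an illustrative sketch; the workflows, readmission rates, and sample size are invented, and a real trial analysis would involve a prespecified statistical plan rather than this bare comparison.

```python
import math
import random

random.seed(42)

# Hypothetical example: 2 discharge-planning workflows ("A" and "B"),
# each with an assumed true rate of 30-day readmission (invented numbers).
true_readmission_rate = {"A": 0.20, "B": 0.15}

# Simple 1:1 randomization of 2000 patients, as in a basic A/B test.
counts = {"A": 0, "B": 0}
events = {"A": 0, "B": 0}
for _ in range(2000):
    arm = random.choice(["A", "B"])
    counts[arm] += 1
    if random.random() < true_readmission_rate[arm]:
        events[arm] += 1

# Two-proportion z-statistic (normal approximation) comparing the arms.
p_a = events["A"] / counts["A"]
p_b = events["B"] / counts["B"]
p_pool = (events["A"] + events["B"]) / (counts["A"] + counts["B"])
se = math.sqrt(p_pool * (1 - p_pool) * (1 / counts["A"] + 1 / counts["B"]))
z = (p_a - p_b) / se
print(f"A: {p_a:.3f}  B: {p_b:.3f}  z = {z:.2f}")
```

The statistical mechanics are identical to an online A/B test; as the surrounding text argues, what makes the health care version slow is everything around the comparison, from contracting and approvals to recruitment and outcome ascertainment.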

One lesson from the COVID-19 pandemic was the success of adaptive platform trials such as RECOVERY and REMAP-CAP.54,55 These studies used a master protocol governing overarching features including entry criteria, data collection, and outcome ascertainment with a modular structure for questions about individual interventions that were being tested.63 This approach allowed new study questions to be launched as study amendments, rather than as brand new studies, a much leaner and faster process. Both platform trials were designed to be embedded within clinical care, of low burden to the clinical team, and linked directly to their care decisions. Traditional study designs typically test 1 intervention at a time, which is inefficient, especially for diseases or conditions that require multicomponent care regimens. In contrast, these platform trials tested multiple components of a patient’s care regimen simultaneously. An additional advantage was that most patients were assigned to receive at least 1 active therapy. The use of response-adaptive randomization in REMAP-CAP updated randomization rules over time to preferentially assign patients to those treatment combinations that were faring best. This method increased the odds that patients received the best therapy even before the trial was complete.64 Also, platform trials under both bayesian and frequentist statistical frameworks employed analytical strategies to ensure patients were enrolled only until conclusions could be reached with adequate certainty, providing answers strong enough to change practice as soon as possible. Multiple study questions were answered in weeks or months and led to immediate change in care.
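The core idea of response-adaptive randomization can be sketched as a simple Thompson-sampling loop over 2 hypothetical arms: each patient is assigned by drawing from each arm's posterior, so allocation drifts toward the arm accumulating better outcomes. The arm names and response rates here are invented, and real platform trials such as REMAP-CAP use far more elaborate bayesian models; this is a minimal conceptual illustration, not their method.

```python
import random

random.seed(7)

# Hypothetical arms with assumed true response rates (invented numbers).
true_response = {"drug_x": 0.35, "drug_y": 0.50}

# Beta(1, 1) priors on each arm's response probability.
alpha = {arm: 1 for arm in true_response}
beta = {arm: 1 for arm in true_response}
assigned = {arm: 0 for arm in true_response}

for _ in range(500):
    # Thompson sampling: draw from each arm's posterior and assign the
    # patient to the arm with the highest draw, so arms that are faring
    # better are preferentially assigned, as in response-adaptive designs.
    draws = {arm: random.betavariate(alpha[arm], beta[arm]) for arm in true_response}
    arm = max(draws, key=draws.get)
    assigned[arm] += 1
    # Observe the (simulated) outcome and update that arm's posterior.
    if random.random() < true_response[arm]:
        alpha[arm] += 1
    else:
        beta[arm] += 1

print(assigned)  # allocation counts per arm
```

Over the course of the simulated trial the better-performing arm typically receives most assignments, which is the sense in which adaptive randomization increases the odds that patients receive the better therapy before the trial concludes.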

These examples provide encouraging proof that desired features for RCTs, such as answers in weeks rather than years, are possible. And, of course, these platform designs need not be restricted to traditional clinical questions; they can also facilitate A/B testing of many health care delivery questions. But, creating a system in which such features are the norm requires widespread change.65

Data Infrastructure

The digital revolution over the last 2 decades is remarkable. Virtually all health care encounters in the US are now captured in electronic health records (EHR); other countries are following suit. In addition, a massive trove of patient-generated and sensor-derived data can now be captured. These systems not only collect data, but can support and record decisions by patients and clinicians. Thus, much of the data needed for an RCT could be captured directly, bypassing traditional data collection approaches. In other data-rich fields, large-scale randomized experimentation has exploded. For example, online retailers randomly manipulate the choice architecture for potential shoppers to determine what presentation best encourages purchases. These experiments range from simple A/B experiments, akin to traditional RCTs, to complex multigroup assignments with response-adaptive randomization, akin to adaptive platform trials.66,67 Finance industry technology allows sensitive personal data to be shared rapidly, accurately, and safely across data vendor platforms and organizations, providing an exemplar for the cross-institutional federated data movement required for digital multicenter RCTs.

However, major challenges exist with data quality, accuracy, and suitability; interoperability and interpretability; and security, privacy, ownership, and acceptable use. The EHR, for example, is used primarily for clinical care and billing and does not capture some data with the required fidelity for some RCTs. Data are organized under variable formats across both data vendors and health care delivery systems, with extensive local customization and little audit for data accuracy. Standardization efforts by the Observational Medical Outcomes Partnership,68 the US Office of the National Coordinator for Healthcare Information Technology,69 and Fast Healthcare Interoperability Resources70 facilitate automated extraction and curation, interoperability, and auditing. However, these standards are evolving, their use is not mandated, and adoption has been slow. Furthermore, these standards address different care and research needs, with poor crosswalk to each other and to trial data tabulation standards.71,72
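As a toy illustration of the interoperability problem, consider mapping one site's custom EHR export into a common, FHIR-like shape before pooling data across sites. The site-specific field names and the mapping below are invented for illustration; real FHIR and OMOP mappings are far more involved and are governed by the published specifications and terminologies.

```python
# Hypothetical site-specific EHR export row (invented field names).
site_record = {
    "pt_id": "12345",
    "lab_name": "serum creatinine",
    "lab_val": "1.4",
    "lab_units": "mg/dL",
    "drawn": "2024-02-01",
}

def to_common_model(rec):
    """Map a site-specific lab row into a minimal FHIR-like Observation dict.

    Only a sketch: a real mapping validates codes against standard
    terminologies (e.g., LOINC) and follows the full FHIR Observation
    resource definition, including provenance and units checking.
    """
    return {
        "resourceType": "Observation",
        "subject": {"reference": f"Patient/{rec['pt_id']}"},
        "code": {"text": rec["lab_name"]},
        "valueQuantity": {"value": float(rec["lab_val"]), "unit": rec["lab_units"]},
        "effectiveDateTime": rec["drawn"],
    }

obs = to_common_model(site_record)
print(obs["valueQuantity"])
```

Each site's local customization requires its own such mapping, which is why unmandated, slowly adopted, and poorly cross-walked standards translate directly into cost and delay for multicenter digital RCTs.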

Health data are private, but can also serve the public good. How best to balance data privacy and acceptable use remains unclear, with large policy differences between countries. Finally, regulatory audit of RCT data was designed to ensure that manually abstracted data are accurate; electronic health data present different challenges, requiring new audit approaches but also rendering many existing approaches unnecessary. Of note, secure, accurate, and portable health care data with mechanisms to authorize use with patient consent would serve more than research. Medicare’s Blue Button 2.0 program,73 for example, envisions using Fast Healthcare Interoperability Resources to give beneficiaries direct control over their health care data with the hope of improving the quality and patient-centeredness of care.

Alignment of Incentives

Although many barriers impede efficient integration of clinical trials and clinical practice, solutions exist. Many comparative effectiveness, pragmatic, clinical strategy, and quality improvement trials successfully navigated the ethical, design, and data challenges outlined above.52,74-78 The problem is one of scale and priority, which is due to a lack of alignment in goals and incentives across stakeholders. Stakeholders responsible for RCTs (trialists, industry and government funders, and regulators) are often isolated from those who should use RCT results (clinicians, health care and insurance administrators, and patients). Bringing these groups together requires separate strategies for each.

Incentivizing those responsible for RCTs to conduct trials more relevant to end-users starts with funding. Most federal funding for biomedical research is allocated to investigator-initiated questions, rather than in response to an explicit determination of society’s health and health care research priorities. Only a small fraction of the federal research budget requires patient or community engagement in the framing of study questions and trial design.79 The medical products industry funds its own studies, largely in response to regulatory requirements and net present value estimation. All trials require IRB approval and IRBs include patient, community, clinician, and health care system representation. However, IRBs are not positioned to dictate which trials should be conducted or to request that designs be modified to ensure relevance to end-users. Health insurers largely play no role in influencing what trials are conducted. One path to ensure greater relevance would be for health care systems, insurers, and patient groups to advocate for legislation that increases the proportion of RCTs whose funding or regulatory approval is conditional on proof of relevance to patients and other end-users, either through direct engagement or demonstration of postresearch implementation of findings.

Generating dramatically faster answers from RCTs requires adoption of statistical and logistical innovations. For example, master protocols and preexisting or modular contracts can shorten the period to study launch. But it also requires all parts of the clinical trials enterprise (not just the trialists, but the regulators, funders, contracting parties, and publishers) to recognize the importance of speed and be held accountable with firm and consequential milestones.

Better engagement with clinicians and health care administrators to ensure RCTs are more relevant and timely is necessary but insufficient: the work to support an RCT competes with clinical care obligations and must be funded appropriately. Research funds paid to an institution compensate for some activities, but typically are not tied to the clinical operations budget. Recognizing this problem, the UK allocated funds that are paid directly to hospital administrators (separate from investigators’ research budgets) based on each hospital’s participation in RCTs considered important for the people of Britain.50,80 This effort not only improved patient enrollment, especially for government-funded studies, but also improved overall clinical outcomes at the participating hospitals.80,81 Although important barriers remain, it is notable that the UK government recognized that greater RCT engagement was a national health priority.82,83 An equivalent approach in the US would be if Medicare, based on recognition that greater RCT engagement boosted quality of care and patient experience, included financial incentives to hospitals and health care practitioners for such engagement under the Merit-based Incentive Payment System or Hospital Value-Based Purchasing Program. Finally, building trust and awareness that integration of research and practice yields better care may motivate patients to seek care at learning health care systems, further motivating health care system leaders to support RCT engagement.

Conclusions and Next Steps

Society’s goals for medicine require integration of the clinical trials and health care delivery enterprises to ensure the efficient and rapid generation and dissemination of knowledge on what care works and how it should be organized and delivered. However, the 2 enterprises operate as a house divided, to the detriment of both and to the detriment of society. Repairing the division requires changes in ethical and regulatory oversight, study design, data infrastructure, and incentive structures for stakeholders in both enterprises. Although these problems are long-standing, the gulf between the care available and the care that could be available has never been wider. Solutions exist, but have not been prioritized. In a series of upcoming articles, we will explore each of these issues in more detail and outline specific recommendations.

Article Information

Accepted for Publication: April 11, 2024.

Published Online: June 3, 2024. doi:10.1001/jama.2024.4088

Corresponding Author: Derek C. Angus, MD, MPH, University of Pittsburgh School of Medicine, 3550 Terrace St, 614 Scaife Hall, Pittsburgh, PA 15260 (angusdc@pitt.edu).

Author Contributions: Dr Angus had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Conflict of Interest Disclosures: Dr Angus reported receiving personal fees from Abionyx, AM Pharma, and Siemens outside the submitted work. Dr Huang reported receiving personal fees from JAMA (consultant fees) and grants from National Institutes of Health (grant no. K24AG06860) during the conduct of the study and personal fees from Wolters Kluwer (royalties) and grants from National Institutes of Health outside the submitted work. Dr Lewis reported receiving personal fees from JAMA as a statistical editor during the conduct of the study and personal fees from Berry Consultants, a statistical consulting firm focusing on the design, implementation, and analysis of adaptive and platform clinical trials, as the senior medical scientist at Berry Consultants, LLC outside the submitted work. Dr Califf reported being a board member for Cytokinetics and Centessa outside the submitted work. Dr Landray reported receiving grants from Wellcome Trust, Gates Foundation, Flu Lab, Boehringer Ingelheim, Novartis, Regeneron, Sanofi, Moderna, Apollo Tx, Verve, BioNTech, and GSK and nonfinancial support from Roche outside the submitted work. Dr Kass reported being partially supported by the FDA through the Intergovernmental Personnel Act (IPA) arrangement and serving on a patient advisory board for Flatiron Health outside the submitted work. Dr Abernethy reported serving on a board of directors for EQRx, Georgiamune, and Insitro and as an advisor for One Health and Sixth Street during the conduct of the study; and being co-chair of the Evidence Mobilization Action Collaborative, National Academy of Medicine, and Digital Health Action Collaborative, and National Academy of Medicine. No other disclosures were reported.

Group Information: The JAMA Summit on Clinical Trials Participants are listed in the Supplement.

Additional Information: The 2023 JAMA Summit was a 2-day meeting held in October 2023 at the JAMA office in Chicago. This article was developed as an outgrowth of the discussions and debate at that meeting and the participants are all collaborators on this effort.

References
1.
Cruess SR, Cruess RL. Professionalism and medicine’s social contract with society. Virtual Mentor. 2004;6(4). doi:
2.
The Institute for Healthcare Improvement. Triple aim and population health. Accessed January 21, 2024.
3.
The Pew Research Center. Living to 120 and beyond. August 6, 2013. Accessed February 23, 2024.
4.
Remes J, Linzer K, Singhal S, et al. Prioritizing Health: a Prescription for Prosperity. McKinsey Global Institute; 2020. Accessed May 14, 2024.
5.
Nundy S, Cooper LA, Mate KS. The quintuple aim for health care improvement: a new imperative to advance health equity. JAMA. 2022;327(6):521-522. doi:
6.
The Good Clinical Trials Collaborative. Guidance for Good Randomized Clinical Trials. May 2022. Accessed February 19, 2024.
7.
Collins R, Bowman L, Landray M, Peto R. The magic of randomization versus the myth of real-world evidence. N Engl J Med. 2020;382(7):674-678. doi:
8.
Sanchez P, Voisey JP, Xia T, Watson HI, O’Neil AQ, Tsaftaris SA. Causal machine learning for healthcare and precision medicine. R Soc Open Sci. 2022;9(8):220638. doi:
9.
Gallifant J, Zhang J, Whebell S, et al. A new tool for evaluating health equity in academic journals; the Diversity Factor. PLOS Glob Public Health. 2023;3(8):e0002252. doi:
10.
Cochrane AL. Effectiveness and Efficiency: Random Reflections on Health Services. The Rock Carling Fellowship, Nuffield Provincial Hospitals Trust; 1972.
11.
The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research; 1978:20.
12.
McGuire TG. Physician agency. In: Culyer AJ, Newhouse JP, eds. Handbook of Health Economics. Elsevier; 2000:461-536.
13.
Wennberg JE. Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ. 2002;325(7370):961-964. doi:
14.
McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645. doi:
15.
Shrank WH, Rogstad TL, Parekh N. Waste in the US health care system: estimated costs and potential for savings. JAMA. 2019;322(15):1501-1509. doi:
16.
Berwick DM. Salve lucrum: the existential threat of greed in US health care. JAMA. 2023;329(8):629-630. doi:
17.
US government. Title 21 Part 50 Protection of Human Subjects. In: National Archives and Records Administration. 21 CFR Part 50. Federal Register; 1981. Accessed February 23, 2024.
18.
US government. Title 21 Part 56 Institutional Review Boards. In: National Archives and Records Administration, editor. 21 CFR Part 56. Federal Register; 1981. Accessed February 23, 2024.
19.
US government. Title 45 Part 46 - Protection of Human Subjects. In: National Archives and Records Administration, editor. 45 CFR Part 46. Federal Register; 1981. Accessed February 23, 2024.
20.
US Food and Drug Administration. FDA Policy for the Protection of Human Subjects. Accessed February 23, 2024.
21.
ClinicalTrials.gov. Accessed January 21, 2024.
22.
World Health Organization. WHO Clinical Trials Registry Platform. Accessed February 19, 2024.
23.
ISRCTN. ISRCTNregistry. Accessed February 19, 2024.
24.
Grand View Research. Clinical Trials Market Size, Share & Trends Analysis Report By Phase (Phase I, Phase II, Phase III, Phase IV), By Study Design, By Indication, Indication By Study Design, By Sponsors, By Service Type, By Region and Segment Forecasts, 2024 - 2030. GVR-1-68038-975-3 ed. Grand View Research; 2023. Accessed January 21, 2024.
25.
Wizemann T, Wagner Gee AS. Envisioning a Transformed Clinical Trials Enterprise for 2030: Proceedings of a Workshop. National Academies Press; 2022.
26.
US Preventive Services Task Force Recommendation Topics. US Preventive Services Task Force. Accessed February 23, 2024.
27.
ClinicalTrials.gov. Sepsis. Accessed January 21, 2024.
28.
Evans L, Rhodes A, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock 2021. Crit Care Med. 2021;49(11):e1063-e1143. doi:
29.
Fanaroff AC, Califf RM, Windecker S, Smith SC Jr, Lopes RD. Levels of evidence supporting American College of Cardiology/American Heart Association and European Society of Cardiology Guidelines, 2008-2018. JAMA. 2019;321(11):1069-1080. doi:
30.
McKenzie MS, Auriemma CL, Olenik J, Cooney E, Gabler NB, Halpern SD. An observational study of decision making by medical intensivists. Crit Care Med. 2015;43(8):1660-1668. doi:
31.
Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5(4):339-342. doi:
32.
Huang AC, Zappasodi R. A decade of checkpoint blockade immunotherapy in melanoma: understanding the molecular basis for immune sensitivity and resistance. Nat Immunol. 2022;23(5):660-670. doi:
33.
Meurer WJ, Lewis RJ. Cluster randomized trials: evaluating treatments applied to groups. JAMA. 2015;313(20):2068-2069. doi:
34.
Ellenberg SS. The stepped-wedge clinical trial: evaluation by rolling deployment. JAMA. 2018;319(6):607-608. doi:
35.
Kasza J, Bowden R, Hooper R, Forbes AB. The batched stepped wedge design: a design robust to delays in cluster recruitment. Stat Med. 2022;41(18):3627-3641. doi:
36.
Kent DM, Hayward RA. Limitations of applying summary results of clinical trials to individual patients: the need for risk stratification. JAMA. 2007;298(10):1209-1212. doi:
37.
Angus DC, Chang CH. Heterogeneity of treatment effect: estimating how the effects of interventions vary across individuals. JAMA. 2021;326(22):2312-2313. doi:
38.
Angus DC. Your mileage may vary: toward personalized oxygen supplementation. JAMA. 2024;331(14):1179-1180. doi:
39.
Freedman DH. Clinical trials have the best medicine but do not enroll the patients who need it. Scientific American. January 1, 2019. Accessed May 7, 2024.
40.
Bradley CK, Wang TY, Li S, et al. Patient-Reported reasons for declining or discontinuing statin therapy: insights from the PALM Registry. J Am Heart Assoc. 2019;8(7):e011765. doi:
41.
Gülmezoglu AM, Duley L. Use of anticonvulsants in eclampsia and pre-eclampsia: survey of obstetricians in the United Kingdom and Republic of Ireland. BMJ. 1998;316(7136):975-976. doi:
42.
Altman D, Carroli G, Duley L, et al; Magpie Trial Collaboration Group. Do women with pre-eclampsia, and their babies, benefit from magnesium sulphate? The Magpie Trial: a randomised placebo-controlled trial. Lancet. 2002;359(9321):1877-1890. doi:
43.
HPS2-THRIVE Collaborative Group. Effects of extended-release niacin with laropiprant in high-risk patients. N Engl J Med. 2014;371(3):203-212. doi:
44.
Jackevicius CA, Tu JV, Ko DT, de Leon N, Krumholz HM. Use of niacin in the United States and Canada. JAMA Intern Med. 2013;173(14):1379-1381. doi:
45.
Roberts I, Yates D, Sandercock P, et al; CRASH trial collaborators. Effect of intravenous corticosteroids on death within 14 days in 10008 adults with clinically significant head injury (MRC CRASH trial): randomised placebo-controlled trial. Lancet. 2004;364(9442):1321-1328. doi:
46.
Hoshide R, Cheung V, Marshall L, Kasper E, Chen CC. Do corticosteroids play a role in the management of traumatic brain injury? Surg Neurol Int. 2016;7:84. doi:
47.
Brower RG, Matthay MA, Morris A, Schoenfeld D, Thompson BT, Wheeler A; Acute Respiratory Distress Syndrome Network. Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. N Engl J Med. 2000;342(18):1301-1308. doi:
48.
Bellani G, Laffey JG, Pham T, et al; LUNG SAFE Investigators; ESICM Trials Group. Epidemiology, patterns of care, and mortality for patients with acute respiratory distress syndrome in intensive care units in 50 countries. JAMA. 2016;315(8):788-800. doi:
49.
Bugin K, Woodcock J. Trends in COVID-19 therapeutic clinical trials. Nat Rev Drug Discov. 2021;20(4):254-255. doi:
50.
Angus DC, Gordon AC, Bauchner H. Emerging lessons from COVID-19 for the US clinical research enterprise. JAMA. 2021;325(12):1159-1161. doi:
51.
Janiaud P, Hemkens LG, Ioannidis JPA. Challenges and lessons learned from COVID-19 trials: should we be doing clinical trials differently? Can J Cardiol. 2021;37(9):1353-1364. doi:
52.
Huang DT, McVerry BJ, Horvat C, et al; UPMC REMAP-COVID Group, on behalf of the REMAP-CAP Investigators. Implementation of the Randomized Embedded Multifactorial Adaptive Platform for COVID-19 (REMAP-COVID) trial in a US health system-lessons learned and recommendations. Trials. 2021;22(1):100. doi:
53.
Närhi F, Moonesinghe SR, Shenkin SD, et al; ISARIC4C investigators. Implementation of corticosteroids in treatment of COVID-19 in the ISARIC WHO Clinical Characterisation Protocol UK: prospective, cohort study. Lancet Digit Health. 2022;4(4):e220-e234. doi:
54.
Horby P, Lim WS, Emberson JR, et al; RECOVERY Collaborative Group. Dexamethasone in hospitalized patients with Covid-19. N Engl J Med. 2021;384(8):693-704. doi:
55.
Angus DC, Derde L, Al-Beidh F, et al; Writing Committee for the REMAP-CAP Investigators. Effect of hydrocortisone on mortality and organ support in patients with severe COVID-19: the REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial. JAMA. 2020;324(13):1317-1329. doi:
56.
WHO Solidarity Trial Consortium. Remdesivir and three other drugs for hospitalised patients with COVID-19: final results of the WHO Solidarity randomised trial and updated meta-analyses. Lancet. 2022;399(10339):1941-1953. doi:
57.
Angus DC. Fusing randomized trials with big data: the key to self-learning health care systems? JAMA. 2015;314(8):767-768. doi:
58.
Kass NE, Faden RR. Ethics and learning health care: the essential roles of engagement, transparency, and accountability. Learn Health Syst. 2018;2(4):e10066. doi:
59.
Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. 2013;(Spec No):S16-S27. doi:
60.
Whicher D, Kass N, Saghai Y, Faden R, Tunis S, Pronovost P. The views of quality improvement professionals and comparative effectiveness researchers on ethics, IRBs, and oversight. J Empir Res Hum Res Ethics. 2015;10(2):132-144. doi:
61.
Faden RR, Beauchamp TL, Kass NE. Informed consent, comparative effectiveness, and learning health care. N Engl J Med. 2014;370(8):766-768. doi:
62.
Kass NE, Faden RR, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. The research-treatment distinction: a problematic approach for determining which activities should have ethical oversight. Hastings Cent Rep. 2013;(Spec No):S4-S15. doi:
63.
Park JJH, Detry MA, Murthy S, Guyatt G, Mills EJ. How to use and interpret the results of a platform trial: users’ guide to the medical literature. JAMA. 2022;327(1):67-74. doi:
64.
Angus DC, Berry S, Lewis RJ, et al. The REMAP-CAP (Randomized Embedded Multifactorial Adaptive Platform for Community-acquired Pneumonia) Study: rationale and design. Ann Am Thorac Soc. 2020;17(7):879-891. doi:
65.
Angus DC, Alexander BM, Berry S, et al; Adaptive Platform Trials Coalition. Adaptive platform trials: definition, design, conduct and reporting considerations. Nat Rev Drug Discov. 2019;18(10):797-807. doi:
66.
Gallo A. A refresher on A/B testing. Harvard Business Review. June 28, 2017. Accessed January 21, 2024.
67.
Sawant NN et al. Contextual Multi-Armed Bandits for Causal Marketing. Accessed May 7, 2024.
68.
The Observational Medical Outcomes Partnership Common Data Model Working Group. OMOP Common Data Model. The Observational Medical Outcomes Partnership. Accessed February 23, 2024.
69.
The Office of the National Coordinator for Healthcare Information Technology (ONC). HealthIT.gov. US Dept of Health and Human Services. Accessed February 23, 2024.
70.
HL7 FHIR Release 5. HL7 International. Accessed February 23, 2024.
71.
The Clinical Data Interchange Standards Consortium (CDISC). Study Data Tabulation Model (SDTM). CDISC. Accessed March 20, 2024.
72.
Bönisch C, Kesztyüs D, Kesztyüs T. Harvesting metadata in clinical care: a crosswalk between FHIR, OMOP, CDISC and openEHR metadata. Sci Data. 2022;9(1):659. doi:
73.
US Dept of Health and Human Services. CMS Blue Button 2.0. Accessed February 23, 2024.
74.
Qian ET, Casey JD, Wright A, et al; Vanderbilt Center for Learning Healthcare and the Pragmatic Critical Care Research Group. Cefepime vs piperacillin-tazobactam in adults hospitalized with acute infection: the ACORN randomized clinical trial. JAMA. 2023;330(16):1557-1567. doi:
75.
Jones WS, Mulder H, Wruck LM, et al; ADAPTABLE Team. Comparative effectiveness of aspirin dosing in cardiovascular disease. N Engl J Med. 2021;384(21):1981-1990. doi:
76.
Inan OT, Tenaerts P, Prindiville SA, et al. Digitizing clinical trials. NPJ Digit Med. 2020;3(1):101. doi:
77.
Fröbert O, Lagerqvist B, Olivecrona GK, et al; TASTE Trial. Thrombus aspiration during ST-segment elevation myocardial infarction. N Engl J Med. 2013;369(17):1587-1597. doi:
78.
Weinfurt KP, Hernandez AF, Coronado GD, et al. Pragmatic clinical trials embedded in healthcare systems: generalizable lessons from the NIH Collaboratory. BMC Med Res Methodol. 2017;17(1):144. doi:
79.
PCORI. The PCORI Strategic Plan. Accessed May 7, 2024.
80.
The United Kingdom National Institute for Health and Care Research. Our Research Performance. NIHR. Accessed January 21, 2024.
81.
Ozdemir BA, Karthikesalingam A, Sinha S, et al. Research activity and the association with mortality. PLoS One. 2015;10(2):e0118253. doi:
82.
Commercial clinical trials in the UK: the Lord O’Shaughnessy review: final report. 2023. Accessed May 26, 2023.
83.
Atkins VMP, Morgan E, Matheson M. Full government response to the Lord O’Shaughnessy review into commercial clinical trials. 2023. Accessed December 8, 2023.