Dr. Samy Suissa is a Co-Founder and Principal Investigator of the Canadian Network for Observational Drug Effect Studies (CNODES). He is Director of the Centre for Clinical Epidemiology, Lady Davis Institute for Medical Research at the Jewish General Hospital and Professor, Departments of Epidemiology and Biostatistics and of Medicine, McGill University, in Montreal, Canada. Dr. Suissa also heads the McGill Pharmacoepidemiology Research Unit. He was the founding Director of the Quebec Research Network on Medication Use. Dr. Suissa also sits on Panalgo’s Strategic Advisory Board.
In Part 2 of this two-part blog post series, Dr. Suissa discusses emerging topics in real-world evidence (RWE), particularly in relation to pharmacoepidemiology. Click here for Part 1 with Dr. Peter Neumann.
Q: Dr. Suissa, Big Data is often described by five characteristics: volume, value, variety, velocity, and veracity. Which one do you think needs more attention?
A: I think the one that receives the least attention is the last one, veracity. The number of databases and database studies has exploded in the last 20 years. Unfortunately, we have also seen an explosion of “fake” data: studies suggesting results that are implausible or simply incorrect. Data veracity will become much more important when it comes to presenting real-world evidence to support the upcoming drug pricing negotiations under the Inflation Reduction Act.
Q: Can you give some examples of these types of flawed studies?
A: One example, published just a year ago, is a Canadian-led international randomized trial of 3,600 women followed for five years that evaluated metformin as a treatment for breast cancer. Metformin is a first-line treatment for Type 2 diabetes. Why would anyone think such a drug should have an effect on cancer?
A second example is another Canadian randomized trial, looking at statins for the prevention of exacerbations and mortality in COPD. Again, what do statins have to do with a lung disease caused mainly by cigarette smoking?
Another recent example is the large European ALL-HEART trial, which evaluated allopurinol as a treatment for ischemic heart disease. Allopurinol is a very effective treatment for gout, yet here it was being investigated for a cardiovascular indication.
All these trials have something in common: they are introduced in a similar way, namely that many retrospective pharmacoepidemiologic studies had suggested that patients with diabetes treated with metformin have a reduced cancer risk, improved cancer prognosis, and improved survival. Same story, same introduction for allopurinol, for statins, for beta blockers. And there are many more such examples that have led to randomized trials showing absolutely no benefit for these drugs.
Q: What was the flaw in these studies?
A: Most of the studies we reviewed were subject to several time-related biases, primarily immortal time bias, which made these drugs appear to have tremendous benefits for these other diseases when in fact they do not.
Q: What is immortal time bias?
A: Immortal time refers to a period of follow-up during which, by design, death or the study outcome cannot occur. In pharmacoepidemiology studies, immortal time typically arises when the determination of an individual’s treatment status involves a delay or wait period during which follow-up time is accrued—for example, waiting for a prescription to be dispensed after discharge from hospital when the discharge date represents the start of follow-up. This wait period is considered immortal because individuals who end up in the treated or exposed group have to survive (be alive and event-free) until the treatment definition is fulfilled. If they have an event before taking up treatment, they are in the untreated or unexposed group. Bias is introduced when this period of “immortality” is either misclassified with regard to treatment status or excluded from the analysis. Immortal time bias is particularly problematic because it necessarily biases the results in favor of the treatment under study by conferring a spurious survival advantage to the treated group.
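To see how this mechanism produces a spurious benefit, here is a minimal toy simulation (not from the interview; all distributions and parameters are invented for illustration). The drug has no effect on survival, yet classifying the wait period before the first prescription as exposed time makes the treated group look protected:

```python
import random

random.seed(42)

N = 100_000       # cohort size
FOLLOWUP = 365    # days of follow-up after hospital discharge

deaths_treated = n_treated = 0
deaths_untreated = n_untreated = 0

for _ in range(N):
    # Survival time drawn from an exponential distribution (mean 400 days);
    # the hypothetical drug has NO true effect on survival.
    survival = random.expovariate(1 / 400)
    # Time from discharge until the first prescription would be filled
    # (the "wait period", mean 60 days).
    rx_time = random.expovariate(1 / 60)

    if rx_time <= min(survival, FOLLOWUP):
        # Patient survived long enough to fill the prescription: classified
        # as treated, with the immortal wait period counted as exposed time.
        n_treated += 1
        if survival <= FOLLOWUP:
            deaths_treated += 1
    else:
        # Died (or follow-up ended) before the prescription was filled:
        # classified as untreated.
        n_untreated += 1
        if survival <= FOLLOWUP:
            deaths_untreated += 1

risk_treated = deaths_treated / n_treated
risk_untreated = deaths_untreated / n_untreated
print(f"1-year mortality, treated:   {risk_treated:.2%}")
print(f"1-year mortality, untreated: {risk_untreated:.2%}")
```

The treated group appears protected even though the drug does nothing: everyone in it was, by definition, alive and event-free at the time the prescription was filled, while early deaths are pushed into the untreated group.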
Q: Why have Congress and the FDA pushed for real-world evidence studies?
A: First, historically, observational studies were used to complement randomized controlled trial (RCT) findings mainly for safety – identifying new adverse reactions once many patients are using the treatment. An important objective of real-world evidence is to repurpose older drugs for new indications. For these, we use existing data, such as claims databases and electronic health records, as well as observational study designs. The modern way of doing this, which unfortunately was not used in the exemplar studies above, is to emulate a randomized controlled trial. This is the approach the FDA will eventually accept.
Q: How can RWE studies be made more effective?
A: First, we will need to move away from old-school ways of conducting observational studies that contain the immortal time and other time-related biases I discussed, and start using emulated randomized trial approaches. An emulated randomized trial using real-world observational data is appealing to regulatory agencies, which are familiar with the randomized trial. These include real-world studies for head-to-head comparisons and studies comparing drugs with non-use, which present some challenges.
For example, we conducted a study of proton pump inhibitors (PPIs) in idiopathic pulmonary fibrosis (IPF) using the Clinical Practice Research Datalink (CPRD). Eventually we’ll be able to do such a study with Panalgo’s IHD Analytics platform in 10 minutes rather than the weeks it took us. We used a prevalent new-user design comparing PPI use versus non-use in this population of patients with IPF, a design that resembles a randomized controlled trial. Patients with IPF who initiated PPIs were matched to IPF patients at the same stage and duration of the disease who did not receive a PPI. The approach is based on matching on a physician visit. This is crucial because at that visit we had evidence that the patient had the opportunity to receive a prescription for a PPI but did not. This, together with matching on time and on time-dependent propensity scores, gives credibility and veracity to the results.
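As a rough illustration of the matching step just described, here is a hypothetical sketch with invented data (not the actual CPRD analysis, and omitting the propensity score component). Each PPI initiator is matched to a comparator who had a physician visit at a similar disease duration without yet having started a PPI, so both members of the pair enter follow-up at the same point in the disease and both demonstrably had the opportunity to receive a prescription:

```python
import random

random.seed(1)

# Toy cohort: each patient has a set of physician visits, recorded as
# months since IPF diagnosis, and (for some) a PPI start month.
patients = []
for pid in range(500):
    visits = sorted(random.sample(range(60), k=random.randint(3, 8)))
    ppi_start = random.choice(visits) if random.random() < 0.3 else None
    patients.append({"id": pid, "visits": visits, "ppi_start": ppi_start})

initiators = [p for p in patients if p["ppi_start"] is not None]

matched_pairs = []
used = set()
for p in initiators:
    t0 = p["ppi_start"]  # disease duration at PPI initiation
    for q in patients:
        if q["id"] == p["id"] or q["id"] in used:
            continue
        # Comparator must not have started a PPI by (roughly) time t0...
        if q["ppi_start"] is not None and q["ppi_start"] <= t0 + 3:
            continue
        # ...and must have a physician visit at a similar disease duration,
        # i.e. a documented opportunity to be prescribed a PPI.
        if any(abs(v - t0) <= 3 for v in q["visits"]):
            matched_pairs.append((p["id"], q["id"], t0))
            used.add(q["id"])
            break

print(f"{len(initiators)} initiators, {len(matched_pairs)} matched pairs")
```

Anchoring the comparator's start of follow-up to a real visit at the same disease duration is what removes the immortal wait period that plagued the earlier studies; in the actual design, matching would also use time-dependent propensity scores.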
Q: How are HEOR teams that are not directly involved with the clinical teams going to receive these emulated trials?
A: This is an important question. I think everyone is still in the learning phase right now. There are initiatives on this topic funded by the FDA, the NIH, and in Canada as well. One of them is RCT DUPLICATE, in which I’m involved, which is duplicating several completed trials, selected with the FDA, using real-world data, and we are learning from this process. A paper on the first 32 duplicated trials was published recently in JAMA. Once we have sufficient evidence, I think the FDA, other regulators, and pharma will be able to evaluate these results to gain a better understanding of the value of such real-world evidence studies.
Q: What has been the impact of single-arm trials and real-world evidence on regulatory decision-making?
A: Between January 2002 and December 2021, the FDA’s Office of Oncologic Diseases granted 563 new indications. Of these, 31% were based on single-arm trials. This is quite an important number, with most of these granted in the latter part of this period. Here again, we showed that such trials can easily be affected by time-related biases when compared with historical external control cohorts. Caution will be needed in conducting and interpreting these types of studies.
Q: Where do we stand today on RWD and RWE?
A: Real-world data and real-world evidence are becoming accepted tools for regulators, for example in studies of new indications for approved drugs. Expanding the indications of older drugs is valuable, but it needs to be done right. First and foremost, critical reviews of observational studies of these new indications are essential to make sure that time-related and other biases are minimized. Here again, observational studies that emulate randomized trials can provide more accurate real-world effects for these potential new indications. The incident and prevalent new-user designs are valuable for avoiding many of the time-related biases and for sparing the cost and effort of conducting futile randomized trials.
If you haven’t read part 1 of this series, where Peter Neumann, ScD, discusses his thoughts on emerging topics in real-world evidence, you can do so by clicking here.