Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.
In This Week’s Podcast
For the week ending May 10, 2024, John Mandrola, MD, comments on the following news and feature stories: Inclisiran update, safety of sodium channel blocking anti-arrhythmic drugs (AADs), analytic flexibility, the work-up of patients with heart failure (HF), and blood pressure (BP) in older patients.
Reader Feedback – Inclisiran
Last week, I discussed inclisiran, a small interfering RNA that inhibits hepatic PCSK9 production. The trial was VICTORION-INITIATE, a randomized controlled trial (RCT) of using inclisiran to get to goal LDL cholesterol. This was clearly a marketing-disguised-as-science trial. I bemoaned the fact that the drug gained US Food and Drug Administration (FDA) approval without a cardiovascular (CV) outcomes trial.
Many listeners wrote in to tell me about ORION 4, a 15,000-patient trial of inclisiran vs placebo in patients with established atherosclerotic CV disease and a primary outcome of major adverse cardiac events (MACE). It was delayed a bit during the pandemic and is scheduled to conclude in 2026.
Ok, thanks for that information. But I am still perplexed why results of such a trial don’t precede approval. What if the drug has no effect on MACE? What if there is harm? Both of these are unlikely, but there will have been many years for the drug company to market and build therapeutic momentum.
Dr John Cooper provided another important fact about inclisiran: in the UK inclisiran sells for approximately $60 per injection to the National Health Service (NHS), and this is the reason that the National Institute for Health and Care Excellence (NICE) is recommending it in preference to other, more expensive PCSK9 antibody treatments.
What I will now say seems too obvious to even waste words and time on. Cost is a major factor in translating evidence. Many therapies in modern cardiology would be much easier to use if they cost the same as generic metoprolol or lisinopril.
Take ARNI drugs or SGLT2 inhibitors in HF with preserved ejection fraction (HFpEF). The drugs probably exert some mildly positive effect on average in patients similar to those in the trials. But when they add many hundreds of dollars per month to the average patient who is already taking 10 drugs, it’s hard to justify.
Inclisiran, PCSK9 inhibitors, and bempedoic acid may have some small incremental benefit in addition to statin drugs in patients with severe vascular disease, but American patients already pay hundreds of dollars per month for other drugs. Again, it’s hard to justify adding more cost to their often-fixed budget for an absolute risk reduction of less than 1% to 2% in nonfatal outcomes over 5 years.
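To put those numbers in perspective, an absolute risk reduction (ARR) translates directly into a number needed to treat (NNT = 1/ARR). A quick back-of-the-envelope sketch in Python, using the rough 1% to 2% figures above as illustrative round numbers, not data from any specific trial:

```python
# NNT = 1 / ARR: what a 1%-2% absolute risk reduction over 5 years means.
# Illustrative round numbers, not results from a specific trial.
for arr in (0.01, 0.02):
    nnt = round(1 / arr)
    print(f"ARR {arr:.0%} -> treat {nnt} patients for 5 years to prevent one event")
```

At NHS-level prices, an NNT of 50 to 100 over 5 years may be a reasonable buy; at hundreds of dollars per month per patient, it is a much harder sell.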
So, I find it easier to add therapies with modest effect sizes when they are at NHS prices.
Thanks for the feedback. Keep it up.
Sodium Channel Anti-arrhythmic Drugs
On the April 5th podcast, I discussed a small series from the University of Pennsylvania group in which they reported on 34 patients with non-ischemic cardiomyopathy (NICM), high-burden premature ventricular contractions (PVCs), and a back-up implantable cardioverter-defibrillator (ICD). They reported that the dreaded class 1C sodium-channel AADs, flecainide and propafenone, actually worked to suppress PVCs; not only did they not cause pro-arrhythmia, they increased left ventricular ejection fraction (EF) in many cases.
I used that paper to launch into one of the biggest mistakes we have made in electrophysiology (EP) — that is, the misinterpretation of the CAST trial, which reversed the use of 1C AADs in post-myocardial infarction (MI) patients with PVCs. The major harm noted in CAST (number needed to kill, 29) induced regulators to put a black box warning on flecainide and proscribe its use in patients with any degree of “structural heart disease.” I pointed you all to a great editorial on this by Beth Israel (Boston) authors.
Well, this week, Europace has published another study further supporting the safety of 1C AADs, when used wisely.
This was a post-hoc analysis from the famous EAST AFNET 4 trial of early rhythm control (ERC) vs standard care in patients with newly diagnosed atrial fibrillation (AF). Recall that EAST AFNET showed a reduction in MACE outcomes from ERC, though I have many criticisms of that finding. Recall also that EAST was not an ablation trial. Less than one in five patients randomly assigned to ERC received an AF ablation.
The post-hoc analysis, first author Andreas Rillig, set out to describe outcomes in EAST AFNET patients with and without established heart disease who were taking sodium-channel blockers (SCB).
By the way, this is a good use of observational data from an RCT, as long as it’s not over-interpreted. Let’s see what they did and said.
Recall that EAST randomly assigned patients with AF to an ERC strategy or standard of care. The ERC was doctor’s choice. The protocol discouraged use of SCB (flecainide and propafenone) in patients with reduced LVEF and recommended stopping the SCB if the QRS widened.
The primary endpoint was a composite of CV death, stroke, hospitalization for HF (HHF), or acute coronary syndrome (ACS). There was a safety outcome as well, including an “AE related to AAD” outcome.
In the roughly 1400 patients in the ERC group, about half were using SCB and half were not. These two groups formed the focus of the analysis: SCB vs SCB-never.
Results. There were baseline differences, as you’d expect. Patients treated with SCB were younger, more often female, and less often had structural heart disease. SCB patients also had lower CHA2DS2-VASc scores, but there were no differences in use of oral anticoagulants.
One interesting descriptive finding was that the number of patients with sinus rhythm did not differ in patients on or not on SCBs. So they weren’t any more or less effective than other AADs.
Most patients on SCB did not have HF. But of the roughly 700 patients on SCB, 177 had HF: 2% had HF with reduced EF, 21% had HF with mildly reduced EF, and 77% had HFpEF. During the trial, no relevant changes were noted in EF, nor was there any worsening of HF class.
Ok, here are the efficacy and safety outcomes:
ERC patients on SCB had 45% fewer primary outcome events than ERC patients not on SCB; 3% vs 4.9%. Patients on SCB also spent fewer nights in the hospital, which was the second co-primary endpoint.
The primary safety endpoint (death, stroke or serious adverse event related to rhythm control) also occurred numerically less often in the ERC on SCB arm, though it did not reach statistical significance.
Serious AEs related to rhythm control also did not differ between the SCB and no-SCB groups.
In the subgroup of patients with coronary artery disease (CAD), stable HF, and LV hypertrophy, there were also no differences in safety outcomes.
Comments. Similar to the UPenn case series, these observations also suggest that 1C drugs, flecainide and propafenone, can be used safely.
The massive caveat here is that these reassuring outcomes were obtained within the confines of a randomized trial that had strict protocols.
Another caveat is that although these data were derived from an RCT, they are observational — doctors, not randomization, chose the patients who were put on a SCB.
In the discussion section of the paper, the EAST authors cite other observational data that also reports no increases in AEs in patients treated with 1C drugs.
The authors also discuss the differences in EAST vs CAST patients. The CAST trial studied SCB in post-MI patients with LV dysfunction. Sub-analyses of CAST suggest that the greatest risk from AADs was in patients with ongoing ischemia. Such patients were excluded from EAST.
I agree with the authors that the black box warning on flecainide and the proscription of its use in patients with any structural heart disease went too far. That proclamation, and the practice pattern it created, is an example of incorrect use of evidence-based medicine.
For instance, I have cared for many patients with revascularized CAD and normal LV function who have taken propafenone and flecainide and done well. Their charts are full of notes saying that the patient understands that this goes against current patterns.
I don’t want to give listeners the impression that I love AADs. The older I get, the less I use AADs. But they do have a role. SCB are especially good at suppressing short episodes of paroxysmal AF and frequent premature atrial contractions.
I want to close with a serious warning: SCBs need to be used with extreme caution. They can cause life-threatening AEs, such as pro-arrhythmia from 1:1 atrial flutter and ventricular tachycardia. It is crucial to watch for QRS widening. Propafenone metabolism is highly genetically variable. These AADs should probably be used with beta-blockade so as to prevent 1:1 conduction of atrial flutter. What’s more, sodium channel blockade is by definition negatively inotropic, so SCB can exacerbate LV dysfunction. Be careful.
Analytic Flexibility
I don’t think I’ve ever covered a paper from the Journal of Clinical Epidemiology. This week is a first, and it’s one of the most important papers I will cover this year. Seriously.
The paper deals with how authors analyze data. Consider that when you read a study in a journal, the authors say in the statistics section that they analyzed the data in this or that way. The key is that it is one way. One.
But it turns out, especially for observational studies, that there are many ways investigators could have analyzed the data.
Dr Dena Zeraatkar and colleagues from McMaster University actually show that there are more than a quadrillion ways to analyze the dataset of a typical observational study. I kid you not: when you add up all the choices and combinations of covariates and potential statistical models, there is an almost infinite number of ways to analyze the data.
And it matters. Here is what they did.
They chose the topic of red meat consumption and mortality. They started with the best systematic review on the topic. Within it were 15 studies depicting 70 unique analytic methods. Recall that all the papers in the systematic review had made it through peer review and were believed to be reasonable.
They then made a list of all the different covariates and analytic methods and came up with a quadrillion different ways of combining these choices. They took a random sample of these and came up with about 1400 different ways to analyze a different dataset on red meat and mortality, this one from the National Health and Nutrition Examination Survey (NHANES).
They called each analytic method a specification; 200 of the specifications had implausibly large confidence intervals (CIs) and were excluded.
This left them with 1200 specifications, which they then applied to the NHANES data.
Their findings:
The median hazard ratio (HR) was 0.94 (CI, 0.83–1.05) for the effect of red meat on all-cause mortality. So that is not significant.
The range of HRs was large. They went from 0.51 (a 49% reduction in mortality) to 1.75 (a 75% increase in mortality). Of all specifications, 36% yielded HRs greater than 1.0 and 64% less than 1.0.
As for significance at the P ≤ .05 level, only 4% (or 48 specifications) were statistically significant. Of these, 40 analytic methods indicated that red meat consumption reduced mortality and eight indicated that it increased mortality. You will notice that about 5% of analyses will yield significant results even when there is no overall effect.
Nearly half the specifications yielded unexciting point estimates of HRs between 0.90 and 1.10.
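The quadrillion figure sounds fantastic, but it falls straight out of combinatorics: analytic choices multiply. Here is a minimal Python sketch; the specific counts (40-some covariates, a handful of models and restrictions) are hypothetical stand-ins, not the exact choices catalogued in the paper:

```python
import random

# Hypothetical analytic choices for an observational diet-mortality study.
N_COVARIATES = 42        # each candidate confounder: include or exclude
EXPOSURE_DEFS = 4        # e.g., continuous intake vs quantiles
OUTCOME_MODELS = 4       # e.g., different regression models
SUBGROUP_RULES = 16      # eligibility/subgroup restrictions

# Every combination of choices is one "specification."
n_specs = (2 ** N_COVARIATES) * EXPOSURE_DEFS * OUTCOME_MODELS * SUBGROUP_RULES
print(f"{n_specs:.2e} possible specifications")  # ~1.13e15: about a quadrillion

# A specification-curve (multiverse) analysis fits only a random sample of them.
random.seed(0)
sample = [
    {
        "covariates": tuple(i for i in range(N_COVARIATES) if random.random() < 0.5),
        "exposure": random.randrange(EXPOSURE_DEFS),
        "model": random.randrange(OUTCOME_MODELS),
        "subgroup": random.randrange(SUBGROUP_RULES),
    }
    for _ in range(1400)
]
print(len(sample), "randomly sampled specifications to actually fit")
```

The point of the sketch is that no single term is outlandish; 42 include-or-exclude covariate decisions alone contribute over 4 trillion combinations, and a few multiplicative choices on top push the total past a quadrillion.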
Comments. I have a column coming on theHeart.org | Medscape Cardiology on the matter of how analytic choices can affect outcomes.
A few years ago, I reported on a paper from Brian Nosek’s group in Virginia. They did the famous experiment wherein 29 different teams of statistics experts analyzed a dataset of European soccer leagues. The question was: did players with dark skin receive more red cards?
Similar to the McMaster group, Nosek found that different experts chose different ways to analyze the data: about two-thirds found a significant association and one-third found no significant association.
To me, and I hope to you, these are shocking revelations. I had no idea that the choices authors make about how to analyze data made that much difference in the results.
Every time you read an observational study that finds X or Y, you should be asking: oh, I wonder how this finding would hold up to a specification curve analysis or multiverse analysis? If you are reading the result in a journal, it is likely positive, but is this one of the outliers? Would a thousand different analytic choices deliver the same positive finding?
I spoke with Dr. Zeraatkar, and she told me that multiverse analyses were not super hard to do. The biggest impediment was not the computing power but the feeling among experts that they know the best way to analyze the data. Which is funny, since Zeraatkar estimated there were 10 quadrillion analytic choices in her red meat analysis.
The Work-Up of Patients with HF
A paper from University of California San Francisco (UCSF) researchers, first author Matthew Durstenfeld, has brought up the issue of how to work up patients who present with HF.
The algorithm, which is the current therapeutic fashion, says go to the cath lab to assess for CAD, because revascularization will clearly help improve ventricular function.
In the old model of cardiology wherein most cardiac disease stems from clogged pipes, this strategy makes good sense. The STICH trial of coronary artery bypass graft vs medical therapy in patients with ischemic LV dysfunction showed a small survival advantage to surgery at 10 years, though the primary analysis of STICH, at 5 years, was not significant.
Then we have REVIVED BCIS2, which randomly assigned patients with LV dysfunction, CAD amenable to percutaneous coronary intervention (PCI), and viable myocardium to revascularization vs simple tablets. There were no differences in outcomes and no improvement in LV function when severe CAD was stented.
REVIVED BCIS2, to me, is one of the most important trials of the last decade. It is shocking that PCI in these near-perfect-for-PCI patients did not beat simple tablets. But it didn’t.
Well, the UCSF team was interested in a slightly different population of patients — those with any sort of HF — HFrEF or HFpEF. And UCSF General Hospital is a safety net hospital serving patients with less access to healthcare.
They did not use a randomized design. Instead they performed an observational look-back study over a decade, studying the association between coronary assessment within 30 days of an HF diagnosis and death.
They did say they used a target-trial emulation approach. As the name implies, this attempts to emulate randomization. For instance, a target-trial emulation can take advantage of variations in practice and perform careful propensity matching so as to make the groups as similar as possible.
They can also do sensitivity analyses after the fact to assess the robustness of the findings. For instance, you can do the analysis again with different choices — see the previous topic. They also did falsification analysis wherein they look at causes of death that would not associate with coronary assessment, such as motor vehicle crashes.
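The propensity-matching step at the heart of that emulation can be sketched in a few lines. This is a toy greedy 1:1 nearest-neighbor matcher on precomputed scores; a real target-trial emulation fits a propensity model, applies calipers, and more, and all the IDs and numbers here are made up:

```python
def greedy_match(treated, controls):
    """Match each treated unit 1:1 to the nearest unmatched control by score."""
    available = dict(controls)  # id -> propensity score, copied so we can remove
    matches = []
    for tid, tscore in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # nearest remaining control by absolute score difference
        cid = min(available, key=lambda c: abs(available[c] - tscore))
        matches.append((tid, cid))
        del available[cid]  # each control is used at most once
    return matches

# Toy scores standing in for fitted probabilities of receiving coronary assessment
treated = {"t1": 0.30, "t2": 0.55, "t3": 0.80}
controls = {"c1": 0.28, "c2": 0.50, "c3": 0.90, "c4": 0.10}
print(greedy_match(treated, controls))  # [('t1', 'c1'), ('t2', 'c2'), ('t3', 'c3')]
```

Note that one control (c4) goes unused because it has no close counterpart, which is why matched cohorts are smaller than the raw groups, as happened in this study.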
In the end, they had two groups: those who had a coronary assessment and those who did not. About 4000 patients had coronary assessment and 11,000 did not. There were many differences in baseline characteristics.
Their first analysis was to show that baseline characteristics were associated with the chance of having a coronary assessment. They found that women, older individuals, and those with documented unstable housing were less likely to have coronary assessment.
These patterns were not explained by coronary risk: for example, diabetes and HIV were associated with lower odds of testing, even after accounting for chronic kidney disease.
The next step was matching and target trial emulation. They ended up with a group of 627 patients who had coronary assessment and 5300 who did not.
Mortality was 16% lower in those who had coronary assessment (HR, 0.84; CI, 0.72-0.97).
Patients who had coronary assessment also had higher use of guideline directed medical therapy (GDMT). Patients who had their coronaries assessed were 7 times more likely to have revascularization.
Mortality with revascularization was 12% lower in the assessment arm, though this did not meet statistical significance.
The authors concluded:
In a safety-net population, disparities in coronary assessment after HF diagnosis are not fully explained by coronary artery disease risk factors.
Early coronary assessment was associated with improved HF outcomes possibly related to higher rates of revascularization and guideline-directed medical therapy.
But with low certainty that this finding is not attributable to unmeasured confounding.
Comments. You already know what I will say.
The authors made a gallant effort to emulate a trial, but it is still an observational analysis of 600 patients who got assessment and more than 5000 who did not. I laud them in the highest way for stating that clearly in their conclusions. “Low certainty that this finding is not attributable to unmeasured confounding” ought to be a smart phrase added to almost every observational study manuscript in all of biomedicine.
They used ICD codes and electronic health records, and there are likely confounding factors between the two groups. As they point out in the limitations section, patients who have a coronary assessment are more likely to follow up as outpatients.
Proponents of the clogged-pipe theory could say that the 7-times-higher revascularization rate is what drove the better outcomes, but I could also argue it was the higher rates of GDMT, or simply that younger patients with less diabetes (ie, healthier patients) got the assessment.
I could also argue that a specification curve analysis of this data might yield many analyses showing a null association. The UCSF team used one analytic method. What if they had used a thousand different methods?
How best to work up patients with the most common diagnosis in all of hospital cardiology (HF) is an important question.
With the negative results of the REVIVED BCIS2 trial as background, the way to answer the question in all-comers with HF, is not to look back with observational methods, but to randomize these patients. Then follow them.
This sort of thing is happening in many different clinical scenarios in Denmark, where there is a culture of randomization, but not so much in the United States.
My main conclusion from this work is that the next major advance in cardiology might be a way to organize a randomization network, wherein patients with uncertain approaches are randomized to one or another strategy.
Systolic Blood Pressure in the Very Old
Circulation has published a provocative observational analysis that goes against one of my most common frustrations — over-treatment and iatrogenic harm in the elderly.
First my frustration. As an EP doc, I see a bunch of older patients, because they get AF. In my almost three decades of practice, one of the things that bothers me most is when over-exuberance harms older patients.
BP therapy is a common issue. I like to say the purpose of BP treatment is to get old. We treat BP in 40 to 50-year-olds so they make it to 80 or 90 years. When they get to 80 or 90, they have won, and our job is not to give back the win.
Yet we so often do. In our misapplication of the SPRINT trial, we sometimes continue to shoot for a systolic BP < 130 mm Hg. Some patients tolerate that, but what do you think happens when they get AF or a viral infection? They stand up and go boom. A fall lands them in the hospital with a broken femur, and you know where that often leads.
That is why the paper in Circulation, first author Bernhard Haring, got my attention. This is a scary paper.
The German-led team looked at data from the Women’s Health Initiative (WHI) to study the relationship between systolic BP and longevity. The specific question was which systolic BP level in older women had the highest probability of survival to 90 years.
Recall that the WHI was a famous trial that overturned the practice of using hormone replacement therapy (HRT) in post-menopausal women for CV protection because HRT actually increased CV outcomes.
About 16,000 women enrolled in the WHI were eligible to reach 90 years of age before 2020. The authors excluded women with competing risks, such as cancer, CV disease, stroke, HF, or diabetes. This is key.
BP was measured way back at baseline — 1993 to 1998. Then it was measured every year till 2005.
The outcome of interest was survival until 90. They then measured the probability of surviving to 90 for all combinations of systolic BP.
This was a 20-year study. About 60% of the women in the WHI made it to 90, which speaks to healthier patients being enrolled in trials.
Here was the main finding:
Women with a systolic BP between 110 and 130 mm Hg at ages 65, 70, 75, and 80 years had a 38%, 54%, 66%, or 75% absolute probability of surviving to 90 years of age, respectively.
They then separated the women into groups based on systolic BP: less than 110, 110 to 130, 130 to 139, 140 to 159, and higher than 160. The survival curves were clear: the probability of surviving to 90 years of age was lower at higher systolic BP levels. The best survival hovered around 100 to 110 mm Hg.
Figure 2 shows the absolute probabilities for patients who were taking BP meds. The patterns are the same, though the absolute probability of making it to 90 years was lower, and the greatest probability was at around 120 mm Hg, though not much different from that at 140.
The authors concluded:
For women over 65 years of age with low cardiovascular disease and other chronic disease risk, an SBP level <130 mm Hg was found to be associated with longevity. These findings reinforce current guidelines targeting an SBP target <130 mm Hg in older women.
Comments. This is a dangerous study.
First, it is observational. Contrary to what the authors conclude, it does not support treating to a target. Patients who had BPs that low are surely healthier, and it is those factors, not necessarily the meds, that drove better chances for survival.
Second, the exclusion criteria should have made it into the main abstract. They took out nearly every vulnerable woman. The population they studied had no other significant diseases. But that is often not recognized in practice.
If you are treating an elderly frail woman with cancer and other co-morbidities to a systolic BP of 120, I would say you fail Medicine and Common Sense 101.
Please don’t do that. Please be careful with older patients. Older patients deserve our greatest caution. Two-times old is still old. Most of the BP trials, cholesterol-treatment trials, all primary prevention trials were done in younger patients. Older patients have higher risk.
This study finds exactly what you would expect. If you have no comorbid conditions, and are not frail, and you have been gifted great blood pressure you are more likely to live to 90. It’s survivorship bias on steroids.
That should never be translated to “we can create the same scenario with three anti-hypertensive meds.”
My thesis that elderly patients incur more harm from low BP than high BP still holds. I wish Circulation editors had made the authors be more careful in their conclusions.
© 2024 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: May 10, 2024 This Week in Cardiology Podcast - Medscape - May 10, 2024.