Thursday, December 29, 2011

Cardiology JC 12.20.11

Dr. Branch: Reduced Dietary Salt for the Prevention of Cardiovascular Disease

Click here to watch a recording of the presentation.

Dr. Branch presented a recent meta-analysis from the Cochrane Collaboration which reviewed the published evidence that reductions in dietary salt intake impact cardiovascular morbidity and mortality.

The authors found 7 studies meeting their inclusion criteria, which enrolled normotensive, hypertensive, or mixed populations, with a single study looking at salt reduction in patients with decompensated heart failure.

Their results were, as they say, "consistent with the belief that salt reduction is beneficial in normotensive and hypertensive people."  However, the size of that beneficial effect was minuscule for most outcomes, and not especially consistent with recent, highly publicized statistical projections of the impact large-scale salt reduction might have on public health.  The authors found a nonsignificant reduction in mortality among normotensives and hypertensives (relative risk 0.67 and 0.97, respectively, both confidence intervals including 1.0).  They found "no strong evidence" of a beneficial effect on cardiovascular morbidity (defined as MI, stroke, CABG, PTCA or death from cardiovascular disease) in normotensives or hypertensives either, although one study had appeared to show a large relative risk reduction in its early phase.

Reductions in blood pressure consistent with previously published findings were observed - normotensives tended to drop by about 1 mmHg in systolic blood pressure, hypertensives and those with heart failure by about 4 mmHg.  One disturbing finding was that in the one trial of heart failure patients, low-salt diets seemed to increase morbidity and mortality.  Strangely, none of the studies reviewed reported any outcomes in terms of health-related quality of life.

Overall, while this report did suggest that some people may experience small falls in blood pressure when voluntary salt consumption is reduced, it does not seem to support the idea that salt consumption is a major public health problem.  Moreover, since most salt consumption comes from processed foods, it seems clear that to the extent that this is a problem (which, again, is not clear) it is probably better resolved by regulators than individual patients and their PCPs.

Dr. Vij: Efficacy of Statins for Primary Prevention in People at Low Cardiovascular Risk

 Click here to watch a recording of the presentation.  Due to technical issues, this recording has no audio.

Dr. Vij presented a meta-analysis published in CMAJ which reviewed the available evidence on the use of statins for primary prevention in people at low cardiovascular risk.  In terms of the inclusion criteria applied to studies, "low-risk" was defined as having a projected 10-year risk of death from a cardiovascular cause or non-fatal MI of less than 20%.  

The main outcomes assessed were the risk of all-cause mortality, MI, stroke, and adverse events related to statin therapy.  The authors also undertook meta-regression analysis to assess whether a number of other variables influenced the association of statins with these outcomes, including the absolute and percent change in serum LDL during the trial.

They reported their results in terms of pooled relative risk (RR), absolute risk reduction (ARR), and number needed to treat (NNT).  Table 3 in the article gives their full report of results, but here are the highlights:
  • The RR of all-cause mortality was 0.9 (95% CI 0.79-1.03), corresponding to an ARR of 0.42%.
  • The RR of MI was 0.68 (95% CI 0.53-0.87), although, weirdly, the relative risk of fatal MI was not significantly different.  This corresponded to an ARR of 0.46%.
  • The RR of stroke was 0.89, with a confidence interval crossing unity.
  • The RR of revascularization was 0.66 (95% CI 0.55-0.78), corresponding to an ARR of 0.77%. 
  • No adverse event, including rhabdomyolysis, cancer, and incident diabetes, was significantly more common in the statin group.
  • Overall, no absolute risk reduction was greater than 0.77%.
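
As a quick illustration of how these measures relate arithmetically, here is a minimal sketch using only the figures in the bullets above (the back-calculated control-group risks are approximate, and the interpretive caveats about NNTs discussed below still apply):

```python
# Relationship between relative risk (RR), absolute risk reduction (ARR),
# and number needed to treat (NNT), using the figures reported above.
# NNT is the reciprocal of the ARR; the control-group risk is back-calculated
# from RR and ARR and is therefore only approximate.

outcomes = {
    # outcome: (relative risk, absolute risk reduction)
    "all-cause mortality":   (0.90, 0.0042),
    "myocardial infarction": (0.68, 0.0046),
    "revascularization":     (0.66, 0.0077),
}

for name, (rr, arr) in outcomes.items():
    # If RR = treated_risk / control_risk and ARR = control_risk - treated_risk,
    # then control_risk = ARR / (1 - RR).
    control_risk = arr / (1.0 - rr)
    nnt = 1.0 / arr
    print(f"{name}: control risk ~{control_risk:.1%}, ARR {arr:.2%}, NNT ~{nnt:.0f}")
```
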
Dr. Frohlich pointed out a number of conceptual flaws with this meta-analysis, and expressed skepticism that the results of the individual trials were actually comparable in this way.  He also noted that such undertakings are prone to massive selection bias, since none of these trials enrolled random cohorts of healthy people; everyone in these trials was referred by a doctor who thought that, despite their low risk, they might benefit from a statin.  Dr. Barbant agreed that it is generally best, if possible, to make decisions based on original trial data.

This study provides a good opportunity to consider the mechanics and merits of the NNT.  Many clinicians find it a very intuitive measure of treatment effect, and it has been shown that both physicians and patients understand outcomes better, and tend to make more conservative decisions, when data are presented in terms of ARR and NNT rather than in terms of relative outcome measures such as RR or hazard ratios.

However, the NNT has limitations.  First, it always needs to be considered in light of the background risk of the condition.  If this is low, the absolute value of the ARR will always be low, and even a highly efficacious intervention will be associated with a large NNT.  Second, it has to be associated with a period of time, since it changes with duration of therapy and there's no good reason to expect that change to be linear.  For example, if 10 patients are given a lethal drug which has a mean onset of action of 5 minutes and a maximum onset of 10 minutes, the NNT to produce one death at 1 minute is infinite, the NNT at 5 minutes is 2, and the NNT at 10 minutes is 1.  Thus, whenever confronted with an NNT, one's first question should be "for how long?"  When these limitations are taken into account, it's unclear how one could calculate an NNT in a study like this one or how one should interpret it, and in this case it's probably best to stick with the relative risk.  For those interested, there is an excellent discussion of NNT here.
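
To make that example concrete, here is a minimal sketch; the individual times to death are invented purely to match the stated mean and maximum onset, and the comparison group is assumed to have no deaths at all:

```python
# The "lethal drug" example above: the NNT depends entirely on how long you wait.
# Times to death (minutes) are chosen so that none occur within the first minute,
# half have occurred by 5 minutes (mean onset 5 minutes), and all by 10 minutes.
onset_minutes = [1.5, 2, 3, 4, 5, 5.5, 6, 6.5, 6.5, 10]

def nnt_at(t, onsets):
    """NNT to produce one death by time t, versus an untreated group with zero deaths."""
    risk_treated = sum(1 for onset in onsets if onset <= t) / len(onsets)
    risk_difference = risk_treated - 0.0   # untreated risk assumed to be zero
    return float("inf") if risk_difference == 0 else 1.0 / risk_difference

for t in (1, 5, 10):
    print(f"NNT at {t} min: {nnt_at(t, onset_minutes)}")
# NNT at 1 min:  inf  (no one has died yet)
# NNT at 5 min:  2.0  (half the patients have died)
# NNT at 10 min: 1.0  (everyone has died)
```
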

Dr. Lahsaeizadeh: Ruling Out Acute MI With Sensitive Troponin Assays

 Click here to watch a recording of the presentation.

Dr. Lahsaeizadeh presented two trials looking at the characteristics of biochemical tests for acute (non-ST elevation) myocardial infarction. 

In the first article, from the JACC, Body et al. tested a novel fifth-generation troponin-T assay in a prospective cohort of patients presenting to the ED with chest pain, and then validated their preliminary results in a subsequent prospective study.  They were primarily interested in the ability of this novel test to rule out MI with a single measurement on presentation.

The second study, by Scharnhorst et al., published in Clinical Chemistry, looked at the characteristics of a conventional, second-generation troponin-I assay when a novel interpretive strategy was applied to the results.  Troponin-I levels were obtained at hours 0, 2, 6, and 12.  The authors defined a positive result as an initial troponin-I level greater than the 99th percentile of results obtained from normal subjects (0.06 micrograms/L), or an increase of more than 30% in serum troponin over 2 hours when absolute concentrations remained below 0.06 micrograms/L.  The investigators in this study also measured CK and myoglobin.
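
For clarity, here is a minimal sketch of that interpretive rule as described above; the function and variable names are ours, not the authors', and the thresholds are simply those quoted in the paragraph:

```python
# A serial troponin-I result is treated as "positive" if the initial value
# exceeds the 99th-percentile cutoff, or if it rises by more than 30% over
# the first 2 hours despite starting below that cutoff.

CUTOFF_UG_PER_L = 0.06   # 99th percentile of results from normal subjects

def is_positive(troponin_0h, troponin_2h, cutoff=CUTOFF_UG_PER_L):
    if troponin_0h > cutoff:
        return True
    if troponin_0h <= 0:
        return False   # guard against division by zero on an undetectable value
    rise = (troponin_2h - troponin_0h) / troponin_0h
    return rise > 0.30

print(is_positive(0.10, 0.12))   # True: initial value already above the cutoff
print(is_positive(0.03, 0.05))   # True: >30% rise over 2 hours despite low values
print(is_positive(0.03, 0.03))   # False: low and flat
```
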

In both studies, the "gold standard" was clinical diagnosis of acute MI by a cardiologist who was blinded to the experimental result: in the first study, the novel troponin-T level, and in the second, the troponin-I level at hour 2.

Both studies found very high sensitivities and negative predictive values for their respective diagnostic strategies.  Body et al. reported a sensitivity for acute MI of 100% (95% CI 97.2%-100%) in their initial cohort and 99.8% (99.1%-100%) in their audit, which would have allowed confident exclusion of myocardial infarction in 17.5% of their patients.  Scharnhorst et al. found a sensitivity of 100% (95% CI 86%-100%), and calculated a negative predictive value of 100% (95% CI 95%-100%).  They found no additional diagnostic value in measuring CK or myoglobin.

Dr. Frohlich thought that while the results seemed to be applicable in the appropriate context, it is important to remember what exactly that context is: this strategy enables exclusion of myocardial infarction in the setting of chest pain in patients presenting to the ED.  It might not be applicable, for instance, in the setting of sudden chest pain on an inpatient ward, where onset is witnessed and diagnosis needs to be made before the troponin starts to rise.  Additionally, while myoglobin might not add any diagnostic information in the study setting, in a patient with positive troponins and an unclear onset of symptoms it can be useful in determining the age of an infarct.  Dr. Barbant pointed out that the prevalence of acute MI in both study populations (around 20%) was higher than we see in our ED, and that this means the NPV they calculated is not neatly applicable to our population (the influence of prevalence on predictive value is explained elsewhere).  Dr. Stullman thought the entire discussion highlighted the need for a careful history in diagnosing questionably cardiac chest pain, since one can only make the most of these biochemical results if one knows the precise character and timing of the patient's symptoms.
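
Dr. Barbant's point about prevalence is easy to see with a little arithmetic.  This is a minimal sketch: the sensitivity is the 99.8% quoted above, but the specificity is an assumed, purely illustrative value, since neither paper's specificity is quoted here:

```python
# Negative predictive value (NPV) as a function of disease prevalence,
# holding the test characteristics fixed.

def npv(sensitivity, specificity, prevalence):
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

SENSITIVITY = 0.998   # from the validation cohort quoted above
SPECIFICITY = 0.70    # assumed for illustration only

for prevalence in (0.20, 0.10, 0.05):
    print(f"prevalence {prevalence:.0%}: NPV {npv(SENSITIVITY, SPECIFICITY, prevalence):.3%}")
# The NPV rises as prevalence falls, which is why a predictive value derived
# in one population cannot simply be carried over to another.
```
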

Overall, these results suggest that the "myo/trop q6 x3" strategy of excluding NSTEMI is unnecessary, and that a significant proportion of patients presenting to the ED with chest pain can be reassured that they are not having a myocardial infarction based on a comparatively brief testing interval using conventional assays.  If Scharnhorst et al.'s results can be validated in a larger cohort, yielding a narrower confidence interval, it seems that many patients could be discharged without inpatient investigation after 2-hour serial testing using currently available troponin assays.  The major caveat is that this strategy rules out infarction, not unstable angina, and so (as above) can only be applied by an intelligent clinician in the appropriate clinical context.

Tuesday, December 6, 2011

Addiction Medicine/Behavioral Health JC 12.1.11

Dr. Vanderwaerden: Buprenorphine Maintenance versus Placebo or Methadone Maintenance for Opioid Dependence (Review)

Click this link to view a recording of the presentation.

Dr. Vanderwaerden presented a recent Cochrane Review of the literature comparing buprenorphine to methadone and to placebo as maintenance therapy for opioid dependence.  The authors assessed all trials of the two medications which met their inclusion criteria.  The outcomes they examined were retention in treatment, use of illicit opioids, other substance misuse, criminal activity and mortality.

By way of background, it's worth recalling several key pharmacological and legal differences between methadone and buprenorphine.  First, pharmacologically, methadone is a full agonist at opiate receptors and as such is associated with the full range of opiate toxicities.  Its duration of action is also relatively short, requiring once-daily dosing.  Buprenorphine, on the other hand, is only a partial agonist and therefore appears to be less toxic; it may also pose fewer risks for diversion, particularly when combined (as it is in one of the more popular formulations) with naloxone.  Its duration of action is also considerably longer, making every-other-day dosing possible.  These pharmacological differences are overshadowed by the enormous legal differences.  While methadone may only be dispensed according to a highly detailed procedural code in specially certified clinics, which are very difficult to establish and maintain, buprenorphine can be dispensed in a doctor's office by any licensed physician who has successfully completed an eight-hour course.

In their review of the literature, the authors found that patients on buprenorphine were less likely than those on methadone to be retained in treatment (relative risk 0.83, 95% CI 0.72-0.95).  They found no significant differences in the other endpoints examined (see above).

This result, as Dr. Abramowitz pointed out in our discussion, should be interpreted with some caution.  One take, endorsed by the authors, might be that methadone is just clearly a better drug.  However, if one recalls the immense differences in context between methadone and buprenorphine prescribing, it's easy to believe that this might all just represent sampling bias.  Buprenorphine treatment is generally far less rigidly constrained than methadone treatment for the reasons cited above, and the lower retention in treatment may reflect participation by less committed patients because of lower barriers to entry rather than reduced efficacy.

Finally, it is worth remembering that because of the difficulties in establishing methadone clinics, the choice is often not between buprenorphine and methadone, or even buprenorphine and placebo, but between buprenorphine and nothing, in which case buprenorphine is clearly the better choice.

Dr. Litwin: Bupropion-SR, Sertraline, or Venlafaxine-XR after Failure of SSRIs for Depression

 Click here to watch a recording of the presentation.

Dr. Litwin (in spirit) presented a segment of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial.  STAR*D was an enormous and complicated trial funded entirely by the National Institute of Mental Health (NIMH), which sought to establish the best practices for treating resistant depression with medications.  It began with an observational study of citalopram, after which patients who did not experience remission (pre-defined as a particular score on the Hamilton Rating Scale for Depression, or HRSD) were then randomized to groups testing different second, third and even fourth-line therapies.  A good synopsis of the study design is available here.

The article (published in the NEJM in 2006) reported the results of the second level of the STAR*D trial.  At this level, patients who had not experienced remission with citalopram, or had been unable to tolerate its side effects, were randomized in a 1:1:1 fashion to receive 14 weeks of open-label sertraline (another SSRI), bupropion (an inhibitor of norepinephrine and dopamine re-uptake), or venlafaxine (an inhibitor of serotonin and norepinephrine re-uptake).

The remission rate in all groups was about 25%, and there was no statistically significant difference in remission rates or cumulative side-effect burden between groups.

Some interesting points came up in the discussion of this article.  Dr. Remler pointed out that the lack of a placebo group makes it impossible to judge whether any of these agents had any specific efficacy, or whether they all functioned essentially as placebos.  This question is particularly interesting given the identical results achieved with three pharmacologically distinct agents; if one were to encounter a trial, say, of atorvastatin, nitroglycerin, and vancomycin for the treatment of urinary tract infections caused by E. coli which reported a 25% remission rate in every arm, one would probably conclude that none of the drugs did anything and that the natural history of the disease was such that 25% of patients remit spontaneously.

Dr. Hicks was impressed by the creativity the investigators displayed in designing their study to balance scientific rigor with real-world clinical practice.  The STAR*D researchers put together a sophisticated and elegant design which combined the power of randomization with the exigencies of clinical care to produce results which, however one interprets them, are clearly much more easily generalizable than those of many other studies in this area.

Finally, Dr. Abramowitz emphasized that the original intent of the STAR*D group was fundamentally pragmatic.  They set out to answer a question of immediate, daily clinical relevance to all psychiatrists and primary care physicians: what do you do when your first-line SSRI fails?  Because of their creative study design, their results can give us confidence that similar benefits can be expected from any of the switches they studied, and that the decision can safely be made on the basis of cost and side effects.

Dr. Alexander: Comparative Effectiveness of Weight-Loss Interventions in Clinical Practice

 Click here to watch a recording of the presentation.

Dr. Alexander presented a very recent trial comparing two interventions to increase weight loss in primary care.  Both were complex interventions which employed trained health coaches using motivational interviewing techniques and aimed to promote sustainable weight reduction through changes in behavior.  Both involved a specially designed website containing various learning modules and motivational tools which study participants were encouraged to use.  The main difference between the two interventions was that in one the health coaches actually met with the patients in person, whereas in the other most contacts were by telephone.  These intervention groups were compared to a control group in which weight loss was self-directed.

As can be seen from the authors' tabulation of their results, both interventions were associated with modest but significant weight loss relative to the control group.  Interestingly, however, there was not much difference between the group which received in-person support and the group receiving only remote support.

These results, as Dr. Hicks pointed out in our discussion, are encouraging.  They suggest that remote interventions such as this one, which are easier to conduct because of their flexibility, may be as effective as more time- and labor-intensive interventions involving lots of personal counseling.

Thursday, November 17, 2011

Hospitalist JC 11/14/11

Dr. Nang: Apixaban versus Warfarin in Patients with Atrial Fibrillation
Click here to listen to a recording of Dr. Nang's presentation.

Dr. Nang presented a large trial of warfarin vs. apixaban which was designed to prove the non-inferiority of apixaban for preventing stroke in atrial fibrillation.  The primary efficacy outcome was a composite of stroke and systemic embolism, and the primary safety outcome was bleeding.  They also analyzed their results for subtypes of stroke (ischemic vs. hemorrhagic) and bleeding (e.g. gastrointestinal, intracranial, etc.).  The study was large, multicenter, and used a randomized, double-dummy design.  The baseline demographic data they reported indicated highly successful randomization, and analysis was by intention to treat.

Their results showed that in their sample, apixaban was non-inferior to warfarin both in reducing the risk of the primary outcome (HR 0.79, 95% CI 0.66-0.95, P = 0.01) and in reducing the rate of major bleeding (HR 0.69, 95% CI 0.6-0.8, P < 0.001).  The reduced incidence of stroke appeared to accrue primarily from a reduction in hemorrhagic stroke relative to warfarin (HR 0.42, 95% CI 0.30-0.58, P < 0.001).

Some important points were raised during the discussion.  Dr. Flattery and Dr. Feeney both pointed out that, while the trial design and execution appear fairly impeccable based on the article, one should not ignore the extremely close involvement of the manufacturers.  The primary analyses were performed at Bristol-Myers Squibb, its representatives participated in the study design, and the investigators had an enormous variety of relationships with an inordinate number of pharmaceutical companies, including BMS and Pfizer.  Dr. Rose pointed out some sleight-of-hand at the end of the article: the trial was designed to prove non-inferiority; however, the investigators report that "apixaban was superior to warfarin in preventing stroke or systemic embolism, caused less bleeding, and resulted in lower mortality."  All of these conclusions would require another study powered for superiority to demonstrate.


Fig 1. Hazard ratios. Cumulative incidence is on the Y axis, time on the X.
The results were reported in hazard ratios, which (as a reminder) are similar to relative risks but are derived slightly differently.  In survival analysis (which is used to analyze many things other than survival), people frequently use Kaplan-Meier curves, which plot the incidence of an event over time.  So long as the incidences of the event being studied in the treatment and control groups remain proportional, a ratio of the two can be taken at any point along the curve and should yield the same number, which is the hazard ratio (see Fig. 1).  Thus, a hazard ratio of 2.0 means that the hazard (i.e. the event being studied) is twice as likely to occur at any given point in time in the experimental group as in the control group.
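
For the curious, here is one very simplified way to see where a hazard ratio comes from.  This is a minimal sketch assuming constant hazards in each arm, in which case the hazard ratio is just the ratio of event rates per unit of person-time; the numbers are invented for illustration and are not from the trial, and real analyses use Cox proportional-hazards models rather than this shortcut:

```python
# Crude hazard-ratio estimate under the assumption of constant hazards:
# hazard = events / person-time, and the HR is the ratio of the two hazards.

def crude_hazard_ratio(events_treated, person_years_treated,
                       events_control, person_years_control):
    hazard_treated = events_treated / person_years_treated
    hazard_control = events_control / person_years_control
    return hazard_treated / hazard_control

# Hypothetical example: 200 events over 16,000 person-years in one arm
# versus 250 events over 15,800 person-years in the other.
print(f"crude hazard ratio: {crude_hazard_ratio(200, 16_000, 250, 15_800):.2f}")  # ~0.79
```
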

Dr. Jafari: Association of Hospitalist Care with Medical Utilization After Discharge: Evidence of Cost Shift from a Cohort Study.

Click here to listen to a recording of Dr. Jafari's presentation.
 
Dr. Jafari presented a retrospective cohort study which looked at length of stay, hospital charges, and post-discharge healthcare usage among Medicare patients who were cared for by hospitalists and those who were cared for by their primary care providers (PCP) while in the hospital.

The statistical analysis, which was rather baroque, appeared to show that the patients cared for by hospitalists stayed in the hospital for around a day less, that their hospitalization was cheaper by about $282, and that their post-discharge healthcare costs were higher by about $332 compared to patients whose PCP cared for them in the hospital.  Moreover, hospitalist care was associated with a marginally increased risk of re-admission (OR 1.08) and ED visits (OR 1.18), and a reduced chance of discharge home (OR 0.82).

Dr. Green-Yeh noted that while these results may have some validity, time is not moving backwards and PCPs are not coming back into the hospital.  She suggested that this study is more useful as a foil for policy discussion than an argument for returning to the "traditional" model of hospital practice.  Dr. Remler saw a number of problems with the study, one of the foremost being that most of the included hospitals were teaching hospitals; this, he argued, distorts the results significantly, since the hospitalist movement began in community hospitals and has primarily flourished there.

Dr. Rose and Dr. Singh both pointed out that there was no analysis by severity of illness, which would seem to be a critical feature to consider given that general internists increasingly defer management of the sickest patients to hospitalists.  Dr. Rose also pointed out that the present Medicare system reimburses hospitalization at a set rate based on DRG, whereas Medicare outpatient visits are reimbursed on a fee-for-service basis, creating an obvious incentive to shorten hospitalization at the expense of outpatient service utilization.

Dr. Flattery and Dr. Sackrin commented that another limitation of this study is that it doesn't account for the lost daytime productivity of a PCP working in a hospital.  Dr. Feeney concluded by emphasizing that what this study really demonstrates is how important it is, when working in an inpatient environment, to strive for continuity of care with patients' outpatient doctors.


Dr. Ha: Venous Thromboembolism Prophylaxis in Hospitalized Patients: A Clinical Practice Guideline From the American College of Physicians

Click here to watch a recording of Dr. Ha's presentation - unfortunately, there's no audio.
 
Dr. Ha presented both a new guideline from the ACP on VTE prophylaxis, and the background evidence review which supports it.  The new guideline (below) is based on a systematic review of the literature on heparin (unfractionated and low-molecular-weight) for the prophylaxis of VTE in medical patients and patients suffering from acute stroke.

 

The guideline contradicts conventional wisdom and many hospitals' internal performance standards in questioning the utility and safety of generalized VTE prophylaxis with heparin products.  However, the background paper is, as Dr. Feeney noted, even more pessimistic than the guideline.  The authors conclude:
When considering medical patients and those with stroke together, low dose heparin prophylaxis may have reduced PE and increased risk for bleeding and major bleeding events and had no statistically significant effect on mortality.  We interpret these findings as indicative of little or no net benefit.
There were some other interesting features of the guideline and review.  First, the authors make an implicit judgment about the cost-efficacy of low-molecular-weight heparin as opposed to unfractionated heparin.  While they did observe an increased incidence of heparin-induced thrombocytopenia among patients given unfractionated heparin (which is generally cited as the main reason to use low-molecular-weight heparin), the incidence in both groups was so low that they suggested the decision as to which product to use should be made on the basis of "ease of use, adverse effect profile, and cost".  Second, they found insufficient evidence to support the use of pneumatic compression devices to prevent DVT.  Finally, the absolute effect sizes derived from their review were infinitesimal.  They estimated that the number needed to treat to prevent one pulmonary embolus in medical and stroke patients considered together was around 333, and the number needed to cause a major bleed was 250.  This suggests that for the vast, vast majority of medical patients, heparin products do absolutely nothing.
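
To put those numbers in absolute terms, here is a back-of-the-envelope sketch using only the NNT and number-needed-to-harm figures quoted above:

```python
# Per 1,000 medical/stroke patients given heparin prophylaxis, using the
# review's estimates: NNT ~333 to prevent one pulmonary embolus, and one
# additional major bleed for every ~250 patients treated.

NNT_PE = 333            # number needed to treat to prevent one PE
NNH_MAJOR_BLEED = 250   # number needed to harm with one major bleed

patients = 1000
pe_prevented = patients / NNT_PE
extra_major_bleeds = patients / NNH_MAJOR_BLEED

print(f"Per {patients} patients treated: ~{pe_prevented:.0f} PE prevented, "
      f"~{extra_major_bleeds:.0f} extra major bleeds")
# Per 1000 patients treated: ~3 PE prevented, ~4 extra major bleeds
```
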

Dr. Green-Yeh commented that what is clearly needed, and unfortunately does not yet exist, is a validated tool for establishing VTE risk in medical inpatients based on risk factors.  Some patients must benefit much more than others - but how the "assessment of the risk of VTE" recommended by the ACP is to be carried out is not at all clear.  She also pointed out that this guideline ought to have major implications for pay-for-performance measures involving VTE prophylaxis, which presently are based on the nebulous and apparently wrong sense that heparin prophylaxis should be standard of care unless contraindicated.