Thursday, September 8, 2011

Hematology-Oncology 9/8/2011

Dr. Shi: Radical Prostatectomy versus Watchful Waiting in Early Prostate Cancer
Click the title to view a recording of the presentation.



Dr. Shi presented a very recent paper from the Scandinavian Prostate Cancer Study Group documenting the results of a trial which randomized men with prostate cancer to surgery or watchful waiting.  These results mark a median of 12.8 years of follow-up (range: 3 weeks to 20.2 years).  The investigators looked at a fairly specific group: these were men under 75 with at least a ten-year life expectancy who had localized disease diagnosed by biopsy with a PSA <50 ng/ml and a negative bone scan.  The original N was 695, of whom just over half had died by 2009, and the primary endpoints were all-cause mortality, disease-specific mortality and distant metastases.  For the most part, their tumors were diagnosed by symptoms, not population-level screening.

The authors report an absolute reduction in the risk of death from any cause of 6.6 percentage points (95% CI -1.3 to 14.5), which corresponds to a number needed to treat (NNT) of 15.  The absolute risk reduction and NNT for death from prostate cancer were similar, as you would expect.  The ARR for distant metastases was 11.7 points (95% CI 4.8-18.6), corresponding to an NNT of 8.5.
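Since the NNT is just the reciprocal of the absolute risk reduction, the figures above are easy to check; here's a minimal sketch in Python, using the ARRs quoted above expressed as proportions:

```python
def nnt(arr):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / arr

# ARRs reported in the paper, as proportions
nnt_mortality = nnt(0.066)  # death from any cause: ~15.2, reported as 15
nnt_mets = nnt(0.117)       # distant metastases: ~8.5
print(round(nnt_mortality, 1), round(nnt_mets, 1))
```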

In their subgroup analysis, it appeared that all of this benefit accrued to men under 65 (there were no significant differences in any of the three endpoints for men older than 65).  They report an absolute risk reduction for disease specific mortality of 9.4 percentage points in men under 65, although their confidence interval can't exclude a reduction as small as 0.2 points or as large as 18.6 points.

One of the most important points of the discussion, and of Dr. Shi's summary, was that this is not at all about screening for prostate cancer.  As above, these men were largely diagnosed because they presented with symptoms, and many of them had palpable tumors.  So, for general internists, this study gives us some guidance about what we can recommend to younger men who, for whatever reason, have received a diagnosis of early prostate cancer, but it doesn't take us beyond the USPSTF's current recommendation regarding screening:
"for men younger than age 75 years, the benefits of screening for prostate cancer are uncertain and the balance of benefits and harms cannot be determined."
Another important theme of the discussion was the low emphasis this article places on the harms associated with radical prostatectomy.  The procedure has a very high rate of adverse effects, particularly impotence and incontinence.  While the authors reported the 1-year cumulative incidence in the radical prostatectomy group (32% for incontinence, 58% for impotence), they didn't give a level of detail commensurate with their exploration of benefits, which makes it hard to weigh the risks and benefits even for patients very much like those in the study.


Dr. Kim Suh: A Novel Test for Colon Cancer
Click the title to view a recording of the presentation.

Slobbery bedside manner...

Dr. Kim Suh presented a very euphemistically entitled study from Kyushu University in Japan.  The authors tested the ability of a very highly trained black Labrador retriever to detect colon cancer based on breath samples and watery stool samples.  The dog's name is Marine, and she is a fixture at the oddly but accurately named St. Sugar Cancer Sniffing Dog Training Center in Chiba, Japan.  Marine was originally trained for water rescue, but switched careers and has been learning to detect various human malignancies since 2005.


The authors collected stool and breath samples from people with known colorectal cancer and from healthy volunteers.  All subjects underwent colonoscopy, with or without biopsy as indicated, which was considered the gold standard against which the dog's performance was judged.  Testing was conducted between November and June, because apparently the dog has trouble concentrating in hot weather.  Each test consisted of four samples from individuals without cancer and one from a patient who did have cancer.  The placement of the samples in the test room was dictated by a random number table, and their identity was concealed from the dog, the handler and the lab assistants.  However, the accuracy of the dog's identification was rapidly communicated to the handler after she made her final choice so that she could be appropriately rewarded (with a tennis ball).

Future member of Tumor Board?
The dog's results were compared with the final diagnoses at colonoscopy and with conventional FOBT testing.  With respect to colonoscopy, positive identification by the dog had a sensitivity and specificity of 0.91 and 0.99 for breath samples and 0.97 and 0.99 for stool samples.  Interestingly, there was very little correlation with FOBT, demonstrating that whatever the dog is smelling, it's not a human blood product.
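As a refresher, sensitivity and specificity fall straight out of a 2×2 table.  The counts below are invented for illustration (the paper reports proportions here, not raw counts), chosen so the arithmetic lands near the stool-sample figures:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 36 of 37 cancer samples correctly flagged,
# one false alarm among 148 non-cancer identifications.
sens, spec = sens_spec(tp=36, fn=1, tn=147, fp=1)
print(round(sens, 2), round(spec, 2))  # → 0.97 0.99
```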

Following on from the first article, it seems like the obvious question is whether the Scandinavian Prostate Cancer Study Group can train a dog to tell the difference between indolent and aggressive prostate cancer.  Then the dog could tell the surgeon who is likely to enjoy a long-term survival benefit from radical prostatectomy.  Cost-effectiveness would depend on the dog's number-needed-to-smell.

Joking aside, an important thing to note about studies like this is that you can't calculate positive or negative predictive values from the data they use to give sensitivity and specificity unless the prevalence in the study population mirrors the prevalence in your own population.  This is because the likelihood of a false positive depends on the prevalence of the disease; if everybody has it, there can be no false positives, and if nobody has it there can be no true positives.  In the case of this study, the incidence of colorectal cancer in the "population" was one out of every five samples, or 20% - clearly much higher than it would be in a normal screening population - so you can't generalize a PPV calculated from this study to the real world.  This is a much commoner mistake than it seems like it should be, and is worth looking out for.
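To make that concrete, here's a sketch of the Bayes' rule arithmetic using the stool-sample test characteristics above; the 0.5% screening prevalence is an illustrative assumption, not a figure from the paper:

```python
def ppv(sens, spec, prev):
    """Positive predictive value by Bayes' rule:
    P(disease | positive) = sens*prev / (sens*prev + (1-spec)*(1-prev))."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

ppv_study = ppv(0.97, 0.99, 0.20)    # at the study's 20% "prevalence"
ppv_screen = ppv(0.97, 0.99, 0.005)  # at an assumed 0.5% screening prevalence
print(round(ppv_study, 2), round(ppv_screen, 2))  # → 0.96 0.33
```

Exactly the same dog, but the PPV falls from 96% to about 33% as soon as the prevalence drops to something screening-like.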


Dr. Tuason: Erythropoietin Stimulating Agents for Treatment of Anemia in Chronic Kidney Disease
Click the title to view a recording of the presentation.
Dr. Tuason presented an article reviewing the use of recombinant human erythropoietin (rHuEPO) in the management of anemia caused by chronic kidney disease (CKD).

The use of erythropoiesis-stimulating agents (ESAs) such as darbepoetin alfa, epoetin alfa and epoetin beta to boost hemoglobin levels in CKD has a controversial history.  Historically, one of the main putative benefits was a reduction in the incidence of serious cardiovascular events related to left ventricular hypertrophy and heart failure, for which anemia is a risk factor.  Some investigators have also claimed beneficial effects of higher hemoglobin levels on various indices of quality of life, restless legs syndrome, and bleeding time.

The authors review three recent, large trials all of which were designed to assess the optimal hemoglobin level in patients with stage III-V CKD by randomizing patients to target-based therapy with ESAs.  

The CREATE trial (2006, N=603) randomized patients to a target hemoglobin of either 13-15 g/dl or 10.5-11.5 g/dl.  The primary endpoint was a composite of cardiovascular events.  Most patients in both groups received some therapy with epoetin beta, although, as you might expect, more patients in the higher hemoglobin target group were treated.  There was no difference in the primary endpoint between the two groups.

The CHOIR trial (also 2006, N=1432) randomized patients to a hemoglobin target of 11.3 g/dl or 13.5 g/dl, and was terminated early by its data and safety monitoring board when a higher incidence of serious cardiovascular events, including deaths, MIs, strokes and hospitalizations for CHF, was observed in the higher hemoglobin group.

The TREAT trial (2009, N=4000) randomized patients to a hemoglobin goal of 13 g/dl, or a goal of >9 g/dl with darbepoetin rescue therapy for those who fell below this level.  Again, the primary end point was a composite of cardiovascular events, and once again there was no difference.  However, there was a significantly higher incidence of ischemic stroke, with a number needed to harm of about 40.

So, we have three very large, well-designed trials, none of which show any benefit to targeting higher hemoglobins and two of which show clear evidence of increases in exactly the kind of event this drug was initially hypothesized to decrease.  However, somewhat bizarrely, the authors of this review concluded that we should "avoid the knee-jerk response of underutilization of these drugs."  They cite putative benefits in "the need for transfusions, quality of life and exercise tolerance, [regression of] LVH" and a hypothetical reduction in the progression of chronic allograft nephropathy in transplant patients.

This conclusion deserves a little dissection.  While it is possible that ESAs are beneficial in "reducing the need for transfusions," the need for transfusions is itself defined by hemoglobin targets which, as we have seen, are not clear.  The argument about regression of LVH is equally strange, in that the whole point of treating LVH is to prevent hypertensive cardiomyopathy and associated cardiovascular events - exactly the kind of events that ESAs clearly increased the incidence of in both the CHOIR and TREAT trials.  Allograft nephropathy is, of course, a totally different condition than the anemia of chronic renal disease, and even the authors admit that there are essentially no published data on this anyway.

But it's their argument about "the wealth of data accumulated in the 1980s and 1990s showing improved quality of life scores and increased vitality" which deserves special attention.  In support of this claim they cite 

  • A study of Arousing Periodic Limb Movements with an N of 10, no control group and a 30% dropout rate,
  • An unblinded trial, with no placebo arm, of ESAs' influence on subjective assessments of quality of life,
  • A trial of erythropoietin to correct hemoglobin to the normal range which showed increased energy levels.  This is, of course, exactly the range of correction which the TREAT, CHOIR and CREATE trials strongly suggest is dangerous, and finally,
  • A study which they say shows "improvements in exercise training" with recombinant erythropoietin, but which, as far as I can tell, actually shows improvements similar to exercise training.

    This, Dr. Yee pointed out, is a problem with reviews as a genre; they're put together by individuals who usually have strong views on the subject under discussion which may distort their analysis.  Dr. Irwin said that further research is clearly needed to define optimal hemoglobin levels, but that in his experience ESAs can improve quality of life and that there are cases where this benefit may outweigh the risks.  He also stressed that, in such cases, the optimal hemoglobin is the one that makes the patient feel better and discouraged the inflexible use of targets.  Finally, Dr. Yee gave an interesting oncological perspective on ESAs.  Many solid tumors over-express erythropoietin receptors, and so giving ESAs to patients with active cancer can accelerate tumor progression.

    To summarize, the take-home points from Dr. Tuason's presentation were essentially that
    1. Optimal hemoglobin levels in CKD patients have not been established,
    2. The use of ESAs to raise hemoglobin to the targets used in these trials is not beneficial and is probably harmful, and
    3. One has to be cautious, when reading a review article, to try to understand how the reviewer's professional opinion may distort the conclusions they draw from the evidence.

    Thursday, August 4, 2011

    Neurology JC 8/3/11

     
    Dr Bruchanski: Thrombolytic Therapy for Acute Ischemic Stroke
    (Click the title to listen to a recording of the talk)

    Dr. Bruchanski presented a review of thrombolytic therapy in acute ischemic stroke.  The paper, from the NEJM's Clinical Therapeutics series, reviews the evidence that thrombolytic therapy with recombinant tissue plasminogen activator (rt-PA) can improve neurological outcomes in patients presenting with early thromboembolic CVA. 

    Here's a brief summary: the FDA approved rt-PA in 1996, partly on the basis of the National Institute of Neurological Disorders and Stroke Recombinant Tissue Plasminogen Activator study (NINDS rt-PA).  The first part of the NINDS study (N=291) showed no difference in the primary end point of neurologic recovery or improvement at 24 hours.  However, the second part of the study (N=333) looked at complete recovery at 90 days and did detect a benefit from rt-PA therapy (odds ratio 1.7; 95% CI, 1.2-2.6; P=0.008).  Three subsequent trials (the European Cooperative Acute Stroke Study (ECASS), ECASS II, and the Alteplase Thrombolysis for Acute Noninterventional Therapy in Ischemic Stroke (ATLANTIS) study) failed to replicate these results.  The main difference between ATLANTIS, ECASS and ECASS II and the NINDS study was the timing of therapy.  In the former three trials, patients could be enrolled within 6 hours of their stroke and fewer than 20% were treated within three hours, whereas in the NINDS study almost all patients were treated within three hours of symptom onset and almost 50% were treated within 90 minutes.  ECASS III, which looked at a dichotomized measure of disability at 90 days, did seem to confirm the findings of the NINDS study (odds ratio 1.34, 95% CI 1.02-1.76, P=0.04).  This would seem to suggest that there is a significant benefit associated with rt-PA therapy in acute stroke so long as it's initiated pretty quickly, and this is reflected in the American Heart Association's and the European Stroke Organisation's guidelines for treatment of acute stroke.  This is also supported by a concise graph Dr. Bruchanski showed from a 2004 meta-analysis published in the Lancet (the full article is in the Attendings folder).
    
    Figure 3 from the Lancet meta-analysis plotting adjusted odds ratio of favorable outcome against onset-to-treatment time.
    As everybody knows, thrombolytic therapy for acute ischemic stroke is not a benign treatment.  In part 2 of the NINDS study, the rate of symptomatic intracerebral hemorrhage was 1% in the control group and 7% in the treatment group.  In situations like this, where the potential for significant benefit is balanced by a substantial incidence of serious harm, it's often useful (both in thinking about it yourself and in explaining options to patients) to consider the number needed to treat (NNT) and the number needed to harm (NNH).  The NNT and NNH are calculated in basically the same way.  First, you figure out the arithmetic difference between the rate of an outcome in your experimental group (the experimental event rate, or EER) and the rate of the same outcome in your control group (the CER).  If the outcome in question is good, we call this arithmetic difference the absolute benefit increase or absolute risk reduction (ABI or ARR), and if it's bad we call it the absolute risk increase (ARI).  The reciprocal of any of these numbers (1/ABI, 1/ARR, or 1/ARI) gives you the number needed to do something.  If the outcome is good, it's the number needed to treat; if the outcome is bad, it's the number needed to harm.

    In the case of thrombolytic therapy for acute stroke as studied in the NINDS trial, the ARI for acute intracranial hemorrhage was 6% (see above).  In the original 1995 paper, the investigators reported the ABI as 11%-13%.  So, using the formula above, we can say that for rt-PA as studied by the NINDS group:

    NNH = 1/0.06 = 17 patients need to be treated to cause one symptomatic ICH

    NNT = 1/0.12 (roughly) =  9 patients need to be treated for one good outcome at 90 days (as defined by the study).

    (NNT and NNH are rounded up to the nearest whole number)

    You can also put this in terms of the "likelihood of being helped or harmed" (or LHH), which is just the ratio of the ABI to the ARI.  In this case,

    LHH = 0.12/0.06 = 2

    Thus you can say, when obtaining informed consent for thrombolysis in acute stroke, that the treatment is twice as likely to help the patient as it is to hurt them.  Note, however, that while the NNT and NNH give you some sense of the magnitude of the effect, the LHH is just a ratio.  If the NNT was 10,000 and the NNH was 5,000, both effects would be epidemiologically trivial, but the LHH would still be 2.
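    Putting the whole chain together with the NINDS figures quoted above:

```python
import math

cer_ich, eer_ich = 0.01, 0.07  # symptomatic ICH rate: control vs. rt-PA group
abi = 0.12                     # absolute benefit increase (roughly, per the 1995 paper)

ari = eer_ich - cer_ich        # absolute risk increase = 0.06
nnh = math.ceil(1 / ari)       # round up to a whole patient → 17
nnt = math.ceil(1 / abi)       # → 9
lhh = round(abi / ari, 2)      # likelihood of being helped or harmed → 2.0
print(nnh, nnt, lhh)
```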

    Dr. Bruchanski's presentation sparked a lively discussion.  Several people, including Dr. Cho and Dr. Lahsaeizadeh, expressed concerns about generalizing these results to Highland, since one would imagine we see a disproportionate volume of acute stroke related to cocaine and methamphetamine use relative to the centers where the ECASS and NINDS trials were done.  Since these strokes presumably have a vasospastic rather than a thrombotic etiology, giving rt-PA would expose patients to all the risk of hemorrhage and none of the potential benefit of thrombolysis.  Dr. Desai suggested that a history of recent stimulant use should be considered an exclusion criterion.  While it's sometimes difficult to obtain an accurate substance history, Dr. Abramowitz reminded us that it's a lot easier when the clinical importance of this information is explained to patients in a compassionate and non-judgemental fashion.  Dr. Acharya concluded the discussion by emphasizing that, given the substantial risk of severe harm associated with rt-PA, it is especially vital to be completely sure that a patient meets the inclusion criteria of the population for which benefit has been demonstrated and has none of the exclusion criteria published in major guidelines.

    Dr. Kari: Imaging in Headache
    (Click the title to listen to a recording of the talk)
    Dr. Kari presented a prospective observational study which looked at the frequency of significant intracranial abnormalities on imaging in patients referred to a specialist headache unit in Birmingham.  The authors reported that out of 3,655 referrals over a five-year period, they referred 530 (14.5%) for neuroimaging.  They found a total of only 11 abnormal results (2.1% of patients imaged) which were thought to be clinically significant.  The rate of abnormal imaging findings was even lower in patients with a primary diagnosis of migraine (1.2%) or tension headache (0.9%).

    In their discussion, the authors note that their results cohere well with previous studies which looked at the rate of detection of significant abnormalities in people with migraine headaches but no other worrying signs, and in normal volunteers who were imaged with MRI.  Specifically, they cite a meta-analysis of 16 studies of MRI in healthy volunteers which found a 0.7% prevalence of intracranial neoplasm and a 2% prevalence of non-neoplastic abnormalities (including cerebrovascular disease).  This implies that the incidence of abnormal findings in patients with headaches is similar to that in the general population.  Moreover, the authors arguably over-reported the incidence of abnormal findings, since they counted studies done on people who were already known to have intracranial abnormalities (e.g. moyamoya disease); if these are left out, the prevalence of abnormal findings falls to 1.5%.
    Imaging should always be considered for headache patients who present with "red flag" symptoms and signs.
    
    On the other hand, when intracranial pathology was suspected on the basis of red flags ("symptoms or signs of raised intracranial pressure, focal neurological signs, epilepsy, cognitive disturbance or recent diagnosis of cancer") the prevalence of significant abnormalities was 5.5%.

    The authors acknowledge that they "do not know whether any of the patients who were not imaged may have subsequently been found to have intracranial pathology," but their results and those of the studies they cite suggest that it's unlikely the prevalence in this group would have exceeded that in healthy volunteers.

    Dr. Acharya pointed out that, while these results are consistent with other major studies, it's worth noting that this is a highly selected patient population.  These are people who've already been referred to neurologists by their general practitioner, had that referral vetted by a neurologist, and seen a specialist headache nurse.  This means there is (theoretically) some built-in selection bias - it could be, for example, that the reason the neurologists aren't seeing brain tumors in headache patients is that they've all already been diagnosed by general practitioners.  However, this seems unlikely, and this study does seem to support the idea that chronic headache in the absence of concerning findings is unlikely to be diagnostically elucidated by imaging studies.


    Dr. Wang: Vascular Risk Factors and Alzheimer's Dementia
     (Click the title to listen to a recording of the talk)
    Dr. Wang presented a prospective observational study designed to investigate the correlation of vascular risk factors (VRF) with Alzheimer's disease (AD), and to evaluate the hypothesis that treating VRF might slow progression of mild cognitive impairment (MCI) to clinical dementia.  The authors followed a cohort of 837 people, all over 55 with a diagnosis of MCI, for five years to determine whether VRF could be correlated with incident Alzheimer's, and whether treatment of VRF was correlated with a reduced incidence of the disease.

    Their results did demonstrate a correlation between vascular risk factors and the progression of MCI to AD, and they also showed that treatment of VRF was correlated with a reduced incidence of progression over the course of the study.  They reported statistically significant differences between those who stayed in MCI and those who progressed to AD in terms of diabetes, hypertension, and cerebrovascular disease.  However, one of the most striking differences was that those who progressed to AD were, on average, about five years older than those who didn't.  Recall that the study period was five years; so age is a major potential confounder here, since it's perfectly possible that the reason the non-progressors didn't progress during the study period was that they were all five years younger, and that after the study (when they would have been, on average, as old as the people in the group who did progress to AD) they developed AD at the same rate.

    Dr. Acharya expressed some doubt about the diagnostic utility of MCI as a category, and Dr. Desai noted that the onset of dementia is extraordinarily difficult to pin down since people can often compensate well for cognitive deficits for a long time before cognitively decompensating.  However, both agreed that if this observational study spurs further prospective work which confirms their findings, it may ultimately help us slow the development of dementia through treatment of VRF.

    The authors reported their results as hazard ratios.  Hazard ratios are a commonly used metric in survival analysis.  In statistics, the "hazard rate" refers to the rate of an event (for all practical purposes, the incidence) at a given moment in time.  The hazard ratio is the ratio of the hazard rate in the study group (i.e. the group who have the exposure of interest or are being treated with the experimental drug) to the hazard rate in the control group.  In this case, the hazard ratios reported refer to the ratio of the incidence of progression to AD among people with a given risk factor to the incidence of progression to AD among people without that risk factor.  These "crude hazard ratios" are then adjusted for potential confounding factors to give the "adjusted hazard ratios" they report.  You can see that the hazard ratio is something like the relative risk (AKA risk ratio), although they're calculated differently.

    What the hazard ratio tells you is the odds, at any moment in time, that someone in the experimental group will experience the endpoint in question before someone in the control group.  To take a specific example, when the authors say that the adjusted hazard ratio for AD among patients with diabetes is 1.62, what they mean is that at any given instant during the study period people with diabetes were 1.62 times as likely to develop AD as people without diabetes.
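    A crude hazard ratio is just the ratio of two incidence rates (events per unit of person-time).  The counts below are invented purely to illustrate the arithmetic; they are not from the paper:

```python
def crude_hazard_ratio(events_exp, ptime_exp, events_ctl, ptime_ctl):
    """Ratio of event rates: (events per person-year in the exposed group)
    over (events per person-year in the unexposed group)."""
    return (events_exp / ptime_exp) / (events_ctl / ptime_ctl)

# Hypothetical: 40 progressions over 1,000 person-years among diabetics,
# 60 progressions over 2,400 person-years among non-diabetics.
hr = crude_hazard_ratio(40, 1000, 60, 2400)
print(round(hr, 2))  # → 1.6
```

    In the real analysis a Cox model makes this comparison at every event time and then adjusts for confounders, but the ratio-of-rates intuition is the same.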

    If you're interested, there's a good article explaining HRs in more detail here.

    Special thanks to Dr. Schafhalter-Zoppoth and Dr. Ren for helping us understand the statistics in this article.

    Monday, July 18, 2011

    Hospitalist JC 7/15/11

    Dr. Indulkar: Ruling Out Venous Thromboembolism
    Dr. Indulkar presented a prospective, multi-center study which evaluated the capacity of four clinical decision rules (CDRs) in conjunction with serum d-dimer to exclude the diagnosis of pulmonary embolism.  These were the Wells Score, the Revised Geneva Score, and simplified versions of both.  The study was adequately powered to detect differences of 5% or more in outpatients.  The investigators reported very similar test characteristics for all 4 CDRs with sensitivity and negative predictive value over 99% across the board.  Only one pulmonary embolism was diagnosed in a patient categorized as low-risk by all four rules, which gives a failure rate of 0.6% for the CDRs + d-dimer approach (95% CI 0.02%-3.3%).  Interestingly, the failure rate for CT was 1.6% (CI 0.08%-3.6%).

    Dr. Indulkar made an important point about the study, which was that it’s not designed for inpatients, and their data don’t support using the CDR + d-dimer approach to rule out PE in this population.  This is because (as Dr. Ng has previously pointed out at HGH) a very small proportion of inpatients actually have a negative d-dimer, so the test infrequently changes the subsequent workup, since nearly all inpatients go on to CT pulmonary angiography or V/Q scanning anyway.

    Dr. Indulkar also pointed out that this study excluded three very high risk groups: pregnant women, people who had received low-molecular-weight heparin >24 hours before evaluation for the study, and people with a life expectancy of <3 months.  It also excluded people with significant renal impairment.  These exclusions are worth bearing in mind when you apply their approach.

    With these limitations in mind, this study did show effectively that in the population they looked at, the combination of any of the four CDRs they looked at with a serum d-dimer assay was a very effective means of excluding  venous thromboembolic disease.

    This study reminded me of a term you sometimes hear in the ED, which is “PERC’ed out.”  People who say this are referring to the Pulmonary Embolism Rule-out Criteria, yet another CDR, which was originally designed to divide low-risk patients into those who were so low risk they didn’t even need a d-dimer and those who were not quite that low risk.  So, should you use the PERC CDR to decide whether or not you want to order a d-dimer?  Do you have to remember yet another CDR?  A recent Swiss study suggests not.  Hugli and colleagues retrospectively examined data from another PE study, and found that the PERC rule had a “negative likelihood ratio of 0.70 (95% CI: 0.67-0.73) for predicting PE overall, and 0.63 (95% CI: 0.38-1.06) in low-risk patients.”  This translates into a 6.4% prevalence of PE in people categorized as low-risk (in this study) who were “PERC’ed out” (i.e. a negative predictive value of 93.6%).  Personally, if I thought I had a PE, I would want the NPV of 99.5% this study demonstrated for the combined CDR + d-dimer approach.
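    The conversion from a likelihood ratio to a post-test probability runs through odds and is easy to sketch.  The 9% pre-test probability below is an assumption chosen for illustration (it makes the arithmetic land near the 6.4% the Swiss study reports); the likelihood ratio of 0.70 is theirs:

```python
def post_test_prob(pretest, lr):
    """Post-test probability via odds: odds_post = odds_pre * LR."""
    odds = (pretest / (1 - pretest)) * lr
    return odds / (1 + odds)

p = post_test_prob(0.09, 0.70)  # assumed 9% pre-test probability, reported LR- of 0.70
print(round(p, 3))  # → 0.065, i.e. close to the 6.4% they report
```

    Note how little work an LR of 0.70 does: a good rule-out test has a negative likelihood ratio closer to 0.1.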

    Dr. Badran: Metformin in Heart Failure

    Dr. Badran presented a retrospective cohort study of patients in Tayside, Scotland (which is near the birthplace of William McGonagall, reputed to be the worst poet in the English language).  The study looked at patients who a) had a diagnosis of diabetes, b) had subsequently been diagnosed with heart failure, and c) had received metformin after being diagnosed with heart failure.  They did this to evaluate the popular wisdom that metformin should be avoided in heart failure because of the risk of lactic acidosis.  They found that “[metformin’s] benefits clearly outweigh its risks in patients with hemodynamically stable heart failure and adequate renal function.”

    While we would all probably like to believe this, Dr. Badran pointed out some important problems with the study.  First of all, there were statistically significant differences between the groups: patients on metformin had lower baseline creatinine, and more of them were on ACE inhibitors and aspirin.  Although the investigators seem to have tried to control for them, these differences would be expected to skew all-cause mortality in the direction observed.  Second, while the authors say that they adjusted their hazard ratios for the differences between the groups who received metformin and those who didn’t, they don’t tell you how or exactly for what. 

    Dr. Feeney pointed out that the diagnosis of heart failure was based on prescriptions for loop diuretics and ACE inhibitors, a combination which is not totally unique to heart failure and therefore may have mis-classified some people as CHF patients who in fact had, say, hypertension and renal impairment but no heart failure.  He also pointed out that the investigators didn’t categorize people by severity of heart failure (there was no data on EF), so it could be that they observed better all-cause mortality among the metformin group because they all had less severe heart failure, which is why they got metformin in the first place. 

    Dr. Flattery brought up that we don’t really know what the expected incidence of lactic acidosis related to metformin is supposed to be, so it’s also not clear they observed enough people to detect this adverse event.  The higher the number-needed-to-harm for metformin in heart failure is relative to the N of their study, the less likely their study would be to pick up any cases.

    Dr. Badran concluded by pointing out that, while it’s possible that someday we’ll happily give metformin to diabetics with CHF, the investigators are definitely wise to say that “further randomized placebo controlled trials in this area would be required to provide definitive evidence of the benefit of metformin in this group of patients.”

    Dr. Bruchanski: Beta-Blockers in COPD
    Dr. Bruchanski presented another retrospective cohort study from the same part of Scotland.  The investigators used a database whimsically named after Doctor Who’s time-traveling spaceship (the TARDIS) to examine the relation of beta-blocker administration in people with COPD to all-cause mortality, emergency steroid prescription, and hospital admission.  Like the metformin study, they were trying to evaluate an old saw which may or may not be true, i.e. that beta-blockers must be avoided in COPD because of the risk of bronchoconstriction and their antagonism of the beta-agonists which are one of the mainstays of treatment.  Strikingly, they reported a 22% reduction in all-cause mortality in the group on beta-blockers.  Moreover, they divided their data into 10 subgroups covering the range of possible combinations in the stepwise treatment of COPD (e.g. inhaled corticosteroid, inhaled corticosteroid plus long-acting beta agonist, etc.), and found that for each group the population who were also on a beta-blocker had significantly lower hazard ratios for hospital admission, emergency steroid prescription, and all-cause mortality than those who were only on COPD meds.  These results held up when they controlled for the life-prolonging effects of other cardiac meds the patients were on (aspirin, statins, etc.).

    These are certainly intriguing results, but there are many alternative explanations for the authors’ assertion that “beta-blockers may confer reductions in mortality…in patients with COPD in addition to the benefits attributable to addressing cardiovascular risk.”  As Dr. Schub pointed out, this was not a study of people who received beta-blockers for COPD, but rather of people who got them for some other indication and also happened to have COPD.  While it does seem to suggest that the benefits of beta-blockers in conditions for which they are indicated aren’t vitiated by the presence of COPD, it would clearly require a prospective, randomized study to show that beta-blockers positively influence the natural history of COPD.

    Nonetheless, the consensus of the faculty was that this study appeared to show that, when otherwise indicated, beta-blockers may well be safe in patients who also have COPD, and that it justifies further research to test that hypothesis.