‘The Green Journal’ Tue, 09 Jan 2018 07:29:37 +0000

Provoked Dizziness from Bow Hunter’s Syndrome- Tue, 26 Dec 2017 13:47:01 +0000

(A) Digital subtraction angiography (DSA) demonstrated focal stenosis of V2 segment of left vertebral artery (yellow arrow). (B) DSA revealed that the posterior circulation of the brain was otherwise normal beyond the level of stenosis.


A 62-year-old man was experiencing debilitating bouts of dizziness provoked by head rotation until ultimately finding relief with cervical spine fixation. He had a history of coronary artery disease, human immunodeficiency virus (HIV) infection, tobacco use disorder, and generalized anxiety, and initially presented at a routine primary care visit with 3 weeks of episodic dizziness. During these dizzy spells, he became unsteady on his feet and typically experienced tinnitus. His symptoms worsened when he turned his head to the left; this was occasionally followed by intense, albeit brief, headaches. He denied any episodes of syncope and did not experience chest pain, palpitations, or diaphoresis during these attacks. He had not experienced diplopia, dysarthria, focal limb weakness, or paresthesias. He had no ear fullness or hearing loss. He denied weight loss, fevers, or chills.


A comprehensive neurologic examination including detailed oculomotor testing was unremarkable. He had a steady gait with no truncal sway. A Dix-Hallpike maneuver failed to reproduce his symptoms. However, on leftward rotation of the head to approximately 30-45 degrees, the patient experienced near syncope with nausea, and began to hyperventilate. Cardiovascular examination revealed no carotid bruits, a regular cardiac rhythm with no murmurs, jugular venous pressure estimated at 6 cm H2O, and normal radial and pedal pulses. A 12-lead electrocardiogram revealed normal sinus rhythm with an incomplete bundle branch block, and 48-hour Holter monitoring did not identify any brady- or tachyarrhythmias. The patient’s dizziness was attributed to possible benign paroxysmal positional vertigo (BPPV) or a somatic manifestation of anxiety.

Over the ensuing month, his spells became more frequent and intense. Given his burden of cardiovascular risk, he was referred for magnetic resonance angiography, which revealed focal moderate-to-severe stenosis of the mid-V2 segment of the dominant left vertebral artery (Video 1). On the basis of these findings, he was referred to a neurovascular specialist, who recommended additional neurovascular imaging.


The differential diagnosis for episodic dizziness is broad; the Table highlights some of the most common etiologies.1,2,3,4 The breadth of the differential diagnosis and the fact that some episodes of dizziness represent life-threatening disorders (eg, stroke, vertebral artery dissection, ventricular arrhythmia) make the evaluation of dizziness an oft-daunting task for generalists.

To read this article in its entirety please visit our website.

-Paul A. Bergl, MD

This article originally appeared in the September 2017 issue of The American Journal of Medicine.

Hepatocellular Carcinoma Screening Associated with Early Tumor Detection and Improved Survival Among Patients with Cirrhosis in the US- Sun, 24 Dec 2017 13:00:10 +0000


Professional societies recommend hepatocellular carcinoma screening in patients with cirrhosis, but high-quality data evaluating its effectiveness in improving early tumor detection and survival in “real world” clinical practice are needed. We aimed to characterize the association between hepatocellular carcinoma screening and early tumor detection, curative treatment, and overall survival among patients with cirrhosis.


We performed a retrospective cohort study of patients diagnosed with hepatocellular carcinoma between June 2012 and May 2013 at 4 health systems in the US. Patients were categorized in the screening group if hepatocellular carcinoma was detected by imaging performed for screening purposes. Generalized linear models and multivariate Cox regression with frailty adjustment were used to compare early detection, curative treatment, and survival between screen-detected and non-screen-detected patients.


Among 374 hepatocellular carcinoma patients, 42% (n = 157) were detected by screening. Screen-detected patients had a significantly higher proportion of early tumors (Barcelona Clinic Liver Cancer stage A 63.1% vs 36.4%, P <.001) and were more likely to undergo curative treatment (31% vs 13%, P = .02). Hepatocellular carcinoma screening was significantly associated with improved survival in multivariate analysis (hazard ratio 0.41; 95% confidence interval, 0.26-0.65) after adjusting for patient demographics, Child-Pugh class, and performance status. Median survival of screen-detected patients was 14.6 months, compared with 6.0 months for non-screen-detected patients, with the difference remaining significant after adjusting for lead-time bias (hazard ratio 0.59, 95% confidence interval, 0.37-0.93).


Hepatocellular carcinoma screening is associated with increased early tumor detection and improved survival; however, a minority of hepatocellular carcinoma patients are detected by screening. Interventions to increase screening use in patients with cirrhosis may help curb hepatocellular carcinoma mortality rates.

To read this article in its entirety please visit our website.

-Amit G. Singal, MD, MS, Sahil Mittal, MD, Olutola A. Yerokun, BS, Chul Ahn, PhD, Jorge A. Marrero, MD, MS, Adam C. Yopp, MD, Neehar D. Parikh, MD, MS, Steve J. Scaglione, MD

This article originally appeared in the September 2017 issue of The American Journal of Medicine.

Single High-Sensitivity Cardiac Troponin I to Rule Out Acute Myocardial Infarction- Sat, 23 Dec 2017 13:00:09 +0000
Myocardial Infarction (Image credit: J. Heuser, 19 June 2006)

This study examined the performance of single high-sensitivity cardiac troponin I (hs-cTnI) measurement strategies to rule out acute myocardial infarction.


This was a prospective, observational study of consecutive patients presenting to the emergency department (n = 1631) in whom cTnI measurements were obtained using an investigational hs-cTnI assay. The goals of the study were to determine 1) negative predictive value (NPV) and sensitivity for the diagnosis of acute myocardial infarction, type 1 myocardial infarction, and type 2 myocardial infarction; and 2) safety outcome of acute myocardial infarction or cardiac death at 30 days using hs-cTnI less than the limit of detection (LoD) (<1.9 ng/L) or the High-STEACS threshold (<5 ng/L) alone and in combination with normal electrocardiogram (ECG).


Acute myocardial infarction occurred in 170 patients (10.4%), including 68 (4.2%) type 1 myocardial infarction and 102 (6.3%) type 2 myocardial infarction. For hs-cTnI<LoD (27%), the NPV and sensitivity for acute myocardial infarction were 99.6% (95% confidence interval 98.9%-100%) and 98.8% (97.2%-100%). For hs-cTnI<5 ng/L (50%), the NPV and sensitivity for acute myocardial infarction were 98.9% (98.2%-99.6%) and 94.7% (91.3%-98.1%). In combination with a normal ECG, 1) hs-cTnI<LoD had an NPV of 99.6% (98.9%-100%) and sensitivity of 99.4% (98.3%-100%); and 2) hs-cTnI<5 ng/L had an NPV of 99.5% (98.8%-100%) and sensitivity of 98.8% (97.2%-100%). The NPV and sensitivity for the safety outcome were excellent for hs-cTnI<LoD alone or in combination with a normal ECG, and for hs-cTnI<5 ng/L in combination with a normal ECG.
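As an illustrative aside (not part of the original abstract): rule-out metrics like those above are simple proportions from a 2x2 classification of patients, and their confidence intervals can be approximated with the usual normal approximation. The sketch below uses hypothetical counts, not the study's raw data, to show how sensitivity and NPV figures of this kind are computed.

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts for a rule-out threshold (NOT the study's raw data):
missed_mi = 2          # MI patients below the threshold (false negatives)
detected_mi = 168      # MI patients at/above the threshold (true positives)
ruled_out_no_mi = 438  # non-MI patients below the threshold (true negatives)

# Sensitivity: MI patients correctly flagged / all MI patients
sens, s_lo, s_hi = proportion_ci(detected_mi, detected_mi + missed_mi)
# NPV: ruled-out patients without MI / all ruled-out patients
npv, n_lo, n_hi = proportion_ci(ruled_out_no_mi, ruled_out_no_mi + missed_mi)

print(f"Sensitivity: {sens:.1%} (95% CI {s_lo:.1%}-{s_hi:.1%})")
print(f"NPV: {npv:.1%} (95% CI {n_lo:.1%}-{n_hi:.1%})")
```

Note that for proportions this close to 1 with few false negatives, the normal approximation is crude (the upper bound is clipped at 100%); published studies often use exact binomial or Wilson intervals instead.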


Strategies using a single hs-cTnI alone or in combination with a normal ECG allow the immediate identification of patients unlikely to have acute myocardial infarction and who are at very low risk for adverse events at 30 days.

To read this article in its entirety please visit our website.

-Yader Sandoval, MD, Stephen W. Smith, MD, Sara A. Love, PhD, Anne Sexter, MPH, Karen Schulz, DC, Fred S. Apple, PhD

This article originally appeared in the September 2017 issue of The American Journal of Medicine.

The Clinical Relevance of Studies on Borrelia burgdorferi Persisters- Fri, 22 Dec 2017 13:58:11 +0000 Ixodes scapularis, the primary vector of Lyme disease in eastern North America. (Image Credit: Public Library of Science, Wiki CC License.)

Ixodes scapularis, the primary vector of Lyme disease in eastern North America. (Image Credit: Public Library of Science, Wiki CC License.)

In North America, Lyme disease is principally caused by Borrelia burgdorferi sensu stricto, hereafter referred to as “B. burgdorferi.” It is acquired by the bite of an infected Ixodes tick. The most common clinical manifestation is a skin lesion, referred to as “erythema migrans,” which is due to cutaneous infection with B. burgdorferi. Other objective manifestations may involve the nervous system, heart, or joints. Treatment with antibiotics typically resolves the objective clinical manifestation. Accompanying subjective symptoms, such as fatigue and joint or muscle pain, often persist for many weeks. Patients with such subjective symptoms lasting 6 months or more are often referred to as having “post-treatment Lyme disease symptoms.” Such prolonged symptoms occur in approximately 10% of US patients treated for erythema migrans.

One theory advanced to explain the long-term persistence of symptoms is failure of the initial course of antibiotic therapy to fully eradicate B. burgdorferi cells. Why or how residual bacterial cells might result in persistence of nonspecific symptoms, in the absence of a localized inflammatory lesion at the site of the residual infection, is not known. However, the possibility that post-treatment symptoms are due to persistent B. burgdorferi infection has been explored in several placebo-controlled, antibiotic retreatment studies. The results of 5 such clinical trials failed to provide evidence of convincing clinical benefit or that the risk/benefit ratio favored this therapeutic approach.1,2 The considerable improvement (up to 38%) observed among placebo controls suggests that persistent symptoms are often reversible.1 Some of these studies also attempted to establish evidence of persistence of B. burgdorferi by culture or molecular methods.1 None were successful.

Despite the lack of evidence of persistent infection and the absence of a discrete inflammatory focus of infection expected for infections caused by B. burgdorferi, other indirect approaches have been examined to validate the assumption of persistent infection in patients with post-treatment symptoms. One is based on in vitro studies demonstrating persistence of viable B. burgdorferi in cultures treated with antibiotics.3,4 This form of persistence has been seen with many other species of bacteria.5 Such “persisters,” after isolation and recultivation in vitro, however, are no more resistant to the killing effects of the antibiotic studied than they originally were; thus, they are neither antibiotic-resistant mutants nor biologically altered in any other way from the original bacterial strain.5 Various mechanisms have been proposed to account for this form of persistence6,7; however, none have been confirmed experimentally.

Furthermore, this form of persistence in vitro has not been observed consistently with B. burgdorferi and appears to be highly dependent on the particular laboratory conditions used. One condition is the requirement for a large inoculum of bacterial cells, numbers that may be less relevant—if relevant at all—to what occurs in vivo.8 For example, in patients with meningitis due to Lyme disease, so few spirochetes are present in the subarachnoid space that both culture and polymerase chain reaction are negative in the majority of cases, before any treatment with antibiotics. In addition, and of greater importance, is that the in vitro conditions required to demonstrate the presence of “persisters” fail to account for the role of the humoral and cellular effects of the host’s immune system. Because the protective effects of the host’s immune system play a decisive role in curing or limiting infections in vivo, it is impossible to evaluate the clinical significance of “persisters” observed in in vitro experiments. Moreover, the in vitro phenomenon of “persisters” as described earlier is pertinent only to the cidal effects of antibiotics. Except for certain infections, for example, infective endocarditis, inhibitory effects of antibiotics are sufficient to cure bacterial infections. Many currently used antibiotics exhibit only bacteriostatic effects when used in vitro and in vivo, yet are highly effective clinically.

Multiple studies have investigated whether B. burgdorferi might persist in infected animals after antibiotic therapy.9,10,11,12 Several approaches have been used to assess persistence. They include culture; the polymerase chain reaction to detect DNA and RNA; quantitative polymerase chain reaction to determine if the number of borrelial cells is changing over time; whether ticks that fed on treated animals become infected (xenodiagnosis) and, if they do, whether they are capable of transmitting infection to uninfected animals; whether tissue samples obtained from antibiotic-treated infected animals cause infection after transfer to uninfected animals; whether antibody levels to B. burgdorferi change over time after antibiotic treatment; and various methods to visualize spirochetes in the tissues of antibiotic-treated animals. A major limitation of most of these studies is the failure to treat animals with doses of antibiotics that approximate the antibiotic exposure that would be expected in humans receiving standard treatment regimens.9 In addition, most of these studies failed to measure the antibiotic blood levels achieved in infected animals at even a single time point.9 The results of these studies have been highly variable. Some have claimed that viable cells were found,9,11 whereas others have found evidence only of bacterial cellular debris.10 Most of the studies that have claimed to demonstrate viability did not base this assessment on the ability to grow B. burgdorferi in culture. In one study, in which infected mice were treated with only 5 days of an antibiotic, culture of the entire mouse failed to reveal the presence of viable B. burgdorferi.12 In addition, none of these studies have demonstrated that what was assumed to be a persistent infection was associated with tissue inflammation in the originally infected animals or that tick or tissue transfer of putative residual borrelia to uninfected animals induced inflammation.9 One finding that emerged provided additional support for the hypothesis that the pathogenesis of Lyme arthritis might be related at least in part to B. burgdorferi cellular debris in or near joint spaces.10,13

Whether B. burgdorferi persists in some antibiotic-treated patients in the United States with clinically resolved Lyme disease has not been established or completely excluded. We do know that patients with recurrences of erythema migrans skin lesions are typically newly infected with a different strain of B. burgdorferi that was acquired from another tick bite.14 As noted earlier, there is no evidence to date to indicate that “persisters” are present in patients with post-treatment Lyme disease symptoms.1 If “persisters” were present in patients with persistent symptoms, what would be the mechanism responsible for causing symptoms in the absence of residual inflammation, given that B. burgdorferi is not known to produce exotoxins?15

It cannot be overemphasized that the complete elimination of infection is seldom used as the benchmark for success in the treatment of other infectious diseases. Resolution of the objective manifestations of the infection and lack of relapse, rather than the complete elimination of viable bacteria, are of primary concern. Experience with latent tuberculosis has been highly instructive in providing evidence that persistence per se causes no symptoms, and if latent disease becomes active it is associated with a site of inflammation.

To read this article in its entirety please visit our website.

-Phillip J. Baker, PhD, Gary P. Wormser, MD

This article originally appeared in the September 2017 issue of The American Journal of Medicine.

The Coronary (Cardiac) Care Unit at 50 Years: A Major Advance in the Practice of Hospital Medicine- Wed, 20 Dec 2017 13:58:10 +0000

This year, 2017, marks the 50th anniversary of the publication of an article describing the results of the classic study by Killip and Kimball showing a reduction in mortality from acute myocardial infarction in patients sequestered in a specialized hospital unit1 at New York Hospital in New York City. Also described in the article was the Killip Classification of acute myocardial infarction, which detailed the relationship between the presence or absence of classic heart failure and shock and mortality outcomes, a bedside prognostic index that has stood the test of time.

When we (WHF and JSA) were medical students in Boston during the mid-1960s, patients with acute myocardial infarction were placed in oxygen tents, often on large medicine wards mixed with other medical patients. The in-hospital mortality rate of patients ranged from 30% to 40%. One of us (WHF) remembers the case of a 35-year-old Boston fireman with an acute myocardial infarction who was admitted to the Pavilion Medical Service at Boston City Hospital, to a 40-bed male medicine ward, where he was put in an oxygen tent for 1 week, with his bed situated between that of a patient with terminal uremia and another dying of metastatic esophageal cancer. A primitive bedside arrhythmia monitor was used, and the constant beeping sounds kept all the other patients on the ward awake. The fireman survived the acute myocardial infarction on Coumadin without having the common complication of acute pulmonary embolism from prolonged bedrest, the practice at the time.

With advances taking place during the 1960s in cardiac resuscitation procedures, such as the introduction of closed chest cardiac massage,2 transthoracic defibrillation,3 and cardiac pacemakers, the natural history of in-patient acute myocardial infarction began to change in a favorable direction. More effective arrhythmia monitoring technologies also became available.3

The concept of the coronary care unit (CCU) actually began in the early 1960s and was first conceived by Day4 in the United States, by Brown in Canada,5 and by Julian6 in the United Kingdom. The large New York Hospital experience reported in 19671 demonstrated that monitored patients in a CCU, especially those patients without pulmonary embolism or shock, seemed to benefit from being in such a monitored unit compared with those patients treated on a general medicine ward. The benefit related to the early recognition and treatment of arrhythmias. The New York Hospital experience also recognized that trained nurses could begin resuscitation efforts immediately while in-house physicians were being called.

Realizing the potential importance of the CCU, the National Institutes of Health supported the Myocardial Infarction Research Unit (MIRU) program to further improve outcomes with acute myocardial infarction. The MIRUs were located at the University of Alabama in Birmingham, Duke University in North Carolina, New York Hospital-Cornell in New York City (where WHF was a fellow and where TK was Chief of Cardiology), Cedars of Sinai Hospital in Los Angeles, the University of Rochester, the University of Chicago, Johns Hopkins, and Massachusetts General Hospital in Boston. Many of the participating faculty and trainees in the MIRU program would become the leaders of academic cardiology for years to come. The Swan-Ganz catheter came out of the MIRU program,7 adding hemodynamic monitoring to the role of the CCU.8

With other advances in the management of acute myocardial infarction,9 such as coronary artery bypass surgery (also celebrating its 50th year),10 coronary angioplasty and stenting, the intra-aortic balloon pump,11 left and right ventricular assist devices, extracorporeal membrane oxygenation,12 anticoagulants, β-adrenergic blockers,13 and hypothermia, the overall in-hospital mortality from myocardial infarction has been reduced below 5%, including Killip Class III and IV patients. For class I patients, the in-hospital mortality has become negligible. Advances have also taken place in prehospitalization coronary care by trained paramedics, and prevention of acute myocardial infarction has become a major emphasis of medical care and public policy compared with 50 years ago.

Of significance, in recent years the types of patients admitted to the CCU have changed. The patients who are now admitted have more critical illnesses and comorbidities. In a recent retrospective review of 1042 patients admitted to a CCU,14 the patient diagnoses continue to include patients with acute coronary syndromes (ST and non-ST elevation acute myocardial infarctions) but also those with severe heart failure (ischemic and nonischemic), valve disease, pericardial disease, primary ventricular arrhythmias and bradyarrhythmias, acute aortic dissection, renal failure, and sepsis. The care needs of these patients go beyond the expertise of the clinical cardiologist15 and require the input of cardiothoracic surgeons, cardiac electrophysiologists, heart failure specialists, pulmonary–critical care intensivists with expertise in ventilation, nephrologists, and infectious disease consultants. The term CCU should be changed to “cardiac intensive care unit” (CICU), reflecting the changing population and their care needs.

The 50th anniversary of the Killip-Kimball article marks an important milestone, and to quote the authors, “the development of the CCU represents one of the most significant advances in the hospital practice of medicine.”1

To read this article in its entirety please visit our website.

-William H. Frishman, MD, Joseph S. Alpert, MD, Thomas Killip III, MD

This article originally appeared in the September 2017 issue of The American Journal of Medicine.

The History of the Salt Wars- Mon, 18 Dec 2017 13:38:03 +0000

The “Salt–Blood Pressure Hypothesis” states that an increase in the intake of salt leads to an increase in blood pressure and subsequently increases the risk for cardiovascular events; this hypothesis has been a point of contention for decades. This article covers the history and some of the key players pertaining to “The Salt Wars” during the first half of the 1900s, both in Europe and in the United States. Early studies finding benefits with salt restriction in those with hypertension were based on uncontrolled case reports. The overall evidence from the first half of the 1900s suggested that a low-salt diet was not a reasonable strategy for treating hypertension.

In the late 1800s, salt was not demonized as a cause of water retention, edema, and kidney disease. In fact, salt restriction was actually thought to cause some of these conditions.1 According to an article published by Branche in 1885, salt depletion resulted in extreme weakness, anemia, albuminuria, and edema; and as early as 1909, heat and muscle cramps from sodium depletion were well-recognized symptoms.2,3 Other side effects of salt restriction included vertigo, headache, apathy, anorexia, nausea, feeble twitching of the muscles, abdominal cramps, and oliguria. More severe side effects included vascular collapse, cold extremities, and large drops in blood pressure (hypotension).1

Carrion and Hallion in 1899 were the first to suggest that excess salt in the body pulled water from bodily tissues, increasing plasma volume.1 This theory was soon championed by Achard in 1901, who suggested that edema of Bright’s disease (chronic inflammation of the kidneys) was caused by the retention of chloride, causing an over-retention of water to dilute excess chloride. Afterward, Achard went on to confirm that chloride was also retained in febrile disease, heart failure, and nephritis (inflammation of the kidneys).1 It was thus argued that salt retention was the cause of numerous diseases rather than its retention being caused by the disease condition. This was essentially the beginning of the end for salt, being considered not a healthy natural substance providing 2 essential minerals (sodium and chloride) but rather a dietary blood pressure–raising demon.

Widal in 1903 and Strauss in 1904 were the first to test a low-salt diet as a treatment of edema, noting “peripheral, pulmonary, and even cerebral edema” with the addition of salt to the diet, whereas limiting salt intake “…occasioned a relatively rapid disappearance of the edema.”1 According to Widal, “Salt…in certain cases of Bright’s disease is a dangerous article of diet”1; and Widal and Achard both claimed credit for the idea that chloride retention causes heart and kidney edema.1

In 1904, 2 French scientists named Ambard and Beaujard (sometimes spelled Beauchard) further promoted the idea that salt retention was a driver of edema and hypertension. These authors were credited for inventing the Salt–Blood Pressure Hypothesis and were some of the first scientists to spark The Salt Wars.4 However, there was tremendous controversy at the time because “…the general German experience was opposed to a strict relationship between retention of chlorids and elevation of blood pressure.”5 In 1907, Lowenstein was unable to demonstrate a correlation between chloride retention and blood pressure in patients with renal hypertension, with only 1 of 10 cases having “a definite relationship between the fall in blood pressure and elimination of chloride from the body.”1

During this time Ambard and Beaujard were testing salt restriction in patients with hypertension and found retention of chloride in hypertensive patients. They studied 6 hypertensive patients (some with valvular heart disease and/or Bright’s disease) with a low-salt diet consisting of 3 g of salt (1.2 g of sodium) and compared it against a high-salt diet (14 g of salt or 5.8 g of sodium). Despite the high-salt intake being approximately twice that of a normal-sodium diet (ie, 5.8 vs 3.4 g of sodium), “The changes in blood pressure were not striking but tended to be downward when the low salt diet was given and upward when the higher salt intake was allowed.”1

Ambard and Beaujard believed that both edema and hypertension were caused by a saturation of the body with salt, but even these authors realized that salt restriction did not completely normalize blood pressure in those with hypertension. However, the idea that salt restriction would prevent those with kidney disease from developing permanent severe hypertension made logical sense.1 Soon after, Laufer came up with a diet that was even lower in salt compared with that recommended by Ambard and Beaujard. The diet contained just 100-720 mg of sodium/d (instead of 1200 mg) but provided a sufficient amount of calories and protein. Laufer’s diet consisted of 200 g of rice, 300 g of wheat flour, 500 g of potato, 100 g of white cheese, 100 g of sugar, and 1 L of water. This “low-salt rice diet,” first devised by Laufer in 1904, was thus very similar to the rice diet that Walter Kempner would recommend 40 years later.1 Interestingly, both diets allowed fairly high amounts of sugar, because back then sugar was considered innocuous. However, the evidence is finally starting to shed light on the harms of sugar, suggesting that we may have blamed the wrong white crystal all along.6,7,8,9

To read this article in its entirety please visit our website.

-James J. DiNicolantonio, PharmD, James H. O’Keefe, MD

This article originally appeared in the September 2017 issue of The American Journal of Medicine.

Stem Cell Therapy: The Phoenix in Clinical Medicine?- Fri, 15 Dec 2017 13:38:01 +0000 Dr. Joseph S. Alpert

The Phoenix is a mythical bird with brightly colored plumage, known in ancient Greek mythology for the legend of its rebirth. After a long life, the Phoenix dies in a fire of its own making and then rises again, reborn from the ashes. This myth parallels current feverish beliefs concerning the ability of stem cell therapy to regenerate tissues in diseased organs. Investigation into stem cell therapy has become one of the most intriguing areas of basic science and clinical research during the last decade. The concept of stem cell-based tissue regeneration has raised great hopes among health care practitioners and patients seeking repair of injuries to a variety of organs damaged by serious illnesses, which in the recent past were considered “incurable” or “irreversible.” The hope has been that such regenerative therapy would reduce associated morbidity and mortality rates. The news media and the general public have already taken an enthusiastic attitude toward this new and exciting concept of clinical therapeutics. In 2010, the US Department of Health and Human Services published an optimistic report entitled “2020: A New Vision—A Future for Regenerative Medicine.”1 However, despite this enthusiasm, a number of clinical studies have reported inconsistent findings at this point, warning of a long road before these therapies can become part of daily clinical practice.2,3,4,5,6

Recently, the New England Journal of Medicine published 2 articles involving stem cell therapy for 5 patients with macular degeneration. In both reports, the injected stem cells were derived by laboratory manipulation from the patient’s own cells, that is, autologous.7,8 In the first report, Mandai et al7 reported that a sheet of stem cells derived from the patient’s skin fibroblasts was surgically placed under the retina with resultant engraftment. They noted that although the sheet of cells remained intact and viable 1 year later, there was no improvement in vision, and instead, macular edema had developed. In the second report, Kuriyan et al8 examined 3 patients who had been treated at a self-proclaimed “stem cell clinic” in the community. Each had received intraocular injections of alleged stem cells derived from the patient’s own adipose tissue. These patients suffered loss of vision associated with intraocular hypertension, hemorrhagic retinopathy, vitreous hemorrhage, and retinal detachment or lens displacement. In an accompanying editorial, George Q. Daley from Children’s Hospital and Harvard Medical School referred to the treatment received by the 3 patients whose vision deteriorated as “careless” and a “wanton misapplication of cellular therapy.”3

A number of clinical studies have employed stem cell modalities in patients with ischemic heart disease and reduced left ventricular function.4,5,6 The most commonly studied population comprised individuals with acute, subacute, or chronic myocardial infarction (>30 days post infarction). Adult stem cells in the form of cardiac progenitor cells, mesenchymal stem cells, adipose-derived stem cells, and bone marrow-derived stem cells are being used in 14 ongoing clinical trials.6 A simplistic view of the hypothesis for these trials is that the administered pluripotent cells would grow and differentiate into functioning myocytes when implanted into the myocardium, thereby replacing dying cardiomyocytes and improving overall ventricular function. Endothelial progenitor cells have also been tested in a smaller number of trials, under the assumption that this biological therapy would lead to neovascularization with subsequent improvement in myocardial perfusion and function. At this point in time, over 2000 patients with ischemic heart disease have been entered into clinical trials employing one form or another of stem cell therapy.5 Perhaps due to the biological nature of these infusions, and thus the inherent variation in the constituent products, the results of these clinical trials have been in conflict with each other. Among 20 published trials of ischemic heart disease, 13 trials showed no significant improvement in left ventricular function.5 Even in trials where benefit has been demonstrated, the improvement has been quite modest.

A small number of adverse events have been observed in early trials of cell therapy. Theoretical side effects of stem cell injection include failure to retain stem cells at the desired location, tumorigenesis, and adverse immunologic responses if an allogeneic source of stem cells is used. To date, very few of these potential problems have been reported in the clinical trials performed, leading to the impression that this biologic therapy is safe, although it has not yet been proven to produce marked long-term improvement in cardiac function. One meta-analysis did show that injection of bone marrow cells into the myocardium appeared safe, with minimal adverse effects.9

The mixed results obtained so far with stem cell therapy underscore the importance of an interplay between basic science investigators and clinical researchers to maximize the likelihood of success in future testing of the various cellular therapeutic modalities. Considerably more basic science and clinical investigation will be required before this new modality can be recommended for patients with a variety of illnesses. Future work in this area will require standardization of protocols, with rigorous attention paid to the patient’s medical condition, the timing of administration, the quantity of biologic material infused, the stem cell sources employed, and the technique(s) of cell delivery or infusion, including not only the cells but also any matrix or growth factors needed to enable the cells to engraft in the right place and synchronize with the host cells.10 One interesting discovery is that cardiosphere-derived cells secrete vesicles (exosomes) containing a bundle of biologically active factors with reparative effects, raising the possibility of bypassing cell injection altogether.11 Although 4 clinical trials have entered Phase III, large-scale, double-blind, randomized studies with standardized cell preparations and mature delivery techniques will be needed to effectively evaluate the potential clinical benefit of this new therapy.

With respect to the large and growing number of so-called “stem cell clinics” that offer patients with serious diseases–for example, macular degeneration, spinal cord injury, amyotrophic lateral sclerosis, and multiple sclerosis–“proven success” at considerable personal cost, Daley stated it well in his recent editorial: “The International Society for Stem Cell Research has recently released guidelines for clinical translation of stem cells.12 The guidelines highlight the stark distinction between innovative treatments … proven in rigorous clinical trials … and the unproven interventions that are offered by practitioners who are naïve regarding the biologic complexities of stem cells or by charlatans peddling the modern equivalent of snake oil.”3

To read this article in its entirety please visit our website.

-Joseph S. Alpert, MD, Qin M. Chen, PhD

This article originally appeared in the September 2017 issue of The American Journal of Medicine.

Death and Dignity: Exploring Physicians’ Responsibilities After a Patient’s Death- Wed, 13 Dec 2017 13:37:59 +0000

Literature focused on care at the end of life is flourishing. The scope of this work has been broad, including how best to communicate bad news1,2 or discuss patient wishes at the end of life,3,4 as well as detailing where patients are dying and how it impacts their care5 and the value of palliative and hospice care during this process.6-8 As the literature on end-of-life care grows, more attention is also being paid to the importance of caring for bereaved family members,9,10 highlighting the need to continue to care for those left behind.

In parallel, a newer body of work is emphasizing the importance of studying and avoiding harm associated with failures to maintain the respect and dignity of patients and their families.11 There is a drive to apply the methods of quality improvement to this realm as rigorously as has been done for “never events,” such as wrong-site surgery, falls in the hospital, or pressure ulcers. Several factors make emotional harms difficult to address. They are often brought to attention only after a hospitalization and frequently are directed toward patient relations departments rather than the primary providers, thus decreasing visibility. As a consequence, there is not necessarily an individual or group who “owns” the harm and can see that it is addressed. Similarly, there is often no formal mechanism to share patient feedback or complaints with the providers involved. Furthermore, there is no regulatory oversight focused on emotional harm; a requirement to report it to governing bodies does not exist as it does for other types of medical errors. Thus, quality improvement initiatives often focus on harms with mandated reporting infrastructure, such as falls, and fail to address emotional harm.11-13

This area becomes even more fraught when the harm occurs after a patient has died in the hospital. Who is responsible for ensuring the correct processing of the body of the deceased, from the time of death pronouncement to the morgue, to pathology in the event of an autopsy, to the funeral home? Who is responsible for guiding the family through this process, which might extend days (or weeks, in the case of obtaining autopsy reports) after the inpatient team has stopped caring for the patient? Ideally, at each step along this pathway, members of the health care team clearly communicate with each other and with families. Unfortunately, this is not always the case. Our institution is actively engaged in these areas of quality improvement and has been working to identify and address both failures to maintain respect and dignity and failures to provide high-quality end-of-life care. As a part of this process, several illustrative cases came to light.

Cases of Harm

Case 1

A woman with metastatic lung cancer is admitted to the inpatient medicine service overnight by a member of the house staff. Her code status is accurately identified as “do not resuscitate,” and she dies late in the night of admission, around 3:00 AM, before being seen by an attending or the day team. The house officer calls the family to inform them of the death; when the family asks, “What do we do next?,” the physician states that he believes someone will contact them during the day. When no one calls the family by the following evening, they call the floor to which the patient was admitted. Because the admitting house officer is unavailable and no one else is aware of this patient, the family is referred to the medicine consult resident. The consult resident is also unaware of the appropriate steps that the family needs to take. Over the next several days the consult resident and family work with the admitting department and the morgue to figure out what is needed for the body to be transported to the funeral home.

The solution to this problem ended up being very simple: the family needed to identify and contact a funeral home, which would arrange transport with the hospital morgue and assume responsibility for the process. However, during normal working hours, this next step is usually explained by a nurse or social worker rather than a physician. In this case none of the residents involved understood the process enough to guide the family, leading to an erroneous though perhaps understandable assumption that someone would call the family. There was no system in place to “coach” the family or train the residents, thus the process was unnecessarily prolonged, with a great deal of uncertainty and dissatisfaction.

Case 2

An orthodox Jewish patient dies early on a Friday morning. The family requests that, per religious tradition, the body be released immediately so as to be buried before sundown. The patient is pronounced dead by the intern, who then fills out a report of death form, which is sent to the admitting office to generate a death certificate. The intern must then go to her continuity clinic, which is off site. When the death certificate is generated the intern is not present to sign it, and the body cannot be released to the funeral home without a signed death certificate. The family is upset that they are unable to observe their intended religious customs after their loved one’s death. The team of physicians caring for the patient did not realize that any physician with knowledge of the manner of death can sign the death certificate, not just the one who pronounced the patient. This simple administrative oversight caused significant emotional harm to the family.

Case 3

A generally healthy man is discharged from the hospital to an acute care rehabilitation facility. Several hours after his discharge he develops respiratory distress and quickly dies at the facility. His family is shocked by his death and requests an autopsy. The physician informs the family that it will likely cost several thousand dollars to have an autopsy performed. The family cannot afford this cost and does not pursue the issue further, left with the mystery of his death. In fact, many academic institutions will perform autopsies at no cost to the family,14,15 though the family would likely have had to pay for transport of the body back to the hospital. The family later learned that the autopsy would have been done for free at the hospital and was distressed.

These cases highlight a critical time in the care of a patient and his or her family: the hours to days after death. Although a plethora of information is available to guide physicians in the time leading up to death,3,4,16 very little has been published about what a physician should know and do during the immediate post-death period. This time is especially risky, given that at many academic institutions it is largely managed by residents, who have limited institutional knowledge and evolving communication skills. The cases demonstrate that the post-death process can go awry—with physicians playing a significant role—resulting in unnecessary emotional harm to the loved ones of the deceased. Additionally, these cases represent only a fraction of those uncovered at one institution; the broader extent of the harm caused by physicians in the post-death process is unknown.


To read this article in its entirety please visit our website.

-James Parris, MD, PhD, Andrew Hale, MD

This article originally appeared in the August 2017 issue of The American Journal of Medicine.

Non-Classic Cystic Fibrosis: The Value in Family History- Mon, 11 Dec 2017 13:37:56 +0000

Chest computed tomography. Bilateral bronchial wall dilatation/thickening was found predominantly in the lower lobe, with scattered nodular patchy airspace opacities and focal areas of mosaic pattern predominantly in the right upper lobe and bilateral tree-in-bud appearance.

To the Editor:

Since the advent of the Human Genome Project opened the door to an era of genomic medicine, the family history has become a critical tool in individualized disease prevention. It has endured through the years as a key component of history-taking in medical education. Yet it continues to be underutilized owing to barriers of time and infrastructure, in large part because its value is commonly underestimated.1 We report a case of recurrent pneumonia in which the suspicion of non-classic cystic fibrosis was predicated upon the clinical presentation, the family history, and findings of bronchiectasis on lung imaging.

Case Presentation

A 52-year-old woman presented with cough after multiple hospitalizations for pneumonia over the past 2 months. After her most recent hospitalization her symptoms improved, but over the preceding 4 days she had complained of dyspnea and productive cough. Additionally, she reported generalized weakness, steatorrhea, and inability to gain weight. Her past medical history was significant for chronic pancreatitis, rectal prolapse, failure to thrive, and multiple pneumonias requiring antibiotics. On review of her family history, it was revealed that her granddaughter had screened positive for cystic fibrosis on newborn screening and that she had 3 maternal second cousins with cystic fibrosis. On presentation she was hypoxemic, with an oxygen saturation of 74% on room air, which improved to the 90% range on 2 L of oxygen by nasal cannula, and a respiratory rate of 22 breaths per minute. Her body mass index was 16.1 kg/m2 on admission and had ranged from 15 kg/m2 to 18 kg/m2 over the 2 years prior to presentation.

Chest computed tomography (Figure) showed central bronchiectasis with mucus plugging, as well as bilateral scattered opacities. She underwent bronchoscopy, which yielded cultures positive for methicillin-sensitive Staphylococcus aureus. She was initiated on cefazolin, which was transitioned to oral levofloxacin upon discharge from the hospital. On outpatient follow-up, pulmonary function tests were notable for a forced expiratory volume in 1 second (FEV1) of 69% predicted, a forced vital capacity (FVC) of 85% predicted, an FEV1/FVC ratio of 64%, and a forced expiratory flow at 25%-75% of FVC of 34% predicted. Stool pancreatic elastase-1 was low at 28 μg/g. A sweat test showed borderline values of 40 mmol/L on the left side and 34 mmol/L on the right side, and genetic testing revealed 1 copy of the ΔF508 mutation as well as another variant, K1080Q, of uncertain clinical significance. Repeat expanded genetic analysis did not show any additional known mutations. She was subsequently diagnosed with non-classic cystic fibrosis.


Cystic fibrosis is an autosomal recessive disorder caused by mutations in the CFTR gene, leading to severe pulmonary disease that can be associated with multiorgan involvement. Cystic fibrosis is typically diagnosed in childhood, with newborn screening becoming the norm; interestingly, approximately 2% of patients make up a subgroup known as non-classic cystic fibrosis.2 Sweat chloride testing is the gold standard for diagnosis: classic cystic fibrosis is defined by sweat chloride concentrations >60 mmol/L, whereas in non-classic cystic fibrosis, concentrations range from normal (<30 mmol/L) to borderline (30-60 mmol/L).3 Genetic testing of patients with cystic fibrosis typically demonstrates 2 CFTR gene mutations. Non-classic cystic fibrosis is increasingly recognized in adolescents and adults, but the diagnosis is not a simple one, given the heterogeneity of presentation, the variability in multiorgan involvement, and the cost of expanded genetic testing. Cystic fibrosis is a clinical diagnosis that, in this case, was driven predominantly by the strong family history as well as the presenting symptoms. This case reinforces the spectrum of cystic fibrosis and the importance and value of a thorough family history and clinical evaluation.

To read this article in its entirety please visit our website.

-Justin K. Lui, MD, Joseph Kilch, MD, Svetlana Fridlyand, DO, Abduljabbar Dheyab, MBChB, Christine Bielick Kotkowski, MD

This article originally appeared in the August 2017 issue of The American Journal of Medicine.

Case of Reversible Complete Heart Block- Fri, 08 Dec 2017 13:56:20 +0000  

Skin examination was remarkable for multiple diffusely distributed erythematous patches with central clearing, consistent with erythema chronicum migrans.



A previously healthy 22-year-old Caucasian man presented to a southern Quebec community hospital with a 1-week history of syncopal episodes. He was found to be hypotensive and bradycardic (heart rate 36 beats per minute) owing to third-degree atrioventricular (AV) block, which did not respond to atropine. An isoprenaline infusion was started, and the patient was transferred to a tertiary care center in Montreal.

The patient denied any recent travel history outside the province of Quebec. However, the patient frequently visited deeply wooded areas and was in close contact with deer and other wildlife.

On arrival, skin examination was remarkable for multiple diffusely distributed erythematous patches with central clearing, consistent with erythema chronicum migrans (Figure). The remainder of the physical examination was unremarkable.

On the basis of the history and physical examination, a diagnosis of early disseminated Lyme disease with carditis was suspected, and ceftriaxone was started. Screening Lyme serology was positive, and the blood smear was negative for Babesia/Anaplasma spp. The patient’s rash completely resolved within 24 hours of antibiotic initiation.

The transthoracic echocardiogram showed mild nonspecific mitral valve thickening and an ejection fraction of 60%; there were no other echocardiographic abnormalities. After 48 hours of observation, he underwent insertion of a temporary VVI (ventricular pacing and sensing mode) pacemaker.

On ceftriaxone, the AV block converted to first degree within 2 days and completely resolved by day 11, at which point the pacemaker was removed. He completed a total 21-day course of treatment with oral doxycycline. Western blot testing for Lyme disease was positive, confirming the diagnosis.

To read this article in its entirety please visit our website.

-Samuel De l’Étoile-Morel, MD, Abeer Feteih, MD, Catherine Anne Hogan, MD, MSc, Donald C. Vinh, MD, George Thanassoulis, MD, MSc

This article originally appeared in the August 2017 issue of The American Journal of Medicine.
