Illustration of some microbes that form the human microbiome

The Microbiome in Human Health


The human microbiome consists of all the microbes – bacteria, archaea, fungi, and even viruses – that live in and on human bodies and interact with human cells [1]. The term “microbiome” was coined in 2001 [2], and subsequent research found that this endogenous microbial population outnumbers human cells 10 to 1 and comprises some 10,000 different microbial species [3]. This diversity helps account for the vast array of symbiotic human-microbial relationships scientists are only now beginning to uncover [4]. The absence of these beneficial microbes may heighten the risk for many diseases. 

 

While microbial communities exist in the oral cavity, respiratory tract, and on the skin, microbes in the gastrointestinal tract have thus far been found to impact human health and physiology to the greatest extent [5]. Gut microbiota carry out important metabolic functions. For example, the digestion of xyloglucans, a type of cell wall polysaccharide found in lettuce and tomatoes, is performed by members of the species Bacteroides ovatus, which inhabits the human gut [6]. Gut microbes also produce metabolites that signal endocrine cells to produce various hormones, some of which regulate insulin sensitivity, glucose tolerance, and fat storage [7]. 

 

Dysbiosis, the imbalance of natural microflora, can result from exposure to various environmental factors and has been implicated in several diseases [8]. Crohn’s disease and ulcerative colitis have been associated with an increase of Enterobacteriaceae and loss of other symbiotic taxa [9]. Beyond gastrointestinal diseases, the loss of microbes responsible for producing short-chain fatty acids such as butyrate may raise the risk of heart disease [10]. Clostridium difficile infection, perhaps the most well-known example of dysbiosis, can occur when antibiotics wipe out colon bacteria that normally restrict C. difficile growth [11]. 

 

Characteristic of all these examples of dysbiosis is a reduction in microbial diversity. While it is difficult, perhaps impossible, to quantify the exact benefits generated by the various microbes that compose the microbiome, it is clear that greater diversity better prepares an organism to respond effectively to its environment. Reese and Dunn suggest experiments in which a population is “diluted” to produce individuals with reduced diversity, which can then be compared against normal individuals [12]. Mosca et al. implicate features of the Western lifestyle – eating behaviors, disruption of the circadian clock, and antibiotic consumption – in the loss of global microbial diversity and the resulting rise of allergies and inflammatory bowel diseases in the modern world. They explore reintroducing bacterial predators into the microbiota to restabilize the microbial ecosystem [13]. 
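In studies like these, microbial diversity is typically summarized with an index such as Shannon diversity (H′ = −Σ pᵢ ln pᵢ, where pᵢ is the proportion of species i). A minimal sketch of the calculation, using made-up species read counts purely for illustration:

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical read counts per species in two gut samples:
diverse_sample = [250, 240, 260, 250]   # four species, evenly represented
depleted_sample = [940, 20, 20, 20]     # one species dominates after "dilution"

print(shannon_diversity(diverse_sample))   # higher H' -> more diverse community
print(shannon_diversity(depleted_sample))  # lower H' -> reduced diversity
```

An evenly distributed community maximizes H′ (here ln 4 ≈ 1.39), while a community dominated by a single species scores much lower, which is the pattern dysbiosis tends to produce.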

 

The effort to preserve and restore microbial diversity is far from simple. Bello et al. suggest several measures that can be implemented immediately – curtailing antibiotic use, limiting caesarean sections, promoting breastfeeding – but they acknowledge that ultimately, a more ambitious strategy to systematically identify and reintroduce symbiotic bacteria is needed [14]. Part of the problem stems from the difficulty of sampling and identifying the thousands of microbes inhabiting humans from different populations. A recently developed technology called Cell Alive System uses magnetic fields and mechanical vibrations to uniformly cool microbial samples, giving them the best chance of survival in vitro. The decrease in microbial diversity is an urgent public health problem that demands our attention. As Dr. Martin Blaser, chair of the Human Microbiome at Rutgers University, puts it, “The first point is to stop the damage, then rebuild” [15].

  

References 

  1. “The Microbiome.” The Nutrition Source, 1 May 2020, www.hsph.harvard.edu/nutritionsource/microbiome/. 
  2. Prescott, Susan L. “History of Medicine: Origin of the Term Microbiome and Why It Matters.” Human Microbiome Journal, vol. 4, 2017, pp. 24–25, doi:10.1016/j.humic.2017.05.004. 
  3. “NIH Human Microbiome Project Defines Normal Bacterial Makeup of the Body.” National Institutes of Health, U.S. Department of Health and Human Services, 31 Aug. 2015, www.nih.gov/news-events/news-releases/nih-human-microbiome-project-defines-normal-bacterial-makeup-body. 
  4. Ogunrinola, Grace A., et al. “The Human Microbiome and Its Impacts on Health.” International Journal of Microbiology, vol. 2020, 2020, pp. 1–7, doi:10.1155/2020/8045646. 
  5. Barton, Wiley, et al. “Metabolic Phenotyping of the Human Microbiome.” F1000Research, vol. 8, 2019, p. 1956, doi:10.12688/f1000research.19481.1. 
  6. Larsbrink, Johan, et al. “A Discrete Genetic Locus Confers Xyloglucan Metabolism in Select Human Gut Bacteroidetes.” Nature, vol. 506, no. 7489, 2014, pp. 498–502, doi:10.1038/nature12907. 
  7. Martin, Alyce M., et al. “The Influence of the Gut Microbiome on Host Metabolism Through the Regulation of Gut Hormone Release.” Frontiers in Physiology, vol. 10, 2019, doi:10.3389/fphys.2019.00428. 
  8. Carding, Simon, et al. “Dysbiosis of the Gut Microbiota in Disease.” Microbial Ecology in Health & Disease, vol. 26, 2015, doi:10.3402/mehd.v26.26191. 
  9. Durack, Juliana, and Susan V. Lynch. “The Gut Microbiome: Relationships with Disease and Opportunities for Therapy.” Journal of Experimental Medicine, vol. 216, no. 1, 2018, pp. 20–40, doi:10.1084/jem.20180448. 
  10. Baxter, Nielson T., et al. “Dynamics of Human Gut Microbiota and Short-Chain Fatty Acids in Response to Dietary Interventions with Three Fermentable Fibers.” MBio, vol. 10, no. 1, 2019, doi:10.1128/mbio.02566-18. 
  11. “C. Difficile Infection.” American College of Gastroenterology, 6 Feb. 2020, gi.org/topics/c-difficile-infection/. 
  12. Reese, Aspen T., and Robert R. Dunn. “Drivers of Microbiome Biodiversity: A Review of General Rules, Feces, and Ignorance.” MBio, vol. 9, no. 4, 2018, doi:10.1128/mbio.01294-18. 
  13. Mosca, Alexis, et al. “Gut Microbiota Diversity and Human Diseases: Should We Reintroduce Key Predators in Our Ecosystem?” Frontiers in Microbiology, vol. 7, 2016, doi:10.3389/fmicb.2016.00455. 
  14. Bello, Maria G. Dominguez, et al. “Preserving Microbial Diversity.” Science, vol. 362, no. 6410, 2018, pp. 33–34, doi:10.1126/science.aau8816. 
  15. “Disappearance of the Human Microbiota: How We May Be Losing Our Oldest Allies.” ASM.org, asm.org/Articles/2019/November/Disappearance-of-the-Gut-Microbiota-How-We-May-Be. 

Considerations for Anesthesia in Low-Resource Settings 


There is a significant need for cost-effective, suitable equipment for anesthesia providers in low-resource areas.2 Low-resource settings are mostly found in countries defined by the World Bank as low-income countries.1 However, it is important to highlight that low-resource settings and low-income countries are not always synonymous, as resource inequalities exist within countries.1 As Michael Dobson points out in his book Anesthesia at the District Hospital, “good anesthesia depends much more on the skills, training, and standards of the anesthetist than on the availability of expensive and complicated equipment.”2 Special consideration must be given to the suitability of anesthesia equipment in economically challenged settings so anesthesiologists can administer safe and effective care to patients.2 

In areas with sufficient resources, anesthesia machines are high-tech workstations that require a stable electricity supply and compressed gases (like oxygen).2 They contain sophisticated controls and can deliver a wide range of anesthetics to patients.2 The issue is that this equipment requires highly trained and skilled biomedical technologists for service and maintenance.2 In low-resource settings, where electricity is variable and compressed gases are rare, these machines are nonfunctional.2 Therefore, anesthesiologists working in low-resource areas must depend on a different type of anesthesia equipment.2 The exact requirements for providing safe anesthesia in resource-poor settings have never been properly defined.3 Guidelines published in 1993 by the World Federation of Societies of Anesthesiologists remain the best recommendations to follow.3 The basic requirements for administering anesthesia in low-resource settings include the equipment for delivering the anesthetic, a monitoring apparatus, a cannula, rubber gloves, and medications (both anesthetic and emergency drugs).3 

Although anesthesia-related perioperative mortality has fallen significantly in the last 50 years, patients undergoing surgery in low-resource areas still face a two- to threefold higher mortality risk than patients in high-resource areas.4 With limited financial and logistical resources, anesthesiologists often administer the same anesthetic agents irrespective of the type of surgical procedure, which contributes to the increased anesthesia-related morbidity and mortality seen in resource-deficient areas.4 Researchers in the Ivory Coast have pushed for expanding the use of spinal anesthesia in low-resource settings, as opposed to general anesthesia.5 In addition to its cost-effectiveness, spinal anesthesia acts locally, helping to prevent airway-related complications.4,5 In low-resource areas, more focus on spinal anesthesia may prevent anesthesia-related morbidity and mortality.5 

Furthermore, according to the Lancet Commission on Global Surgery, more than 90% of people in resource-poor areas do not have access to emergency or essential surgery.6 Interestingly, the primary limitation is not a lack of trained surgeons or operating rooms.6 Instead, it is inadequate anesthesia services, known as the “anesthesia gap,” that most often result in absent or delayed surgical care.6 Researchers at Harvard University have advocated for the emergency use of ketamine in low-resource settings as a short-term solution to the anesthesia gap.6 Ketamine is safer than most anesthetics because ventilation is typically well maintained and life-threatening hemodynamic disturbances are rare.6 It can also be administered in an emergency by experienced non-anesthetist providers.6 In a prospective series of more than 1,200 surgical cases in Kenya, there were no deaths or serious complications related to ketamine anesthesia.7 Ketamine is not meant to compete with formal anesthesia services, but it can be used in emergency situations in low-resource settings where there are no reasonable alternatives.6,7 

Widespread, safe anesthesia is achievable, but it requires a commitment by health systems to provide anesthesiologists with the basic requirements for anesthesia.3 In working to close the anesthesia gap, it is important to recognize the unique challenges presented by low-resource settings.3 

 

References 

 

  1. Baker, T. (2015). Critical Care in Low Resource Settings. Karolinska Institutet. https://doi.org/10.1007/978-1-4939-0811-0_16 
  2. Roth, R., Frost, E., Gevirtz, C., & Atcheson, C. (2015). The Role of Anesthesiology in Global Health. Springer International Publishing. ISBN: 978-3-319-09422-9 
  3. McCormick, B., & Eltringham, R. (2007). Anaesthesia equipment for resource-poor environments. Anaesthesia, 62(2), 54-60. https://doi.org/10.1111/j.1365-2044.2007.05299.x 
  4. Bharati, S., Chowdhury, T., Gupta, N., Schaller, B., Cappellani, R., & Maguire, D. (2014). Anaesthesia in underdeveloped world: Present scenario and future challenges. Niger Med J, 55(1), 1-8. https://doi.org/10.4103/0300-1652.128146 
  5. Mgbakor, A., & Adou, B. (2011). Plea for greater use of spinal anaesthesia in developing countries. Tropical Doctor, 42(1), 49-51. https://doi.org/10.1258/td.2011.100305 
  6. Suarez, S., Burke, T., Yusufali, T., Makin, J., & Sessler, D. (2018). The role of ketamine in addressing the anesthesia gap in low-resource settings. Journal of Clinical Anesthesia, 49, 42-43. https://doi.org/10.1016/j.jclinane.2018.06.009 
  7. Burke, T., Suarez, S., Senay, A., Masaki, C., Rogo, K., & Sessler, D. et al. (2017). Safety and Feasibility of a Ketamine Package to Support Emergency and Essential Surgery in Kenya when No Anesthetist is Available: An Analysis of 1216 Consecutive Operative Procedures. World J Surg, 41(12), 2990-2997. https://doi.org/10.1007/s00268-017-4312-0 

 

 

 

 

Hospital room with mechanical ventilator system

Neurally Adjusted Ventilatory Assist for Weaning Patients from Mechanical Ventilation


A mechanical ventilator is a device that supports breathing by moving air into or out of a patient’s lungs. Mechanical ventilation is one of several treatments for respiratory failure, but long-term ventilator use can cause lung damage [1]. When a patient is deemed ready, they are weaned from ventilation, ideally recovering their normal, spontaneous breathing faculties [2]. However, multiple factors in the weaning process can lead to a higher risk of failure, including the patient’s underlying conditions, the specialist’s handling of the recovery process, and the specific type (also called “mode”) of ventilator used [2]. Advances in ventilator technology play a crucial role in improving the safety of mechanical ventilation, such as by facilitating weaning. In particular, neurally adjusted ventilatory assist may improve patient outcomes. 

 

Many modes of ventilation follow a partial assist model: the patient initiates a breath, and the ventilator detects this attempt and provides assistance, resulting in a full breath. The most common example is the “pressure support ventilation” mode (PSV), where the ventilator detects pressure changes to determine breathing attempts [3]. In contrast, the mode called “neurally adjusted ventilatory assist” (NAVA) measures the electrical activity of the diaphragm, using an array of electrodes inserted into the esophagus across from the diaphragm [4]. Diaphragm electrical activity is a more accurate indicator of a patient’s attempted breaths than changes in air pressure. 
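As a rough illustration (not an actual ventilator algorithm), the two trigger strategies can be contrasted as threshold tests on different signals: a PSV-style trigger watches for a dip in airway pressure, while a NAVA-style trigger watches the diaphragm’s electrical activity (Edi), which rises earlier in the breathing effort. All signal values and thresholds below are hypothetical:

```python
def pressure_trigger(pressure_cmh2o, baseline=5.0, sensitivity=2.0):
    """Flag a breath attempt when airway pressure dips `sensitivity`
    cmH2O below baseline (a simplified pressure-trigger model)."""
    return [p <= baseline - sensitivity for p in pressure_cmh2o]

def electrical_trigger(edi_uv, threshold=0.5):
    """Flag a breath attempt when diaphragm electrical activity (Edi,
    in microvolts) rises above a threshold (a simplified NAVA-style trigger)."""
    return [e >= threshold for e in edi_uv]

# Invented sample waveforms: the Edi rises before airway pressure falls,
# which is why a neural trigger can detect the effort earlier.
edi      = [0.1, 0.6, 1.2, 1.0, 0.2]
pressure = [5.0, 4.6, 2.8, 3.5, 5.0]

print(electrical_trigger(edi))     # [False, True, True, True, False]
print(pressure_trigger(pressure))  # [False, False, True, False, False]
```

In this toy example, the electrical trigger flags the effort one sample earlier than the pressure trigger, mirroring why more direct detection of patient effort reduces asynchrony.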

 

The accuracy of detected breaths is correlated with better patient outcomes. In the literature, “asynchrony” refers to a mismatch between patient respiratory efforts and ventilator assists. For instance, when the patient inhales, their ventilator should respond promptly with an inhalation assist. Otherwise, when the patient exhales, their ventilator might still be inhaling, leading to “wasted energy expenditure” [5]. Consequences include stress or injury on the patient’s diaphragm, errors in the assessment of patient readiness, and worse overall outcomes [5]. If NAVA is more accurate than other modes, it has the potential to improve patient recovery, weaning, and outcomes.  

 

NAVA was invented in 1999, but it began appearing in clinical trials around the late 2000s. Researchers sought to compare it against a proven baseline: PSV. A study from 2010 evaluated eleven patients with acute respiratory distress syndrome, finding that NAVA reduces asynchrony, stabilizes the volume of breaths, and improves patient-ventilator interactions [6]. An unaffiliated study from 2011 agreed that NAVA reduced asynchrony among patients with acute respiratory failure [7]. However, neither study assessed patient outcomes directly, a task left to later research. 

 

In 2016, researchers in France studied 128 adults across 11 ICUs, comparing their outcomes and experiences with NAVA. Like previous studies, they concluded that NAVA reduces asynchrony. After leaving the ventilator, patients using NAVA reported less shortness of breath, and they were less likely to return to ventilation, compared to patients using PSV. However, NAVA did not reduce the duration of ventilation [8]. This finding conflicts with research from 2020 on Chinese patients considered “difficult to wean” from ventilation. For this subset of struggling patients, NAVA reduced the duration of ventilation and increased the proportion of patients with successful weaning [9]. These findings suggest that, overall, NAVA is effective in weaning patients and supporting their outcomes. 

 

The future of NAVA appears promising, but current research has yet to explore several possibilities. First, a cost-benefit analysis could determine if providers should adopt NAVA on a wider scale. Second, clinical trials with other respiratory conditions, including severe cases of COVID-19, might improve the clinical credibility of NAVA. Finally, a redesign of NAVA could measure the brain’s respiratory center instead of the diaphragm, potentially increasing measurement accuracy and improving outcomes further.  

 

References 

 

[1] Respiratory Failure. (N.d.) Retrieved September 25, 2020 from https://www.nhlbi.nih.gov/health-topics/respiratory-failure 

[2] Boles J.-M., et al. Weaning from Mechanical Ventilation. European Respiratory Journal 2007; 29: 5. DOI: 10.1183/09031936.00010206 

[3] Dosch M.P. and Tharp D. The Anesthesia Gas Machine. March 2016. College of Health Professions, University of Detroit Mercy. Retrieved September 25, 2020 from https://healthprofessions.udmercy.edu/academics/na/agm/08.htm.  

[4] Sinderby C., et al. Neural Control of Mechanical Ventilation in Respiratory Failure. Nature Medicine 1999; 5: 12. DOI: 10.1038/71012. 

[5] Thille A.W., et al. Patient-Ventilator Asynchrony during Assisted Mechanical Ventilation. Intensive Care Medicine 2006; 32. DOI: 10.1007/s00134-006-0301-8 

[6] Terzi N., et al. Neurally Adjusted Ventilatory Assist in Patients Recovering Spontaneous Breathing after Acute Respiratory Distress Syndrome: Physiological Evaluation. Critical Care Medicine 2010; 38: 9. DOI: 10.1097/CCM.0b013e3181eb3c51 

[7] Piquilloud L., et al. Neurally Adjusted Ventilatory Assist Improves Patient-Ventilator Interaction. Intensive Care Medicine 2011; 37. DOI: 10.1007/s00134-010-2052-9. 

[8] Demoule A., et al. Neurally Adjusted Ventilatory Assist as an Alternative to Pressure Support Ventilation in Adults: A French Multicentre Randomized Trial. Intensive Care Medicine 2016; 42. DOI: 10.1007/s00134-016-4447-8. 

[9] Liu L., et al. Neurally Adjusted Ventilatory Assist versus Pressure Support Ventilation in Difficult Weaning: A Randomized Trial. Anesthesiology 2020; 132. DOI: 10.1097/ALN.0000000000003207 

Image of medical team moving patient

Legislation to Protect Patients 


Over the past three decades, concerns about protecting patients’ rights have led to the passage of legislation meant to ensure the safety and privacy of patients. In particular, laws such as HITECH, GINA, HIPAA and EMTALA are meant to protect the privacy of patient data, provide patients with the autonomy to control the distribution of their data, and safeguard against discrimination based on that information.  

 

The Emergency Medical Treatment and Active Labor Act (EMTALA) was enacted in 1986 and required hospitals to care for and treat patients, regardless of their ability to pay. The law was created in response to patient dumping, a practice common in the mid-20th century where poor or uninsured patients were transferred from hospital to hospital because of their inability to pay [1]. A landmark 1984 study by Himmelstein et al. found that 97% of patients transferred to public hospitals had no insurance or were government-insured [2]. With the passage of EMTALA, hospitals were required to treat emergency conditions, as defined by the bill, and stabilize patients before releasing them. Failure to do so can result in a penalty of up to $50,000 for both hospitals and physicians. Importantly, malpractice insurance does not usually cover the fine, meaning that physicians who violate EMTALA must pay the fee out of their own pocket [3]. 

 

The Health Insurance Portability and Accountability Act (HIPAA) was passed in 1996. Building on the foundational belief that patients have a right to keep personal health information private, HIPAA developed a privacy framework for a digital age. With informational tools such as privacy notices and requirements for detailed authorization requests, HIPAA was meant to inform patients about how their data would be shared [4]. Importantly, the act set a federal minimum for the protection of patient data, which individual states could then strengthen further. 

 

As genetic testing and sequencing developed in the 1990s and early 2000s, it became apparent that legislation to protect patients would need to cover genetic information, as well. In 2008, lawmakers passed the Genetic Information Nondiscrimination Act (GINA), which prohibited health insurers and employers from requiring genetic tests, requesting genetic information from patients, or discriminating based on that information. While some of these restrictions were set in place by HIPAA, they were strengthened and standardized under GINA. A paper by Hudson et al. noted that GINA would likely reduce the “fear factor” associated with genetic information, making it more likely that patients would participate in studies that collect genetic information [5]. 

 

The Health Information Technology for Economic and Clinical Health Act (HITECH), which was signed into law in 2009, was primarily meant to accelerate the adoption of electronic health records (EHR). The law was fairly successful — a study by Adler-Milstein and Jha found that adoption of EHR among eligible hospitals rose from 3.2% before the passage of HITECH to 14.2% after the law was passed [6]. The HITECH Act also allowed patients to access their EHR which, coupled with increased adoption of EHR at medical facilities, made it easier for individuals to share their health data across organizations. 

 

One of the most recent legislative additions to the patient protection framework is the Patient Protection and Affordable Care Act (ACA), which became law in 2010. The law extended Medicaid enrollment to around 15 million people with the ultimate goal of ensuring all Americans were covered by health insurance. While healthcare coverage is the best-known part of the law, the ACA also included provisions aimed at improving care for underserved populations and making information about the cost of healthcare more transparent [7]. While parts of the policy, like the individual mandate, have since been rolled back, many of the patient protection measures remain in place. 

 

References 

 

[1] Mayere Lee, Tiana. “An EMTALA Primer: The Impact of Changes in the Emergency Medicine Landscape on EMTALA Compliance and Enforcement.” Annals of Health Law, vol. 13, no. 1, 2004, pp. 145–178., https://lawecommons.luc.edu/cgi/viewcontent.cgi?article=1231&context=annals. 

[2] Himmelstein, D U, et al. “Patient Transfers: Medical Practice as Social Triage.” American Journal of Public Health, vol. 74, no. 5, 1984, pp. 494–497. doi:10.2105/ajph.74.5.494. 

[3] Zibulewsky, Joseph. “The Emergency Medical Treatment and Active Labor Act (Emtala): What It Is and What It Means for Physicians.” Baylor University Medical Center Proceedings, vol. 14, no. 4, 2001, pp. 339–346. doi:10.1080/08998280.2001.11927785. 

[4] Annas, George J. “HIPAA Regulations — A New Era of Medical-Record Privacy?” The New England Journal of Medicine, vol. 348, no. 15, 10 Apr. 2003, pp. 1486–1490. doi:10.1056/NEJMlim035027. 

[5] Hudson, Kathy L., et al. “Keeping Pace with the Times — The Genetic Information Nondiscrimination Act of 2008.” New England Journal of Medicine, vol. 358, no. 25, 2008, pp. 2661–2663. doi:10.1056/nejmp0803964. 

[6] Adler-Milstein, Julia, and Ashish K. Jha. “HITECH Act Drove Large Gains In Hospital Electronic Health Record Adoption.” Health Affairs, vol. 36, no. 8, 2017, pp. 1416–1422. doi:10.1377/hlthaff.2016.1651. 

[7] Rosenbaum, Sara. “The Patient Protection and Affordable Care Act: Implications for Public Health Policy and Practice.” Public Health Reports, vol. 126, no. 1, 2011, pp. 130–135. doi:10.1177/003335491112600118. 

 

 

Image of newborn baby

Intrathecal Morphine vs. Intrathecal Hydromorphone for Post-Cesarean Analgesia 


 

Anesthesia for cesarean section commonly involves a single-shot spinal injection of local anesthetic with an opioid included to cover postoperative pain as part of a multimodal analgesic pathway. Intrathecal preservative-free morphine is considered the gold standard for post-cesarean analgesia by obstetric anesthesiologists given its low-risk profile; however, hydromorphone has gained attention as an alternative agent due to drug shortages. 

Hydromorphone was chosen partly due to its hydrophilicity, which causes it to persist within the cerebrospinal fluid (CSF) for several hours, providing bimodal analgesia at the segmental and supraspinal levels. Lipophilic opioids such as fentanyl and sufentanil diffuse more quickly, exhibiting a rapid segmental effect with more systemic uptake, making them less ideal in this setting. Dosing studies have shown that the effective intrathecal doses (ED90) of morphine and hydromorphone for post-cesarean analgesia are 150 mcg and 75 mcg, respectively [8]. 

While several retrospective studies have compared the two drugs when administered in the central nervous system, there is only one prospective study published on this subject to date. Observational studies were performed by multiple groups including Beatty et al., who found no significant difference in patient pain between intrathecal morphine 100mcg and hydromorphone 40mcg. However, it is important to note these doses are not necessarily equipotent and standard guidelines for equianalgesic dosing of the two drugs in the intrathecal space are not yet available. The morphine dose was chosen based on prior studies showing absence of analgesic benefit for intrathecal doses above 75mcg, while the hydromorphone dose was merely the most common dose among participating providers [3].

Based on dosing studies by Rathmell et al. and later Sviggum et al., a 2:1 conversion ratio is commonly selected; however, this ratio needs further validation [5][8]. In June 2020, Sharpe et al. published a randomized clinical trial comparing intrathecal opioids (150 mcg morphine vs. 75 mcg hydromorphone) and reported similar pain scores through 36 hours postpartum and no difference in breakthrough analgesics given. Regarding side effects, there was no difference in the incidence of nausea/vomiting or of moderate or severe sequelae. The median difference in time to first opioid dose was not statistically significant but may be clinically relevant given the 5-hour span for hydromorphone compared to 12 hours for morphine. There was also a notable difference in postpartum opioid consumption that did not reach statistical significance, possibly indicating that the study was underpowered [6]. 

It is important to note that, while the pharmacokinetic profile of hydromorphone (hydrophilicity, less rostral spread) would theoretically imply less incidence of nausea or respiratory depression, studies have not supported this hypothesis. Given how infrequent episodes of respiratory depression are in the postpartum setting, there is simply not enough data at this point to properly evaluate the comparative safety of intrathecal hydromorphone. In 2019, the Society of Obstetric Anesthesia and Perinatology published a consensus stating that intrathecal morphine is preferred if readily available given the paucity of data on intrathecal hydromorphone’s safety profile [2]. Furthermore, studies are limited by the lack of a standard method for comparing intrathecal opioid doses. Until a standard conversion ratio for intrathecal opiates becomes available, comparative studies will remain limited by lack of verification that equianalgesic doses of intrathecal opioid are being administered. Statistical differences may be found in future analyses if additional research changes the current 2:1 conversion ratio. 
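For illustration only, the commonly assumed 2:1 morphine-to-hydromorphone potency ratio discussed above reduces to simple arithmetic. This sketch merely encodes that assumption; it is not a validated equianalgesic standard and in no way constitutes dosing guidance:

```python
def hydromorphone_from_morphine_mcg(morphine_mcg, ratio=2.0):
    """Convert an intrathecal morphine dose (mcg) to the hydromorphone dose
    implied by an assumed morphine:hydromorphone potency ratio (2:1 here).
    Illustrative only -- not a validated clinical conversion."""
    if morphine_mcg <= 0:
        raise ValueError("dose must be positive")
    return morphine_mcg / ratio

# The ED90 doses reported in the dosing studies (150 mcg morphine,
# 75 mcg hydromorphone) are consistent with this assumed ratio:
print(hydromorphone_from_morphine_mcg(150))  # 75.0
```

If future research revises the ratio, the same comparison logic applies with a different `ratio` value, which is precisely why equianalgesic verification matters for interpreting comparative trials.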

  

 

References 

 

  1. Abboud TK, Dror A, Mosaad P, et al. Mini-dose intrathecal morphine for the relief of post-cesarean section pain: Safety, efficacy, and ventilatory responses to carbon dioxide. Anesth Analg. 1988;67(2):137-143. 
  2. Bauchat JR, Weiniger CF, Sultan P, et al. Society for obstetric anesthesia and perinatology consensus statement: Monitoring recommendations for prevention and detection of respiratory depression associated with administration of neuraxial morphine for cesarean delivery analgesia. Anesth Analg. 2019;129(2):458-474. doi:10.1213/ANE.0000000000004195.
  3. Beatty NC, Arendt KW, Niesen AD, Wittwer ED, Jacob AK. Analgesia after cesarean delivery: A retrospective comparison of intrathecal hydromorphone and morphine. J Clin Anesth. 2013;25(5):379-383. doi:10.1016/j.jclinane.2013.01.014.
  4. Marroquin B, Feng C, Balofsky A, et al. Neuraxial opioids for post-cesarean delivery analgesia: Can hydromorphone replace morphine? A retrospective study. Int J Obstet Anesth. 2017;30:16-22. doi:10.1016/j.ijoa.2016.12.008.
  5. Rathmell JP, Lair TR, Nauman B. The role of intrathecal drugs in the treatment of acute pain. Anesth Analg. 2005;101(5 Suppl):30. doi:10.1213/01.ane.0000177101.99398.22.
  6. Sharpe EE, Molitor RJ, Arendt KW, et al. Intrathecal morphine versus intrathecal hydromorphone for analgesia after cesarean delivery: A randomized clinical trial. Anesthesiology. 2020;132(6):1382-1391. doi:10.1097/ALN.0000000000003283.
  7. Sultan P, Halpern SH, Pushpanathan E, Patel S, Carvalho B. The effect of intrathecal morphine dose on outcomes after elective cesarean delivery: A meta-analysis. Anesth Analg. 2016;123(1):154-164. doi:10.1213/ANE.0000000000001255.
  8. Sviggum HP, Arendt KW, Jacob AK, et al. Intrathecal hydromorphone and morphine for postcesarean delivery analgesia: Determination of the ED90 using a sequential allocation biased-coin method. Anesth Analg. 2016;123(3):690-697. doi:10.1213/ANE.0000000000001229.