Wound Infiltration for Analgesia 


Continuous infiltration of a surgical wound after an operation is commonly used for analgesic purposes. Specific methods of wound infiltration include patient-controlled analgesia and continuous infusion [1]. Research to determine the effectiveness of wound infiltration analgesia has examined a variety of metrics, in particular visual analog scale (VAS) pain scores and opioid use. This research has sought to determine not only how wound infusion compares to other analgesic techniques, but also which drug types, drug doses, and surgical situations are optimal for this technique. 

 

Most literature on this topic indicates that infiltration decreases postoperative pain as measured by VAS scores. In one trial, patients undergoing total knee arthroplasty were divided into two groups: one received wound infiltration for analgesia, while the other received an epidural infusion. The infiltration group had lower resting VAS scores in the 24 hours following surgery: 7, compared to the epidural infusion group’s 30. The trend persisted between 48 and 96 hours, when the respective VAS scores were 7.5 and 23 [2]. Similarly, in a randomized trial, patients recovering from total hip arthroplasty received either wound infiltration or epidural infusion for analgesia. For the first 20 hours following surgery, the groups’ VAS scores showed little difference. From 20 to 96 hours, however, the infiltration group had significantly lower at-rest VAS scores: for instance, 8 rather than 20 between 24 and 48 hours [3]. 

 

However, in another study, thyroid surgery patients in the infiltration group showed no significant difference in pain scores compared to the placebo group [4]. Wound infiltration also did not correlate with a significant reduction in opioid use: the total dose administered to the infiltration group was 64 mg, compared to 69 mg for the placebo group [4]. Other research, however, points to a significant reduction in opioid use for infiltration patients. In the knee arthroplasty study described above, for instance, patients in the infiltration group were administered a mean of 7.5 mg of morphine in the 24 hours following surgery, while their counterparts received 18 mg on average [2]. In the previously mentioned study of hip arthroplasty patients, meanwhile, the infiltration group consumed a mean of 258 mg in the 96 hours after surgery, while the infusion group consumed a mean of 324 mg [3]. Finally, in a review of 203 articles examining the efficacy of the technique, researchers found that “a general reduction in pain intensity and in opioid consumption has been observed with continuous wound infiltration” [1]. 
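To put these figures side by side, the relative reductions in opioid consumption can be computed directly from the numbers quoted above (a minimal illustrative calculation; the percentages are derived here for context and are not effect estimates reported by the studies):

```python
# Relative reduction in mean opioid consumption with wound infiltration,
# computed from the figures quoted above. Illustrative arithmetic only.
studies = {
    "Knee arthroplasty, morphine over 24 h (mg) [2]": (7.5, 18),
    "Hip arthroplasty, opioid over 96 h (mg) [3]": (258, 324),
    "Thyroid surgery, total opioid dose (mg) [4]": (64, 69),
}

for label, (infiltration, comparator) in studies.items():
    reduction = 100 * (comparator - infiltration) / comparator
    print(f"{label}: {reduction:.0f}% lower with infiltration")
```

These crude percentages (roughly 58%, 20%, and 7%, respectively) illustrate why the knee and hip arthroplasty trials reported significant reductions while the thyroid surgery trial did not.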

 

In this review, Paladini et al. also found that wound infiltration has varying degrees of effectiveness for different procedures. Specifically, infiltration appeared most effective in areas with large amounts of connective and subcutaneous tissue. In addition, the effectiveness of infiltration varies depending on the type and amount of anesthetic being administered [1]. In one study, patients undergoing shoulder surgery were divided into three groups. One received a continuous infiltration of saline, while the other two received infiltration of ropivacaine at different concentrations: 2 mg/mL and 3.75 mg/mL, respectively. While both ropivacaine groups had lower VAS scores and consumed fewer opioids than the saline group, the group receiving the higher concentration of the drug had significantly lower VAS scores and opioid consumption than the low-concentration group [5]. 

 

Thus, wound infiltration appears generally effective for reducing postoperative pain across a variety of metrics. However, the method may be more successful for certain operations than for others, and its success may also depend on the dosage and type of anesthetic used. 

 

References 

 

[1] Paladini, Giuseppe, et al. “Continuous Wound Infiltration of Local Anesthetics in Postoperative Pain Management: Safety, Efficacy and Current Perspectives.” Journal of Pain Research, vol. 13, 2020, pp. 285-294. doi:10.2147/JPR.S211234.

[2] Andersen, Karen V, et al. “A Randomized, Controlled Trial Comparing Local Infiltration Analgesia with Epidural Infusion for Total Knee Arthroplasty.” Acta Orthopaedica, vol. 81, no. 5, 2010, pp. 606–610. doi:10.3109/17453674.2010.519165. 

[3] Andersen, Karen V, et al. “Reduced Hospital Stay and Narcotic Consumption, and Improved Mobilization with Local and Intraarticular Infiltration after Hip Arthroplasty: A Randomized Clinical Trial of an Intraarticular Technique versus Epidural Infusion in 80 Patients.” Acta Orthopaedica, vol. 78, no. 2, 2007, pp. 180–186. doi:10.1080/17453670710013654. 

[4] Miu, Mihaela, et al. “Lack of Analgesic Effect Induced by Ropivacaine Wound Infiltration in Thyroid Surgery.” Anesthesia & Analgesia, vol. 122, no. 2, 2016, pp. 559–564. doi:10.1213/ane.0000000000001041. 

[5] Gottschalk, Andre, et al. “Continuous Wound Infiltration with Ropivacaine Reduces Pain and Analgesic Requirement After Shoulder Surgery.” Anesthesia & Analgesia, 2003, pp. 1086–1091. doi:10.1213/01.ane.0000081733.77457.79. 

Current Research on COVID-19 Antibodies


Since the outbreak of SARS-CoV-2, the body of scientific literature on COVID-19 has rapidly expanded [1]. Current research often focuses on COVID-19 antibodies, which provide valuable information on the continuing spread of the virus, previous infection patterns, and the immune response [1]. 

 

Widespread availability of commercial assays that detect SARS-CoV-2 antibodies has enabled researchers to examine acquired immunity to COVID-19 at the population level [3]. The four major types of antibody tests are rapid diagnostic tests (RDT), enzyme-linked immunosorbent assays (ELISA), neutralization assays, and chemiluminescent immunoassays [4]. Currently, there is no standard test for detecting SARS-CoV-2 antibodies [5]. Antibody tests for SARS-CoV-2 detect the presence of IgA, IgM, or IgG antibodies produced by B cells [4]. IgM antibodies are produced soon after infection, while IgG antibodies are produced later to maintain the immune response to a specific pathogen [4]. IgA is found on mucous membranes and assists the innate immune response [4]. New clinical reports indicate that antibodies against SARS-CoV-2 form between 6 and 10 days after infection, with IgM antibody levels peaking at 12 days [4]. These IgM antibodies persist for up to 35 days [4]. In contrast, IgG antibodies peak at 17 days and persist for up to 49 days [4]. 

 

Higher antibody titers have been observed in men than in women, despite women generally having more B cells and producing more antibodies than men [7]. During the acute stage of SARS-CoV-2 infection, these higher titers in men correlate with more severe symptoms and a higher fatality rate [8]. Conversely, women have shown increased resistance to SARS-CoV-2 [8]. This may be due to women’s enhanced innate antiviral responses, such as those mediated by toll-like receptors [8]. 

 

The relationship between the presence of SARS-CoV-2 antibodies and the risk of subsequent COVID-19 reinfection remains unclear [6]. Data from a recent study at Oxford University Hospitals in the United Kingdom suggest that the presence of SARS-CoV-2 IgG antibodies is associated with a substantially reduced risk of reinfection for 6 months [6]. The researchers performed a prospective longitudinal cohort study of 12,541 health care workers to assess the relative incidence of positive COVID-19 tests in those who were seropositive for SARS-CoV-2 antibodies and in those who were seronegative [6]. Of the 11,364 health care workers who followed up after an initial negative antibody result, 223 received a positive COVID-19 test [6]. Of the 1,265 health care workers who followed up after an initial positive antibody result, only 2 received a positive COVID-19 test [6]. It is possible that SARS-CoV-2 protective immunity lasts longer than 6 months [9]. As of November 2020, there had been more than 30 million confirmed infections but few documented cases of reinfection with SARS-CoV-2 throughout the world [9]. 
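The contrast between the two groups can be seen by computing the crude incidence proportions from the counts quoted above (a simple illustrative calculation based only on the raw figures; it does not account for follow-up time or the adjusted analyses reported in the study):

```python
# Crude incidence of positive COVID-19 tests among initially seronegative vs.
# initially seropositive health care workers, using the counts quoted above [6].
seronegative_cases, seronegative_total = 223, 11_364
seropositive_cases, seropositive_total = 2, 1_265

risk_seronegative = seronegative_cases / seronegative_total  # ~2.0%
risk_seropositive = seropositive_cases / seropositive_total  # ~0.2%

print(f"Seronegative group: {risk_seronegative:.2%} tested positive")
print(f"Seropositive group: {risk_seropositive:.2%} tested positive")
print(f"Crude relative risk (seropositive vs. seronegative): "
      f"{risk_seropositive / risk_seronegative:.2f}")
```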

 

An analysis of 20,000 patients with COVID-19 in the United States concluded that convalescent plasma therapy with neutralizing antibodies is safe and may reduce mortality in critically ill patients [10]. Neutralizing antibodies can be passively transferred into patients before or after viral infection to prevent or treat disease [10]. Therapeutic neutralizing antibodies with high specificity and strong affinity for target proteins have been used to treat several viral infections, including Ebola virus and influenza virus [10]. The neutralizing antibodies against SARS-CoV-2 investigated so far all target spike proteins on the surface of the coronavirus [10]. Plasma containing neutralizing antibodies from convalescent individuals infected with SARS-CoV-2 is currently being administered to severely ill patients [10]. Current research finds that transfusion of such plasma to critically ill patients has resulted in reduced or undetectable viral loads and relieved acute respiratory distress syndrome [10]. 

 

References 

 

  1. Figueiredo‐Campos, P., Blankenhaus, B., Mota, C., et al. (2020). Seroprevalence of anti‐SARS‐CoV‐2 antibodies in COVID‐19 patients and healthy volunteers up to 6 months post disease onset. European Journal of Immunology, 50(12), 2025-2040. doi:10.1002/eji.202048970 
  2. Altmann, D., Douek, D., & Boyton, R. (2020). What policy makers need to know about COVID-19 protective immunity. The Lancet, 395(10236), 1527-1529. doi:10.1016/s0140-6736(20)30985-5 
  3. Spellberg, B., Nielsen, T., & Casadevall, A. (2020). Antibodies, Immunity, and COVID-19. JAMA Internal Medicine. doi:10.1001/jamainternmed.2020.7986 
  4. Kopel, J., Goyal, H., & Perisetti, A. (2020). Antibody tests for COVID-19. Baylor University Medical Center Proceedings, 34(1), 63-72. doi:10.1080/08998280.2020.1829261 
  5. Weinstein, M., Freedberg, K., Hyle, E., & Paltiel, A. (2020). Waiting for Certainty on Covid-19 Antibody Tests—At What Cost? New England Journal of Medicine. doi:10.1056/NEJMp2017739 
  6. Lumley, S., O’Donnell, D., Stoesser, N., et al. (2020). Antibody Status and Incidence of SARS-CoV-2 Infection in Health Care Workers. New England Journal of Medicine. doi:10.1056/nejmoa2034545 
  7. Robbiani, D., Gaebler, C., Muecksch, F., et al. (2020). Convergent antibody responses to SARS-CoV-2 infection in convalescent individuals. bioRxiv. doi:10.1101/2020.05.13.092619 
  8. Jin, J. M., Bai, P., He, W., et al. (2020). Gender differences in patients with COVID-19: Focus on severity and mortality. Frontiers in Public Health, 8, 152. doi:10.3389/fpubh.2020.00152 
  9. Tillett, R., Sevinsky, J., Hartley, P., et al. (2020). Genomic evidence for reinfection with SARS-CoV-2: a case study. The Lancet Infectious Diseases. doi:10.1016/S1473-3099(20)30764-7 
  10. Jiang, S., Zhang, X., Yang, Y., Hotez, P., & Du, L. (2020). Neutralizing antibodies for the treatment of COVID-19. Nature Biomedical Engineering, 4(12), 1134-1139. doi:10.1038/s41551-020-00660-2 

Classification of a Nonroutine Surgical Event


Despite efforts to improve perioperative patient safety over the past two decades, medical errors remain a significant cause of morbidity and mortality [1]. Safety improvement research has long been hindered by the weak relationship between healthcare interventions and adverse outcomes like morbidity and mortality [2]. These outcomes are rare in clinical research, limiting statistical power to demonstrate associations [2]. As a result, traditional methods of measuring morbidity and mortality have indicated inconsistent relationships with healthcare interventions [2]. Recently, an alternative to rare outcome measures, known as the “nonroutine event,” has been proposed [2]. 

 

A nonroutine event is defined as any aspect of clinical care perceived by clinicians or observers as a deviation from optimal care for a patient in a specific clinical scenario [3]. The concept includes not only the occurrence or near-occurrence of patient injury but also flawed care processes such as missing or broken equipment, delayed lab tests, insufficient training, and interpersonal communication errors [4,5]. It encompasses incidents that may not be directly linked to patient injury, which have previously been documented unreliably by reporting systems [4]. The nonroutine event reporting system is modeled after safety processes in the nuclear power industry, where any deviation from optimal operating procedures is reported and investigated [4]. The foundation of this safety concept is the principle of the “accident triangle,” which relates frequent, low-importance events to infrequent, high-importance events such as morbidity and mortality [4]. 

 

The nonroutine event concept is broader than previous measurements used to assess clinical performance and medical error [2]. Because most nonroutine events do not involve errors by the care provider and few lead to patient injury, nonroutine events allow researchers to study underlying system processes without the negative implications of medical error [2,5].  

 

Initially, nonroutine events were used to retrospectively analyze workflow disruptions in anesthesia teams [5]. A 2002 study completed by researchers affiliated with the University of California, San Diego investigated the prevalence of nonroutine events in anesthesia care [6]. Anesthesiologists spend roughly 45% of the initial set-up time at the start of a normal workday on drug and fluid tasks, such as obtaining and filling syringes [6]. In observing anesthesiologists complete 68 drug- and fluid-related tasks, the researchers noted several nonroutine events, including difficulty finding anesthesia supplies, providers bumping into or tripping over IV poles or lines, malfunctioning infusion pumps, and blood leaking from IVs [6]. The results of the study suggested that many anesthesia drug and fluid tasks are inefficient, which may promote medical error [6]. 

 

The observed high incidence of nonroutine events has made it possible to collect prospective data to improve safety systems [6]. Today, nonroutine events are also used to assess the workflow of various surgical teams and team performance in the operating room [5]. A 2016 study completed at Children’s National Hospital in Washington, D.C. investigated the incidence of nonroutine events during pediatric trauma resuscitation [1]. The researchers reviewed 39 resuscitations and identified 337 nonroutine events [1]. The most frequent nonroutine event was failure to stabilize the cervical spine [1]. The results of the study highlighted common errors during pediatric trauma resuscitation that may lead to adverse outcomes [1]. 
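From the counts reported above, the average event burden per case can be estimated with simple arithmetic (an illustrative calculation only; the distribution of events across individual resuscitations was not quoted here):

```python
# Average nonroutine events (NREs) per pediatric trauma resuscitation,
# using the counts quoted above (337 NREs across 39 resuscitations) [1].
resuscitations = 39
nonroutine_events = 337

print(f"Mean NREs per resuscitation: {nonroutine_events / resuscitations:.1f}")  # ~8.6
```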

 

Medical errors compromising patient safety and resulting in patient harm remain a significant health burden [1]. The nonroutine event concept provides a system to collect detailed information about types of deviations from optimal care and to show associations with long-term patient outcomes [4]. 

 

References 

 

  1. Webman, R., Fritzeen, J., Yang, J., Ye, G., Mullan, P., Qureshi, F., et al. (2016). Classification and team response to nonroutine events occurring during pediatric trauma resuscitation. Journal of Trauma and Acute Care Surgery, 81(4), 666-673. doi:10.1097/ta.0000000000001196 
  2. Lane-Fall, M., & Bass, E. (2020). “Nonroutine Events” as a Nonroutine Outcome for Perioperative Systems Research. Anesthesiology, 133(1), 8-10. doi:10.1097/aln.0000000000003125 
  3. Wacker, J. (2010). Managing Non-Routine Events in Anesthesia–A Concept to Measure and Improve Anesthesia Quality. Human Factors, 52(2), 282-294. doi:10.1177/0018720809359178 
  4. Liberman, J., Slagle, J., Whitney, G., Shotwell, M., Lorinc, A., Porterfield, E., & Weinger, M. (2020). Incidence and Classification of Nonroutine Events during Anesthesia Care. Anesthesiology, 133(1), 41-52. doi:10.1097/aln.0000000000003336 
  5. Law, K. E., Hildebrand, E. A., Hawthorne, H. J., Hallbeck, M. S., Branaghan, R. J., Dowdy, S. C., & Blocker, R. C. (2019). A pilot study of non-routine events in gynecological surgery: Type, impact, and effect. Gynecologic Oncology, 152(2), 298-303. doi:10.1016/j.ygyno.2018.11.035 
  6. Weinger, M. (2002). Human Factors Research in Anesthesia Patient Safety: Techniques to Elucidate Factors Affecting Clinical Task Performance and Decision Making. Journal of the American Medical Informatics Association, 9(90061), 58S-63. doi:10.1197/jamia.m1229 

Retrospective vs. Prospective Cohort Studies


Cohort design is a type of research design in which investigators follow subjects over time, tracking their development through a set of health-related metrics [1]. As opposed to experimental investigations, where researchers intervene to alter the conditions of studied populations, cohort studies involve no such intervention [2]. Instead, investigators begin by identifying subjects and placing them into two groups: exposed and non-exposed populations [3]. Over time, these groups are studied to determine the incidence, prevalence, prognosis, and potential causes of the condition of interest [3].  
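As a rough illustration of how such cohorts are analyzed, the sketch below computes the incidence in each group and the resulting risk ratio for a hypothetical exposed/non-exposed comparison (the counts are invented purely for illustration and do not come from any study cited here):

```python
# Hypothetical cohort analysis: compare the incidence of an outcome between
# an exposed and a non-exposed group, then compute the risk ratio.
exposed_cases, exposed_total = 30, 500        # invented counts, for illustration
unexposed_cases, unexposed_total = 12, 500    # invented counts, for illustration

incidence_exposed = exposed_cases / exposed_total
incidence_unexposed = unexposed_cases / unexposed_total
risk_ratio = incidence_exposed / incidence_unexposed

print(f"Incidence (exposed): {incidence_exposed:.1%}")
print(f"Incidence (non-exposed): {incidence_unexposed:.1%}")
# A risk ratio above 1 suggests the exposure is associated with the outcome.
print(f"Risk ratio: {risk_ratio:.1f}")
```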

 

Depending on what investigators hope to study, cohort studies are either prospective or retrospective. In a prospective cohort study, the cohort is selected and measured for various risk factors and exposures before the outcome occurs [2]. An example of a prospective study would be one in which investigators seek to measure the likelihood of psoriasis patients experiencing side effects due to anesthesia. A group of patients with psoriasis would be identified. These patients would be tested to ensure that they do not already have conditions that complicate the administration of anesthesia. Then, the researchers would return to these patients over an extended period, tracking whether they have experienced side effects during procedures since the start of the study. 

 

Conversely, retrospective studies analyze a cohort that has already experienced the given outcome [2]. Investigators collect all their data from existing records [1]. They begin with each patient’s initial exposure and status at baseline and follow the record forward in time, accumulating further data [1]. One retrospective study tracked the occurrence of pneumonia following the contraction of HIV in 2,628 women [4]. Every six months, the subjects completed a survey and provided a blood sample [4].  

 

Although the aforementioned study was retrospective, it was combined with data from a simultaneous prospective study to create a more complete picture of pneumonia incidence in HIV-positive individuals [4]. Neither prospective nor retrospective studies are infallible, so combining them may allow researchers to sidestep the shortcomings of each. 

 

Generally, prospective designs are considered more robust sources of valid evidence, but this is not always the case, and the added rigor often comes at a cost [5]. Prospective studies can provide researchers with accurate data collection that accounts for exposures, endpoints, and confounders [5]. But this high level of detail requires a vast investment of time and money: follow-up periods are typically long, and investigators must follow up with a large number of people [5]. Additionally, prospective designs run a greater risk of loss to follow-up, leading to lost data and reduced internal validity [5]. Especially in the context of rare diseases, where populations are already small, prospective studies may prove too inefficient and inappropriate for meaningful observation [5]. 

 

While retrospective methods offer a more time- and cost-efficient approach to cohort designs, the quality of available data can be a concern in these studies [5]. Because investigators have to work with existing data that may not have taken into account certain risk factors or exposures, these studies are more likely to be affected by confounding variables [2]. Information, recall, and selection biases may also reduce the validity of the results [3].  

 

Despite each approach’s shortcomings, many of these concerns can be avoided if investigators design their studies meticulously with those limitations in mind. Researchers believe that either form of cohort design, if planned carefully and thoroughly, can yield generalizable, accurate results with important implications for the study of medicine [5]. 

 

References 

 

[1] M. S. Setia, “Methodology Series Module 1: Cohort Studies,” Indian Journal of Dermatology, vol. 61, no. 1, p. 21-25, Jan-Feb 2016. [Online]. Available: https://doi.org/10.4103/0019-5154.174011. 

 

[2] I. Oliveira, “Cohort studies: prospective and retrospective designs,” Students 4 Best Evidence via Cochrane, March 2019. [Online]. Available: https://bit.ly/3fN22jE. 

 

[3] X. Wang and M. W. Kattan, “Cohort Studies: Design, Analysis, and Reporting,” Chest Journal, vol. 158, no. 1, p. S72-S78, July 2020. [Online]. Available: https://doi.org/10.1016/j.chest.2020.03.014. 

 

[4] S. R. Cole et al., “Combined analysis of retrospective and prospective occurrences in cohort studies: HIV-1 serostatus and incident pneumonia,” International Journal of Epidemiology, vol. 35, no. 6, p. 1442-1446, Aug 2006. [Online]. Available: https://doi.org/10.1093/ije/dyl176. 

 

[5] A. M. Euser et al., “Cohort Studies: Prospective versus Retrospective,” Nephron Clinical Practice, vol. 113, no. 3, p. c214-c217, Oct 2009. [Online]. Available: https://doi.org/10.1159/000235241. 

 

Managing Spinal Anesthesia-Induced Hypotension in Obstetrics


Prevention and management of spinal anesthesia-induced hypotension is essential for preventing complications in the perioperative and peripartum period. In 2018, an international consensus statement was published detailing guidelines for managing hypotension related to spinal anesthesia for cesarean sections [2]. In summary, the publication recommended the following: vasopressors should be used routinely and preemptively; alpha-agonist drugs such as phenylephrine are preferred as first-line agents because of the abundance of data on their use; left uterine displacement and colloid preloading or crystalloid co-loading should be routinely performed; and systolic blood pressure should be maintained at or above 90% of baseline.

 

Researchers also recommended that a phenylephrine infusion be started at 25-50 mcg/min immediately after spinal injection (with lower dosing recommended for pre-eclamptic patients, who exhibit a less pronounced hypotensive response), and that tachycardia and bradycardia be avoided and treated with fluids or a beta-agonist, respectively. Significant bradycardia with hypotension may warrant the use of ephedrine or an anticholinergic, and circulatory collapse should be promptly treated with epinephrine [2].
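For a sense of what this dose range looks like at the pump, micrograms per minute can be converted to milliliters per hour once a syringe concentration is chosen. The sketch below assumes a 100 mcg/mL dilution purely for illustration; it is an assumed example, not a concentration recommended by the cited guidelines.

```python
# Convert a phenylephrine dose in mcg/min to a pump rate in mL/h.
# The 100 mcg/mL concentration is an assumed example dilution, not a recommendation.
def infusion_rate_ml_per_h(dose_mcg_per_min: float, concentration_mcg_per_ml: float) -> float:
    return dose_mcg_per_min * 60 / concentration_mcg_per_ml

for dose in (25, 50):  # mcg/min range quoted above
    rate = infusion_rate_ml_per_h(dose, 100)
    print(f"{dose} mcg/min at 100 mcg/mL = {rate:.0f} mL/h")  # 15 and 30 mL/h
```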

 

Of note, when compared to physician-controlled infusions, smart pumps and double-drug infusions may yield better hemodynamics. At least three studies have suggested that norepinephrine, when delivered via smart pump, may improve maternal and fetal physiology, but studies comparing phenylephrine and norepinephrine using standard pumps are lacking [3-5]. Such modalities perform optimally when combined with continuous non-invasive blood pressure monitoring; however, patient safety in the case of artifactual measurements needs further study. Regarding monitoring, standard ASA monitors are required, and non-invasive blood pressures are ideally taken every 1-2 minutes as equipment and resources allow. In resource-poor care areas, it is considered unreasonable to proceed with spinal blockade without vasopressors and anticholinergics readily available. For the novice provider, a fixed-rate vasopressor infusion with concurrent boluses as needed has been found to be an effective alternative to provider-managed titration of the infusion. 

 

Patients with cardiac disease should receive individualized care (choice of vasopressor, monitors, anesthetic technique, etc.) based on the entire clinical picture, taking into account their baseline physiology and the expected changes related to surgery or labor, anesthesia, and delivery. Single-shot spinal blocks in the setting of cardiac disease pose an increased risk of hemodynamic instability compared to low-dose combined spinal-epidural or epidural-only techniques, owing to the rapid onset of sympathectomy seen with full-dose spinal anesthesia. Controlled titration of neuraxial blockade is recommended for the majority of these cases.

 

Results of a recently published survey from Ireland indicate that phenylephrine is currently the most widely used vasopressor. A concerning finding is that roughly 80% of the 15 reporting centers did not routinely maintain heart rate at baseline or use heart rate as a surrogate for cardiac output. Following publication of the aforementioned consensus statement, two of the reporting centers changed practice to use phenylephrine primarily. Of note, a significant number of centers reported not using phenylephrine infusions for fear of precipitating bradycardia and/or low cardiac output. Only 3 centers had a departmental protocol for the management of spinal anesthesia-induced hypotension, and only 2 changed practice based on the consensus statement, highlighting a need for more support, resources, and assessment of the barriers to implementation. Furthermore, some aspects of the guideline can be improved as more evidence becomes available, such as the recommendations on ephedrine for bradycardia, smart infusions, and fluid pre- or co-loading [1].

 

Potential advances in the management of spinal hypotension include the search for optimal vasopressors or combinations of drugs; advances in monitoring to allow rapid assessment of risk of hypotension, cardiac output, volume status, etc.; and genetic studies to predict individual responses to vasopressors.

 

References

 

1. ffrench-O’Carroll R, Tan T. National survey of vasopressor practices for management of spinal anaesthesia-induced hypotension during caesarean section. International Journal of Obstetric Anesthesia. 2020.  doi:10.1016/j.ijoa.2020.09.003

 

2. Kinsella SM, Carvalho B, Dyer RA, et al. International consensus statement on the management of hypotension with vasopressors during caesarean section under spinal anaesthesia. Anaesthesia. 2018;73(1):71-92.  doi:10.1111/anae.14080

 

3. Ngan Kee WD. Norepinephrine for maintaining blood pressure during spinal anaesthesia for caesarean section: A 12-month review of individual use. International Journal of Obstetric Anesthesia. 2017;30:73-74. doi:10.1016/j.ijoa.2017.01.004

 

4. Ngan Kee WD, Khaw KS, Tam Y, Ng FF, Lee SW. Performance of a closed-loop feedback computer-controlled infusion system for maintaining blood pressure during spinal anaesthesia for caesarean section: A randomized controlled comparison of norepinephrine versus phenylephrine. Journal of Clinical Monitoring and Computing. 2017;31(3):617-623. doi:10.1007/s10877-016-9883-z

 

5. Ngan Kee WD, Lee SWY, Ng FF, Tan PE, Khaw KS. Randomized double-blinded comparison of norepinephrine and phenylephrine for maintenance of blood pressure during spinal anesthesia for cesarean delivery. Anesthesiology. 2015;122(4):736-745. doi:10.1097/ALN.0000000000000601