We're always astonished, Rick and I opine on PodMed this week, when sacred cows are slaughtered in medicine, when some precept we've held dear is subjected to rigorous study and discredited. Such is the case in this week's New England Journal of Medicine, where surgical checklists, those darlings of the patient safety world, had no impact on mortality or surgical complications. Sacre bleu! How can it be so? Let's take a look at the evidence.
Our friends to the north, residents of Ontario, Canada, provided researchers with a 'natural experiment': a plethora of data from acute care hospitals gathered before and after mandatory implementation of surgical safety checklists. The Ministry of Health and Long-Term Care mandated use of surgical safety checklists beginning in July 2010, and hospitals could employ a checklist of their own devising, the WHO checklist (previously validated in a number of observational studies), or the Canadian Patient Safety Institute checklist. Each hospital was required to report its compliance with surgical safety checklists to a publicly reported database in which the hospital is individually identified.
Three-month intervals were studied for all 133 surgical hospitals in Ontario, one concluding three months before checklist implementation and one commencing three months after such a list was employed. All surgical procedures performed during each study period were included in the analysis. Outcome measures included operative mortality, defined as mortality occurring during hospitalization or within 30 days of the procedure; complications occurring within 30 days of surgery; length of hospital stay; rate of readmission within 30 days of discharge; and emergency department visits within 30 days of discharge.
Comorbidities, the patient's socioeconomic status, sex, age, and several other factors were also considered. A total of 101 hospitals were deemed eligible for analysis, revealing an adjusted risk of death that moved from 0.71% before checklist implementation to 0.65% afterward, a difference too small to demonstrate an effect. Other outcome measures were similarly unaffected by utilization of a surgical safety checklist, and this lack of impact persisted across multiple analytic approaches and adjustment factors. Well. Seems a pretty supportable conclusion. What are possible explanations?
Compliance could be an issue, the authors indicate, citing previous studies that examined actual rates of compliance alongside outcomes and showed a linear relationship between the two. Training prior to implementation has also been shown to help, as does team training. The authors also invoke the Hawthorne effect, whereby people who know their work is being scrutinized perform better than under normal conditions. They further speculate that previous research demonstrating a very significant impact of checklists may have relied upon extensive checklists covering virtually the entire period from admission to discharge, or upon checklists implemented along with extensive training of care teams. Finally, I would cite our colleague Peter Pronovost, a patient safety guru, who told me that once one source of potential errors is minimized or eliminated, it opens the door for other types of errors, an assertion borne out in research in many areas of safety. For now, Rick and I agree, it seems very unlikely that such lists will be abandoned, and perhaps this research will suggest a fruitful area for further endeavor.
Other topics this week include a new blood test for Alzheimer's disease in Nature Medicine, questions about whole genome sequencing in JAMA, and the problem of discontinued randomized clinical trials in the same journal. Until next week, y'all live well.