Clinical Decision Support (CDS) tools play a crucial role in healthcare, particularly in optimizing antibiotic prescribing and reducing infections. These tools are essential to Antimicrobial Stewardship (ASP) and Infection Prevention (IP) programs. A recent study highlights the application of established CDS evaluation frameworks, such as the “Five Rights” and “Ten Commandments,” to real-world tools like the Vancomycin Best Practice Advisory and Clostridioides difficile (C. difficile) order panel. The findings emphasize the importance of user engagement and efficiency in designing and implementing these tools to improve antibiotic use and diagnostic stewardship. The results suggest that with a structured approach, these tools can significantly enhance quality improvement initiatives in ASP and IP programs. CDS tools contribute to better patient outcomes and more efficient healthcare delivery by focusing on effective implementation and user-centered design.
In an exclusive email interview, Mary Rochelle Smith, MD, MPH, Julie J Lee, MD, MPH, and Amy Chang, MD, PharmD, from Stanford University School of Medicine, shared valuable insights on the development and implementation of CDS tools in ASP and IP programs. They discussed the challenges of maintaining tool adaptability as clinical guidelines evolve, strategies for engaging clinicians in high-pressure environments, and how to evaluate the impact of CDS interventions on patient outcomes.
Contagion: How do you ensure CDS tools remain adaptable as clinical guidelines evolve, especially for conditions like antimicrobial resistance?
Smith: “Routine maintenance by CDS tool creators/owners of any CDS intervention/tool is vital to ensuring that it remains up to date and in line with recent national and institutional guidelines and local resistance patterns. This includes scheduled structured evaluation of the tool using a framework such as the ‘5 Rights’ of CDS or ‘10 Commandments’ that we outlined in our article to optimize the functionality of the intervention.”
Lee: “Agreed with the above. It’s critical to 1) maintain a comprehensive database of active AMR CDS tools, supported by a multidisciplinary team, including a physician who stays up-to-date with infectious disease guidelines and local/regional trends, 2) establish clear standards for updating tools, such as clinical content, intervention formats, or order options, and retiring those that are no longer necessary or aligned with current guidelines, with all changes documented in a detailed changelog, and 3) implement a methodical process to update all stakeholders when tools are updated or retired, ensuring consistent use and adherence to evolving guidelines.”
Chang: “Yes, agreed with the above. One additional note to add: Our CDS committee at Stanford frequently encounters alerts that are out of date. Because there are hundreds of different alerts operating at any given time, you cannot rely on a governing body like a CDS committee to point out where guidelines need to be updated; nor would this necessarily be their area of expertise. As such, we really rely on the requesting operational team, in this case the ASP or IPC team, to keep their own inventory of alerts they own and periodically re-examine alerts when local/national guidance changes.”
Contagion: In your experience, what factors help maintain clinician engagement with CDS tools, particularly in high-pressure environments like ICUs or emergency departments?
Smith: “With most successful interventions, clinician engagement is maintained through a longitudinal collaborative process. One factor that can help is designing the CDS tool so that it is not an obstacle added to the clinician's workflow, but rather a tool that ultimately optimizes that workflow.”
Chang: “User-centered design, or in other words, involving the end users at every stage of development, is a valuable approach that can help to improve not only the user interface, but also the overall perception of an alert and engagement with the process you are trying to improve.”
Lee: “Facilitating easy access to CDS tools and to feedback channels is also important, as this approach allows clinicians to feel supported and ensures the tools are refined based on their input. I would strongly recommend assigning a dedicated team to manage the feedback process and respond to feedback.”
Smith: “Building relationships with clinicians and finding a champion within those targeted units to support the CDS tool. Also building trust and goodwill by being receptive to removing other CDS interventions in their units that are ineffective or cumbersome for providers.”
Lee: “Adapting CDS intervention formats to the needs and realities of high-pressure environments will be critical for seamless workflow and adoption.”
Smith: “Incentivizing the use of the CDS tool by tracking metrics that can be tied to quality improvement measures can also maintain clinician engagement.”
Lee: “Sharing these improvements in metrics and quality outcomes with end users is important to highlight how these tools are truly driving better decisions and outcomes, as it fosters trust and aligns clinicians with the greater purpose of using CDS tools effectively.”
Contagion: How do you incorporate patient outcomes into evaluating the effectiveness of CDS interventions in antimicrobial stewardship?
Smith: “This would likely vary significantly and be specific to each stewardship CDS intervention and the goal outcome of the intervention. For example, if there is a CDS intervention such as an alert for IV to PO conversion, then an outcome measure such as Length of Stay (LOS) could be a helpful metric to track. If your intervention is revising a C. difficile order panel to encourage appropriate ordering on high-risk patients and discourage inappropriate ordering on low-risk patients, then an outcome metric such as hospital-acquired C. difficile infections could be tracked.”
Lee: “That is the question of the day. With all the complexities involved in patient care, it can be difficult to incorporate patient outcomes and pinpoint the effectiveness of CDS interventions for antimicrobial stewardship in isolation. For instance, comparing hospital-acquired C. difficile rates before and after an intervention might help, but it’s essential to acknowledge confounding variables, such as temporal factors, that can influence outcomes.”
Lee: “The key lies in approaching this like a well-thought-out study design. First, identify the key patient outcomes you want to measure, such as reductions in CLABSIs, along with the composite metrics or process measures that contribute to those outcomes. Then, design the CDS tool to address these specific goals and ensure you have the means to track the necessary data. Finally, it’s crucial to define and confirm your success metrics before launching the intervention; otherwise, you may risk chasing data that may not even exist or is difficult to capture.”
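Lee's point about defining success metrics before launch can be illustrated with a minimal sketch: comparing a hypothetical pre- and post-intervention infection rate per 1,000 patient-days, with a simple pooled two-proportion z-statistic. The counts and denominators below are invented for illustration, and this is only a back-of-the-envelope comparison; as Lee notes, real evaluations must also account for confounders such as temporal trends.

```python
import math

def rate_per_1000(events: int, patient_days: int) -> float:
    """Infection rate per 1,000 patient-days."""
    return 1000.0 * events / patient_days

def two_proportion_z(e1: int, n1: int, e2: int, n2: int) -> float:
    """Two-proportion z-statistic for event counts e over denominators n."""
    p1, p2 = e1 / n1, e2 / n2
    p = (e1 + e2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical pre/post-intervention CLABSI counts and central-line days
pre_events, pre_days = 18, 12_000
post_events, post_days = 9, 11_500

pre_rate = rate_per_1000(pre_events, pre_days)
post_rate = rate_per_1000(post_events, post_days)
z = two_proportion_z(pre_events, pre_days, post_events, post_days)
print(f"pre {pre_rate:.2f} vs post {post_rate:.2f} per 1,000 line-days, z={z:.2f}")
```

The design choice here mirrors Lee's advice: the outcome (CLABSI rate), its denominator (central-line days), and the comparison statistic are all fixed before the intervention launches, so the team is never chasing data it cannot capture.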
What You Need To Know
Regular updates ensure CDS tools remain aligned with current guidelines and local resistance patterns.
User-centered design and collaboration are crucial for sustaining clinician involvement and optimizing workflow.
Patient outcomes and continuous feedback are essential for evaluating and refining CDS tool effectiveness.
Chang: “‘Success’ is overall a difficult-to-define concept within CDS in general. Similar to any QI intervention, you will have some outcome measures that will establish your final ‘goal’, like the metrics mentioned above. However, on the way to that goal, there will likely be several process metrics that help you iterate upon your CDS tool and improve engagement over time. As an example, we have the vancomycin interruptive alert mentioned in our article. How would you define success of this pop-up alert? Should it be every time vancomycin is discontinued? Probably not, given that it may be perfectly reasonable for front-line providers to see the alert and, upon reviewing criteria, realize the patient still requires vancomycin. Instead, perhaps an initial process measure would be to attempt to measure each time the provider ‘appropriately’ interacted with the alert by either discontinuing the order or identifying criteria that suggest vancomycin should continue. Meanwhile, you would also want to measure your outcome metric, in this case vancomycin days of therapy (DOT), which you would expect to decrease overall but may take a longer time to show a meaningful effect. Lastly, and often forgotten: in order to iterate upon the process, teams should obtain end-user feedback through surveys, focus groups, or feedback tools provided by the electronic medical record system. This process may reveal design issues that lead to decreased engagement and/or errors in design. It is important to recognize the unintended consequences of interruptive pop-up alerts leading to alert fatigue and increased medical errors. Unfortunately, because there is no gold-standard metric to evaluate for this, we really rely on optimized design that takes into account the end users' workflow and has maximal engagement (e.g., high ‘acceptance’ rates).”
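Chang's distinction between a process metric (appropriate alert interactions) and an outcome metric (vancomycin days of therapy) can be sketched in code. The event schema, action labels, and sample data below are hypothetical illustrations, not the structure of any real EMR export; the sketch only shows how the two metrics would be computed once such data is extracted.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AlertEvent:
    provider_id: str
    action: str  # hypothetical labels: "discontinued", "documented_criteria", "dismissed"

# An "appropriate" interaction, per the process measure described above, is
# either stopping vancomycin or documenting criteria for continuing it.
APPROPRIATE = {"discontinued", "documented_criteria"}

def acceptance_rate(events: list[AlertEvent]) -> float:
    """Process metric: fraction of alert firings with an appropriate response."""
    if not events:
        return 0.0
    return sum(e.action in APPROPRIATE for e in events) / len(events)

def days_of_therapy(admin_dates: list[date]) -> int:
    """Outcome metric (DOT): count each calendar day with at least one dose given."""
    return len(set(admin_dates))

events = [
    AlertEvent("a", "discontinued"),
    AlertEvent("b", "documented_criteria"),
    AlertEvent("c", "dismissed"),
    AlertEvent("d", "discontinued"),
]
doses = [date(2024, 3, 1), date(2024, 3, 1), date(2024, 3, 2), date(2024, 3, 4)]

print(f"acceptance rate: {acceptance_rate(events):.0%}")  # 75%
print(f"vancomycin DOT: {days_of_therapy(doses)}")        # 3
```

Tracking the two metrics separately reflects Chang's point: the acceptance rate can be iterated on quickly, while the DOT trend is expected to move more slowly.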
Overall, CDS tools play a key role in ASP and IP programs by improving antibiotic use, reducing infections, and supporting diagnostic decisions. A structured approach to the development and evaluation of CDS tools ensures alignment with clinical guidelines and enhances tool effectiveness. Regular updates, clinician engagement, and feedback are essential for maintaining the utility of these tools. As CDS tools evolve, they can improve patient outcomes, support decision-making, and contribute to more efficient healthcare delivery.