Key performance indicators in pre-hospital response to disasters and mass casualty incidents: a scoping review

This scoping review offers a comprehensive synthesis of the existing KPIs used to evaluate the prehospital response to disasters and MCIs. It highlights the crucial need for standardized terminology, uniform data collection tools, and established benchmarks for assessing prehospital responder performance.

Before delving into the content of the KPIs, the geographical distribution of the included work is worth mentioning. While a global distribution of studies was anticipated, the majority of included articles were produced by Swedish research teams. To the authors’ understanding, this may have been facilitated by the necessity to implement the Swedish national preparedness plan and by the presence of KAMEDO (Katastrofmedicinska Organisationskommittén), the Swedish Organisation for Studies and Reports from International Disasters, organized and funded by the Swedish National Board of Health and Welfare [21]. However, this raises the question of the applicability of these KPIs in other countries with different physical geography, environments, resources, and legal and regulatory frameworks. Echoing the relevance of such a question is the study performed in Afghanistan [24], which aimed to test whether the performance indicators for prehospital command and control developed for civilian use could be applied in a military training setting. The finding that two KPIs were deemed non-relevant is significant, as it suggests that caution should be exercised before applying the same indicators universally and without reservation, given that not all prehospital emergency care is provided in optimal conditions [31].

Another aspect of interest is that the authors of some of the included papers were able to elaborate a series of KPIs only after examining after-action reports of MCIs spanning 22 years [21]. A possible explanation for why such a long period had to be reviewed to produce these KPIs is that MCI after-action reports are typically written for purposes other than performance evaluation. Indeed, they often simply summarize different aspects of the response, while actions and decisions taken at the operational and tactical levels are rarely recorded in a thorough and complete manner, thus preventing the comprehensive identification of performance indicators [9, 21].

When attempting to determine the most studied elements of the response phase, management, the formation of guidelines (for either the response in general or the evacuation of patients specifically), and communication (whether the first or the second report) were the most frequently examined. This finding may be explained by the fact that these areas are often identified as having shortcomings. Any intervention that improves the standardization of the prehospital response to MCIs and enhances communication efficiency could have a significant impact on the success of disaster management [32].

It becomes clear from the studies covered in this review that, notwithstanding the introduction of multiple frameworks intended to enable uniform disaster research and evaluation [33], the lack of consistent (or any) terminology across the various phases of a disaster persists. The epitome of this issue is the use of time-based performance indicators: while all authors either apply existing KPIs or propose new ones, only two propose definitions. Such a discovery contributes to the general confusion and further sets back the search for commonly established, accepted, and used guidelines for response evaluation.

Although the World Association for Disaster and Emergency Medicine (WADEM) has published a policy document on evaluation and research that raises the question of adopting a more evidence-based approach to disaster medicine research [34], all 14 articles included in this review highlight the need for further validation of the indicators studied and used. This leads to the conclusion that no validated set of KPIs on which to base further research currently exists. This observation further underscores the need to improve the science behind the development, validation, and use of indicators.

When examining the use of quantitative and qualitative KPIs, there is a clear discrepancy in the number of articles focusing on the former as opposed to the latter. Specifically, although all 14 articles studied temporal performance indicators, only 10 examined indicators of process [14, 16, 17, 19, 21, 22, 23, 24, 27, 30], that is, the accuracy of a process or adherence to the sequence of steps of which it is comprised. An example that embodies this anomaly is the first report to the dispatch centre (METHANE). The focus appears to be primarily on the timely arrival of the communication rather than on the accuracy of the report's content. However, this does not necessarily indicate that communicating something earlier is more important than communicating it correctly. The authors believe that a more plausible explanation for this discrepancy is the difficulty of evaluating communication quality: while the timing of communication can be assessed using time stamps and stopwatches, properly evaluating its quality requires a validated training curriculum and a validated set of KPIs. Unfortunately, the latter still seems to be out of reach [22, 29].

To conclude, upon studying the articles included in this review, the reader may find it difficult to trace the origin of, and rationale behind, many of the proposed benchmarks. Additionally, some authors provided benchmarks for only a portion of the examined performance indicators [16, 25, 26, 27, 30]. As previously mentioned, setting a value against which individual responders or overall system performance can be evaluated is always a challenge [21]. However, the concept of a benchmark is inherent to the usefulness of a proposed indicator, as long as it is explicit that the indicators are not being used to single out failures and identify scapegoats, but rather to identify areas where improvements can be made [21].

Limitations

First, this review only included articles in English; it is possible that pertinent research in other languages was therefore missed. Secondly, a quality assessment of the included studies was not performed using a validated tool but relied on the reviewers’ experience in research; this decision was taken because of the small number of papers found that examined KPIs for the prehospital response.
