Computerized International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) algorithms: a review

Computerized ISNCSCI algorithms have been in use for many years, but many were developed for specific projects, used only internally, or not maintained [7, 10,11,12,13]. Today the international SCI community has free access to the updated online versions of the EMSCI and Praxis algorithms.

A key reason why these two algorithms are broadly used by the SCI community is their support by, and integration into, large research networks, which, in contrast to other algorithms, ensures the long-term provision of updated and accurate tools by their developers. Another reason is the critical initial validation, which is a prerequisite for high classification accuracy. This step requires a substantially large dataset so that many types of cases can be considered both by the algorithm and by SCI/ISNCSCI experts, who clarify how to interpret the ISNCSCI rules correctly. Registries like EMSCI and Praxis have broad inclusion criteria, which ensures that typical ISNCSCI cases, including cases with classification challenges such as not-testable scores, are used for development and validation with human experts. After this milestone is achieved, the public interfaces must be developed and maintained, which requires further long-term resources. Finally, adopting new ISNCSCI revisions requires substantial resources, which ongoing registries are more likely to provide than individual research projects.

The EMSCI and Praxis algorithms have different interfaces, features, and levels of integration with other projects, including databases and electronic medical records (EMRs), and have been, and will continue to be, developed independently.

A key advantage of independent development is the identification of cases in which the two algorithms arrive at different classification results (Fig. 2A, C). Both teams collaborate fruitfully at the scientific level, e.g. identifying a problem in the motor level definition to be addressed in future ISNCSCI revisions [23]. This helps inform ASIA’s International Standards Committee about the potential need for clarification or correction of certain aspects of ISNCSCI.

Osunronbi recommends that, “Utilizing ISNCSCI calculators can reduce classification errors and may help clinicians with simple but time-consuming tasks … clinicians should not rely exclusively on the ISNCSCI calculators, as human experts may be better than computational algorithms at dealing with complex cases of ISNCSCI classifications such as the presence of non-SCI conditions, and multi-level SCI”, and indeed this is a limitation of computerized ISNCSCI classification algorithms [27]. Although these algorithms can reduce classification errors, they can only be as accurate as the bedside exam scores entered, and they cannot provide accurate classification in cases where complex clinical reasoning is required. ASIA’s International Standards Committee has recently emphasized the necessity of well-trained clinical assessors to ensure correct classifications [36]. Both web applications clearly outline this limitation, recommending that classification still be performed or reviewed by a skilled examiner, though they continue to improve in the types of cases they can classify, with updates reflecting changes introduced with the eighth edition of ISNCSCI and facilitating reclassification of exams using the updated standards. These algorithms share many similarities and have achieved broad international use in both clinical care and research.

The first metric, reach, identified a broad number and variety of algorithm users, with many accessing them through the web application interfaces. This reflects the challenges many have in performing the classification component of the ISNCSCI exam. Clinicians require a simple, easily accessible tool to support this skill, and researchers require a tool to flag exams that may have been classified erroneously. This is further supported by Armstrong’s evaluation of ISNCSCI worksheet classification by trained clinicians in three multicenter randomized controlled trials, which concluded that “continued training and a computerized algorithm are essential to ensure accurate scoring, scaling and classification of the ISNCSCI and confidence in clinical trials” [26].

For the second metric, usefulness, a key consideration is the validation of the algorithms themselves in improving the accuracy of clinical classification. Multiple studies that have used these validated ISNCSCI algorithms to evaluate assessor accuracy have shown significant error rates in manual classification [14, 26, 27]. Armstrong reported one or more errors on 74.5% of worksheets across three clinical trials, with errors mostly involving incorrect motor levels (30.1%), sensory levels (12.4%), ZPPs (24.0%), and AIS grades (8.3%) [26]. Schuld et al. reported the results of a retrospective computerized reclassification of 420 manually classified ISNCSCI exams and found the lowest agreement for motor levels (62%), motor ZPPs (80.8%), and AIS grades (83.4%), with AIS B most often misinterpreted as AIS C and vice versa (AIS B as C: 29.4%; AIS C as B: 38.6%) [14]. In a neurosurgical unit where senior clinicians provide formalized but not standardized ISNCSCI orientation training to junior doctors, Osunronbi found an error rate of 17.7% (N = 249) among senior clinicians, which may have contributed to the higher error rate among the more junior clinicians they trained (30.2%, N = 119) [27]. Though this is not the ideal ISNCSCI training structure, it accurately reflects the real-world scenario at many hospitals. These studies suggest that nonexperts should receive proper training before using the ISNCSCI in clinical practice, but they also highlight the usefulness of validated computer-based ISNCSCI algorithms as an additional tool to improve classification accuracy even for trained clinicians.

Perceived usefulness, as reported by algorithm users, reflects that the ISNCSCI algorithm was also useful in significantly increasing their awareness and use of the ISNCSCI, improving their understanding of the classification rules and their ability to assess and classify exams, and increasing their perceived confidence in classifying. Confidence is one of the most important personal factors influencing clinical decision making and successful assessment [37].

The final metric, use, reflects implementation of the algorithms, and three themes emerged. The first theme, use for education, is shown by the Praxis User Survey, which demonstrated that the Praxis ISNCSCI algorithm is used to learn the ISNCSCI classification rules and to educate others. Due to the heterogeneity and complexity of SCI, the ISNCSCI exam is complex, and both theoretical and hands-on training are required to become competent. ASIA provides many tools to support training (the International Standards Training e-Learning Program (INSTeP), the ISNCSCI booklet, and motor/sensory exam guides), but none of these tools provides real-time, exam-specific feedback on classification or support for questions. In a review of trainees’ perceptions of medical training technologies, web-based learning was perceived as most valuable when associated with real-time feedback, a simple interface, and extended time for completion, while e-learning interventions perceived as lacking interactivity were viewed less favorably [38]. This aligns with the features rated as valuable by respondents (asking questions about a classification they do not understand and accessing support for conducting and classifying an ISNCSCI assessment) and represents an area for potential enhancement by making the computational decision process more transparent. Algorithm-supported education, in combination with hands-on training and the existing tools provided by ASIA, comprises a comprehensive training package.

The second theme, the need for algorithms to ensure data quality, is evidenced by the extensive use of these algorithms both through the publicly available web applications and through integration into other registries, databases, clinical trials, and EMRs. Maintaining high-quality ISNCSCI examinations is essential in clinical trials, where the classification is often used as an inclusion/exclusion criterion, to stratify groups, and as a primary outcome. It is also of utmost importance within networks like EMSCI and Praxis. The use of a standardized computer program to accurately classify ISNCSCI datasets gives clinical trials an additional data quality check: discrepancies between the clinical classification and the computer-calculated classification can be verified with study sites. It also allows networks like EMSCI and Praxis to ensure high data quality and to provide education on classification to their network sites. The differences in types of use reported in the scientific literature versus the Praxis User Survey may reflect that the former is probably biased toward a researcher perspective, while participants in the latter were mainly clinicians.
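The discrepancy check described above can be sketched in a few lines. This is a minimal illustration only: the field names and classification values are hypothetical and do not reflect the actual EMSCI or Praxis data models or interfaces.

```python
# Hypothetical data-quality check: flag fields where a clinician's manual
# ISNCSCI classification disagrees with the algorithm-computed one, so the
# study site can be queried for verification. Field names are illustrative.

FIELDS = ["ais", "nli", "motor_level_r", "motor_level_l"]

def flag_discrepancies(manual: dict, computed: dict) -> list:
    """Return the classification fields on which the two results disagree."""
    return [f for f in FIELDS if manual.get(f) != computed.get(f)]

# Example: the clinician graded AIS B; the algorithm computed AIS C and a
# different left motor level -> both fields are flagged for review.
manual = {"ais": "B", "nli": "C6", "motor_level_r": "C6", "motor_level_l": "C6"}
computed = {"ais": "C", "nli": "C6", "motor_level_r": "C6", "motor_level_l": "C7"}

print(flag_discrepancies(manual, computed))  # ['ais', 'motor_level_l']
```

In a trial setting, flagged exams would not be silently corrected; as the text notes, discrepancies are sent back to the study site for verification by a trained examiner.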

Interestingly, the third theme was the variety of unintended uses identified. These included informing the ASIA International Standards Committee, supporting clinical documentation, conducting bedside exams, and using the resulting worksheet to improve patient self-tracking and motivation. Given this variety, future research may be warranted to engage patients and clinicians in determining their needs, the value of additional features, and the actual demand for them.

Several limitations of this work must be considered. Metrics for the evaluation were based on citations identified through Google Scholar, which relies on authors citing the algorithms; other studies may have used these algorithms without referencing them, resulting in under-reporting of use. No standardized, comprehensive evaluation of both algorithms is available, so some results are generalized. The Praxis algorithm user survey was conducted on a convenience sample and was posted on the algorithm web application, which could bias the results. A prospective formal evaluation of both algorithms, targeting centers known to treat individuals with SCI, would help determine the breadth of ISNCSCI algorithm use and inform future enhancements. Future activities planned for the EMSCI and Praxis algorithms include continuing to enhance features for users (e.g. development of an iOS/Android app to address identified limitations in internet access and smartphone compatibility), informed by how these algorithms are being used and by user feedback. A key future direction for both algorithms will be investigating appropriate ways to incorporate the new Expedited-ISNCSCI, an abbreviated ISNCSCI designed for use by trained clinicians in screening and follow-up scenarios [39].

In conclusion, the use of validated, computerized classification tools is an effective way to decrease ISNCSCI classification errors due to human error and ensures that a consistent set of classification rules is clearly defined. Computerized ISNCSCI algorithms will never replace the role of well-trained clinicians in ISNCSCI classification. They allow reclassification of ISNCSCI datasets with updated versions of the ISNCSCI and support rapid classification of large datasets. They will continue to support the ASIA International Standards Committee in evaluating the impacts of possible future revisions, making evidence-informed modifications, and highlighting classification rules that may need further clarification. These algorithms have evolved into a tool used worldwide to support education, clinical documentation, communication between clinicians and their patients, and ISNCSCI data quality.
