Digital pathology has emerged as a valuable tool in clinical oncology. However, the development of computational pathology models raises the challenge of ensuring their interpretability and generalizability across diagnostic tasks and cancer types. Additionally, processing vast histopathology datasets, which often contain both images and descriptive biomedical text, presents a considerable challenge. Two studies published in Nature Medicine, by Chen et al. and Lu et al., now tackle these two challenges by introducing a general-purpose foundation model and a visual-language foundation model, respectively, both of which leverage large-scale computational pathology imaging datasets to perform a range of different tasks.
Chen et al. leveraged over 100 million images from more than 100,000 diagnostic whole-slide images spanning 20 different tissue types to develop a self-supervised framework, called UNI. The authors report that UNI performed successfully on 34 different pathology tasks, including both slide-level classification, such as breast cancer metastasis detection and brain tumor subtyping, and region-of-interest-level classification, such as colorectal tissue and polyp classification, prostate adenocarcinoma tissue classification and pan-cancer tissue classification. Notably, the authors benchmarked UNI against existing frameworks, such as CPath, ResNet-50 and REMEDIS, and found that it outperformed them all by a wide margin across the full set of tasks.