Cancer is the term for the uncontrolled growth of cells in a particular part of the body [1]. Skin cancer is among the most dangerous diseases and is spreading rapidly around the world; it is a condition in which abnormal skin cells grow uncontrollably [2]. Early detection and precise diagnosis are crucial for determining possible cancer therapies. Melanoma is the deadliest form of skin cancer: although only about 1% of all skin cancer cases are melanoma, the American Cancer Society reports that it accounts for a large majority of skin cancer deaths [3]. Melanoma develops in cells known as melanocytes; the condition starts when healthy melanocytes begin to multiply uncontrollably and form a malignant growth. It typically develops on sun-exposed regions of the body, such as the lips, hands, neck, and face. These cancers can be treated only if detected as early as possible; otherwise they propagate to several body parts and cause the victim a painful death [4].
Computer-aided diagnosis (CAD) can be used to identify and diagnose cancer, and can detect advanced tumor disease in a cost-effective manner. By incorporating various imaging techniques, the detection of cancers can be assessed. However, evaluating and analyzing these images manually is time-consuming and error-prone, mainly because skin lesion images are very complicated.
The classification of skin lesions has also been addressed with machine learning techniques. Automatic skin lesion classification gives physicians fast access to cancer identification. However, conventional machine learning requires domain experts, and the selection of adequate features is very time-consuming. Data loss at the start of the preprocessing steps may reduce classification quality; for example, a poor segmentation outcome frequently results in a poor feature extraction outcome and, as a result, low classification accuracy.
Deep learning techniques offer the most effective way to detect skin cancer. Deep learning is a subfield of machine learning built on artificial neural network algorithms and is used across many domains. In deep learning pipelines, preprocessing and classification are the major components. In the preprocessing phase, image intensity can be enhanced by removing inconsistencies among images, and each image is scaled to fit the required training model. Many medical professionals have effectively used deep learning techniques to obtain phenomenal results in challenging situations. The layers of the various deep learning architectures rely on pixel-by-pixel classification of the lesion images, and deep learning can analyze large-scale datasets effectively and efficiently. In some situations, however, these algorithms may give wrong classification results. First, the broad application of deep learning techniques to skin lesion classification has been hampered by data imbalance and by the dependence on large volumes of labeled images [5]. These algorithms frequently lead to misdiagnosis when used to identify skin cancers that are uncommon in the training dataset [6]. Furthermore, when working with high-resolution images (such as pathological images with millions of pixels), deep learning models frequently incur substantial computing costs and additional training time [7]. Additionally, varied acquisition circumstances introduce various kinds of noise into the images, so the robustness and generalizability of these methods should also be considered [8]. The appropriate deep learning model should therefore be selected based on the size of the dataset.
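As a concrete illustration of the preprocessing step described above, the following sketch enhances contrast, resizes an image to a fixed model input size, and normalizes intensities. The specific choices (OpenCV, CLAHE contrast equalization, a 224x224 input, a [0, 1] intensity range) are common conventions assumed here, not prescribed by the text.

```python
import cv2
import numpy as np

def preprocess(path, size=(224, 224)):
    """Typical preprocessing sketch: enhance contrast, resize to the model's
    input size, and scale intensities. CLAHE, 224x224, and the [0, 1] range
    are assumed conventions, not settings taken from the text."""
    img = cv2.imread(path)
    # CLAHE on the lightness channel evens out intensity inconsistencies.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Resize so the image fits the training model's expected input.
    img = cv2.resize(img, size)
    return img.astype(np.float32) / 255.0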
Xinrong Lu et al. [9] proposed a melanoma detection model based on XceptionNet, a convolutional neural network. They suggested a system for detecting skin cancer on the basis of dermoscopic images. The suggested model builds on an upgraded version of XceptionNet that uses depth-wise separable convolutions and the swish activation function. Compared with the original Xception and other comparable architectures, the network's classification accuracy is shown to improve, and the proposed method achieves better accuracy than the other comparative methods. The suggested method was evaluated alongside other simulated state-of-the-art skin cancer diagnosis methods. However, training XceptionNet models is computationally expensive, and its convolutions remain numerically inefficient because they operate across the depth dimension as well as spatially.
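For reference, a depth-wise separable convolution paired with a swish activation, the two ingredients attributed to the upgraded XceptionNet, can be sketched in PyTorch as follows. The module layout (batch-normalization placement, kernel size) is an illustrative assumption, not the paper's exact block.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution with a swish activation,
    in the style of Xception-like networks (layout assumed)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = self.bn(self.pointwise(self.depthwise(x)))
        return x * torch.sigmoid(x)  # swish: x * sigmoid(x)
```

Factoring the convolution into a depthwise and a pointwise step cuts parameters and multiply-accumulates relative to a full convolution, which is the efficiency argument behind Xception-style designs.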
Titus J. Brinker et al. [10] evaluated an artificial intelligence technique for histologic melanoma diagnosis against 18 international expert pathologists. Histologic melanoma classification rests on an inevitably subjective integration of a number of histologic traits. They compared the CNNs' ability to differentiate between melanomas and nevi. To train and evaluate ensembles of three individual CNNs, two experienced dermatopathologists labeled 50 individual images of melanomas and 50 individual nevi on a single hematoxylin-eosin-stained whole-slide image. A limitation is that the classifiers might not perform similarly on slides from a different collection of images.
Rasmiranjan Mohakud et al. [11] proposed a hyper-parameter-optimized convolutional neural network classifier for skin cancer identification using the Grey Wolf Optimization (GWO) algorithm. An automatically hyper-parameter-optimized CNN is suggested to identify the type of skin cancer: with an appropriate encoding technique, their strategy uses GWO to search the CNN's hyper-parameter space. The model's efficacy is confirmed by contrasting its performance on the ISIC multi-class skin lesion dataset with that of CNNs whose hyper-parameters were optimized by a genetic algorithm and by particle swarm optimization. The approach requires expensive equipment and a lot of processing capacity.
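A minimal Grey Wolf Optimization loop over a continuous hyper-parameter space might look like the sketch below. The encoding (log learning rate and dropout rate), population size, and iteration count are illustrative assumptions, and `train_and_validate` is a hypothetical fitness function standing in for one CNN training-plus-validation run.

```python
import numpy as np

def grey_wolf_optimize(fitness, bounds, n_wolves=8, n_iters=20, seed=0):
    """Minimal GWO loop for continuous hyper-parameters (sketch).

    fitness: maps a position vector (encoded hyper-parameters) to a
             validation loss to be minimized.
    bounds:  array of shape (dim, 2) with [low, high] per dimension.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    # Initialize wolf positions uniformly inside the bounds.
    pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_wolves, dim))
    for t in range(n_iters):
        scores = np.array([fitness(p) for p in pos])
        # Alpha, beta, delta: the three best wolves of this iteration.
        alpha, beta, delta = pos[np.argsort(scores)[:3]]
        a = 2 - 2 * t / n_iters  # exploration factor decays from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - pos[i])
                new += leader - A * D
            # Average the three leader-guided moves, keep inside bounds.
            pos[i] = np.clip(new / 3, bounds[:, 0], bounds[:, 1])
    scores = np.array([fitness(p) for p in pos])
    return pos[np.argmin(scores)]

# Hypothetical usage: tune log10(learning rate) and dropout for a CNN.
# best = grey_wolf_optimize(lambda p: train_and_validate(lr=10 ** p[0],
#                                                        dropout=p[1]),
#                           bounds=[(-5, -2), (0.1, 0.6)])
```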
S Bharathi et al. [12] proposed a method for recognizing melanoma in nevus images. The input image is first processed with a median filter to remove noise from the skin lesion image and is then segmented using an improved K-means clustering method. A distinct feature vector is created by extracting the required textural and chromatic characteristics from the lesion. An adaptive neuro-fuzzy inference system (ANFIS) and a feed-forward neural network (FFNN) are both used to separate melanoma from nevus. The study used 1023 skin images from the DERMIS dataset, comprising 104 melanoma and 917 nevus images. When the neural network's weights or parameters are unstable, the classification output is incorrect.
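The denoise-then-segment front end of such a pipeline can be sketched as below. The kernel size, number of clusters, and the darker-cluster heuristic for picking the lesion are assumptions for illustration, not the authors' exact settings (their K-means variant is described as improved).

```python
import cv2
import numpy as np

def segment_lesion(path, k=2):
    """Median filtering followed by K-means color clustering: a simplified
    stand-in for the denoise-then-segment stage of the pipeline."""
    img = cv2.imread(path)
    # Median filter suppresses impulse noise such as hair artifacts.
    smooth = cv2.medianBlur(img, 5)
    pixels = smooth.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria,
                                    attempts=5, flags=cv2.KMEANS_PP_CENTERS)
    # Heuristic: assume the darker color cluster is the lesion.
    lesion_cluster = np.argmin(centers.sum(axis=1))
    mask = (labels.reshape(img.shape[:2]) == lesion_cluster).astype(np.uint8)
    return mask * 255
```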
Sarah Haggenmuller et al. [13] presented a systematic review of convolutional neural networks for skin cancer classification, covering studies that incorporated human experts. The review aimed to systematically analyze CNN-based melanoma classifiers and evaluate their potential clinical relevance by examining three key factors: test set characteristics (holdout/out-of-distribution data set, composition), test setting (experimental/clinical, inclusion of metadata), and representativeness of participating clinicians. A total of 19 studies fulfilled the inclusion criteria. Of these, 11 CNN-based strategies focused on categorizing dermoscopic images, six mainly focused on classifying clinical images, and two dermatopathological studies used digitized histological whole-slide images. Most test sets were made up of holdout images that did not accurately represent the variety of patient populations and melanoma subtypes seen in real-world settings.
Pacheco et al. [14] suggested an attention-based mechanism for combining images and metadata in a deep learning model for skin cancer classification. They discuss how combining image and metadata characteristics in deep learning models for skin cancer classification can be difficult, and propose the metadata processing block (MetaBlock): an attention-based mechanism that uses the metadata to enhance the important feature maps extracted from the images throughout the classification pipeline, thereby improving data classification. When the MetaBlock is not used, a simple concatenation of image features and metadata serves as the baseline.
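One simplified way to realize metadata-driven attention over image feature maps is sketched below: a small linear map turns the metadata into per-channel scale and shift terms that gate the CNN features. This gating scheme is a stand-in inspired by the MetaBlock idea, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class MetaAttentionBlock(nn.Module):
    """Gates CNN feature maps with metadata-derived channel weights.
    A simplified stand-in for the MetaBlock idea (design assumed)."""
    def __init__(self, n_meta, n_channels):
        super().__init__()
        self.scale = nn.Linear(n_meta, n_channels)
        self.shift = nn.Linear(n_meta, n_channels)

    def forward(self, feats, meta):
        # feats: (B, C, H, W) image feature maps; meta: (B, n_meta).
        s = self.scale(meta).unsqueeze(-1).unsqueeze(-1)
        b = self.shift(meta).unsqueeze(-1).unsqueeze(-1)
        # Sigmoid gate emphasizes or suppresses channels based on metadata.
        return torch.sigmoid(torch.tanh(feats * s + b)) * feats
```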
Dyachenko et al. [15] proposed melanoma cell detection in whole blood samples using optical clearing and spectral imaging, which were used to discover and identify circulating tumor cells (CTCs); existing techniques for identifying CTCs as indicators of metastatic development require several sample-processing steps and take a lot of time. The method was validated by imaging suspensions of mouse melanoma cells of line B16F10, alone and in combination with blood. Optical clearing of rodent blood with biocompatible chemical agents was used to increase detection, since decreasing blood scattering helps identify melanocytic cells in the blood layer. The results indicate that the proposed diagnostic method can quickly detect CTCs in whole blood samples from melanoma patients. However, the methodology considers only a single cancer cell type.
Song et al. [16] proposed an end-to-end multitask deep learning framework for skin lesion analysis that can detect, classify, and segment skin lesions simultaneously. A loss function based on the focal loss and the Jaccard distance is suggested to mitigate the class imbalance problem common in medical image datasets while also improving segmentation performance. A phased joint training strategy is used to improve the efficiency of feature learning. However, as the network deepens, training can degrade owing to the vanishing gradient effect.
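A combined focal plus soft-Jaccard segmentation loss of the kind described can be sketched as follows. The equal weighting of the two terms and the focal parameters (gamma, alpha) are assumptions, not the paper's values.

```python
import torch
import torch.nn.functional as F

def focal_jaccard_loss(logits, targets, gamma=2.0, alpha=0.25, eps=1e-6):
    """Focal + soft Jaccard loss for binary lesion segmentation (sketch).
    logits, targets: tensors of shape (B, 1, H, W); targets are float {0, 1}.
    The 1:1 weighting of the two terms is an assumption."""
    probs = torch.sigmoid(logits)
    # Focal term: down-weights easy pixels to counter class imbalance.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = probs * targets + (1 - probs) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    focal = (alpha_t * (1 - p_t) ** gamma * bce).mean()
    # Soft Jaccard term: differentiable intersection-over-union distance.
    inter = (probs * targets).sum(dim=(1, 2, 3))
    union = (probs + targets - probs * targets).sum(dim=(1, 2, 3))
    jaccard = (1 - (inter + eps) / (union + eps)).mean()
    return focal + jaccard
```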
Lisheng Wei et al. [17] proposed a technique for detecting skin cancer in dermoscopic images using an ensemble lightweight deep learning network. It has feature extraction components built from two standard lesion classification networks and a feature discrimination network. The first module (a lightweight CNN) receives two groups of training samples (positive and negative sample pairs), and the feature extraction module outputs two sets of feature vectors that are then used to train the two networks or perform classification; one plausible pairing scheme is sketched below. The model performs well across segmentation tasks while operating with a smaller number of training samples. However, learning may weaken in the middle layers, since the network can skip over layers where hidden features are represented.
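One plausible reading of the pair-based feature extraction is a shared lightweight backbone applied to a positive/negative pair, with a discrimination head on the concatenated features. The layer sizes and head design below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PairFeatureExtractor(nn.Module):
    """Shared lightweight CNN applied to a positive/negative sample pair,
    yielding two feature vectors plus a pair-discrimination score.
    One plausible reading of the ensemble design, not the paper's exact net."""
    def __init__(self, n_features=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_features),
        )
        # Discrimination head scores whether the pair shares a class.
        self.discriminator = nn.Linear(2 * n_features, 1)

    def forward(self, x_pos, x_neg):
        f_pos, f_neg = self.backbone(x_pos), self.backbone(x_neg)
        same_class_logit = self.discriminator(torch.cat([f_pos, f_neg], dim=1))
        return f_pos, f_neg, same_class_logit
```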
Abder-Rahman H. Ali et al. [18] proposed the ABCD rule (asymmetry, border, color, diameter) for melanoma detection. The method automatically analyzes dermoscopic images through a pipeline of stages: segmentation, feature extraction, and classification. Color variation mainly describes the number of distinct shades present within the border of the skin lesion; melanoma lesions generally contain two or more colors, whereas benign lesions are uniformly colored. Combining the ABCD features in pairs (AB, AC, AD, BC) increases the likelihood of identifying skin lesion images correctly. Machine learning approaches such as SVM help classify lesions as either symmetric or asymmetric, and to improve asymmetry accuracy, several metrics are merged into a single feature vector. This approach is less used because some dermoscopic images contain artifacts such as lighting changes and bubble or hair occlusion.
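Toy ABCD-style descriptors feeding an SVM can be sketched as follows. The specific feature definitions (circularity-based asymmetry, perimeter-ratio border score, channel standard deviation for color) are illustrative stand-ins, not the paper's exact metrics, and `dataset` and `labels` are hypothetical.

```python
import numpy as np
from skimage import measure
from sklearn.svm import SVC

def abcd_features(mask, rgb):
    """Toy ABCD-style descriptors from a binary lesion mask and RGB image.
    Feature definitions are illustrative, not the paper's exact metrics."""
    props = measure.regionprops(mask.astype(int))[0]
    # Asymmetry proxy: deviation of the region from a perfect circle.
    asymmetry = 1 - 4 * np.pi * props.area / props.perimeter ** 2
    # Border irregularity: perimeter relative to equivalent-diameter circle.
    border = props.perimeter / (np.pi * props.equivalent_diameter)
    # Color variation: per-channel standard deviation inside the lesion.
    color = rgb[mask.astype(bool)].std(axis=0).mean()
    # Diameter in pixels (needs calibration to obtain millimetres).
    diameter = props.equivalent_diameter
    return [asymmetry, border, color, diameter]

# Hypothetical usage: features from many lesions, then an SVM classifier.
# X = np.array([abcd_features(m, im) for m, im in dataset])
# clf = SVC(kernel="rbf").fit(X, labels)
```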