Many different stomach disorders, from mild erosive gastritis to advanced cancer, harm the health of a significant portion of the global population. Gastritis is characterized by diffuse or map-like redness of the stomach, with or without atrophy and erosion of the mucosal layer (Zhang et al., 2022). In Asia, particularly in China, Japan, and Korea, chronic erosive gastritis is a fairly frequent condition that confounds medical professionals (Goh, 2011). Chronic erosive gastritis, also known as atrophic gastritis, typically manifests as a variety of gastrointestinal symptoms and a histological alteration of the gastric mucosa, which lowers quality of life. It is an ulcer-like inflammation of the stomach marked by numerous lesions in the mucous lining (Palaniappan, 2013). Symptoms may include weakness, loss of appetite, mild nausea, vomiting, and a heavy, burning feeling in the pit of the stomach. Erosive gastritis can produce peptic ulcers, which may grow larger and deeper while damaging the nearby tissue (White et al., 2022). Without proper diagnosis and treatment, severe ulcers may eventually bleed internally, which can cause anemia.
Gastric ulcers, an example of peptic ulcer disease, are open sores on the stomach’s lining; ulcers can also develop in sections of the intestine. A gastric ulcer may erode the wall of a blood vessel in the stomach or small intestine. Besides eating a hole through the lining and becoming infected, ulcers can grow through inflammation or scarring, which may block the passage of food through the digestive tract. Furthermore, nodular gastritis, metaplastic gastritis, and open-type atrophic gastritis are linked to stomach cancer, whereas erosive gastritis is associated with obesity and hypoadiponectinemia (Pop et al., 2022). It is therefore essential to attend to the diagnostic signs of gastritis during endoscopic examination. If patients are examined and treated at an early stage of gastric disease, the 5-year survival rate can be as high as 90%; however, the early detection rate is only about 10%. Without timely diagnosis and proper treatment, long-term inflammation aggravates the risk of harmful outcomes for the patient (Abbasi-Kangevari et al., 2022). Gastroscopy is the most effective technical tool for identifying and screening numerous gastrointestinal diseases: it allows endoscopists to see stomach lesions by inserting a thin, flexible tube into the stomach, and pathological biopsy of suspected lesions assesses the state of the examined tissue and can confirm a diagnosis, making gastroscopy the preferred method for examining stomach lesions. However, due to exhaustion from lengthy workdays or inexperience, endoscopists may make mistakes during gastroscopy.
To improve gastroscopy diagnosis, numerous imaging techniques have been developed, including 3D imaging, auto-fluorescence imaging (AFI), magnifying endoscopy (ME), and narrow-band imaging (NBI). A computer-aided autonomous framework is needed to improve gastroscopy efficiency and quality in daily clinical practice, becoming a “third eye” for endoscopists. Deep learning technology has recently permeated several areas of medical study and has taken center stage in modern science and technology. It can fully utilize vast amounts of data, automatically learn the features in the data, accurately and rapidly support clinicians in diagnosis, and increase medical efficiency. In the field of gastroscopic image analysis, traditional machine learning and deep learning methods have been widely used in disease classification (Lu et al., 2020) and detection (Wong et al., 2012; Wong et al., 2020). Zhang et al. (2021) collected gastric images of 308 patients and used the DenseNet model to classify images into atrophic and non-atrophic gastritis images; combined with serological indicators, their model reached an accuracy of 99.25% with a sensitivity of 96.17%. Qiu et al. (2022) used a convolutional neural network to classify gastroscopic images into five classes (advanced gastric cancer, early-stage gastric cancer, precancerous lesions, normal, and benign lesions), with an overall recognition accuracy of 94.1%. Park et al. (2018) used transfer learning to classify 787 gastric endoscopy images into normal and abnormal classes; with transfer learning, the three pre-trained models ResNet-50, Inception V3, and VGG-16 reached accuracies of 98%, 97%, and 98%, respectively.
Apart from healthcare, machine learning is also used in various other fields (Tang et al., 2022; Wahid et al., 2021) and domains of life (Zhao et al., 2022; Ayoub et al., 2022). For example, a proposed system to optimize treatment and prevent severe kidney stone disease obtained a sensitivity of 0.86 using a 3D U-Net model, and a spherical multi-output Gaussian process may be implemented to model and monitor the 3D surfaces of stones (Hussain et al., 2022). From the literature, we observed that artificial intelligence (AI), which is quickly becoming a crucial concept in medicine, is fueling the rapid creation of crucial tools for medical diagnostics (Wong et al., 2020; Wei et al., 2017). Deep learning (DL), a key machine learning method in the field of computer vision (Lu et al., 2020), is now widely employed in medical imaging (Wong et al., 2012).
Pre-trained deep learning models trained on massive datasets have demonstrated their superiority to conventional approaches as the processing capacity of modern hardware continues to grow. From a deep learning perspective, transfer learning can therefore be used to solve the image categorization problem, and it has produced several state-of-the-art results in image classification (Simonyan and Zisserman, 2014). We utilized the benefits of pre-trained deep learning models to enhance the diagnosis of capsule gastroscopy images. In this study, we propose a deep learning framework based on transfer learning for classifying capsule gastroscopy images into three categories: normal gastroscopic images, chronic erosive gastritis images, and gastric ulcer images. We used the pre-trained models VGG-16, ResNet-50, and Inception V3 and adjusted their hyperparameters to fit our classification task.
Materials and Methods

To improve capsule gastroscopy image classification, a computer-aided autonomous framework is needed to classify capsule gastroscope images into three categories automatically. Deep learning has permeated many areas of medical research (Litjens et al., 2017): it can fully utilize vast amounts of data, automatically learn the features in the data, accurately and rapidly support clinicians in diagnosis, and increase medical efficiency (Ngiam and Khor, 2019). In this research, we propose a deep learning framework based on transfer learning to classify capsule gastroscopic images into three categories: normal gastroscopic images, chronic erosive gastritis images, and gastric ulcer images. We used the VGG-16, ResNet-50, and Inception V3 pre-trained models, fine-tuned them, and adjusted their hyperparameters for our classification problem via transfer learning. The proposed framework addressing this research gap is shown in Figure 1. All experiments in this paper were conducted on an Intel(R) Celeron(R) CPU N3150 @ 1.60 GHz running 64-bit Windows, with Python 3.6.6, TensorFlow 1.8.0, and CUDA 10.1.
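As a rough illustration of this transfer-learning setup, the sketch below loads a VGG-16 backbone via Keras, freezes its convolutional layers, and attaches a new three-way softmax head. The head size, optimizer, and other settings are illustrative assumptions, not the study’s exact configuration.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_transfer_model(num_classes=3, input_shape=(224, 224, 3)):
    # In practice weights="imagenet"; weights=None here avoids a large download.
    base = VGG16(weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the pre-trained convolutional backbone

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),  # assumed head size
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_transfer_model()
```

Training then only updates the new head while the frozen backbone acts as a fixed feature extractor.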
FIGURE 1. Proposed framework to classify capsule gastroscope images.
Dataset statistics

In this study, we gathered capsule gastroscopic imaging data from 211 patients at Shenzhen University General Hospital, Shenzhen University, China. A total of 1140 lesion samples were randomly selected from 380 different image regions per category of capsule gastroscopic image to maintain the balance of disease samples. Then, using a random selection approach, the data in each disease category were split 70%/30% into a training set and a test set, giving 912 training images and 228 test images. The sample dataset is shown in Figure 2.
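A per-class random split of this kind can be sketched as follows; the file names are placeholders (the real dataset is not public), and only 10 images per class are used for illustration.

```python
import random

def split_per_class(samples_by_class, train_frac=0.7, seed=42):
    """Shuffle each class independently, then split train_frac / (1 - train_frac)."""
    rng = random.Random(seed)
    train, test = [], []
    for label, samples in samples_by_class.items():
        shuffled = list(samples)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train += [(s, label) for s in shuffled[:cut]]
        test += [(s, label) for s in shuffled[cut:]]
    return train, test

# Placeholder file names, 10 per category, to illustrate the balanced split.
data = {c: [f"{c}_{i}.png" for i in range(10)]
        for c in ("normal", "erosive_gastritis", "ulcer")}
train, test = split_per_class(data)
print(len(train), len(test))  # 21 9
```

Splitting within each class keeps the three categories balanced in both sets, unlike a single global shuffle.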
FIGURE 2. Sample dataset. (A) Normal gastroscopy images; (B,C) chronic erosive gastritis and gastric ulcer images, respectively.
Moreover, the statistics of our dataset are summarized in Table 1.
TABLE 1. Statistics of our dataset in each category.
Feature extraction

When extracting features, we begin with a pre-trained model and only modify the weights of the final layer, from which we generate predictions. Because we alter the output layer and use the pre-trained CNN as a fixed feature extractor, this is known as feature extraction (Hinterstoisser et al., 2018). As the number of convolution steps increases, a convolutional neural network successively learns the edge features of the input image and then some or all objects, i.e., high-level semantic features. Both the convolutional layers and the fully connected layer of a convolutional neural network can be used to extract deep image features; however, the convolutional layers produce multi-dimensional outputs, which makes subsequent dimensionality reduction challenging, whereas the fully connected layer can be viewed as a one-dimensional vector obtained with a straightforward calculation. To represent the deep features of the image, a fully connected layer is therefore added before the output layer of the backbone network.
VGG-16

The Visual Geometry Group (VGG) at the University of Oxford developed and trained the convolutional neural network model known as the VGG-16 network (Guan et al., 2019). The number 16 indicates that it has 16 weighted layers, and the network has about 138 million parameters, which is quite a lot. We used Keras (Poojary and Pai, 2019) to fine-tune the pre-trained VGG-16 model on our dataset to classify capsule gastroscopic images into three categories: normal gastroscopic images, chronic erosive gastritis images, and gastric ulcer images. We reproduced the entire architecture of the VGG-16 model, excluding its output layer, to produce a new Sequential model. We froze the weights and other trainable parameters in each copied layer so that they are not altered when the model is trained on our dataset. We then added a new output layer to categorize capsule gastroscopic images into three groups. The complete model architecture and hyperparameter details are shown in Table 2 and Figure 3.
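The copy-freeze-and-replace procedure described above can be sketched as below. A tiny stand-in network is used in place of the 138-million-parameter VGG-16 to keep the example light; its layer sizes are placeholders, not the real architecture.

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in for the pre-trained network (VGG-16's real head is Dense(1000)).
pretrained = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(4),
    layers.Flatten(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1000, activation="softmax"),  # original ImageNet output layer
])

# Rebuild the architecture without the output layer and freeze the weights.
new_model = models.Sequential(pretrained.layers[:-1])
for layer in new_model.layers:
    layer.trainable = False

# New output layer for the three gastroscopy categories.
new_model.add(layers.Dense(3, activation="softmax"))
```

Only the new three-way output layer receives gradient updates during fine-tuning; the copied layers keep their pre-trained weights.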
TABLE 2. Hyperparameter details used in the VGG-16 model for our dataset.
FIGURE 3. Accuracy and loss graphs for VGG-16. (A) Training and validation accuracy; (B) training and validation loss of the VGG-16 model on our dataset.
ResNet-50

ResNet-50 is a convolutional neural network with 50 layers (Hussain et al., 2021); a version pre-trained on more than a million images from the ImageNet database (Deng et al., 2009) is available. ResNet-50 is a 50-layer residual network in which we endeavor to learn residuals rather than features. To solve the vanishing/exploding gradient problem, this architecture introduced the residual network concept, which uses a technique called skip connections: a skip connection passes the activations of a layer to deeper layers, skipping some layers in between. The residual blocks thereby create an identity mapping to activations earlier in the network, which thwarts the performance degradation problem associated with deep neural architectures. ResNet-50 takes input images of size 224 × 224 with three RGB channels. The complete model architecture and hyperparameter details are shown in Table 3.
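The skip connection just described can be sketched in NumPy: the block output is relu(F(x) + x), so when the learned residual F(x) is zero the block reduces to an identity mapping, which is what makes very deep networks easier to optimize. The weights below are random placeholders, not trained ResNet-50 parameters.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # Two weighted transformations form F(x); the untouched input x is
    # added back via the skip connection before the final activation.
    h = relu(x @ w1)
    fx = h @ w2
    return relu(fx + x)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)

# With zero weights F(x) = 0, so the block passes a non-negative input
# through unchanged: the identity mapping the text refers to.
identity = residual_block(relu(x), np.zeros((8, 8)), np.zeros((8, 8)))
```
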
TABLE 3. Hyperparameter details used in the ResNet-50 model for our dataset.
Inception V3

Inception V3 (Wang et al., 2019a) is a convolutional neural network model with 48 layers, also available as a pre-trained model; it was trained on a subset of the more than a million images in the ImageNet database. The Google Inception CNN model (Bhatia et al., 2019), initially created for the ImageNet Recognition Challenge, is now in its third iteration. Using Inception V3, we reduced the output to one dimension, flattened it, and added a fully connected layer with 1024 hidden units and ReLU activation, a dropout rate of 0.4 to avoid over-fitting, and a sigmoid layer for classification. This method of data augmentation (Shorten and Khoshgoftaar, 2019) operates entirely within memory. The complete model architecture and hyperparameter details are shown in Table 4.
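A classification head of this shape can be sketched as follows. The (8, 8, 2048) feature-map size matches InceptionV3 without its top at 299 × 299 input, global average pooling stands in for a full flatten to keep the example light, and a softmax output replaces the sigmoid mentioned above (softmax being the usual choice for three mutually exclusive classes); all of these substitutions are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

head = models.Sequential([
    layers.Input(shape=(8, 8, 2048)),     # assumed InceptionV3 feature-map size
    layers.GlobalAveragePooling2D(),       # collapse to a 2048-d vector
    layers.Dense(1024, activation="relu"), # 1024 hidden units, as in the text
    layers.Dropout(0.4),                   # dropout rate 0.4 against over-fitting
    layers.Dense(3, activation="softmax"), # three gastroscopy categories
])
```

In a full pipeline this head would be attached to `InceptionV3(include_top=False)` and trained on the extracted features.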
TABLE 4. Hyperparameter details used in the Inception V3 model for our dataset.
One of the best techniques for reducing overfitting is to increase the size of the training dataset (Thanapol et al., 2020). The training images were automatically resized using an augmented image dataset. Our pre-trained deep learning models also avoid overfitting through dropout layers, another regularization technique that prevents neural networks from overfitting (Srivastava et al., 2014). Regularization methods such as L1 and L2 reduce overfitting by modifying the cost function; the dropout technique, by contrast, modifies the network itself. With the help of data augmentation, many similar images can be generated, which increases the dataset size and thus reduces overfitting: as more data are added, the model can no longer overfit all the samples and is forced to generalize.
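An in-memory augmentation pipeline of this kind can be built from Keras preprocessing layers, as sketched below; the particular transforms and their ranges are illustrative assumptions, not the study’s exact settings.

```python
import numpy as np
from tensorflow.keras import layers, models

# Random transforms applied on the fly: each pass over the same image
# yields a slightly different variant, effectively enlarging the dataset.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),  # up to ~36 degrees (0.1 of a full turn)
    layers.RandomZoom(0.2),
])

image = np.random.rand(1, 224, 224, 3).astype("float32")
variant = augment(image, training=True)  # training=True enables the randomness
```
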
Results and discussion

Gastroscopy is the primary technique and industry standard for diagnosing and treating numerous stomach problems. The capsule gastroscope is a new screening tool for gastric diseases; however, several factors, including the image quality of capsule endoscopy and the doctor’s experience and fatigue, limit its effectiveness (Namikawa et al., 2020). Early identification is necessary for high-risk factors for carcinogenesis, such as atrophic gastritis (AG) (González and Agudo, 2012). In this work, to improve gastroscopy diagnosis, we proposed a deep learning framework based on transfer learning to classify capsule gastroscope images into three categories: normal gastroscopic images, chronic erosive gastritis images, and gastric ulcer images. Capsule gastroscopic imaging data were gathered from 211 patients at Shenzhen University General Hospital, Shenzhen University, China. For each category of capsule gastroscopic images, 1140 lesion samples were randomly selected from 380 distinct image regions to maintain the balance of disease samples. Then, using a random selection approach, we divided the data into 70% for training and 30% for testing. We used the VGG-16, ResNet-50, and Inception V3 pre-trained models, fine-tuned them, and adjusted their hyperparameters for our classification problem.
Our trained VGG-16 model achieved 94.81% accuracy, Inception V3 achieved 92.53%, and ResNet-50 achieved 90.23% in classifying capsule gastroscopic images into three categories. We assessed each model’s performance using accuracy and loss curves. Figures 3, 5 report (A) the training and validation accuracy and (B) the training and validation loss of the VGG-16 and ResNet-50 models, respectively, on our dataset. Similarly, Figure 4 shows (A) the training loss and training accuracy and (B) the validation loss and validation accuracy of the Inception V3 model in classifying capsule gastroscopic images into three categories.
FIGURE 4. Accuracy and loss graphs for Inception V3. (A) Training loss and training accuracy; (B) validation loss and validation accuracy of the Inception V3 model on our dataset.
FIGURE 5. Accuracy and loss graphs for ResNet-50. (A) Training and validation accuracy; (B) training and validation loss of the ResNet-50 model on our dataset.
Confusion matrices are the most popular method for visualizing the performance of classifiers in machine learning and deep learning (Wong et al., 2012). Figure 6 presents the performance of the three models as 3 × 3 confusion matrices: (A) the VGG-16 model, (B) the Inception V3 model, and (C) the ResNet-50 model, each classifying capsule gastroscopic images into three categories.
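For reference, a 3 × 3 confusion matrix of this kind can be computed directly; the labels below are made up (0 = normal, 1 = chronic erosive gastritis, 2 = gastric ulcer), not the study’s results.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=3):
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows: true class, columns: predicted class
    return cm

y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]
cm = confusion_matrix(y_true, y_pred)

# Diagonal entries are correct predictions, so overall accuracy is
# trace(cm) / total.
accuracy = np.trace(cm) / cm.sum()
print(accuracy)  # 0.75
```

Off-diagonal cells show which categories the model confuses, e.g. cm[2, 0] counts ulcer images predicted as normal.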
FIGURE 6. Confusion matrices representing the performance of our models on our dataset: (A) VGG-16, (B) Inception V3, and (C) ResNet-50.
Deep learning can fully utilize vast amounts of data, automatically learn the features in the data, accurately and rapidly support clinicians in diagnosis, and increase medical efficiency. We used three pre-trained deep learning models, VGG-16, Inception V3, and ResNet-50, to improve gastroscopy diagnosis.
We fine-tuned these models on our dataset to classify capsule gastroscopic images into three categories: normal gastroscopic images, chronic erosive gastritis images, and gastric ulcer images. Our trained VGG-16 model achieved 94.81% accuracy, Inception V3 achieved 92.53%, and ResNet-50 achieved 90.23%.
Moreover, we compared the performance of our proposed approach with previously published studies, as shown in Table 5.
TABLE 5. Comparative accuracy of the proposed approach and previously proposed studies.
Conclusion

Many stomach disorders, from mild erosive gastritis to advanced cancer, have a negative impact on the health of a significant portion of the global population. Gastroscopy is the most effective technical tool for identifying and screening numerous gastrointestinal diseases; however, due to exhaustion brought on by lengthy workdays or inexperience, endoscopists may make mistakes during gastroscopy. We applied three pre-trained deep learning models, VGG-16, Inception V3, and ResNet-50, to enhance gastroscopy diagnosis. The data were gathered from 211 patients at Shenzhen University Hospital (Shenzhen University Clinical Medical Academy, Shenzhen University, China). To preserve the balance of disease samples, 1140 lesion samples were randomly chosen from 380 different image regions for each category of capsule gastroscopic image. Our trained VGG-16 model achieved 94.81% accuracy, Inception V3 achieved 92.53%, and ResNet-50 achieved 90.23% in classifying capsule gastroscopic images into three categories: normal gastroscopic images, chronic erosive gastritis images, and gastric ulcer images. Our suggested framework will help prevent incorrect diagnoses brought on by low image quality, individual experience, and inadequate gastroscopy inspection coverage, among other factors, and will thus raise the standard of gastroscopy. Investigation of gastrointestinal functions (Wong et al., 2017) can be enhanced based on variable drug introduction, and the reaction may be further analyzed.
Advanced bioinformatics algorithms (Li et al., 2017a; Deb et al., 2018) may be utilized to understand the effect of different biochemical environments on gastrointestinal diseases, providing valuable information to assist healthcare enhancement (Li et al., 2017b).
Limitation and future work

Due to the difficulty of obtaining well-annotated data, the training image collections required for model development are frequently scarce in real-world applications, particularly in medicine. Transfer learning, secondary training, fine-tuning, and comparison with the outcomes of self-designed networks were therefore among the techniques most frequently applied in the works analyzed. Even though published results demonstrate the potential of deep learning for different kinds of gastric tissue images, additional studies should be carried out clearly and transparently, with database accessibility and reproducibility, in order to develop useful tools that aid health professionals.
Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Shenzhen University General Hospital.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

This work was supported by the Shenzhen Natural Science Fund (Stable Support Plan Program, No. 20200826225552001), the Natural Science Foundation of Shenzhen University General Hospital (SUGH2020QD015), and the Shenzhen Nanshan District General Practice Alliance.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References

Abbasi-Kangevari M., Ahmadi N., Fattahi N., Rezaei N., Malekpour M. R., Ghamari S. H., et al. (2022). Quality of care of peptic ulcer disease worldwide: A systematic analysis for the global burden of disease study 1990–2019. PLoS ONE 17 (8), e0271284. doi:10.1371/journal.pone.0271284

Ayoub M., Hussain S., Khan A., Zahid M., Wahid J. A., Zhifang L., et al. (2022). A predictive machine learning and deep learning approach on agriculture datasets for new Moringa oleifera varieties prediction. PakJET 5 (1), 68–77. doi:10.51846/vol5iss1pp68-77

Bhatia Y., Bajpayee A., Raghuvanshi D., Mittal H. (2019). “Image captioning using Google's Inception-ResNet-v2 and recurrent neural network,” in 2019 Twelfth International Conference on Contemporary Computing (IC3), Noida, India, 1–6.

Deb S., Tian Z., Fong S., Wong R., Millham R., Wong K. K. L. (2018). Elephant search algorithm applied to data clustering. Soft Comput. 22 (1), 6035–6046. doi:10.1007/s00500-018-3076-2

Deng J., Dong W., Socher R., Li L. J., Li K., Fei-Fei L. (2009). “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255.

Goh K. L. (2011). Gastroesophageal reflux disease in Asia: A historical perspective and present challenges. J. Gastroenterol. Hepatol. 26, 2–10. doi:10.1111/j.1440-1746.2010.06534.x

González C. A., Agudo A. (2012). Carcinogenesis, prevention and early detection of gastric cancer: Where we are and where we should go. Int. J. Cancer 130 (4), 745–753. doi:10.1002/ijc.26430

Guan Q., Wang Y., Ping B., Li D., Du J., Qin Y., et al. (2019). Deep convolutional neural network VGG-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: A pilot study. J. Cancer 10 (20), 4876–4882. doi:10.7150/jca.28769

Hinterstoisser S., Lepetit V., Wohlhart P., Konolige K. (2018). “On pre-trained image features and synthetic images for deep learning,” in Proceedings of the European Conference on Computer Vision (ECCV) Workshops.

Hussain S., Ayoub M., Jilani G., Yu Y., Khan A., Wahid J. A., et al. (2022). Aspect2Labels: A novelistic decision support system for higher educational institutions by using multi-layer topic modelling approach. Expert Syst. Appl. 209, 118119. doi:10.1016/j.eswa.2022.118119

Hussain S., Yu Y., Ayoub M., Khan A., Rehman R., Wahid J. A., et al. (2021). IoT and deep learning based approach for rapid screening and face mask detection for infection spread control of COVID-19. Appl. Sci. 11 (8), 3495. doi:10.3390/app11083495

Li J., Fong S., Wong R. K., Millham R., Wong K. K. L. (2017a). Elitist binary wolf search algorithm for heuristic feature selection in high-dimensional bioinformatics datasets. Sci. Rep. 7, 4354. doi:10.1038/s41598-017-04037-5

Li J., Liu L., Fong S., Wong R. K., Mohammed S., Fiaidhi J., et al. (2017b). Adaptive swarm balancing algorithms for rare-event prediction in imbalanced healthcare data. PLoS ONE 12, e0180830. doi:10.1371/journal.pone.0180830

Litjens G., Kooi T., Bejnordi B. E., Setio A. A. A., Ciompi F., Ghafoorian M., et al. (2017). A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88. doi:10.1016/j.media.2017.07.005

Lu Y., Fu X., Chen F., Wong K. K. (2020). Prediction of fetal weight at varying gestational age in the absence of ultrasound examination using ensemble learning. Artif. Intell. Med. 102, 101748. doi:10.1016/j.artmed.2019.101748

Namikawa K., Hirasawa T., Yoshio T., Fujisaki J., Ozawa T., Ishihara S., et al. (2020). Utilizing artificial intelligence in endoscopy: A clinician's guide. Expert Rev. Gastroenterol. Hepatol. 14 (8), 689–706. doi:10.1080/17474124.2020.1779058

Palaniappan V. (2013). Histomorphological profile of gastric antral mucosa in Helicobacter associated gastritis. Doctoral dissertation. Tirunelveli: Tirunelveli Medical College.

Pannu H. S., Ahuja S., Dang N., Soni S., Malhi A. K. (2020). Deep learning based image classification for intestinal hemorrhage. Multimed. Tools Appl. 79 (29), 21941–21966. doi:10.1007/s11042-020-08905-7

Park S. J., Kim Y. J., Park D. K., Chung J. W., Kim K. G. (2018). Evaluation of transfer learning in gastroscopy image classification using convolutional neural network. J. Biomed. Eng. Res. 39 (5), 213–219.

Poojary R., Pai A. (2019). “Comparative study of model optimization techniques in fine-tuned CNN models,” in 2019 International Conference on Electrical and Computing Technologies and Applications (ICECTA), Ras Al Khaimah, United Arab Emirates (IEEE), 1–4.

Pop R., Tăbăran A. F., Ungur A. P., Negoescu A., Cătoi C. (2022). Helicobacter pylori-induced gastric infections: From pathogenesis to novel therapeutic approaches using silver nanoparticles. Pharmaceutics 14 (7), 1463. doi:10.3390/pharmaceutics14071463

Qiu W., Xie J., Shen Y., Xu J., Liang J. (2022). Endoscopic image recognition method of gastric cancer based on deep learning model. Expert Syst. 39 (3), e12758. doi:10.1111/exsy.12758

Shorten C., Khoshgoftaar T. M. (2019). A survey on image data augmentation for deep learning. J. Big Data 6 (1), 60. doi:10.1186/s40537-019-0197-0

Simonyan K., Zisserman A. (2014). Two-stream convolutional networks for action recognition in videos. Adv. Neural Inf. Process. Syst. 27.

Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R. (2014). Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15 (1), 1929–1958.

Tang Z., Wang S., Chai X., Cao S., Ouyang T., Li Y. (2022). Auto-encoder-extreme learning machine model for boiler NOx emission concentration prediction. Energy 256, 124552. doi:10.1016/j.energy.2022.124552

Thanapol P., Lavangnananda K., Bouvry P., Pinel F., Leprévost F. (2020). “Reducing overfitting and improving generalization in training convolutional neural network (CNN) under limited sample sizes in image recognition,” in 2020 5th International Conference on Information Technology (InCIT), Chonburi, Thailand, 300–305.

Wahid J. A., Shi L., Gao Y., Yang B., Tao Y., Wei L., et al. (2021). Topic2features: A novel framework to classify noisy and sparse textual data using LDA topic distributions. PeerJ Comput. Sci. 7, e677. doi:10.7717/peerj-cs.677

Wang C., Chen D., Hao L., Liu X., Zeng Y., Chen J., et al. (2019a). Pulmonary image classification based on Inception-v3 transfer learning model. IEEE Access 7, 146533–146541. doi:10.1109/access.2019.2946000

Wang S., Xing Y., Zhang L., Gao H., Zhang H., Huang J. (2019b). SmoPSI: Analysis and prediction of small molecule binding sites based on protein sequence information. Comput. Math. Methods Med. 2019, 1926156. doi:10.1155/2019/1926156

Wang W., Yang X., Li X., Tang J. (2022). Convolutional-capsule network for gastrointestinal endoscopy image classification. Int. J. Intell. Syst. 37, 5796–5815. doi:10.1002/int.22815

Wei L., Wan S., Guo J., Wong K. K. (2017). A novel hierarchical selective ensemble classifier with bioinformatics application. Artif. Intell. Med. 83, 82–90. doi:10.1016/j.artmed.2017.02.005

White J. R., Ragunath K., Atherton J. C. (2022). “Peptic ulcer disease,” in Yamada's atlas of gastroenterology (New York, United States: Wiley Online Library), 141–152.

Wong K. K., Fortino G., Abbott D. (2020). Deep learning-based cardiovascular image diagnosis: A promising challenge. Future Gener. Comput. Syst. 110, 802–811. doi:10.1016/j.future.2019.09.047

Wong K. K. L., Tang L. C. Y., Zhou J., Ho V. (2017). Analysis of spatiotemporal pattern and quantification of gastrointestinal slow waves caused by anticholinergic drugs. Organogenesis 13 (2), 39–62. doi:10.1080/15476278.2017.1295904

Wong K. K., Sun Z., Tu J., Worthley S. G., Mazumdar J., Abbott D. (2012). Medical image diagnostics based on computer-aided flow analysis using magnetic resonance images. Comput. Med. Imaging Graph. 36 (7), 527–541. doi:10.1016/j.compmedimag.2012.04.003

Zhang C., Xiong Z., Chen S., Ding A., Cao Y., Liu B., et al. (2022). Automated disease detection in gastroscopy videos using convolutional neural networks. Front. Med. 9, 846024. doi:10.3389/fmed.2022.846024

Zhang J., Yu J., Fu S., Tian X. (2021). Adoption value of deep learning and serological indicators in the screening of atrophic gastritis based on artificial intelligence. J. Supercomput. 77 (8), 8674–8693. doi:10.1007/s11227-021-03630-w

Zhao C., Lv J., Du S. (2022). Geometrical deviation modeling and monitoring of 3D surface based on multi-output Gaussian process. Measurement 199, 111569. doi:10.1016/j.measurement.2022.111569