A deep learning algorithm proposal for automatic pharyngeal airway detection and segmentation on CBCT images

1 INTRODUCTION

Artificial intelligence (AI) is a general term that describes machines that mimic the cognitive functions of human intelligence. Currently, AI applications are used in areas such as object detection, image classification, speech understanding and language translation.1, 2

Machine learning is the branch of artificial intelligence in which algorithms are trained using mathematical and statistical methods to perform tasks by learning patterns from data rather than acquiring these skills through explicit programming.1, 3 Artificial neural networks, a sub-branch of machine learning, are inspired by biological neural networks. Many artificial neurons are linked together to form a network of processing units arranged in layers. Between the input and output layers lie one or more hidden layers that are responsible for the network's decision making. Machine learning architectures consisting of multiple hidden layers are therefore called ‘deep learning’.2 One type of multilayer network is the ‘convolutional neural network’ (CNN), which has proven itself particularly in image analysis.1, 4

Computer-based diagnostics have gained momentum in healthcare owing to the powerful ability of CNNs to distinguish image features and detect lesions that cannot be seen by the human eye.5 CNNs have been successfully used in image-based diagnosis, for example to segment brain tumours and to detect lung lesions automatically.6, 7

In diagnostic performance studies in dentistry, CNNs have been designed to identify root fracture lines on periapical radiographs and cone-beam computed tomography (CBCT), to determine apical lesions and their volume, to detect ameloblastomas and keratocystic odontogenic tumours on panoramic images and to detect periodontal bone loss on panoramic radiographs.8-11

Although its applications in dentistry and orthodontics are relatively new, artificial intelligence has been successfully applied to decisions on tooth extraction, to the evaluation of facial attractiveness in patients treated for cleft lip and to orthodontic cephalometric analysis for determining the cervical vertebral maturation stage.12-14

The configuration and dimensions of the upper airway are determined by anatomical structures such as the soft tissue surrounding the pharynx, the muscles and the craniofacial skeleton. The morphology of the pharynx affects the airway volume, facial growth pattern and chewing patterns, and influences the risk of obstructive sleep apnoea (OSA).15, 16

Moreover, it has been stated that skeletal deficiency may be a predisposing factor for airflow obstruction in children, and that retropositioning of the mandible and maxilla narrows the anteroposterior dimension of the airway. Among the various treatment protocols applied in orthodontics, advancing the mandible with orthopaedic appliances in Class II patients and protracting or expanding the maxilla in Class III patients through orthognathic surgery alter the soft tissue and skeletal structure, and with them the airway volume.17-20

One of the methods used to analyse airway volume is cone-beam computed tomography, which offers lower cost, a lower radiation dose, faster imaging for the patient and the ability to examine the airway in three dimensions.21 Numerous software packages are available to analyse the data obtained from a CBCT scan as part of a manual or semi-automatic volumetric measurement process.22 However, these are laborious and time-consuming, and some are commercial products impractical for routine clinical applications. To the best of our knowledge, a fully automatic airway volume detection algorithm has not yet been developed. Hence, this study aimed to generate and evaluate an automatic detection algorithm for the pharyngeal airway on CBCT images using a deep-learning artificial intelligence system, providing a fast, easy and error-free method.

2 MATERIAL AND METHODS

Using retrospective data from the archive of the Near East University Faculty of Dentistry, a power analysis (G*Power) was conducted for the detection of pharyngeal airways with a statistical power of 90%, a significance level (α) of 0.05 and a type II error probability (β) of 0.2. The power analysis indicated that at least 265 CBCT images were required to conduct the study. Thus, this study was conducted with 306 randomly selected high-quality CBCT images from the CBCT archive of the Near East University Faculty of Dentistry.

The research protocol was approved by the Near East University Scientific Research Evaluation Ethics Committee (decision date and number: 30.07.2020/81-1140) and was conducted following the regulations of the Helsinki Declaration. Patients or their legal guardians gave informed consent before radiography, and the consent forms were reviewed and approved by the institutional review board of the faculty. Subjects with evidence of current orthodontic treatment, gross skeletal asymmetry or bone disease, cleft lip and palate, or erupted or supernumerary teeth overlying the incisor apices were excluded from the study.

CBCT scans were obtained using a NewTom 3G (Quantitative Radiology srl), which automatically adjusts the radiation dose according to the age and height of the patient. All images were recorded at 120 kVp and 3-5 mA with a 12-inch (30.48 cm) imaging field, an axial slice thickness of 0.3 mm and isotropic voxels. All CBCT scans were obtained according to the standardized, strict scanning protocol of the clinic in which the study was carried out. Patients were placed in a horizontal position, checked to ensure that their mouths were closed in a normal, natural occlusal position and instructed to lie still throughout the scan.

The tomography data were saved in DICOM format and anonymized. In the coronal view, the images were oriented with the midsagittal plane at the midline of the head and the right and left orbitae parallel to the ground. In the sagittal view, the line joining the anterior nasal spine (ANS) and the posterior nasal spine (PNS) was oriented horizontally. The axial view was oriented so that the palatal line (ANS-PNS) was aligned perpendicular to the ground.

2.1 Image evaluation

The open-source ITK-SNAP software, version 3.8 (www.itksnap.org), was used for segmentation of the pharyngeal airway. Its 3D CBCT segmentation is a semi-automatic process in which 3D geodesic snakes evolve to form a 3D volume. Although mostly automatic, two steps require human interaction. The first is the selection of the thresholding values that isolate the anatomic region to be segmented. The second is the placement of seed regions that determine where the seeds for the active contour model are formed.23
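The first interactive step above, intensity thresholding, can be sketched in Python with NumPy. This is a conceptual illustration only: the threshold bounds below are hypothetical assumptions, since in the study they were chosen per image by the researcher.

```python
import numpy as np

# Synthetic stand-in for a CBCT intensity volume (in the study this would be
# loaded from the DICOM stack); values roughly mimic an HU-like range.
rng = np.random.default_rng(0)
volume = rng.integers(-1000, 2000, size=(8, 16, 16))

# First interactive ITK-SNAP step: choose lower/upper thresholds that isolate
# the air-filled region. These bounds are illustrative assumptions.
lower, upper = -1000, -400
airway_candidates = (volume >= lower) & (volume <= upper)

# The resulting boolean mask is what the seeded active contour (snake)
# subsequently evolves within.
fraction_air = airway_candidates.mean()
```

The second interactive step, seed placement, then restricts the snake evolution to the connected airway region inside this mask.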

Pharyngeal airway boundaries were determined from the CBCT data (Figure 1).24 To isolate the airway, the researcher set the lower threshold to its lowest value and fixed the upper threshold where the airway boundaries were most clearly seen in each CBCT image. The airway was then filled semi-automatically in all three views. DICOM images typically capture the subject's entire head; a typical image comprises a stack of 500 axial slices, each 512 × 512 pixels.

FIGURE 1 Pharyngeal airway boundaries. Anterior border: a vertical plane perpendicular to the sagittal plane passing through PNS; the posterior border of the vomer, PNS, the soft palate, the base of the tongue and the anterior wall of the pharynx. Posterior border: the posterior wall of the pharynx. Lateral borders: the lateral walls of the pharynx, including the pharyngeal lateral protrusions. Inferior border: a horizontal plane parallel to the palatal plane, drawn from the base of the epiglottis. Superior border: the highest point of the nasopharynx, coinciding with the posterior choana and congruent with the anterior border

The annotation process consisted of segmenting the pharyngeal airway in ITK-SNAP and then saving the segmented airway as a separate DICOM image. The algorithm implemented in this study loads both images during preprocessing and uses the second DICOM, containing the segmented airway, as the gold standard. In total, 306 CBCT images were annotated and randomly divided into 70% training, 15% validation and 15% test sets: 214 images in the training set, 46 in the validation set and 46 in the test set.
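The random 70/15/15 split described above can be sketched as follows. This is a minimal illustration; the study does not specify how individual scans were assigned beyond these proportions, so the seed and helper function are assumptions.

```python
import random

def split_dataset(case_ids, seed=42):
    """Randomly split case IDs into 70% train / 15% validation / 15% test,
    mirroring the 214/46/46 split used in the study."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)          # reproducible shuffle
    n = len(ids)
    n_train = round(n * 0.70)
    n_val = round(n * 0.15)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_dataset(range(306))  # 214 / 46 / 46 cases
```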

2.2 Model pipeline

The present study's approach consisted of the following steps: (1) preprocessing the CBCT image; (2) classifying each voxel in the image as airway or background; (3) extracting the airway volumetric image.

2.3 Preprocessing

Portions of the images not related to the study were removed from the CBCT scans because of computational constraints. Using the sagittal view, the top 15% of the scan, corresponding to the brain, was removed. Furthermore, the axial slices were reduced to 256 × 256 squares by cutting out 50% of the frame. Each axial slice, along with the gold standard, was then split into 128 × 128 pieces. Resizing the CBCT images was avoided, since that would distort the calculated airway volume (Figure 2).

FIGURE 2 Preprocessing the CBCT image
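The preprocessing steps above can be sketched with NumPy. The axis order (axial slices along axis 0, ordered top of head first) and the centre-crop placement are assumptions made for illustration; the paper does not specify them.

```python
import numpy as np

def preprocess(volume):
    """Sketch of the preprocessing described above.
    Assumes axial slices along axis 0 (top of head first), each 512 x 512."""
    d, h, w = volume.shape
    # Remove the top 15% of the scan (the brain region, seen sagittally).
    volume = volume[int(d * 0.15):]
    # Centre-crop each axial slice to 256 x 256 (cut out 50% of the frame).
    top, left = (h - 256) // 2, (w - 256) // 2
    volume = volume[:, top:top + 256, left:left + 256]
    # Split every axial slice into four 128 x 128 patches (no resizing,
    # so the voxel grid -- and hence the computed volume -- is preserved).
    patches = [
        volume[:, i:i + 128, j:j + 128]
        for i in (0, 128) for j in (0, 128)
    ]
    return np.stack(patches)

patches = preprocess(np.zeros((500, 512, 512), dtype=np.int16))
```

Because the patches tile the cropped slice exactly, they can be reassembled after segmentation to restore the original geometry.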

2.4 Semantic Segmentation

The aim of this study, expressed as a machine learning problem, is semantic segmentation: the process of classifying each pixel in an image with a label. In this study, the algorithm classifies each pixel as airway or background. To achieve this, a U-Net architecture was used to carry out the deep learning process. U-Net is an encoder-decoder style neural network that solves semantic segmentation problems end to end. The encoding (downsampling) path helps the model capture semantic information, and the decoding (upsampling) path helps the model recover spatial information (Figure 3).5

FIGURE 3 Proposed segmentation algorithm based on a U-Net-like architecture
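The interplay between the two paths can be illustrated with a toy NumPy example. This is a conceptual sketch of downsampling, upsampling and a skip connection, not the study's MATLAB U-Net (which interleaves learned convolutions at every level).

```python
import numpy as np

def maxpool2x(x):
    """2x2 max pooling -- the downsampling used along a U-Net encoder path."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour upsampling -- the decoder path's resolution recovery."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Toy 'feature map': encode, decode, then fuse with a skip connection,
# showing how the decoder recovers the spatial grid the encoder compressed.
feat = np.arange(16.0).reshape(4, 4)
encoded = maxpool2x(feat)       # 4x4 -> 2x2: semantic, low resolution
decoded = upsample2x(encoded)   # 2x2 -> 4x4: spatial resolution restored
fused = np.concatenate([feat[None], decoded[None]])  # skip connection
```

In a real U-Net, `fused` would be passed through further convolutions; the skip connection is what lets the decoder recover fine boundary detail lost during pooling.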

2.5 Implementation

The present study's algorithm was based on the MATLAB implementation of U-Net. All training and experiments were performed on an NVIDIA® GeForce® RTX 2080 Ti GPU. The network was trained with the Adam optimizer, using generalized Dice loss as the loss function. The batch size was set to 32 and the learning rate to 10^-4. The network was trained for up to 10 epochs, and the model with the best validation loss was chosen for testing.
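For reference, the generalized Dice loss weights each class by the inverse square of its reference volume, which keeps the small airway class from being swamped by the background. A minimal NumPy version is sketched below; it follows the common two-class formulation and is not the MATLAB implementation used in the study.

```python
import numpy as np

def generalized_dice_loss(probs, target, eps=1e-6):
    """Generalized Dice loss for a 2-class problem.
    probs, target: arrays of shape (classes, voxels)."""
    # Inverse-volume class weights: rare classes (airway) count more.
    w = 1.0 / (target.sum(axis=1) ** 2 + eps)
    intersect = (w * (probs * target).sum(axis=1)).sum()
    union = (w * (probs + target).sum(axis=1)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

# Perfect prediction -> loss near 0; fully wrong prediction -> loss near 1.
t = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
loss_perfect = generalized_dice_loss(t, t)
loss_worst = generalized_dice_loss(t[::-1], t)
```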

2.6 Statistical analysis

All airway volume measurements were carried out by the same researcher (Ç.S.), and the measurements were repeated after two weeks. The intraclass correlation coefficient (ICC) with 95% confidence intervals and a significance level of 0.05 was used to assess the intra-observer reliability between the researcher's first and second measurements, as well as the agreement between the AI's and the researcher's measurements. The automatically obtained segmentation results were compared with the manually segmented results to estimate the accuracy of the learned classifiers.
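ICC values are normally computed with standard statistics packages. As an illustration, a two-way random, absolute-agreement, single-measure ICC(2,1) — one common choice for this design, though the paper does not state which ICC form was used — can be implemented from the ANOVA mean squares:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    x: (n subjects, k raters), e.g. repeated airway volume measurements."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subjects
    ssc = n * ((col_means - grand) ** 2).sum()   # between-raters
    sse = ((x - grand) ** 2).sum() - ssr - ssc   # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Identical repeated measurements give ICC = 1; noise lowers it.
icc_perfect = icc_2_1(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))
icc_noisy = icc_2_1(np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2]]))
```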

To assess the performance of the segmentation procedure, the Dice similarity coefficient (DSC), the intersection over union (IoU) and a confusion matrix were calculated. The DSC, the most widely used measure for verifying semantic segmentation models, was calculated to evaluate segmentation performance. The IoU between the gold standard and the prediction was used to measure the airway localization capability of the model. In addition to DSC and IoU, the accuracy metric, defined as the fraction of correct voxel-wise predictions of the model, was calculated; the voxel accuracy reflects the true positive rate of voxels correctly classified as airway.
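All three metrics, together with the confusion-matrix counts, can be computed directly from boolean masks; a small NumPy sketch:

```python
import numpy as np

def segmentation_metrics(pred, gold):
    """DSC, IoU and voxel accuracy for boolean airway masks."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    tp = np.logical_and(pred, gold).sum()     # airway predicted as airway
    fp = np.logical_and(pred, ~gold).sum()    # background predicted as airway
    fn = np.logical_and(~pred, gold).sum()    # airway predicted as background
    tn = np.logical_and(~pred, ~gold).sum()   # background correctly predicted
    dsc = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return dsc, iou, acc

# Toy 5-voxel example: 2 TP, 1 FP, 1 FN, 1 TN.
pred = np.array([1, 1, 0, 0, 1])
gold = np.array([1, 1, 1, 0, 0])
dsc, iou, acc = segmentation_metrics(pred, gold)
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why both tend to move together in the results.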

Another method that gives insight into the performance of the neural network is the confusion matrix, which reports correctly and incorrectly predicted voxel labels (Table 1).

TABLE 1. Voxel-level confusion matrix for the test set of 46 CBCT images

Predicted \ Truth | Airway | Background
Airway | (TP) 0.925 | (FP) 0.075
Background | (FN) 0.001 | (TN) 0.999

Note: TP (true positive) is the number of voxels correctly classified as airway, TN (true negative) is the number of voxels correctly classified as background, FP (false positive) is the number of voxels wrongly classified as airway and FN (false negative) is the number of voxels wrongly classified as background.

3 RESULTS

The human observer found the average pharyngeal airway volume to be 18.08 cm3 (SD, 0.52) and the artificial intelligence 17.32 cm3 (SD, 0.50) (Table 2). The ICC between the researcher's repeated measurements was 0.986 (range, 0.978-0.988), and the ICC between the researcher's and the AI's measurements was 0.985 (range, 0.981-0.989). Almost all measurements were found to be highly reproducible.

TABLE 2. Segmented airway volume analysis

Volume | Mean | SD | Minimum | Maximum
Human | 18.08 cm3 | 0.52 | 9.73 cm3 | 34.57 cm3
AI | 17.32 cm3 | 0.50 | 8.80 cm3 | 36.25 cm3

The calculated Dice ratio across all slices of all CBCT images was 0.919, and the mean accuracy was 0.961, indicating excellent accuracy. The calculated weighted IoU was 0.993. The proposed algorithm thus achieved accurate automatic segmentation of the pharyngeal airway on CBCT (Figure 4).
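The volumes reported above follow from the voxel counts and the 0.3 mm isotropic voxel size stated in the Methods; the conversion can be sketched as:

```python
import numpy as np

VOXEL_MM = 0.3  # isotropic voxel edge length from the acquisition protocol

def airway_volume_cm3(mask):
    """Volume of a boolean segmentation mask in cm^3 (1 cm^3 = 1000 mm^3)."""
    return mask.sum() * VOXEL_MM ** 3 / 1000.0

# Example: a 10 x 10 x 10 block of airway voxels occupies 0.027 cm^3.
demo_volume = airway_volume_cm3(np.ones((10, 10, 10), dtype=bool))

# The mean human-observer volume of 18.08 cm^3 corresponds to roughly
# 670,000 airway voxels at this resolution.
n_voxels = round(18.08 * 1000 / VOXEL_MM ** 3)
```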

FIGURE 4 Views of the 3D segmentation of the pharyngeal airway from different angles. A, Segmentation by the human observer. B, Segmentation by the AI model

4 DISCUSSION

This study presented a method for automatically segmenting airway volumes on CBCT images. The method allowed accurate detection and segmentation in CBCT images with high detection accuracy. Automated segmentation correlated well with manual segmentations and showed reliable segmentation results in different patients.

In the literature, various 3D organ segmentation studies have made use of CNNs for segmentation and detection. Similar to the present research, Huff et al25 used a CNN based on the U-Net architecture to achieve ventricular volume segmentation. They trained a U-Net model for each ventricle type using a total of 300 head CT scans. The Dice scores were 0.92 for the left lateral ventricle, 0.92 for the right lateral ventricle and 0.79 for the third ventricle.

Zhou26 implemented semantic segmentation using a CNN that classifies each pixel according to an anatomical label in a two-step process to create anatomical volumes. The first step provided a coarse recognition of the anatomical tag, and the second a fine segmentation over the coarse recognition zone. The IoU scores of their study were 99% and 88% for the left kidney and 84% and 65% for the pancreas.

Guo et al27 applied a fully convolutional network (FCN) to segment the liver. Similar to U-Net, the FCN is used for semantic segmentation; their method integrates fully convolutional network predictions into active contour models. They used 73 liver CT scans to train the models and evaluate the results. The Dice score for liver volume was 95.8%.

In a study in which Blanc-Durand et al6 segmented 3D brain tumours, the images were resized as part of preprocessing. Although resizing improves training performance, it causes information loss. In the present study, the CBCT images were divided into smaller patches rather than resized. After segmentation was applied to the patches and the volume calculated, the original CBCT size was restored by reassembling the patches, so all of the information was preserved.

In orthodontic treatment, the dental arches, the bone structures of the jaw and face, the muscles, joints, sutures, tongue, hyoid bone and respiratory tract are the most affected structures. A nasal-breathing individual may resort to mouth breathing because of a blockage in the nasal and pharyngeal airway, and mouth breathing during growth may alter craniofacial morphology.28 For this reason, orthodontic treatment may be decisive for these patients. Likewise, a disorder of the upper respiratory tract may be involved in the aetiology of orthodontic problems.28 A study showed that pharyngeal airway dimensions were significantly greater in nasal breathers than in mouth breathers.29 In growing individuals, airway volume increases after adenotonsillectomy for hypertrophic adenoids and tonsils causing naso- and/or oropharyngeal obstruction.30

The relationship between functional treatment or orthognathic surgery and upper airway dimensions has been studied; in this regard, orthopaedic treatment in skeletal Class II patterns showed a significant relation in cases of a retrognathic mandible. A posteriorly positioned mandible forces the tongue musculature into the most posterior possible position, leading to a decrease in upper airway dimensions.18 A recent study has also suggested a relationship between the skeletal facial pattern and the dimensions of the upper airway: for each one-unit increase in the ANB angle, the upper airway dimension decreased by 0.261 units.31

Surgical intervention causes changes in airway volume depending on the type of surgery planned. Relapse is a major concern after any orthognathic surgery. Quantifying the changes in airway volume plays a key role in treatment planning, as the soft tissue applies continuous force owing to its inherent elasticity, which may affect the stability of the changes.32

Analyses of airway volumes are required to determine oral and pharyngeal adaptations to changing respiratory conditions and to evaluate the airway before and after functional orthopaedic treatment and orthognathic surgery. Automated segmentation significantly reduced airway segmentation time and approached clinical practice requirements by eliminating the need for manual intervention, which is laborious, time-consuming and inconsistent from practitioner to practitioner.

The pharyngeal airway volume in 60 patients with Class I and III malocclusions has been evaluated using the same airway boundaries established for this study; the volume averages were 27.87 cm3 and 32.58 cm3 in Class I and Class III patients, respectively.24 In addition, in a study that evaluated the relationship between pharyngeal airway volume and shape and facial morphology from CBCT data, the average airway segment volume was 20.3 cm3.15 In another study with the same airway borders, the pharyngeal airway volume of 300 patients was examined and the mean volume was 20.59 cm3,33 which is 2.51 cm3 more than the mean volume in our present study. The difference may result from the different programmes used for the measurements and from the human factor during the measurement process.

No further volume comparison was carried out, since a review of the literature showed that no other studies have focused on airway boundaries similar to those determined in this study.

There are several limitations to this study. The CBCT data set was obtained using a single CBCT machine with the same image acquisition protocol. Since multiple CT scanners and image acquisition protocols exist, data from different machines and protocols may be required to increase the accuracy of the developed artificial intelligence model. A recent study suggested that, to generalize results, randomized controlled studies should be performed with more data and a multi-centre data pool instead of data obtained from a single-centre (university hospital) provider.34

Another limitation is that the CBCT images were processed in 2D because of insufficient GPU capacity. In future studies, the CBCT volumes could be fed directly to the network without splitting the images into pieces; a 3D convolutional network shows better segmentation accuracy than a 2D convolutional network.26

Like other threshold-based image segmentation software, ITK-SNAP's semi-automatic segmentation is easy to use and widely adopted; however, the threshold value is relative, as it is ascertained subjectively based on image clarity, and may therefore vary.

Since the soft tissue contrast of CBCT is low, a threshold value precisely separating the voxels at air and soft tissue boundaries could not be determined during the segmentation process. This affects the measured airway volume and is one of the limitations of CBCT.

5 CONCLUSION

The current study automatically performed pharyngeal airway segmentation from CBCT images with a high similarity rate. The measurement correlation between the AI and the observer was as high as the intra-observer correlation; thus, AI models based on deep learning techniques can be used for fast, easy and error-free segmentation of pharyngeal airway volume from CBCT images in clinical applications.

ACKNOWLEDGEMENTS

The authors did not receive support from any organization for the submitted work.

CONFLICT OF INTEREST

The authors have no conflicts of interest to declare that are relevant to the content of this article.

AUTHORS' CONTRIBUTIONS

Çağla Sin: data analysis, manuscript writing. Nurullah Akkaya: software development. Seçil Aksoy: data collection. Kaan Orhan: project development. Ulaş Öz: project development.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.

REFERENCES

1. Chartrand G, Cheng PM, Vorontsov E, et al. Deep learning: a primer for radiologists. Radiographics. 2017;37(7):2113-2131.
2. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444.
3. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics. 2017;37(2):505-515. https://doi.org/10.1148/rg.2017160130
4. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84-90.
5. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2015:234-241.
6. Blanc-Durand P, Van Der Gucht A, Schaefer N, Itti E, Prior JO. Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study. PLoS One. 2018;13(4):e0195798.
7. Wang H, Zhou Z, Li Y, et al. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res. 2017;7(1):11.
8. Johari M, Esmaeili F, Andalib A, Garjani S, Saberkari H. Detection of vertical root fractures in intact and endodontically treated premolar teeth by designing a probabilistic neural network: an ex vivo study. Dentomaxillofac Radiol. 2017;46(2):20160107.
9. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J. 2020;53(5):680-689.
10. Poedjiastoeti W, Suebnukarn S. Application of convolutional neural network in the diagnosis of jaw tumors. Healthc Inform Res. 2018;24(3):236-241.
11. Krois J, Ekert T, Meinhold L, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep. 2019;9:8495.
12. Jung SK, Kim TW. New approach for the diagnosis of extractions with neural network machine learning. Am J Orthod Dentofacial Orthop. 2016;149(1):127-133.
13. Patcas R, Timofte R, Volokitin A, et al. Facial attractiveness of cleft patients: a direct comparison between artificial-intelligence-based scoring and conventional rater groups. Eur J Orthod. 2019;41(4):428-433.
14. Amasya H, Yildirim D, Aydogan T, Kemaloglu N, Orhan K. Cervical vertebral maturation assessment on lateral cephalometric radiographs using artificial intelligence: comparison of machine learning classifier models. Dentomaxillofac Radiol. 2020;49(5):20190441.
15. Grauer D, Cevidanes LS, Styner MA, Ackerman JL, Proffit WR. Pharyngeal airway volume and shape from cone-beam computed tomography: relationship to facial morphology. Am J Orthod Dentofacial Orthop. 2009;136(6):805-814.
16. Claudino LV, Mattos CT, Ruellas ACO, Sant'Anna EF. Pharyngeal airway characterization in adolescents related to facial skeletal pattern: a preliminary study. Am J Orthod Dentofacial Orthop. 2013;143(6):799-809.
17. Linder-Aronson S, Leighton BC. A longitudinal study of the development of the posterior nasopharyngeal wall between 3 and 16 years of age. Eur J Orthod. 1983;5(1):47-58.
18. Isidor S, Di Carlo G, Cornelis MA, Isidor F, Cattaneo PM. Three-dimensional evaluation of changes in upper airway volume in growing skeletal Class II patients following mandibular advancement treatment with functional orthopedic appliances. Angle Orthod. 2018;88(5):552-559.
19. Kilinç AS, Arslan SG, Kama JD, Ozer T, Dari O. Effects on the sagittal pharyngeal dimensions of protraction and rapid palatal expansion in Class III malocclusion subjects. Eur J Orthod. 2008;30(1):61-66.
20. Irani SK, Oliver DR, Movahed R, Kim YI, Thiesen G, Kim KB. Pharyngeal airway evaluation after isolated mandibular setback surgery using cone-beam computed tomography. Am J Orthod Dentofacial Orthop. 2018;153(1):46-53.
21. Aboudara C, Nielsen I, Huang JC, Maki K, Miller AJ, Hatcher D. Comparison of airway space with conventional lateral head films and 3-dimensional reconstruction from cone-beam computed tomography. Am J Orthod Dentofacial Orthop. 2009;135(4):468-479.
22. Weissheimer A, Menezes LM, Sameshima GT, Enciso R, Pham J, Grauer D. Imaging software accuracy for 3-dimensional analysis of the upper airway. Am J Orthod Dentofacial Orthop. 2012;142(6):801-813.
23. Yushkevich PA, Piven J, Hazlett HC, et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. NeuroImage. 2006;31(3):1116-1128.
24. Hong JS, Oh KM, Kim BR, Kim YJ, Park YH. Three-dimensional analysis of pharyngeal airway volume in adults with anterior position of the mandible. Am J Orthod Dentofacial Orthop. 2011;140(4):e161-e169.
25. Huff TJ, Ludwig PE, Salazar D, Cramer JA. Fully automated intracranial ventricle segmentation on CT with 2D regional convolutional neural network to estimate ventricular volume. Int J Comput Assist Radiol Surg. 2019;14(11):1923-1932.
26. Zhou X. Automatic segmentation of multiple organs on 3D CT images by using deep learning approaches. Adv Exp Med Biol. 2020;1213:135-147.
27. Guo X, Schwartz LH, Zhao B. Automatic liver segmentation by integrating fully convolutional networks into active contour models. Med Phys. 2019;46(10):4455-4469.
28. Cuccia AM, Lotti M, Caradonna D. Oral breathing and head posture. Angle Orthod. 2008;78(1):77-82.
29. Alves M Jr, Baratieri C, Nojima LI, Nojima MC, Ruellas AC. Three-dimensional assessment of pharyngeal airway in nasal- and mouth-breathing children. Int J Pediatr Otorhinolaryngol. 2011;75(9):1195-1199.
30. de Magalhães P, Bertoz A, Souki BQ, et al. Three-dimensional airway changes after adenotonsillectomy in children with obstructive apnea: do expectations meet reality? Am J Orthod Dentofacial Orthop. 2019;155(6):791-800.
31. Shokri A, Miresmaeili A, Ahmadi A, Amini P, Falah-Kooshki S. Comparison of pharyngeal airway volume in different skeletal facial patterns using cone beam computed tomography. J Clin Exp Dent. 2018;10(10):e1017-e1028.
32. Park SB, Kim YI, Son WS, Hwang DS, Cho BH. Cone-beam computed tomography evaluation of short- and long-term airway change and stability after orthognathic surgery in patients with Class III skeletal deformities: bimaxillary surgery and mandibular setback surgery. Int J Oral Maxillofac Surg. 2012;41(1):87-93.
33. Aksoy S. Three-dimensional evaluation of paranasal sinuses and anatomic variations with upper airway anatomy using CBCT [dissertation]. Lefkoşa: Institute of Health Sciences, Near East University; 2013.
34. Allareddy V, Rengasamy Venugopalan S, Nalliah RP, Caplin JL, Lee MK, Allareddy V. Orthodontics in the era of big data analytics. Orthod Craniofac Res. 2019;22(Suppl 1):8-13.
