The Medline (via the PubMed search engine) and Scopus databases were comprehensively searched following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [10] to ensure a rigorous approach (Fig. 1). The PRISMA checklist, available in the Supplementary materials (Supplementary Table 1), was used to guide the systematic review process. Articles published up to May 1, 2023 were collected. The literature search was performed independently by two reviewers using a combination of the following keywords: “lingual nerve injury”, “lingual sensory impairment”, “lingual nerve damage”, “bilateral sagittal split osteotomy”, “BSSO”, “ramus osteotomy”, “mandibular osteotomies”, “prevalence”, “incidence”, “rate”. In addition to the primary search, the reference lists of the identified studies were screened for any additional articles that may have been overlooked. The collected studies were organized and stored using the Zotero reference management software (version 6.0.18) [11], and duplicate references were removed to ensure the integrity of the dataset. Following the initial search, two independent investigators examined the remaining articles. The study selection process consisted of two stages: first, titles and abstracts were reviewed and articles that did not meet the predetermined inclusion criteria were excluded; second, the full texts of the remaining articles were obtained and evaluated comprehensively. Disagreements in study selection were resolved through discussion until consensus was reached on whether a particular study met the predetermined inclusion criteria.
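For illustration, the keywords listed above could be combined into a Boolean query of the following form; this structure is shown only as an example, and the exact search strings entered in each database are not reproduced here.

```
("lingual nerve injury" OR "lingual sensory impairment" OR "lingual nerve damage")
AND ("bilateral sagittal split osteotomy" OR "BSSO" OR "ramus osteotomy" OR "mandibular osteotomies")
AND ("prevalence" OR "incidence" OR "rate")
```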
Fig. 1 Flow chart of the systematic search, depicting the identification and selection of the relevant studies
Criteria for study selection and data extraction
In our selection process, we focused on observational studies (cross-sectional, cohort) specifically examining the prevalence of lingual nerve injury following BSSO procedures. No restrictions were imposed on publication dates. Case reports, case series with fewer than five participants, review articles, randomized clinical trials, animal studies, letters to the editor, books, expert opinions, conference abstracts, studies with no full text available, studies not written in English, studies regarding other mandibular osteotomies [12], articles reporting the prevalence of lingual nerve injury per operation site [13] and articles containing data derived from surveillance databases were excluded. For articles with overlapping populations, the most recent or most complete publication was considered eligible. The following variables were obtained from each study: first author’s name, year of publication, study design, continent of origin, study period, total number of patients, proportion of males, mean age, number of patients with postoperative lingual nerve injuries, and diagnostic procedure performed.
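As an illustration of the extraction step, the variables listed above could be organized into one record per study as in the sketch below; the column names are assumptions for demonstration only and do not reproduce the actual extraction sheet.

```r
# Illustrative structure of the per-study extraction record (column names assumed)
extraction <- data.frame(
  first_author      = character(),  # first author's name
  year              = integer(),    # year of publication
  design            = character(),  # study design (cross-sectional, cohort)
  continent         = character(),  # continent of origin
  study_period      = character(),  # study period
  total_patients    = integer(),    # total number of patients
  males_proportion  = numeric(),    # proportion of males
  mean_age          = numeric(),    # mean age
  ln_injuries       = integer(),    # patients with postoperative lingual nerve injuries
  diagnostic_method = character()   # diagnostic procedure performed
)
```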
Quality assessment
To evaluate the quality of the included studies, two investigators independently assessed them using the National Heart, Lung, and Blood Institute (NHLBI) Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. The evaluation entailed a thorough examination of each study to identify any methodological or implementation weaknesses that could compromise internal validity. During the assessment, the investigators considered fourteen specific questions to gauge the quality of each study, with response options of “yes”, “no”, “cannot determine” (e.g., when the data presented uncertainties or contradictions), “not reported” (e.g., when data were not reported or were incomplete), or “not applicable” (e.g., when a question did not pertain to the specific type of study under evaluation). Based on these questions, the investigators categorized the risk of bias of each study as “low”, “moderate” or “high”, enabling an overall assessment of study quality [14]. Through this rigorous quality appraisal, we aimed to ensure that only studies demonstrating a moderate or high level of internal validity were included in the analysis.
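By way of illustration only, the rating step could be tallied as in the sketch below. The NHLBI tool does not prescribe numeric cut-offs, so the thresholds used here are assumptions for demonstration and not the criteria applied by the reviewers.

```r
# Illustrative tally of the fourteen NHLBI items into an overall rating.
# The cut-offs below are assumed; the tool leaves the overall judgement of
# "low", "moderate" or "high" risk of bias to the reviewers.
rate_study <- function(responses) {
  # responses: character vector of length 14 with values
  # "yes", "no", "cannot determine", "not reported", "not applicable"
  applicable <- responses[responses != "not applicable"]
  share_yes  <- sum(applicable == "yes") / length(applicable)
  if (share_yes >= 0.75) "low" else if (share_yes >= 0.50) "moderate" else "high"
}

# Example: a study answering "yes" to 11 of 14 applicable items
rate_study(c(rep("yes", 11), rep("no", 2), "not reported"))  # returns "low"
```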
Statistical analysis
Statistical analysis was carried out using RStudio software (version 2022.12.0+353; RStudio Team, 2022) [15]. The meta-analysis was conducted using the metafor package [16]. The DerSimonian and Laird random-effects model was used to estimate the pooled prevalence and its respective 95% confidence interval (CI); a random-effects model assumes that each study estimates a different underlying true effect. The Freeman-Tukey double arcsine transformation was applied [17]. Between-study heterogeneity was evaluated through visual inspection of the forest plot and by using Cochran’s Q statistic and its respective p value. The Higgins I² statistic and its respective 95% CI were used to quantify the magnitude of true heterogeneity in effect sizes; I² values of 0–40%, 30–60%, 50–90% and 75–100% indicated not important, moderate, substantial and considerable heterogeneity, respectively [18]. To determine whether potential outlying effect sizes were also influential, screening for externally studentized residuals with absolute z-values larger than two and leave-one-out diagnostics were performed [19]. Owing to the paucity of data on categorical and continuous variables, such as the proportion of males, mean age and duration of surgery, subgroup and meta-regression analyses were not performed [20]. Unless otherwise stipulated, statistical significance was set at p = 0.05 (two-tailed). Tests to evaluate publication bias, such as Egger’s test [21], Begg’s test [22] and funnel plots, were developed in the context of comparative data; they assume that studies with positive results are published more frequently than studies with negative results, whereas in a meta-analysis of proportions there is no clear definition of, or consensus on, what constitutes a positive result [23]. Therefore, publication bias in the current meta-analysis was assessed qualitatively.
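To illustrate the pooling workflow described above, the sketch below shows how such an analysis could be set up in R with the metafor package. The data frame, column names and counts are assumptions for demonstration only and do not reproduce the actual extracted dataset.

```r
# Minimal sketch of the pooling workflow described above (metafor package).
# 'events' (patients with lingual nerve injury) and 'total' (patients per study)
# are placeholder column names with hypothetical values.
library(metafor)

dat <- data.frame(
  study  = c("Study A", "Study B", "Study C"),  # hypothetical studies
  events = c(2, 5, 1),                          # hypothetical injury counts
  total  = c(120, 340, 95)                      # hypothetical sample sizes
)

# Freeman-Tukey double arcsine transformation of the raw proportions
dat <- escalc(measure = "PFT", xi = events, ni = total, data = dat)

# DerSimonian and Laird random-effects model
res <- rma(yi, vi, method = "DL", data = dat)

# Pooled prevalence back-transformed to the proportion scale
# (harmonic mean of the sample sizes used for the inverse transformation)
predict(res, transf = transf.ipft.hm, targs = list(ni = dat$total))

# Heterogeneity: Cochran's Q and its p value are part of the model output;
# confint() adds 95% CIs for I^2 and tau^2
res
confint(res)

# Outlier and influence diagnostics
rstudent(res)   # externally studentized residuals (|z| > 2 flags potential outliers)
leave1out(res)  # leave-one-out re-estimation of the pooled effect

# Forest plot for visual inspection of heterogeneity
forest(res, transf = transf.ipft.hm, targs = list(ni = dat$total))
```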