Measurement Error

Pediatric critical care involves monitoring many measured variables. Although we often act on changes in physiology (e.g., a change in arterial blood pressure [ABP]) or in biochemistry (e.g., serum sodium concentration [Na+]s), it is often difficult to determine whether a given change is clinically important or simply reflects measurement error. Here, we consider how a measurement is made and its potential sources of inherent error. Measurement error is the difference between a measured value and the true, or actual, value of that parameter. In clinical research, such errors are often cited as one explanation for the lack of reproducibility of key findings (1,2).

Measurement error comprises bias and noise (Fig. 1). Bias is the average difference between a measured value and the actual value. Noise is the variation in serial measurements (3). There are four important features of bias and noise. First, noise can be reduced by averaging repeated measurements, but bias cannot. Second, an investigator or clinician can use the minimally important change (MIC) to define the change in a measured analyte that is unlikely to be due to noise alone. Third, noise may vary with additional covariates, and the phenomenon of heteroscedasticity should be considered. For example, in ultrasound measurements, the variance of a structure's dimensions often differs across a population in relation to body surface area (BSA). Finally, the degree of bias may itself vary with the degree of pathology; such differential bias affects how any given measurement is interpreted.
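To make the first feature concrete, the short simulation below (a hypothetical sketch with invented bias and noise values, not data from any cited study) generates repeated measurements of a known "true" value and shows that averaging shrinks the noise while leaving the bias untouched.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 65.0   # hypothetical true MAP in mm Hg
BIAS = -3.0         # assumed fixed systematic offset of the device
NOISE_SD = 6.0      # assumed random measurement noise (sd, mm Hg)

def measure():
    """One simulated reading: true value + fixed bias + random noise."""
    return TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD)

# Error of single readings vs. error of the mean of 10 readings
single = [measure() for _ in range(10_000)]
averaged = [statistics.mean(measure() for _ in range(10)) for _ in range(1_000)]

print(f"mean error, single readings  : {statistics.mean(single) - TRUE_VALUE:+.2f} mm Hg")
print(f"spread (sd), single readings : {statistics.stdev(single):.2f} mm Hg")
print(f"mean error, 10-reading means : {statistics.mean(averaged) - TRUE_VALUE:+.2f} mm Hg")
print(f"spread (sd), 10-reading means: {statistics.stdev(averaged):.2f} mm Hg")
# The spread falls by roughly sqrt(10); the ~-3 mm Hg bias does not change.
```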

Figure 1. Illustrations adapted from reference (3). Part I: Bias and noise. Part II: Differences in bias and noise across datasets: high noise but low bias (A); low noise and high bias (B); high bias and high noise (C); and low bias and low noise (D).

In this “Editorial Notes, Methods, and Statistics” article for Pediatric Critical Care Medicine we illustrate sources of measurement error in three commonly used clinical modalities in the PICU—oscillometric BP, laboratory tests, and point-of-care ultrasound (POCUS)—and present strategies for decreasing such error.

OSCILLOMETRIC BLOOD PRESSURE

BP can be measured using noninvasive or invasive techniques. The most common modality, noninvasive automatic BP (NIBP), uses oscillometry to estimate the systolic BP (SBP), mean arterial pressure (MAP), and diastolic BP (DBP). An oscillometry-based BP device includes a motor to insufflate the cuff with air and a pressure sensor. The pressure sensor detects pressure changes related to pulse waveforms traveling through the cuff. The cuff is insufflated until the waveforms disappear. The cuff pressure is then slowly decreased as the pressure fluctuations associated with each pulse waveform are measured (Fig. 2).

Figure 2. Illustrations of oscillometry (adapted from reference [4]). The cuff pressure (CP) is increased until pulse waveforms are lost on the pressure sensor, and then slowly decreased. A, The oscillometric pressure (OP), that is, the pressure change in the cuff related to pulse waveform transmission, can be plotted. B, Using the maximum amplitude algorithm or the derivative algorithm, the change in OP at a given CP can then be used to find the mean arterial pressure (MAP), systolic blood pressure (SBP), and diastolic blood pressure (DBP).

The algorithm used in each NIBP device is proprietary, and a number have been described (4–6). One of the most common algorithms, the maximum amplitude algorithm, takes MAP to be the cuff pressure at which the oscillometric envelope reaches its maximum amplitude. The SBP and DBP are then derived as the cuff pressures at which the envelope falls to a fixed proportion of that maximum, with reported ratios ranging from 0.45 to 0.73 for SBP and 0.69 to 0.83 for DBP. Although MAP is accurately determined with this method (5), the algorithm does not account for changes in pulse pressure with different severities or types of illness (e.g., narrowing of the pulse pressure before the MAP falls in the early stages of hypovolemia). An alternative approach is derivative oscillometry, in which the maximum and minimum derivative of the oscillometric envelope are used to estimate the DBP and SBP, respectively (6).
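Because each vendor's algorithm is proprietary, the sketch below is only a schematic illustration of the two published approaches described above, using a made-up oscillometric envelope and illustrative SBP/DBP ratios of 0.55 and 0.75; it is not any manufacturer's actual implementation.

```python
import numpy as np

# Hypothetical oscillometric envelope: amplitude of cuff-pressure oscillations
# recorded at each cuff pressure during deflation (invented data, peak near 78 mm Hg).
cuff_pressure = np.linspace(160, 40, 121)                 # mm Hg, deflating
envelope = np.exp(-((cuff_pressure - 78.0) / 22.0) ** 2)

# Maximum amplitude algorithm: MAP is the cuff pressure at the envelope peak;
# SBP and DBP are the pressures where the envelope is a fixed fraction of the
# peak (illustrative ratios; published ranges are ~0.45-0.73 and ~0.69-0.83).
SBP_RATIO, DBP_RATIO = 0.55, 0.75
peak = envelope.argmax()
map_est = cuff_pressure[peak]
sbp_est = cuff_pressure[:peak][np.argmin(np.abs(envelope[:peak] - SBP_RATIO * envelope[peak]))]
dbp_est = cuff_pressure[peak:][np.argmin(np.abs(envelope[peak:] - DBP_RATIO * envelope[peak]))]

# Derivative oscillometry: SBP and DBP are estimated from the extrema of the
# envelope's slope with respect to cuff pressure.
slope = np.gradient(envelope, cuff_pressure)
sbp_deriv = cuff_pressure[np.argmin(slope)]   # steepest change on the high-pressure side
dbp_deriv = cuff_pressure[np.argmax(slope)]   # steepest change on the low-pressure side

print(f"Maximum amplitude: SBP {sbp_est:.0f}, MAP {map_est:.0f}, DBP {dbp_est:.0f} mm Hg")
print(f"Derivative method: SBP {sbp_deriv:.0f}, DBP {dbp_deriv:.0f} mm Hg")
```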

Bias

Several studies have illustrated the bias in NIBP measurements compared with invasive ABP measurements. In a prospective observational study of 40 PICU patients, Philips NIBP monitors had good agreement with ABP during normotension; the MAP and DBP were only 2.6 mm Hg and 5 mm Hg lower, respectively, by NIBP than by ABP (7). For reference, the U.S. Association for the Advancement of Medical Instrumentation defines an acceptable error between the oscillometric and reference BP measurement as less than 10 mm Hg. The difference was more pronounced, however, during hypertension and hypotension. During hypertension, NIBP underestimated the SBP, DBP, and MAP by 10, 7, and 8 mm Hg, respectively. During hypotension, NIBP overestimated the SBP and MAP by 13.6 and 5 mm Hg, respectively, without any significant difference in the DBP. This phenomenon, called "differential bias," refers to the dependence of bias on certain characteristics, such as the degree of hypertension or hypotension. Similar evidence of bias has been described with Dräger (8) and BioZ (9) oscillometric devices.

Noise

The noise of individual NIBP measurements may be more significant than bias. In a study of 30 adult patients with simultaneous NIBP (from either Hewlett Packard M300A or M1008A devices) and ABP measurements, the bias between the NIBP and ABP devices was relatively small (–2.5 and –5.3 mm Hg for the M300A and M1008A, respectively), but the error of individual measurements was relatively high, with more than 30% of readings differing from the ABP value by more than 10 mm Hg (10). Such an effect could lead clinicians to overestimate or underestimate the degree of pathology in their patients. This variability calls into question the validity of individual NIBP measurements; as described above, however, such noise decreases when averaged across repeated measurements. Understanding a device's propensity for such error should prompt the clinician to assess change using repeated, rather than single, measurements, particularly when the device's imprecision is high.
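As a rough illustration of why repeated readings help, the sketch below uses hypothetical numbers (not data from reference 10): a small fixed bias and Gaussian noise sized so that roughly 30% of single readings miss the true value by more than 10 mm Hg, then estimates how often the mean of several readings does.

```python
import math

BIAS = -2.5        # mm Hg, hypothetical systematic offset
NOISE_SD = 9.5     # mm Hg, hypothetical sd of a single reading

def normal_cdf(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_discrepancy_exceeds(threshold: float, n_readings: int) -> float:
    """P(|mean of n readings - true value| > threshold), Gaussian noise plus fixed bias."""
    sd_of_mean = NOISE_SD / math.sqrt(n_readings)
    upper = 1.0 - normal_cdf((threshold - BIAS) / sd_of_mean)
    lower = normal_cdf((-threshold - BIAS) / sd_of_mean)
    return upper + lower

for n in (1, 3, 5):
    print(f"P(discrepancy > 10 mm Hg) with {n} reading(s): {p_discrepancy_exceeds(10.0, n):.0%}")
# Averaging shrinks the noise term by 1/sqrt(n); the fixed -2.5 mm Hg bias remains.
```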

CLINICAL SCALES

The concept of the MIC allows clinicians and researchers to determine the threshold at which a change in a measured variable is likely to be clinically relevant. The MIC is the minimal change in a specific outcome metric, such as pain level, that the clinician or patient considers clinically important, whether because of pathology or a therapeutic response. Both distribution-based methods (which rely on the statistical characteristics of the measured variable in a population) and anchor-based methods (which rely on an external criterion to determine whether a change is important) exist for calculating the MIC (11). One distribution-based method incorporates the se of measurement (sem) of the patient-reported outcome compared with control patients, given specified probabilities of type 1 and type 2 error (12). For example, in a study of 261 adults who underwent hip or knee replacement for osteoarthritis, patients who felt better 6 months after surgery reported a 6.2-point decrease in pain on a 20-point Likert scale, compared with a 2.9-point decrease in patients who felt no change after surgery. The MIC for pain in this population is therefore 6.2 minus 2.9, or 3.3 points (12).

Noise

To be distinguishable from measurement error, the MIC must be larger than the noise in the measurement, which can be quantified as the sem. The sem describes the variability in repeated measurements within a population for a specific measurement technique. Terwee et al (12) showed that a measured change must be at least four times the sem to achieve a type 1 error rate of less than 0.05 and a type 2 error rate of less than 0.2 (Fig. 3). In the osteoarthritis study, the sem for pain was 2.1, meaning a change in the pain score of 8.4 (i.e., 2.1 × 4) was required for statistical significance under the study's parameters, a change more than twice the MIC. That is, a change must exceed both the MIC and four times the sem to be both clinically and statistically relevant.
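Using the osteoarthritis numbers quoted above, a minimal sketch of this comparison between the anchor-based MIC and the noise-based threshold might look as follows (the factor of 4 is the rule of thumb from Terwee et al [12]).

```python
# Anchor-based MIC: mean change in "improved" patients minus mean change in
# "unchanged" patients (values from the osteoarthritis example above).
change_improved = 6.2
change_unchanged = 2.9
mic = change_improved - change_unchanged      # 3.3 points

# Distribution-based threshold: change must exceed ~4 x sem to keep
# type 1 error < 0.05 and type 2 error < 0.2 (12).
sem = 2.1
detectable_change = 4 * sem                   # 8.4 points

print(f"MIC (clinically important change): {mic:.1f} points")
print(f"Change needed to exceed noise    : {detectable_change:.1f} points")
print("A change is convincing only if it exceeds "
      f"{max(mic, detectable_change):.1f} points (the larger of the two).")
```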

Figure 3. Relationship between measurement error and minimally important change (MIC) (adapted from reference [12]). sem = se of measurement.

LABORATORY ANALYTES

Much as the MIC has been used for patient-reported outcomes, in laboratory testing the concept of the "reference change value" (RCV) has been used to define the range within which changes in a measured analyte are likely to be due to nonpathologic variation. Such noise arises primarily from two sources: the imprecision of the laboratory device (i.e., the coefficient of variation in the analytical procedure, CVA) and the natural, nonpathologic variation of an analyte, such as serum creatinine or hematocrit, within the individual (i.e., the coefficient of variation within the individual, CVI). Clinicians must consider both factors when determining whether a change in a laboratory biomarker is clinically significant; the RCV incorporates both sources of noise (13).

The RCV95% is defined as the difference in serial results that must be exceeded for the change to be clinically significant with 95% confidence. The calculation is shown in Table 1 (Equation 1). Table 2 lists the RCV95% for common laboratory tests and the change required to be considered clinically significant (14–16). For example, a change in [Na+]s from 130 to 133 mmol/L is likely related to random fluctuation, whereas a change from 130 to 135 mmol/L is more likely to represent a clinically significant change. Interestingly, for many of these tests, the change required for clinical significance is smaller than the width of the analyte's reference range, meaning that serial results may each be classified as normal by the laboratory while still reflecting a clinically significant change.

TABLE 1. Equations

Equation 1: Reference change value

RCV (%) = √2 × Z × √(CVA² + CVI²)

Z is the z score for the desired level of significance (1.96 for p < 0.05), CVA is the imprecision of the laboratory device (coefficient of variation in the analytical procedure), and CVI is the natural, nonpathologic variation of the analyte within the individual (coefficient of variation within the individual).

Equation 2: International normalized ratio

INR = [PT(patient) / PT(control)]^ISI

PT is the prothrombin time, and the ISI is a scaling factor specific to the laboratory's reagents and instruments.

CVA = coefficient of variation in the analytical procedure, CVI = coefficient of variation within the individual, ISI = International Sensitivity Index, PT = prothrombin time.


TABLE 2. Approximate Reference Change Values (14–16)

Test | CVA (%) | CVI (%) | RCV95% (%) | RCV95% in Absolute Units
Sodium | 0.9 | 0.5 | 2–5 | 4.1 mEq/L
Potassium | 1.3 | 4.2 | 11–20 | 0.5 mEq/L
Chloride | 1.2 | 1.1 | 2–5 | 4.8 mmol/L
Bicarbonate | 2.6 | 4.0 | 13.3 | 3.2 mEq/L
Blood urea nitrogen | 3.0 | 13.9 | 34.5 | 4.1 mg/dL
Creatinine | 3.1 | 4.5 | 11–20 | 0.18 mg/dL
Glucose | 2–5 | 6–10 | 11–20 | 11–20 mg/dL
WBC count | 1.1 | 15.9 | 43.9 | 3,500/mm3
Hemoglobin | 1.0 | 2.7 | 8.0 | 1.2 g/dL
Hematocrit | 0.7 | 2.8 | 8.0 | 3.7%
Platelets | 2.7 | 7.8 | 21.5 | 53,750/mm3

CVA = coefficient of variation in the analytical procedure, CVI = coefficient of variation within the individual, RCV = reference change value.
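As a worked illustration of Equation 1 using the sodium row of Table 2, the sketch below computes the RCV95% for serum sodium and applies it to the two changes discussed in the text. This is a sketch only; the CVA in particular should come from the reporting laboratory.

```python
import math

def rcv_percent(cv_analytical, cv_individual, z=1.96):
    """Reference change value (%) = sqrt(2) * Z * sqrt(CVA^2 + CVI^2)."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_individual**2)

# Serum sodium, using the approximate CVs from Table 2
cva, cvi = 0.9, 0.5           # percent
rcv = rcv_percent(cva, cvi)   # ~2.9%

baseline = 130.0              # mmol/L
for repeat in (133.0, 135.0):
    pct_change = 100 * abs(repeat - baseline) / baseline
    verdict = "exceeds" if pct_change > rcv else "is within"
    print(f"{baseline:.0f} -> {repeat:.0f} mmol/L: {pct_change:.1f}% change "
          f"{verdict} the RCV95% of {rcv:.1f}%")
```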

The RCV for serum creatinine is used in an updated definition of acute kidney injury (AKI), termed the "pediatric reference change value optimized for AKI in children" (pROCK) criteria (17). Compared with the "Pediatric Risk, Injury, Failure, Loss, End-Stage Renal Disease" (pRIFLE) and "Kidney Disease Improving Global Outcomes" (KDIGO) definitions of AKI, in which a patient can be diagnosed with AKI based on a percentage increase in serum creatinine alone, the pROCK definition requires both an increase in serum creatinine of at least 30% from baseline and an absolute increase of at least 20 µmol/L, the analyte's RCV. A recent multicenter retrospective study compared the pROCK criteria with the pRIFLE and KDIGO definitions of AKI in relation to severity of illness and outcome (18). By accounting for the inherent variability in serum creatinine measurements, AKI as defined by the pROCK criteria was associated with increased morbidity, including sepsis and the use of mechanical ventilation or continuous renal replacement therapy, as well as increased mortality.
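A minimal sketch of the pROCK decision rule as described above, requiring both a relative and an absolute rise in serum creatinine; the thresholds are those of reference (17), but the function itself is only illustrative.

```python
def aki_by_prock(baseline_umol_l: float, current_umol_l: float) -> bool:
    """pROCK: serum creatinine must rise by >= 30% AND >= 20 umol/L from baseline."""
    rise = current_umol_l - baseline_umol_l
    relative_rise = rise / baseline_umol_l
    return relative_rise >= 0.30 and rise >= 20.0

# A small absolute rise on a low baseline meets the percentage criterion alone
# (and could flag AKI under a purely percentage-based definition) but not pROCK.
print(aki_by_prock(30.0, 45.0))   # +50% but only +15 umol/L -> False
print(aki_by_prock(60.0, 85.0))   # +42% and +25 umol/L      -> True
```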

Bias

The RCV does not account for the bias of a measurement technique. Many sources of bias related to the measurement technique exist, including imperfect calibration of measurement instruments, changes in the environment, and variations in test performance between laboratory personnel. Such bias decreases with serial quality testing of laboratory equipment but fluctuates with environmental changes over time and with the personnel performing the test. For calibration error, desirable quality standards for laboratory equipment typically require a bias of less than 0.25 of the biological sd of an analyte. Given the unpredictable nature of the bias introduced by these sources, however, it is difficult to account for them in measures of clinical significance such as the RCV (19).

For measurement techniques that are more prone to laboratory error, such as when changes in the reagent type or machine calibration may affect a specific laboratory parameter, the relative change of the analyte may be more informative than the raw value itself. For example, the prothrombin time (PT)—the time for citrated plasma to form a fibrin clot in the presence of calcium, tissue factor, and phospholipid—is particularly dependent on the reagents and measurement modality used in a specific laboratory. As such, local reference standards for the PT, or calculation of the international normalized ratio (INR), allow relative changes in PT to be compared across laboratories (Table 1, Equation 2).

The control PT value used in the equation is calculated based on at least 30 plasma samples from healthy reference subjects handled identically to the patient’s sample, and the International Sensitivity Index is a scaling factor specific to the laboratory’s reagents and instruments (20).
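A small sketch of Equation 2: the same patient sample yields different raw PT values in two hypothetical laboratories with different reagents, but similar INRs once each laboratory's control PT and ISI are applied (all values invented for illustration).

```python
def inr(pt_patient_s: float, pt_control_s: float, isi: float) -> float:
    """International normalized ratio = (patient PT / control PT) ** ISI."""
    return (pt_patient_s / pt_control_s) ** isi

# Two hypothetical laboratories measuring the same patient sample
print(f"Lab A: INR = {inr(pt_patient_s=24.0, pt_control_s=12.0, isi=1.0):.1f}")
print(f"Lab B: INR = {inr(pt_patient_s=20.0, pt_control_s=13.0, isi=1.6):.1f}")
```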

POINT-OF-CARE ULTRASOUND

Noise in medical imaging is related to a device’s optical resolution, that is, the minimum distance that two imaged points can be separated and still distinguished. A higher resolution allows measurement with less noise, resulting in greater intra-rater and inter-rater reliability. Resolution in ultrasound images depends on the pixelation of the image and the physics of ultrasound image acquisition.

Noise

Noise related to image pixelation stems from two sources: inaccuracy in feature edge detection related to the pixelation process itself and human error in identifying the feature edge. Combined, these two sources of error are proportional to the pixel size (the larger the pixel, the worse the resolution) and, for a 2D image, are of the order of twice the pixel size. For a linear probe with a typical pixel size of 0.05 mm per pixel, this corresponds to a measurement error of approximately 0.1 mm (21). The magnitude of pixelation error, however, is typically smaller than that of acquisition error.

Noise from the image acquisition process depends on the axial and lateral resolution, as well as on variations in probe placement. The axial resolution (Fig. 4), or the ability to resolve two distinct points separated along the path of the ultrasound beam, is determined by the spatial pulse length, that is, the wavelength of the ultrasound beam multiplied by the number of waves per pulse. The spatial pulse length depends on the piezoelectric characteristics of the ultrasound probe—the greater the mass of the transducer, the lower its frequency of oscillation. The axial resolution is equal to half the spatial pulse length. The frequency used in POCUS is typically 2–15 MHz, and a pulse typically consists of 2–3 ultrasound waves. A specific frequency is often selected based on the distance of the imaged structure from the probe; that is, attenuation of the ultrasound beam increases with both the distance between the probe and the imaged structure and the ultrasound frequency. Therefore, lower-frequency probes are often used to image deeper structures, despite the lower spatial resolution associated with their use. Given that the average speed of ultrasound in soft tissue is about 1540 m/s (22), this range of frequencies corresponds to wavelengths of 0.1–0.8 mm. As such, the axial resolution of ultrasound ranges from approximately 0.1 mm (for a 15 MHz probe with two waves per pulse) to 1.2 mm (for a 2 MHz probe with three waves per pulse).
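The sketch below simply reproduces the axial-resolution arithmetic from this paragraph (speed of sound approximately 1540 m/s; axial resolution equal to half the spatial pulse length).

```python
SPEED_OF_SOUND_M_S = 1540.0   # average speed of ultrasound in soft tissue (22)

def axial_resolution_mm(frequency_mhz: float, waves_per_pulse: int) -> float:
    """Axial resolution = spatial pulse length / 2 = (wavelength * waves per pulse) / 2."""
    wavelength_mm = SPEED_OF_SOUND_M_S / (frequency_mhz * 1e6) * 1000.0
    return wavelength_mm * waves_per_pulse / 2.0

print(f"15 MHz, 2 waves/pulse: ~{axial_resolution_mm(15, 2):.2f} mm")
print(f" 2 MHz, 3 waves/pulse: ~{axial_resolution_mm(2, 3):.2f} mm")
```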

Figure 4. Ultrasound resolution. Axial resolution (A) is limited by the spatial pulse length of the ultrasound, whereas lateral resolution (B) is limited by the width of the ultrasound beam, which converges to its most narrow point at the near-field length before diverging.

The lateral resolution (Fig. 4), or the ability to resolve two points separated by a distance perpendicular to the beam path, is proportional to the width of the ultrasound beam, which changes along the beam's path. The beam is widest close to the transducer, where its width approximates that of the transducer. The beam then converges to its narrowest point at the "near-field length," where it is approximately half the width of the transducer, before diverging again. The near-field length is proportional to the square of the transducer size and inversely proportional to the beam's wavelength. Although transducer size varies with the machine and probe frequency, it is typically on the order of 1 mm. Lateral resolution is often poorer than axial resolution.
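As a rough sketch of the geometry described above, the near-field length is commonly approximated as D²/(4λ), where D is the transducer aperture and λ the wavelength; this formula and the example values below are illustrative rather than specific to any probe.

```python
SPEED_OF_SOUND_M_S = 1540.0

def near_field_length_mm(transducer_diameter_mm: float, frequency_mhz: float) -> float:
    """Near-field length ~ D^2 / (4 * wavelength) for a simple disc transducer."""
    wavelength_mm = SPEED_OF_SOUND_M_S / (frequency_mhz * 1e6) * 1000.0
    return transducer_diameter_mm**2 / (4.0 * wavelength_mm)

# Beam width (and hence lateral resolution) is best near the near-field length,
# where the beam narrows to roughly half the transducer width.
for d_mm, f_mhz in [(1.0, 10.0), (1.0, 2.0), (2.0, 10.0)]:
    print(f"D = {d_mm} mm, f = {f_mhz:>4} MHz: near-field length ~ "
          f"{near_field_length_mm(d_mm, f_mhz):.1f} mm, "
          f"best beam width ~ {d_mm / 2:.1f} mm")
```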

Imaging structures whose size is of the order of the ultrasound's resolution limit is likely to increase noise. One illustration is the use of POCUS to measure the optic nerve sheath diameter (ONSD) as a surrogate for intracranial pressure (ICP). The ONSD varies with the patient's age and ICP, from less than 3 mm in infants to more than 6 mm in older children with elevated ICP. Assuming a 10 MHz probe, the axial and lateral resolutions of these measurements are on the order of 0.2 mm and 0.8 mm, respectively. This limited resolution, relative to the fluctuations in ONSD produced by changes in ICP, may partially account for the modest inter-rater reliability of this test and for the variation between studies in the optimal cutoffs for detecting elevated ICP (23–25).

Bias

Although noise in ultrasound measurements may be related to the imaging process itself, bias can be introduced when conclusions are drawn from the size of an imaged structure outside of a normative population. In prenatal ultrasound estimation of gestational age, for example, the gestational age of fetuses that are small for gestational age (SGA) is often underestimated. In one study of 1,135 pregnancies, a correction factor of 1.5 weeks was necessary to account for the systematic underestimation of ultrasound-measured gestational age in SGA fetuses (26). Furthermore, assessment models that fail to consider changes in variance that occur with body size, a critical issue in pediatrics, may similarly lead to spurious conclusions. This phenomenon, called "heteroscedasticity," is particularly important in pediatric echocardiography, in which the variance in the size of many structures, such as coronary artery diameter (27), changes with BSA. In echocardiography, the determination of pathologic changes in the size of an imaged structure relies on the calculated z score, that is, the deviation from the mean size of the structure in a reference population expressed in sds of that population. If heteroscedasticity is left unaccounted for, the degree of pathology may be underestimated or overestimated in patients at the extremes of BSA (28). A few techniques exist for removing heteroscedastic behavior from linear regression models, including logarithmic transformation and linear regression of the residuals (27).
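A minimal sketch of the z-score idea under heteroscedasticity: both the expected size of a structure and its sd are modeled as functions of BSA before standardizing, so the same absolute deviation yields different z scores in small and large children. The coefficients below are invented placeholders, not published normative equations.

```python
def z_score(measured_mm: float, bsa_m2: float) -> float:
    """Z score with BSA-dependent mean and sd (illustrative coefficients only)."""
    predicted_mean_mm = 1.0 + 2.0 * bsa_m2     # hypothetical normative mean
    predicted_sd_mm = 0.15 + 0.25 * bsa_m2     # sd also grows with BSA
    return (measured_mm - predicted_mean_mm) / predicted_sd_mm

# The same +0.5 mm deviation from the predicted mean is more extreme in an
# infant (small BSA, small sd) than in an adolescent (large BSA, larger sd).
for bsa in (0.3, 1.5):
    measured = (1.0 + 2.0 * bsa) + 0.5
    print(f"BSA {bsa:.1f} m2: z = {z_score(measured, bsa):+.1f}")
```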

Furthermore, inconsistent probe placement can contribute to both bias and noise in ultrasound measurements. For example, axial measurements of quadriceps femoris thickness, a marker of nutritional status in critically ill patients, are typically performed with the probe perpendicular to the skin; fanning the probe even a few degrees from the perpendicular can lead to inaccurate measurements. As a result, nutritional studies that include these measurements rely on standardized measurement protocols (to reduce bias) and serial measurements (to reduce noise) (29). Similarly, in this issue of Pediatric Critical Care Medicine, the study by Burton et al (30) describes measurement of the laryngeal air column width difference to predict postextubation stridor; their measurements involved standardized probe placement relative to nearby structures and serial measurements by each evaluator.

With these principles in mind, ultrasound measurement error can be reduced through best practice protocols (21). Such practices include standardizing probe use and frequency ranges for specific imaging applications, placing the structure in the center of the image (thereby increasing lateral resolution), and orienting the feature to be measured parallel to the path of the ultrasound beam, thereby optimizing the effect of axial resolution.

HANDLING MEASUREMENT ERROR

An understanding of a measurement technique's propensity for noise or bias can help mitigate these sources of error in clinical studies. First, a priori knowledge of a technique's inherent noise can help contextualize the significance of results. In a study of ONSD, for example, a 0.5-mm difference between normal and elevated ICP states may be statistically significant across a population, yet not clinically useful for an individual patient, given that such a difference approaches the resolution limit of the measurement technique. Similarly, defining the MIC, in the case of patient-reported outcomes, or the RCV, in the case of a measured analyte, can help clinicians separate noise from true pathology when interpreting results. Furthermore, recognizing heteroscedasticity, when present, allows relative differences in measurements to be compared across a population.

Second, knowledge of a technique's bias, which repeated measurements will not reduce, can help avoid drawing spurious conclusions. When possible, recalibrating the device, such as during routine laboratory quality testing, can limit the development of bias. Recalibration is not always possible, however. In that case, knowledge of a device's differential bias, such as the tendency of the Philips NIBP monitor to underestimate the degree of hypertension or hypotension, can help contextualize a patient's individual measurements and may move the clinician toward earlier initiation of invasive BP monitoring.

Although individual clinical measurements may be important for drawing diagnostic conclusions, understanding the error associated with those measurements may be just as crucial for determining their significance.

ACKNOWLEDGMENTS

The authors would like to thank Dr. David Kantor for his suggestions in editing this article.

REFERENCES

1. Ritterman JB: To err is human: Can American medicine learn from past mistakes? Perm J. 2017; 21:16–181
2. Niven DJ, McCormick TJ, Straus SE, et al.: Reproducibility of clinical research in critical care: A scoping review. BMC Med. 2018; 16:26
3. Kahneman D, Sibony O, Sunstein CR: Noise: A Flaw in Human Judgment. First Edition. New York, Little, Brown Spark, 2021
4. Chandrasekhar A, Yavarimanesh M, Hahn JO, et al.: Formulas to explain popular oscillometric blood pressure estimation algorithms. Front Physiol. 2019; 10:1415
5. Forster FK, Turney D: Oscillometric determination of diastolic, mean and systolic blood pressure—a numerical model. J Biomech Eng. 1986; 108:359–364
6. Forouzanfar M, Dajani HR, Groza VZ, et al.: Oscillometric blood pressure estimation: Past, present, and future. IEEE Rev Biomed Eng. 2015; 8:44–63
7. Holt TR, Withington DE, Mitchell E: Which pressure to believe? A comparison of direct arterial with indirect blood pressure measurement techniques in the pediatric intensive care unit. Pediatr Crit Care Med. 2011; 12:e391–e394
8. Meidert AS, Dolch ME, Mühlbauer K, et al.: Oscillometric versus invasive blood pressure measurement in patients with shock: A prospective observational study in the emergency department. J Clin Monit Comput. 2021; 35:387–393
9. Landgraf J, Wishner SH, Kloner RA: Comparison of automated oscillometric versus auscultatory blood pressure measurement. Am J Cardiol. 2010; 106:386–388
10. Bur A, Herkner H, Vlcek M, et al.: Factors influencing the accuracy of oscillometric blood pressure measurement in critically ill patients. Crit Care Med. 2003; 31:793–799
11. Turner D, Schünemann HJ, Griffith LE, et al.: The minimal detectable change cannot reliably replace the minimal important difference. J Clin Epidemiol. 2010; 63:28–36
12. Terwee CB, Roorda LD, Knol DL, et al.: Linking measurement error to minimal important change of patient-reported outcomes. J Clin Epidemiol. 2009; 62:1062–1067
13. Fraser CG: Reference change values. Clin Chem Lab Med. 2011; 50:807–812
14. Nunes LA, Brenzikofer R, de Macedo DV: Reference change values of blood analytes from physically active subjects. Eur J Appl Physiol. 2010; 110:191–198
15. McCormack JP, Holmes DT: Your results may vary: The imprecision of medical measurements. BMJ. 2020; 368:m149
16. Cho J, Seo DM, Uh Y: Clinical application of overlapping confidence intervals for monitoring changes in serial clinical chemistry test results. Ann Lab Med. 2020; 40:201–208
17. Xu X, Nie S, Zhang A, et al.: A new criterion for pediatric AKI based on the reference change value of serum creatinine. J Am Soc Nephrol. 2018; 29:2432–2442
18. Zeng J, Miao H, Jiang Z, et al.: Pediatric reference change value optimized for acute kidney injury: Multicenter retrospective study in China. Pediatr Crit Care Med. 2022; 23:e574–e582
19. Oosterhuis WP, Bayat H, Armbruster D, et al.: The use of error and uncertainty methods in the medical laboratory. Clin Chem Lab Med. 2018; 56:209–219
20. Zehnder JL: Clinical use of coagulation tests. UpToDate. June 2023. Available at: https://medilib.ir/uptodate/show/1368. Accessed September 2023
21. Goldstein A: Errors in ultrasound digital image distance measurements. Ultrasound Med Biol. 2000; 26:1125–1132
22. Shin HC, Prager R, Gomersall H, et al.: Estimation of average speed of sound using deconvolution of medical ultrasound data. Ultrasound Med Biol. 2010; 36:623–636
23. Wang L, Feng L, Yao Y, et al.: Optimal optic nerve sheath diameter threshold for the identification of elevated opening pressure on lumbar puncture in a Chinese population. PLoS One. 2015; 10:e0117939
24. Padayachy LC, Padayachy V, Galal U, et al.: The relationship between transorbital ultrasound measurement of the optic nerve sheath diameter (ONSD) and invasively measured ICP in children: Part I: Repeatability, observer variability and general analysis. Childs Nerv Syst. 2016; 32:1769–1778
25. Oberfoell S, Murphy D, French A, et al.: Inter-rater reliability of sonographic optic nerve sheath diameter measurements by emergency medicine physicians. J Ultrasound Med. 2017; 36:1579–1584
26. Harland KK, Saftlas AF, Wallis AB, et al.: Correction of systematic bias in ultrasound dating in studies of small-for-gestational-age birth: An example from the Iowa Health in Pregnancy Study. Am J Epidemiol. 2012; 176:443–455
27. Dallaire F, Dahdah N: New equations and a critical appraisal of coronary artery Z scores in healthy children. J Am Soc Echocardiogr. 2011; 24:60–74
28. Mawad W, Drolet C, Dahdah N, et al.: A review and critique of the statistical methods used to generate reference values in pediatric echocardiography. J Am Soc Echocardiogr. 2013; 26:29–37
29. Hoffmann RM, Ariagno KA, Pham IV, et al.: Ultrasound assessment of quadriceps femoris muscle thickness in critically ill children. Pediatr Crit Care Med. 2021; 22:889–897
30. Burton L, Loberger J, Baker M, et al.: Pre-extubation ultrasound measurement of in situ cuffed endotracheal tube laryngeal air column width difference: Single-center pilot study of relationship with post-extubation stridor in under 5-year-olds. Pediatr Crit Care Med. 2024; 25:222–230
