Utilization of a facial fat grafting augmented reality guidance system in facial soft tissue defect reconstruction

Research design

Twenty artificial unilateral irregular facial soft tissue defect models (Guangyunda, Shanghai, China) were randomly divided into two groups, designated Group A and Group B. The models are composed of materials resembling human skin, subcutaneous tissue, and bone, with potential spaces between the layers, and each defect was created by reducing the soft tissue volume on the affected side relative to the healthy side. In Group A, an AR navigation system was established to guide surgeons in filling and reconstructing the soft tissue defects, while Group B was filled and reconstructed using conventional methods.

Intervention strategies

Group A: Development of an augmented reality-guided system for facial fat grafting to assist in defect filling and reconstruction

A HoloLens 2 headset (Microsoft, Redmond, WA, United States) was used to develop the AR-guided system for facial fat grafting, as depicted in Fig. 1. The working principle is as follows: the HoloLens 2 camera captures real-world images and feeds them to the computer; through marker-based registration and tracking, the computer-generated three-dimensional virtual surgical plan is merged with objects in the real world and presented in the surgeon's field of view, enhancing spatial perception during the procedure. The following steps outline the construction of the AR guidance system for facial fat grafting:

Fig. 1

Schematic Diagram of an Augmented Reality-Guided System for Facial Fat Grafting. a: Simulated Head Model, b: Computed Tomography Scanning, c: Mobile Computer, d: Virtual Digital Model Image, e: HoloLens 2, f: Augmented Reality Fusion Image

Designing a tracking registration device utilizing artificial markers

Marker-based tracking and registration technology registers virtual images by placing specific artificial markers in the real environment, from which the camera pose and model-view matrix can be determined accurately and rapidly [18]. Quick Response (QR) codes often serve as the information carriers for marker-based tracking and registration, and can be rapidly read and identified by the HoloLens. In this study, QR codes were generated using https://www.the-qrcode-generator.com, as illustrated in Fig. 2A.
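For reference, equivalent markers can also be generated offline. The sketch below uses the open-source Python package qrcode, which is an assumed stand-in for the web generator used in the study; it illustrates only that each marker encodes a unique identifier.

```python
# Minimal sketch: generating four uniquely identifiable QR markers offline.
# The study used the web generator cited above; the `qrcode` package
# (pip install "qrcode[pil]") is an assumed substitute for illustration.
import qrcode

for marker_id in range(4):  # one marker per bracket slot
    img = qrcode.make(f"bracket-slot-{marker_id}")  # unique payload per slot
    img.save(f"marker_{marker_id}.png")             # print and mount on the bracket
```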

In AR, the virtual head obtained from CT scanning has its own coordinate system, distinct from that of the real environment. The prerequisite for merging the two is to unify them into the same coordinate system. This requires an object that is visible in the CT scan of the head model and can also be identified by the HoloLens in the real environment, serving as an intermediate "link." We therefore designed and fabricated a QR code-mountable bracket fixed to the head model, with the following features:

(1) The bracket is equipped with four slots for QR codes;

(2) The vertices of the QR code slots are positioned at the origin and at equal distances along the x, y, and z axes of the coordinate system, as depicted in Fig. 2B (a coordinate sketch follows this list);

(3) The resin bracket, produced by 3D printing, is installed onto the simulated head model (Fig. 2C).
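To make the slot layout concrete, it can be written as four points in the bracket's own (Coord-A) frame. The offset D below is an illustrative assumption, since the study does not report the bracket's dimensions.

```python
# Sketch of the bracket slot layout in the Coord-A frame. The offset D is
# an assumed value in millimetres; the paper does not report the spacing.
import numpy as np

D = 60.0  # assumed slot offset from the origin (mm)

# One slot vertex at the origin and one at distance D along each axis.
slot_vertices_A = np.array([
    [0.0, 0.0, 0.0],  # origin slot
    [D,   0.0, 0.0],  # slot along the x axis
    [0.0, D,   0.0],  # slot along the y axis
    [0.0, 0.0, D],    # slot along the z axis
])
```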

Fig. 2

Design of the Tracking and Registration Apparatus Based on Artificial Markers. A: QR code. B: Equidistant positioning of QR code slot vertices along the origin and x, y, z axes of the coordinate system. C: Installation of the 3D-printed bracket on the simulated head model

Three-dimensional digital modeling of the simulated head model using digital software

Digital Imaging and Communications in Medicine (DICOM) data of the simulated head model, with the bracket attached, were obtained by computed tomography (GE Healthcare, Fairfield). The data were imported into RadiAnt DICOM Viewer (Medixant, Poland) to obtain a full-resolution 3D model in STL format. The generated STL file was then processed in Blender (Blender Foundation, Netherlands) to create a three-dimensional virtual model. Within this virtual model, the origin (o) and axes (x, y, z) of the bracket were established, constructing the Coord-A coordinate system (Fig. 3).
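The DICOM-to-STL conversion performed in RadiAnt can be sketched programmatically. The snippet below is a minimal reconstruction of the isosurface-extraction idea, not the study's actual toolchain; the pydicom, scikit-image, and numpy-stl packages, the file paths, and the intensity threshold are all assumptions.

```python
# Sketch of the DICOM -> STL step (done in RadiAnt in the study). File
# paths and the isosurface threshold are illustrative assumptions; voxel
# spacing is omitted for brevity, so the output is in voxel units.
import glob
import numpy as np
import pydicom
from skimage import measure
from stl import mesh  # numpy-stl

# Load the CT series and stack the slices into a volume, sorted by z position.
slices = [pydicom.dcmread(f) for f in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

# Extract a surface at an assumed intensity threshold and export it as STL.
verts, faces, _, _ = measure.marching_cubes(volume, level=300)
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surface.vectors[:] = verts[faces]  # (n_faces, 3 vertices, xyz)
surface.save("head_model.stl")
```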

Fig. 3

Three-dimensional Digital Model of the Simulated Head Model

Achieving “virtual-real fusion” in augmented reality display through the artificial marker tracking registration method

The three-dimensional virtual model was imported into Unity3D (Unity Technologies, San Francisco, United States). The operator wears HoloLens 2, whose camera captures and identifies the QR codes, obtaining their coordinates (Coord-B) in the HoloLens coordinate space (Fig. 4A). Coord-A and Coord-B represent the three-dimensional coordinates of the simulated head model in different spaces. By tracking the simulated head model in the actual scene in real time and unifying Coord-A and Coord-B into the same coordinate system through the transformation matrix M (translation-rotation-scale, TRS), the operator can observe the three-dimensional virtual model superimposed on the simulated head model in the real world through HoloLens 2, achieving "virtual-real fusion" (Fig. 4B). The term "virtual-real fusion" refers to the AR system analyzing the data to obtain scene position information and accurately overlaying computer-generated three-dimensional virtual images onto specific locations in the real scene, thereby fusing virtual objects with the real world [19].
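Conceptually, unifying Coord-A and Coord-B amounts to estimating a rigid transform from corresponding marker points. Unity and the HoloLens handle this internally; the NumPy sketch below is only an illustrative reconstruction of M using the standard Kabsch/SVD method, with a synthetic pose standing in for real tracking data.

```python
# Illustrative reconstruction of the transformation M that maps Coord-A
# points onto their Coord-B observations (rigid Kabsch/SVD alignment).
# Unity computes this internally from QR tracking; the synthetic pose
# below stands in for real HoloLens measurements.
import numpy as np

def rigid_transform(P, Q):
    """Return a 4x4 matrix M such that Q ~= R @ P + t for row points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M

# Bracket marker vertices in Coord-A (mm, assumed layout).
coord_A = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0], [0, 0, 60]], float)

# Synthetic Coord-B: the same points under a known rotation and translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
coord_B = coord_A @ R_true.T + np.array([10.0, -5.0, 2.0])

M = rigid_transform(coord_A, coord_B)  # recovers the pose defined above
```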

Fig. 4

Achieving Augmented Reality Fusion Using the Artificial Marker-based Tracking and Registration Method. A: HoloLens 2 identifies the QR codes in real space to obtain the Coord-B coordinates. B: The augmented reality fusion image displayed from the operator's perspective

Validation of tracking and registration accuracy in facial fat grafting augmented reality guidance system

Five points on the cheeks, nose, forehead, and chin of the head model were selected for fixing QR codes, as shown in Fig. 5. After completing steps 1-3 above, the origin of the bracket was designated as the origin (0, 0, 0) of the head model coordinate system, and the coordinates of each QR code in the Coord-A coordinate system (x1, y1, z1) were recorded. The HoloLens then identified each QR code in the real environment, with the origin of the bracket again defined as the origin (0, 0, 0), and the coordinates of each QR code in the Coord-B coordinate system (x1', y1', z1') were recorded. Comparing (x1, y1, z1) with (x1', y1', z1') verified whether the virtual head model and the physical head model were fully fused in the real world. The next step proceeded only when the registration error was less than 1 mm.
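The acceptance rule reduces to a per-marker Euclidean distance check, as in the sketch below. All coordinate values there are illustrative placeholders, not measured study data.

```python
# Sketch of the <1 mm acceptance check over the five validation markers.
# All coordinate values are illustrative placeholders, not study data.
import numpy as np

pts_A = np.array([[41.2, 10.5, 33.0],    # left cheek
                  [-40.8, 11.0, 32.5],   # right cheek
                  [0.3, 58.9, 21.7],     # nose
                  [0.1, 95.4, 40.2],     # forehead
                  [-0.2, -32.6, 28.8]])  # chin
# Simulated HoloLens readings: Coord-A positions plus small tracking noise.
pts_B = pts_A + np.random.default_rng(0).normal(scale=0.2, size=pts_A.shape)

errors = np.linalg.norm(pts_A - pts_B, axis=1)   # per-marker error (mm)
assert errors.max() < 1.0, "re-register before proceeding"
print(f"max registration error: {errors.max():.2f} mm")
```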

Fig. 5

Schematic Diagram for Tracking and Registration Accuracy Verification

Development of virtual surgical plans for facial soft tissue defect reconstruction using augmented reality

The sagittal plane of the head model was constructed using anatomical landmarks such as the inner canthi and the nasal tip. A sphere was established over the healthy facial region on one side; the region of intersection between the sphere and the face represents the area to be mirrored, as illustrated in Fig. 6A. Wearing the HoloLens 2, the surgeon activates the program menu and selects a set of vertices for mirroring by adjusting the size and position of the sphere (Fig. 6B). With reference to the sagittal plane, the selected vertex set is mirrored to the contralateral defect area using a cloning algorithm to achieve a symmetrical facial contour, visually displaying the differences between the three-dimensional virtual filling scheme and the actual facial structure (Fig. 6C). Notably, based on the operator's clinical experience, the mirrored volume can also be increased to compensate for a degree of fat absorption after grafting.
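Geometrically, the mirroring operation reduces to reflecting the selected vertices across the sagittal plane. The sketch below assumes the plane is given by a point p0 and unit normal n derived from the landmarks; all numeric values are placeholders.

```python
# Sketch of the sphere selection and sagittal-plane mirroring. The plane
# (point p0, normal n) would come from the inner-canthi and nasal-tip
# landmarks; all numeric values here are illustrative placeholders.
import numpy as np

def select_in_sphere(verts, center, radius):
    """Return the vertices lying inside the adjustable selection sphere."""
    return verts[np.linalg.norm(verts - center, axis=1) <= radius]

def mirror_across_plane(verts, p0, n):
    """Reflect points across the plane through p0 with unit normal n."""
    n = n / np.linalg.norm(n)
    d = (verts - p0) @ n                 # signed distance to the plane
    return verts - 2.0 * d[:, None] * n

verts = np.random.default_rng(1).uniform(0, 100, size=(1000, 3))  # mesh stand-in
picked = select_in_sphere(verts, center=np.array([70.0, 50.0, 30.0]), radius=25.0)
target = mirror_across_plane(picked, p0=np.array([50.0, 0.0, 0.0]),
                             n=np.array([1.0, 0.0, 0.0]))  # sagittal plane x = 50
```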

Fig. 6

Virtual Filling Plan for Facial Fat Grafting Developed Using Augmented Reality. A: Intersection region between the Sphere and Facial Surface, indicating the area to be mirrored. B: Adjustment of Sphere Size and Position to Select Vertex Set for Mirroring. C: Visualization of the discrepancy between the three-dimensional virtual filling scheme and the actual facial structure

Facial fat grafting augmented reality guidance system-assisted filling

Under real-time guidance from the visually displayed differences between the three-dimensional virtual filling plan presented by the AR system and the actual facial surface of the head model, the operator performed the filling procedure at the facial soft tissue defect site, as depicted in Fig. 7A.

Fig. 7

Facial Soft Tissue Defect Filling Procedure. A: Assisted by the augmented reality-guided system for facial fat grafting. B: Utilizing conventional methods for facial soft tissue defect filling

Group B: Facial soft tissue defect filling with conventional methods

DICOM data of the facial defect model were obtained by CT, and a preoperative three-dimensional virtual filling plan was generated using digital software. The operator performed the filling procedure on the defect with reference to the virtual plan, as shown in Fig. 7B.

Assessment of filling accuracy and time

CT scans of the filled head models were performed, and Unity3D (Unity Technologies, San Francisco, United States) was used to register and overlay the virtual plan onto the postoperative models. Points were sampled in the facial filling area, as illustrated in Fig. 8, yielding pairs of coordinates: one point in the virtual surgical plan and the corresponding point at the same location in the postoperative model. Filling accuracy was analyzed by taking several hundred samples and calculating the distance between each pair of coordinates.
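As a sketch of the metric itself: each sampled pair contributes one Euclidean distance, and the set of distances is summarized as mean ± SD. Both point sets below are illustrative placeholders, not study measurements.

```python
# Sketch of the filling-accuracy metric: Euclidean distance between each
# virtual-plan sample and its postoperative counterpart, summarized as
# mean +/- SD. Both point sets below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)
plan_pts = rng.uniform(0, 100, size=(300, 3))               # virtual-plan samples
post_pts = plan_pts + rng.normal(scale=0.8, size=(300, 3))  # postoperative samples

dist = np.linalg.norm(plan_pts - post_pts, axis=1)  # per-sample error (mm)
print(f"filling error: {dist.mean():.2f} +/- {dist.std(ddof=1):.2f} mm")
```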

Fig. 8

Sampling of the facial recipient area to assess the accuracy of the filling

Additionally, the filling operation duration for both groups was recorded.

Statistical analysis

Independent-samples t-tests were performed using IBM SPSS Statistics 17.0 (IBM Corp., Armonk, NY, United States) to compare the differences in surgical accuracy and filling time between Groups A and B, with p < 0.05 considered statistically significant.
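For reference, the same comparison can be reproduced outside SPSS. A minimal SciPy sketch follows, with placeholder per-model accuracy values standing in for the measured data.

```python
# Equivalent independent-samples t-test in Python (SPSS 17.0 was used in
# the study). The two arrays are placeholders, not the measured results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
acc_A = rng.normal(1.0, 0.3, size=10)  # Group A (AR-guided) errors, placeholder
acc_B = rng.normal(2.0, 0.5, size=10)  # Group B (conventional) errors, placeholder

t, p = stats.ttest_ind(acc_A, acc_B)   # two-sided independent-samples t-test
print(f"t = {t:.2f}, p = {p:.4f}")     # p < 0.05 => statistically significant
```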
