Knowledge-embedded spatio-temporal analysis for euploidy embryos identification in couples with chromosomal rearrangements

Introduction

With the development of assisted reproductive technology (ART) over the past four decades, the clinical pregnancy rate of infertile couples undergoing in vitro fertilization (IVF) and embryo transfer (ET) has risen from 20% to 40%. In response to the unsatisfactory performance of routine embryo selection based on the day 3 (D3) human embryo grading system and the Gardner blastocyst grading system (1999), artificial intelligence has been applied to analyze and select embryos with high developmental potential or to predict the ploidy status of each blastocyst. However, the majority of reported studies focused on young women or couples without chromosomal abnormalities.

Several studies have demonstrated that early developmental morphology is linked to embryonic euploidy.[1,2] Time-lapse incubators deliver continuous information on embryo development, making it possible to exploit large volumes of sequential images for continuous and thorough morphological evaluation.[3] Previous time-lapse research largely focused on multiple static images at various time points during the in vitro culture process, acquiring human-annotated or machine-learned morphokinetic parameters.[4–8] In 2021, Lee et al[9] reported an end-to-end deep learning model (I3D) that determined the euploidy status of blastocysts, as identified by preimplantation genetic testing for aneuploidy (PGT-A), in chromosomally normal populations, with an area under the curve (AUC) of 0.74. This indicates that employing deep learning to evaluate the euploidy status of blastocysts is feasible.

Few studies have focused on the prediction of chromosomal abnormalities in couples with aberrant chromosomal structures, a group with a high probability of generating imbalanced gametes and a high risk of miscarriage. Preimplantation genetic testing for chromosomal structural rearrangement (PGT-SR) is currently the only available technology for assessing whether blastocysts originating from such couples are aneuploid. PGT-SR requires knowledge of the couple's chromosomal rearrangement status and employs single nucleotide polymorphism microarray/next-generation sequencing (SNP/NGS) technology to determine the euploidy status of the embryos. PGT-SR is more time-consuming and has stricter indications than PGT-A (specifically, the couple should carry a chromosomal rearrangement or structural abnormality). Consequently, few studies have investigated the relationship between embryo morphology and embryo euploidy in chromosomally aberrant individuals. In 2018, Reignier et al[10] collected the time-lapse videos of 67 embryos from couples with parental chromosomal rearrangements, together with the euploidy statuses of the embryos determined by PGT-SR, and developed prediction algorithms for embryo euploidy based on the morphology and dynamic parameters of the embryos. However, the AUC of these prediction algorithms was low.

In this study, we aimed to develop a deep learning-based system to predict the euploidy status of embryos using time-lapse videos and clinically predictive variables. We first developed a novel deep learning-based time-lapse video analysis model, the attentive multi-focus selection network (AMSNet), for real-time prediction of blastocyst development. Building on AMSNet, we fused additional clinically predictive variables and developed a second deep learning model, the attentive multi-focus video and clinical information fusion network (AMCFNet), to assess the euploidy status of embryos. The effectiveness of AMCFNet was validated in embryos with parental chromosomal structural abnormalities. This work provides medical evidence that a deep learning-based system can use imaging data from early human embryo development to predict the euploidy status of embryos in populations with aberrant chromosomal structures or chromosomal rearrangements.

Methods

Data collection

This retrospective study was conducted at the Reproductive Center of The First Affiliated Hospital of Sun Yat-Sen University. All data used in this study were collected with informed consent from patients, and all procedures and protocols were approved by the institutional review board for Clinical Research and Animal Trials of the First Affiliated Hospital of Sun Yat-Sen University (No. IRB[2019]479). We retrospectively collected the time-lapse videos, clinical data, and PGT detection results of 368 PGT cycles from February 2020 to May 2021. Detailed information about the study design is shown in Figure 1.

Figure 1:

Flowchart of embryo screening from couples who had PGT-A or PGT-SR treatment cycles from February 2020 to May 2021. AMCFNet: Attentive multi-focus video and clinical information fusion network; AMSNet: Attentive multi-focus selection network; PGT-A: Preimplantation genetic testing for aneuploidy; PGT-SR: Preimplantation genetic testing for chromosomal structural rearrangement.

Controlled ovarian stimulation (COS), embryo culture, and time-lapse recording

COS was accomplished by a gonadotropin-releasing hormone agonist suppression protocol, a gonadotropin-releasing hormone antagonist flexible protocol, or a micro-stimulation protocol. Oocytes were harvested by transvaginal ultrasound-guided puncture 36 h after injection of 250 mg recombinant human chorionic gonadotropin (Zhuhai Lizhu Biomedical Technology Co., Ltd, Zhuhai, China). All oocytes were inseminated by intracytoplasmic sperm injection (ICSI). Zygotes displaying two pronuclei (2PN) were cultured in an Embryoscope™ or Embryoscope Plus™ time-lapse incubator (Vitrolife, Copenhagen, Denmark). Culture conditions were set at 37°C, 6% CO2, and 5% O2. The time-lapse culture system continuously captured images of each embryo at seven different focal planes every 10 or 15 min until the formation of an expanded blastocyst or until blastocyst biopsy.

Embryo biopsy and euploidy or aneuploidy status determination of embryos

At the expanded blastocyst stage, 5–10 trophectoderm (TE) cells were biopsied from each embryo. Following biopsy, each blastocyst was vitrified using a Kitazato vitrification kit (Kitazato Biopharma Co., Ltd., Kato, Japan).[11,12]

Both SNP microarrays and NGS were used for the comprehensive chromosome screening (CCS) of biopsied cells to identify the euploidy or aneuploidy status of each blastocyst. Based on the CCS result, blastocysts were divided into two groups: (1) euploid blastocysts with mosaicism levels <50%; (2) aneuploid blastocysts, including those with numerical chromosomal aberrations and high-level mosaic blastocysts with mosaicism levels between 50% and 80%.
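This grouping rule reduces to a simple labeling function. The sketch below (with illustrative argument names, not taken from the study's codebase) merely encodes the thresholds stated above:

```python
def ploidy_group(has_numerical_aberration: bool, mosaicism_pct: float) -> str:
    """Return the CCS-based study group for a biopsied blastocyst.
    Argument names are illustrative; thresholds follow the text."""
    if not has_numerical_aberration and mosaicism_pct < 50:
        return "euploid"      # group (1): euploid, mosaicism < 50%
    return "aneuploid"        # group (2): numerical aberration or 50%-80% mosaicism
```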

Establishment of AMSNet and AMCFNet

Since we evaluated the euploidy status of blastocysts rather than D3 embryos, we first built AMSNet to predict blastocyst formation. AMSNet was built upon ResNet-50[13] with two major components: a multi-focal feature selection (MFS) module and a temporal shift module (TSM)[14] [Figure 2].

Figure 2:

Network architecture of the AMSNet. For each moment t, it took the embryoscope images from multiple focal planes as input and generated the probability of blastocyst formation y_t ∈ [0, 1]. In addition, part of the feature channels was temporally shifted to the next moment via the TSM module. AMSNet: Attentive multi-focus selection network; FC: Fully connected layers; I_T: Image at time T; MFS: Multi-focal feature selection; TSM: Temporal shift module; Res: Residual.

ResNet-50, a representative deep convolutional neural network, has proven effective at extracting high-dimensional spatial features from natural images.[13] We used ResNet-50 as the backbone of AMSNet to extract the morphological features of embryos from embryoscope images. To effectively utilize the image features of different focal planes, we proposed the MFS module, which selectively exploits multi-focus features via an attention mechanism. Inspired by Woo et al,[15] we decomposed the attention module into channel-wise attention and spatial attention, as a decomposed attention scheme is much more efficient for processing 3D multi-focus feature maps.

Specifically, MFS included a channel-wise attention module to selectively fuse the multi-focus feature channels and employed a Gaussian non-local mechanism to model pairwise spatial correlations[16] [Figure 3A]. The MFS module first adopted multiple individual Res-1 branches with shared weights to process the input embryoscope images from multiple focal planes. The features from those branches were then concatenated and enhanced by the channel-wise attention module, which first calculated a weight for each feature map through a pooling operation and a multilayer perceptron, then multiplied these weights element-wise with the concatenated feature map, and finally reduced the dimensionality of the feature channels using a convolution layer. Channel-wise attention highlighted the meaningful channels in the multi-focus feature map, allowing AMSNet to efficiently exploit the inter-channel relationships of multi-focus features. For spatial attention, we utilized a 2D Gaussian non-local module to capture spatial correlations from the channel-wise enhanced feature.[16] By computing interactions between any two positions on a feature map, regardless of their distance, the Gaussian non-local module achieves 2D pair-wise spatial correlation learning and thus enhances multi-focus features in the spatial dimension.
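As a rough illustration of this design, the PyTorch sketch below combines a CBAM-style channel attention block with a simplified embedded-Gaussian non-local block. The layer sizes, reduction ratio, and the shared `stem` (Res-1) module are assumptions for illustration, not the study's exact implementation:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel-wise attention over concatenated multi-focus features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                 # global average pooling -> (B, C)
        w = torch.sigmoid(self.mlp(w))         # per-channel weights in [0, 1]
        return x * w[:, :, None, None]         # element-wise reweighting

class MFS(nn.Module):
    """Multi-focal feature selection sketch: shared Res-1 stem per focal
    plane, channel attention over the concatenation, 1x1 conv to reduce
    channels, then residual (embedded-Gaussian non-local) spatial attention."""
    def __init__(self, stem: nn.Module, stem_channels: int, num_focals: int = 7):
        super().__init__()
        self.stem = stem                       # shared-weight Res-1 branch
        cat_ch = stem_channels * num_focals
        self.channel_att = ChannelAttention(cat_ch)
        self.reduce = nn.Conv2d(cat_ch, stem_channels, kernel_size=1)
        self.theta = nn.Conv2d(stem_channels, stem_channels // 2, 1)
        self.phi = nn.Conv2d(stem_channels, stem_channels // 2, 1)
        self.g = nn.Conv2d(stem_channels, stem_channels // 2, 1)
        self.out = nn.Conv2d(stem_channels // 2, stem_channels, 1)

    def forward(self, focals):                 # list of (B, 1, H, W) focal images
        feats = torch.cat([self.stem(f) for f in focals], dim=1)
        feats = self.reduce(self.channel_att(feats))
        b, c, h, w = feats.shape
        q = self.theta(feats).flatten(2).transpose(1, 2)   # (B, HW, C/2)
        k = self.phi(feats).flatten(2)                     # (B, C/2, HW)
        v = self.g(feats).flatten(2).transpose(1, 2)       # (B, HW, C/2)
        att = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # pairwise weights
        nl = (att @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return feats + self.out(nl)            # residual spatial enhancement
```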

Figure 3:

Multi-focus feature selection module and temporal channel shift. (A) MFS module. The MFS module first adopted multiple Res-1 branches with shared weights to process the input images from multiple focal planes. Then the features from the multiple branches were concatenated and enhanced by the channel-wise attention module. Next, the channel dimension of the channel-wise enhanced feature was reduced to match that of the output feature map of Res-1. Finally, it was further enhanced by the spatial attention module via a residual connection pattern. (B) Residual TSM. For each residual block, TSM shifted part of the channels of the input feature at moment t-1 (X_{t-1}) into the input feature at moment t (X_t) to obtain a temporally combined feature. Then, TSM enhanced X_t via a residual addition to obtain the temporally enhanced output feature Y_t. Conv: Convolution; MFS: Multi-focus feature selection; TSM: Temporal shift module; F: Focal; I: Image; Res: Residual; X: Input feature; Y: Output feature.

MFS took the embryoscope images shot at multiple focal planes as input at every moment, while TSM partially shifted the feature channels forward along the temporal dimension, endowing AMSNet with the memory needed for a temporal understanding of time-lapse videos [Figure 3B]. By embedding a temporal channel shift operation in each residual block of AMSNet, the appearance features of the embryo at each moment contained the features of the previous developmental state; the proposed AMSNet model could therefore continuously read time-lapse data and provide highly accurate blastocyst development predictions to assist embryologists in early embryo selection.
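A minimal sketch of this uni-directional (forward-in-time) channel shift is given below, assuming features laid out as (batch, time, channel, height, width); the 1/8 shift fraction follows the default of the original TSM paper and is an assumption here:

```python
import torch

def temporal_shift(x: torch.Tensor, shift_div: int = 8) -> torch.Tensor:
    """Shift a fraction (1/shift_div) of channels forward along time.
    x: (B, T, C, H, W) features for T consecutive frames of one embryo."""
    b, t, c, h, w = x.shape
    fold = c // shift_div
    out = torch.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]   # these channels now carry moment t-1
    out[:, :, fold:] = x[:, :, fold:]      # remaining channels are left in place
    return out
```

Inside each residual block, the shifted tensor feeds the block's convolutions and is added back to the unshifted input, yielding the temporally enhanced output Y_t of Figure 3B.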

Clinical factors reported to be related to aneuploidy and miscarriage were collected, and Spearman correlation analysis was conducted to select the clinical features most highly correlated with aneuploidy status [Figure 4A]. Aneuploidy-related clinical factors included female age, male age, female anti-Mullerian hormone (AMH) level, adverse pregnancy events, parental chromosomal structural abnormality, immunological abnormalities, and semen abnormalities.[17–19] By fusing the selected clinical features (including parental chromosomal status) with the time-lapse videos in a cross-modal way, the attentive multi-focus video and clinical information fusion network (AMCFNet) was established to predict the probability of a blastocyst being aneuploid [Figure 4B]. Specifically, we designed a clinical feature extraction network and an embryo development feature extraction network in AMCFNet to extract features from the patient's clinical information and the multi-focal time-lapse videos, respectively. The clinical feature extraction network took clinical indicators as input: an embedding module mapped clinical information into feature vectors, an attention module fused those clinical feature vectors, and a multilayer perceptron mapped the fused features into low-dimensional, non-linear clinical features. For the processing of multi-focus time-lapse videos, we used the proposed blastocyst development prediction model AMSNet as the embryo development feature extraction network, as AMSNet could effectively process the embryoscope images shot at multiple focal planes and extract the morphological features of embryos. Finally, AMCFNet used the multimodal Tucker fusion for visual question answering (MUTAN) module to fuse the patient's clinical features with the morphological features of embryo development in a cross-modal manner and used the fused features to predict whether the embryo was euploid.[20]
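The two branches and the cross-modal fusion can be sketched as follows. This is a hedged illustration: all dimensions and names are assumptions, and the fusion is written as the low-rank bilinear form underlying MUTAN rather than the full Tucker decomposition:

```python
import torch
import torch.nn as nn

class ClinicalEncoder(nn.Module):
    """Embed each clinical indicator, fuse the embeddings with attention,
    and map the result to a compact clinical feature vector."""
    def __init__(self, num_indicators: int = 4, embed_dim: int = 32, out_dim: int = 128):
        super().__init__()
        self.embed = nn.Linear(1, embed_dim)        # per-scalar embedding
        self.att = nn.Linear(embed_dim, 1)          # attention score per indicator
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, out_dim), nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, clin):                        # clin: (B, num_indicators)
        e = self.embed(clin.unsqueeze(-1))          # (B, N, embed_dim)
        a = torch.softmax(self.att(e), dim=1)       # (B, N, 1) attention weights
        fused = (a * e).sum(dim=1)                  # attention-weighted pooling
        return self.mlp(fused)                      # (B, out_dim)

class MutanStyleFusion(nn.Module):
    """Low-rank bilinear (MUTAN-style) fusion of video and clinical features."""
    def __init__(self, v_dim: int, c_dim: int, hidden: int = 256, rank: int = 5):
        super().__init__()
        self.v_proj = nn.Linear(v_dim, rank * hidden)
        self.c_proj = nn.Linear(c_dim, rank * hidden)
        self.rank, self.hidden = rank, hidden
        self.cls = nn.Linear(hidden, 1)             # aneuploidy logit

    def forward(self, v, c):                        # v: (B, v_dim), c: (B, c_dim)
        hv = self.v_proj(v).view(-1, self.rank, self.hidden)
        hc = self.c_proj(c).view(-1, self.rank, self.hidden)
        h = (hv * hc).sum(dim=1)                    # rank-sum bilinear interaction
        return self.cls(torch.tanh(h)).squeeze(-1)  # sigmoid applied at loss time
```

Here the video feature `v` would come from AMSNet's backbone and the clinical feature `c` from the clinical encoder; the rank-sum product is the standard low-rank approximation of the Tucker core used by MUTAN.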

Figure 4:

Network architecture of the AMCFNet. (A) Spearman correlation r-value for selection of clinical information applied in AMCFNet. (B) Overview of AMCFNet. A clinical feature extraction network consisting of an embedding module and an attention mechanism was proposed to extract features from patients' clinical indicators. The clinical features were then fused with the embryo's morphological features from AMSNet in a cross-modal way to predict the probability of a blastocyst being aneuploid. AMCFNet: Attentive multi-focus video and clinical information fusion network; AMSNet: Attentive multi-focus selection network; Bi-LSTM: Bi-directional long short-term memory; FC: Fully connected layers; I: Image; MFS: Multi-focal feature selection; MLP: Multilayer perceptron; MUTAN: Multimodal Tucker fusion for visual question answering; Res: Residual; TSM: Temporal shift module.
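As a small illustration of the screening step shown in Figure 4A, the snippet below computes Spearman correlations between the seven candidate factors and the aneuploidy label; the file name and column names are hypothetical:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical column names for the seven candidate factors and the outcome.
CANDIDATES = ["female_age", "male_age", "female_amh", "adverse_pregnancy",
              "parental_structural_abnormality", "immunological_abnormality",
              "semen_abnormality"]

df = pd.read_csv("pgt_blastocysts.csv")          # hypothetical per-blastocyst table
selected = []
for col in CANDIDATES:
    r, p = spearmanr(df[col], df["aneuploid"])   # aneuploid: 0/1 PGT label
    if p < 0.05:                                 # keep significantly correlated factors
        selected.append((col, round(r, 3)))
print(selected)
# In the study, female age, male age, parental structural rearrangements,
# and adverse pregnancy outcomes passed this screen.
```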

The training and performance evaluation of the AMSNet and AMCFNet

For both the blastocyst formation and aneuploidy identification tasks, patients were randomly split into training, validation, and test sets at a ratio of 6:2:2 (blastocyst formation task, n = 1719:568:568; aneuploidy identification task, n = 854:284:284). For PGT-SR patients, we retrained the AMCFNet model on 80% of patients and tested it on the remaining 20%. Both AMSNet and AMCFNet were implemented in PyTorch (version 1.8.0, Facebook, USA),[21] a flexible framework for deep learning. During training, for both tasks, we split each time-lapse video into segments of 6 h and randomly sampled one frame from each segment. In total, we sampled 7 days (168 h) of time-lapse data, that is, T = 28 frames from each video. For samples with fewer than 168 h of time-lapse data, we added blank frames to bring the number of sampled frames to T = 28 and ignored the loss backpropagation of the padded frames for AMSNet in the real-time blastocyst prediction task. For each frame, both proposed models could use the embryoscope images shot at seven focal planes, that is, [-45, -30, -15, 0, 15, 30, 45] µm or [-75, -50, -25, 0, 25, 50, 75] µm. Each frame was resized and randomly cropped to 224 × 224 pixels and converted to grayscale, as the embryo is transparent. Continuous clinical indicators were normalized. The experiments were conducted on a workstation with eight NVIDIA A100 GPUs (NVIDIA, Santa Clara, USA), and we used the stochastic gradient descent (SGD) optimizer with a learning rate of 0.001 and a weight decay of 0.0005 during model training, with each model trained for 256 epochs.
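The frame-sampling scheme can be sketched as follows; the helper names and tensor layout are illustrative, while the segment length, T = 28, blank padding, and the optimizer hyperparameters follow the text:

```python
import random
import torch

SEG_HOURS, T = 6, 28                      # 6-h segments over 7 days -> T = 28 frames

def sample_frames(frame_hours, frames):
    """One random frame per 6-h segment, blank-padded to T frames.
    `frames` is assumed to be a list of preprocessed (grayscale, 224x224)
    tensors and `frame_hours` their capture times since fertilization."""
    picks = []
    for seg in range(T):
        lo, hi = seg * SEG_HOURS, (seg + 1) * SEG_HOURS
        candidates = [i for i, h in enumerate(frame_hours) if lo <= h < hi]
        if candidates:
            picks.append(frames[random.choice(candidates)])
        else:
            # blank padding frame; its loss is masked out during backpropagation
            picks.append(torch.zeros_like(frames[0]))
    return torch.stack(picks)             # (T, ...) stacked clip

# Optimizer settings reported in the text (the model constructor is hypothetical):
# optimizer = torch.optim.SGD(model.parameters(), lr=0.001, weight_decay=0.0005)
```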

The performance of AMCFNet was evaluated by the average AUC of the receiver operating characteristic (ROC) curve in the test set. We tested our model on blastocysts from patients who underwent PGT-SR treatment, a special population with parental chromosomal structural anomalies and a high risk of aneuploidy. For the euploidy identification task, we also compared the performance of the AMCFNet model with other popular deep learning-based video recognition models, including the recently published I3D model for aneuploidy identification with time-lapse videos. All these models were trained and tested on the same datasets with the same parameters as AMCFNet.
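A minimal sketch of the per-model test-set evaluation, using scikit-learn's ROC AUC with illustrative arrays:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: PGT labels (1 = aneuploid); y_score: model probabilities.
# Values below are illustrative, not study data.
y_true = np.array([1, 0, 1, 1, 0])
y_score = np.array([0.81, 0.22, 0.64, 0.73, 0.35])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```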

Results

Data collection and baseline characteristics

This research involved 355 couples who had 368 PGT-A or PGT-SR treatment cycles between February 2020 and May 2021 at The First Affiliated Hospital of Sun Yat-sen University. A total of 2855 embryos with complete time-lapse videos were enrolled for the blastocyst formation prediction task, 1965 of which developed into blastocysts. A total of 1422 qualified blastocysts that received PGT-A (n = 589) or PGT-SR (n = 833) were enrolled for the euploidy identification task; 58.58% (833/1422) received PGT-SR. The aneuploidy percentage among all detected blastocysts was 54.22% (771/1422), while the aneuploidy percentage among detected blastocysts from PGT-SR cycles, in which at least one parent carried a chromosomal rearrangement, was 67.59% (563/833). Other related information about the included PGT cycles is shown in Table 1.

Table 1 - The demographic background of enrolled couples who had PGT-A or PGT-SR treatment cycles.

Total cycle number of PGT-A and PGT-SR: 368
Patients: 355
Couples with structural rearrangements: 216/368 (58.70%)
Mean women's age (years): 31.2 ± 4.6
Mean men's age (years): 33.3 ± 5.0
Mean BMI of women (kg/m²): 21.07 ± 2.66
Mean serum AMH level (ng/mL): 4.61 ± 3.46
Total recorded 2PN zygotes: 4112
Blastocyst formation: 1965/4112 (47.79%)
Detected blastocysts in PGT-A/PGT-SR: 589/833 (70.71%)
Aneuploidy percentage in all detected blastocysts: 771/1422 (54.22%)
Aneuploidy percentage of blastocysts in PGT-SR cycles: 563/833 (67.59%)

Data are shown as n, mean ± standard deviation, or n/N (%). AMH: Anti-Mullerian hormone; BMI: Body mass index; PGT-A: Preimplantation genetic testing for aneuploidy; PGT-SR: Preimplantation genetic testing for chromosomal structural rearrangement; 2PN: Two pronuclei.


Deep learning-based blastocyst formation prediction

For the blastocyst formation prediction task, we trained AMSNet with three-focal, five-focal, and seven-focal raw time-lapse videos. The AMSNet model using seven-focal raw time-lapse videos had the best real-time accuracy. As shown in Figure 5A, the real-time accuracy of AMSNet in predicting blastocyst formation reached above 70% on day 2 of embryo culture and increased to 80% on day 4, which was better than using the baseline ResNet or TSM model alone, particularly at the early developmental stage of embryos. Figure 5B shows that AMSNet achieved AUCs of 0.764, 0.809, and 0.881 in the test set for the blastocyst formation prediction task on day 2, day 3, and day 4, respectively. These results suggest that AMSNet can accurately predict, in real time, the probability of blastocyst formation during the early stages of embryo development, allowing embryologists to identify embryos with high developmental potential on days 2–4 of embryo culture rather than waiting for blastocyst formation on days 5–7.

Figure 5:

Performance of AMSNet on recognition of blastocyst formation in the test set. (A) Real-time accuracy of the AMSNet model. Accuracy was plotted against the in vitro culture time (hours) from fertilization. (B) ROC curve and AUC of AMSNet at a specified time. AMSNet: Attentive multi-focus selection network; AUC: Area under curve; ROC: Receiver operating characteristic; TSM: Temporal shift module.

Deep learning-based euploidy identification

For the euploidy identification task, we trained an AMSNet model with three-focal raw time-lapse videos from day 0 to day 7 on blastocysts from patients who underwent PGT-A or PGT-SR treatment; the AUC of AMSNet for identifying aneuploidy status reached 0.658 [Figure 6A]. Considering that several clinical factors have been reported to be associated with the risk of aneuploidy, we fused an additional seven clinical characteristics into the model for the aneuploidy identification task, producing a knowledge-embedded model termed AMCFNet. Spearman correlation analysis was used to compute correlation coefficients between the seven clinical parameters associated with aneuploidy miscarriage and aneuploidy status. Female age, male age, parental chromosomal structural rearrangements, and adverse pregnancy outcomes were statistically significantly related to the occurrence of aneuploidy (P <0.05). After incorporating these four clinical variables, the AUC of AMCFNet with five and seven focal points increased to 0.770 and 0.778, respectively, demonstrating the efficacy of multimodal data fusion [Figure 6A].

Figure 6:

Performance of AMCFNet on the identification of aneuploidy status in the test set. (A) ROC curve and AUC of AMCFNet with 3/5/7 focal points in PGT patients; (B) ROC curve and AUC of AMCFNet with 3/5/7 focal points in PGT-SR patients. AMCFNet: Attentive multi-focus video and clinical information fusion network; AMSNet: Attentive multi-focus selection network; AUC: Area under curve; PGT-SR: Preimplantation genetic testing for chromosomal structural rearrangement; ROC: Receiver operating characteristic.

To evaluate the euploidy predictive value of the AMCFNet model in populations with a high chance of generating imbalanced gametes and a high risk of miscarriage, the model was applied to couples with chromosomal rearrangements treated by PGT-SR. According to the CCS results, the aneuploidy rate in the 833 blastocysts from PGT-SR treatment cycles was 67.59% (563/833). In embryos that underwent PGT-SR, where at least one parent carried a chromosomal rearrangement, AMCFNet with seven focal points showed high prediction ability, with an AUC of 0.729 [Figure 6B].

For comparison, several well-known deep learning-based video recognition models were trained and evaluated on our PGT-SR blastocyst dataset; the results are displayed in Table 2. The AUCs of these models in the test set varied between 0.603 and 0.633, lower than that of the AMCFNet model. Notably, AMCFNet also outperformed the previously reported deep learning model I3D in predicting the aneuploidy status of patients undergoing PGT treatment [Table 2 and Figure 6A].

Table 2 - Performance of other conventional models for predicting aneuploidy status in the test set.

Model: AUC
I3D[29]: 0.604
Two-Stream I3D[29]: 0.628
R(2+1)D[30]: 0.612
C3D[31]: 0.606
SlowFast[32]: 0.603
TRN[33]: 0.633
TSN[34]: 0.625

AUC: Area under curve; I3D: Inflated 3D ConvNet; C3D: Convolutional 3D network; TRN: Temporal Relation Network; TSN: Temporal segment networks.


Discussion

In this study, we collected continuous time-lapse images of early embryo development (days 0–7) and developed the blastocyst prediction model AMSNet and the euploidy prediction model AMCFNet. Our models were constructed on the ResNet-50 backbone together with the MFS and TSM modules. The MFS module collects multi-focus features, while the TSM module fuses features from different time points, enabling AMSNet and AMCFNet to merge embryo appearance with morphokinetic changes for full-view developmental recognition of time-lapse videos.

As a result of the development and widespread application of time-lapse methods, various algorithms based on video recordings of embryos have been developed for predicting different clinical outcomes, including blastocyst formation,[22] aneuploidy status,[7,23] and clinical fetal heart pregnancy.[24] Previous research mainly focused on a few static images at key time points and did not encompass all morphokinetic changes during the whole in vitro embryo development process.[8,25,26] However, embryo development is a long-term, dynamic, and complicated process of morphological change, so static morphological images at isolated time points cannot accurately reflect the actual embryo developmental state. Our spatio-temporal predictive model, AMSNet, predicted blastocyst development in real time with high accuracy, reaching 74.80–83.18% at 48–96 h of embryo culture, which neither previously reported algorithms nor the existing D3 embryo evaluation system could achieve. This real-time blastocyst formation prediction model could be applied to all non-PGT assisted reproductive therapy cycles in the future to increase the clinical pregnancy and live birth rates following D3 embryo transfer.

Over the past 10 years, investigations of the relationship between embryonic euploidy status and time-lapse videos have shown conflicting results.[10,27,28] It has been difficult to extract usable and meaningful information from the vast quantity of embryo images in time-lapse videos, which need to be annotated and analyzed. In 2021, Lee et al[9] collected continuous time-lapse images and constructed deep learning models (I3D) to predict the euploidy status of embryos. However, they only used a 3D convolutional neural network to analyze single-focal time-lapse videos, and they disregarded the clinical characteristics of patients. In addition, their model was evaluated solely on PGT-A-treated embryos from young patients (age <38 years), excluding older women and couples with chromosomal rearrangements, the populations with the highest aneuploidy rates.

To explore a novel artificial intelligence (AI)-assisted euploid blastocyst identification method for populations with chromosomal rearrangements, we designed a clinical feature extraction network in AMCFNet to extract features from patients' clinical information and fused them with the morphological features of embryos in a cross-modal way to predict the probability of a blastocyst being euploid. Clinical characteristics were extracted from each patient's clinical information using an embedding module and an attention mechanism, and MUTAN, a widely used multimodal feature fusion module, was used to fuse the clinical features of patients with the morphological features of embryos.[20] Considering the difficulty of distinguishing euploid blastocysts within the blastocyst pools of couples with chromosomal rearrangements, we attempted to extract all detailed image information of embryo development from the raw time-lapse videos. In comparison with a recently reported AI model, AMCFNet used all seven-focal time-lapse image data in an attempt to distinguish tiny variations throughout embryo development. We investigated the prediction capability of models based on videos with three, five, and seven focal points; consistent with our assumptions, incorporating video with more focal planes improved model performance. In addition, fusing the time-lapse video data with selected clinical information improved the AUC from 0.658 to 0.765 with three focal points, demonstrating that patients' clinical characteristics can be added to a deep learning model to improve its accuracy in the aneuploidy status prediction task. AMCFNet reliably identified the euploidy status of all tested blastocysts from PGT-treated patients (AUC = 0.778) and maintained good effectiveness in blastocysts with at least one parent carrying a chromosomal rearrangement (AUC = 0.729). Theoretically, individuals who carry a chromosomal translocation have only a 2/18 chance of producing normal offspring; according to our results, 67.59% of the 833 blastocysts tested by PGT-SR were aneuploid, yet until now no morphological evaluation tool could predict the euploidy status of blastocysts in this population. Our findings indicate that AI can predict the euploidy status of embryos by combining patient clinical characteristics with the dynamic three-dimensional morphological changes of embryo development, which could support the development of further optimized human embryo selection criteria in the future. These findings demonstrate that AMCFNet is both a viable tool for PGT-SR patients at high risk of aneuploidy and an accurate model for predicting euploidy status in PGT patients.

The models could be easily applied in IVF centers, since AMSNet and AMCFNet may assist embryologists in choosing day 3 embryos with high potential for blastocyst formation and day 5–7 blastocysts with high potential for euploidy, without the requirement for human annotation. The real-time prediction of blastocyst formation by AMSNet could assist embryologists in choosing day 2–3 embryos with high blastocyst formation potential, which might improve the pregnancy and live birth rates of day 3 embryo transfer. In PGT-SR treatment cycles, the AMCFNet model might be utilized to assist embryologists in assessing euploidy status prior to blastocyst biopsy and PGT. In the near future, the AMCFNet model might also be extended to non-PGT patients to assist embryologists in selecting blastocysts with a high probability of euploid status. AMCFNet aims to minimize, or even replace, the use of invasive techniques in the detection of embryonic euploidy, particularly in populations with a high possibility of creating imbalanced gametes and a high risk of miscarriage, such as older women. Compared with previously published AI embryo ploidy evaluation models, AMCFNet is a real-time method with a higher AUC for evaluating the euploid state of blastocysts in couples with chromosomal rearrangements. Our results indicate that AMSNet may be a valuable tool for evaluating blastocyst formation potential and increasing the live birth rate of D3 embryo transfer, while AMCFNet could be utilized for the preliminary evaluation of blastocyst euploidy before PGT confirmation.

The study has several limitations. Due to its retrospective nature, the study carries some inherent selection bias. Another limitation is the bias introduced by the embryo enrollment criteria: embryos with severe gene disorders do not receive PGT-A or PGT-SR, so the ratio of aneuploidy in tested blastocysts was lower than previously reported. Finally, this study employed data from a single ART facility, which might have introduced bias into the results.

In conclusion, our study demonstrated that the new time-lapse video-based deep learning models AMSNet and AMCFNet had high efficacy for blastocyst formation prediction and blastocyst euploidy prediction, respectively, especially for couples carrying chromosomal rearrangements. They may serve as valuable tools for the early evaluation of the blastocyst formation potential of D3 embryos and for preliminary judgment of the euploidy status of blastocysts, hence optimizing existing embryo selection techniques and enhancing the clinical results of assisted reproductive treatment.

Acknowledgments

We thank Xiu Zhou and Tian Meng for the collection of time-lapse raw image data of embryos.

Funding

This research was supported by grants from the National Natural Science Foundation of China (No. 81270750), the Natural Science Foundation of Guangdong, China (No. 2019A1515011845), Stem Cell Research Funding from the Chinese Medical Association (No. 19020010780), and the Sun Yat-sen University 5010 Clinical Research Project (No. 2023003).

Conflicts of interest

None.

References

1. Alfarawati S, Fragouli E, Colls P, Stevens J, Gutiérrez-Mateo C, Schoolcraft WB, et al. The relationship between blastocyst morphology, chromosomal abnormality, and embryo gender. Fertil Steril 2011; 95: 520–524. doi: 10.1016/j.fertnstert.2010.04.003.
2. Savio Figueira Rde C, Setti AS, Braga DP, Iaconelli A Jr, Borges E Jr. Blastocyst morphology holds clues concerning the chromosomal status of the embryo. Int J Fertil Steril 2015; 9: 215–220. doi: 10.22074/ijfs.2015.4242.
3. Kanakasabapathy MK, Thirumalaraju P, Bormann CL, Kandula H, Dimitriadis I, Souter I, et al. Development and evaluation of inexpensive automated deep learning-based imaging systems for embryology. Lab Chip 2019; 19: 4139–4145. doi: 10.1039/c9lc00721k.
4. Kirkegaard K, Campbell A, Agerholm I, Bentin-Ley U, Gabrielsen A, Kirk J, et al. Limitations of a time-lapse blastocyst prediction model: A large multicentre outcome analysis. Reprod Biomed Online 2014; 29: 156–158. doi: 10.1016/j.rbmo.2014.04.011.
5. Adolfsson E, Andershed AN. Morphology vs morphokinetics: A retrospective comparison of inter-observer and intra-observer agreement between embryologists on blastocysts with known implantation outcome. JBRA Assist Reprod 2018; 22: 228–237. doi: 10.5935/1518-0557.20180042.
6. Carrasco B, Arroyo G, Gil Y, Gómez MJ, Rodríguez I, Barri PN, et al. Selecting embryos with the highest implantation potential using data mining and decision tree based on classical embryo morphology and morphokinetics. J Assist Reprod Genet 2017; 34: 983–990. doi: 10.1007/s10815-017-0955-x.
7. Chavez-Badiola A, Flores-Saiffe-Farías A, Mendizabal-Ruiz G, Drakeley AJ, Cohen J. Embryo Ranking Intelligent Classification Algorithm (ERICA): Artificial intelligence clinical assistant predicting embryo ploidy and implantation. Reprod Biomed Online 2020; 41: 585–593. doi: 10.1016/j.rbmo.2020.07.003.
8. Barnes J, Malmsten J, Zhan Q, Hajirasouliha I, Rosenwaks Z. Noninvasive detection of blastocyst ploidy (euploid vs. aneuploid) using artificial intelligence (AI) with deep learning methods. Fertil Steril 2020; 114: e76. doi: 10.1016/j.fertnstert.2020.08.233.
9. Lee CI, Su YR, Chen CH, Chang TA, Kuo EE, Zheng WL, et al. End-to-end deep learning for recognition of ploidy status using time-lapse videos. J Assist Reprod Genet 2021; 38: 1655–1663. doi: 10.1007/s10815-021-02228-8.
10. Reignier A, Lammers J, Barriere P, Freour T. Can time-lapse parameters predict embryo ploidy? A systematic review. Reprod Biomed Online 2018; 36: 380–387. doi: 10.1016/j.rbmo.2018.01.001.
11. Kuwayama M, Vajta G, Ieda S, Kato O. Comparison of open and closed methods for vitrification of human embryos and the elimination of potential contamination. Reprod Biomed Online 2005; 11: 608–614. doi: 10.1016/S1472-6483(10)61169-8.
12. Kuwayama M. Highly efficient vitrification for cryopreservation of human oocytes and embryos: The Cryotop method. Theriogenology 2007; 67: 73–80. doi: 10.1016/j.theriogenology.2006.09.014.
13. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA, 2016: 770–778.
14. Lin J, Gan C, Han S. TSM: Temporal shift module for efficient video understanding. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South), 2019: 7082–7092.
15. Woo S, Park J, Lee JY, Kweon IS. CBAM: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV). 2018: 3–19.
16. Wang X, Girshick R, Gupta A, He K. Non-local neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018: 7794–7803.
17. Bu Z, Hu L, Su Y, Guo Y, Zhai J, Sun YP. Factors related to early spontaneous miscarriage during IVF/ICSI treatment: An analysis of 21,485 clinical pregnancies. Reprod Biomed Online 2020; 40: 201–206. doi: 10.1016/j.rbmo.2019.11.001.
18. Atik RB, Hepworth-Jones BE, Doyle P. Risk factors for miscarriage. In: Farquharson RG, Stephenson MD, eds. Early Pregnancy. 2010: 9–18. doi: 10.1017/CBO9780511777851.003.
19. Tur-Torres MH, Garrido-Gimenez C, Alijotas-Reig J. Genetics of recurrent miscarriage and fetal loss. Best Pract Res Clin Obstet Gynaecol 2017; 42: 11–25. doi: 10.1016/j.bpobgyn.2017.03.007.
20. Ben-Younes H, Cadene R, Cord M, Thome N. MUTAN: Multimodal Tucker fusion for visual question answering. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy, 2017: 2612–2620.
21. Paszke A, Gross S, Chintala S, Chanan G, Yang E, DeVito Z, et al. Automatic differentiation in PyTorch. In: 31st Conference on Neural Information Processing Systems. Long Beach, USA, 2017: 1–4.
22. Chen TJ, Zheng WL, Liu CH, Huang I, Lai HH, Liu M. Using deep learning with large dataset of microscope images to develop an automated embryo grading system. Fertil Reprod 2019; 1: 51–56. doi: 10.1142/S2661318219500051.
23. Raudonis V, Paulauskaite-Taraseviciene A, Sutiene K, Jonaitis D. Towards the automation of early-stage human embryo development detection. Biomed Eng Online 2019; 18: 120. doi: 10.1186/s12938-019-0738-y.
24. Tran D, Cooke S, Illingworth PJ, Gardner DK. Deep learning as a predictive tool for fetal heart pregnancy following time-lapse incubation and blastocyst transfer. Hum Reprod 2019; 34: 1011–1018. doi: 10.1093/humrep/dez064.
25. Khosravi P, Kazemi E, Zhan Q, Malmsten JE, Toschi M, Zisimopoulos P, et al. Deep learning enables robust assessment and selection of human blastocysts after in vitro fertilization. NPJ Digit Med 2019; 2: 21. doi: 10.1038/s41746-019-0096-y.
26. Schenk M, Kröpfl JM, Hörmann-Kröpfl M, Weiss G. Endometriosis accelerates synchronization of early embryo cell divisions but does not change morphokinetic dynamics in endometriosis patients. PLoS One 2019; 14: e0220529. doi: 10.1371/journal.pone.0220529.
27. Campbell A, Fishel S, Bowman N, Duffy S, Sedler M, Hickman CF. Modelling a risk classification of aneuploidy in human embryos using non-invasive morphokinetics. Reprod Biomed Online 2013; 26: 477–485. doi: 10.1016/j.rbmo.2013.02.006.
28. Kramer YG, Kofinas JD, Melzer K, Noyes N, McCaffrey C, Buldo-Licciardi J, et al. Assessing morphokinetic parameters via time lapse microscopy (TLM) to predict euploidy: Are aneuploidy risk classification models universal? J Assist Reprod Genet 2014; 31: 1231–1242. doi: 10.1007/s10815-014-0285-1.
29. Carreira J, Zisserman A. Quo vadis, action recognition? A new model and the Kinetics dataset. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA, 2017: 6299–6308.
30. Tran D, Wang H, Torresani L, Ray J, LeCun Y, Paluri M. A closer look at spatiotemporal convolutions for action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT, USA, 2018: 6450–6459.
31. Tran D, Bourdev L, Fergus R, Torresani L, Paluri M. Learning spatiotemporal features with 3D convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Santiago, Chile, 2015: 4489–4497.
32. Feichtenhofer C, Fan H, Malik J, He K. SlowFast networks for video recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South), 2019: 6202–6211.
33. Zhou B, Andonian A, Oliva A, Torralba A. Temporal relational reasoning in videos. In: Proceedings of the European Conference on Computer Vision (ECCV). 2018: 803–818.
34. Wang L, Xiong Y, Wang Z, Qiao Y, Lin D, Tang X, et al. Temporal segment networks: Towards good practices for deep action recognition. In: Proceedings of the European Conference on Computer Vision (ECCV). 2016: 20–36.
