Evaluation of somatic copy number variation detection by NGS technologies and bioinformatics tools on a hyper-diploid cancer genome

Study design

This study rigorously evaluated six CNV callers: ascatNgs, CNVkit, FACETS, DRAGEN, HATCHet, and Control-FREEC. These callers have undergone thorough evaluation in peer-reviewed studies [13,14,15,16,17,18,19] or have been utilized in substantial cohort studies [20,21,22]. To comprehensively assess their performance, all callers were benchmarked using six NGS short-read datasets from the Somatic Mutation Working Group. These datasets were prepared under various experimental conditions, such as variations in DNA input amount, fresh vs. FFPE samples, tumor purity, and the choice between WGS and WES [9] (Fig. 1).

Fig. 1

Overall study design. A total of 21 WGS replicates and 12 WES replicates performed across six sequencing centers were used to determine the consistency and reproducibility of CNV calling across sequencing centers, CNV callers, and sequencing platforms (WGS vs WES). High-confidence CNV clusters in the cancer cell line were defined by the consensus scores across the 126 call sets (21 WGS replicates × six callers), in combination with supporting evidence from three orthogonal technologies: Affymetrix CytoScan, Illumina BeadChip, and Bionano. Finally, the defined high-confidence CNV clusters were used to evaluate precision and detection sensitivity under various experimental conditions, such as the amount of DNA input (1–250 ng), library preparation protocol (TruSeq, TruSeq-Nano, and Nextera Flex), tumor content range (5–100%), read coverage (10X–300X), and FFPE samples

Initially, to evaluate reproducibility across sequencing centers, callers, and platforms, the six CNV callers were applied to 21 WGS replicates sequenced across six different sequencing centers: Novartis (NV), Illumina (HiSeq (IL) and NovaSeq (NS)), Fudan University (FD), European Infrastructure for Translational Medicine (EA), National Cancer Institute (NC), and Loma Linda University (LL). This resulted in 126 call sets (Fig. 1), stratified by center and caller, to assess concordance of CNV calls.

Subsequently, the callers were applied to a WES dataset comprising 12 replicates across six centers. A comparison between CNV calls from WGS and WES datasets was conducted to evaluate concordance across these different technologies. Using the WGS datasets, a reference CNV call set was established and validated against CNV calls derived from three orthogonal technologies: Affymetrix CytoScan, Illumina BeadChip, and Bionano.

Furthermore, to evaluate the impact of non-analytical factors, the six CNV callers were employed across the following three datasets:

1. The WGS library preparation protocol (LBP) dataset, encompassing samples prepared with three library preparation protocols (TruSeq, TruSeq-nano, and Nextera Flex) and varying DNA input amounts (1–250 ng). This dataset assessed how DNA input amount and library preparation protocol affected CNV calling.

2. The WGS tumor sample purity (SPP) dataset, comprising tumor and normal gDNA mixtures with tumor purities ranging from 5 to 100% and read coverages ranging from 10X to 300X. This dataset explored the roles of tumor purity and read coverage in CNV calling.

3. The FFPE dataset, which included four fixing-time points in both WGS and WES analyses. This dataset investigated how cell fixing time influenced CNV calling across the different callers.

Concordance of CNV calls across six callers

Our evaluation focused on the consistency and reproducibility of CNV calls within the WGS datasets generated by the six callers, comprising 21 replicates from six sequencing centers. We began by assessing the total genomic regions spanned by gain, loss, and LOH segments across replicates for each caller. ascatNgs, DRAGEN, and CNVkit consistently identified copy gains averaging about 1500 Mb (Additional file 1: Fig. S1A). These three callers also maintained relative consistency in copy number loss regions (Additional file 1: Fig. S1B). FACETS displayed reasonable consistency in gain and loss genome regions, except for a few outliers (Fig. 2A, Additional file 1: Fig. S1A). Conversely, HATCHet and Control-FREEC showed notable inconsistency across replicates in both gains and losses (Fig. 2A, Additional file 1: Fig. S1A). DRAGEN, Control-FREEC, and CNVkit reported fewer LOH regions compared to FACETS and HATCHet (Additional file 1: Fig. S1C). While CNVkit and DRAGEN maintained consistency at around 1000–1500 Mb, the majority of FACETS and HATCHet calls surpassed the 1500-Mb range, with some larger outliers.

Fig. 2

Consistency of CNV calls. A CNV gain regions, B CNV loss regions, and C CNV LOH regions supported by each replicate across five callers. Note: ascatNgs did not report LOH results. Heatmaps of Jaccard index of genome regions with gain (D) or loss (E) calls in 21 WGS and 12 WES replicates across six callers

Further examination of CNV regions across replicates and callers revealed that ascatNgs, CNVkit, and DRAGEN consistently exhibited the highest consensus in identifying CNV gains (Fig. 2A). In contrast, the remaining three callers produced a higher number of unique calls per replicate, with HATCHet presenting the most unique gain regions. Similar trends were evident in the identified loss regions (Fig. 2B). DRAGEN and FACETS demonstrated higher consistency in LOH calls, with most CNV regions supported across all 105 call sets (21 replicates × 5 callers) (Fig. 2C). Conversely, HATCHet showed inconsistencies across replicates, with numerous clusters specific to single replicates or subsets of replicates (Fig. 2C). Comparing CNV calling results between WGS and WES through Jaccard indexes revealed that clustering was primarily influenced by the caller, followed by the platform (WGS vs WES), with minimal sequencing center impact (Fig. 2D, E). Stratifying comparisons by platform showed similar distributions of Jaccard indexes within and across sequencing centers, confirming the minimal impact of sequencing center (Additional file 1: Fig. S1D, E). Within WGS replicates, ascatNgs, CNVkit, and DRAGEN displayed the most consistency for gains and losses (Additional file 1: Fig. S1D, E). FACETS exhibited high concordance for most within-WGS comparisons but lower concordance for some. Control-FREEC and HATCHet displayed the most variability.

Within WES replicates, all callers showed lower concordance compared to WGS, especially for losses (Additional file 1: Fig. S1D, E). CNVkit and DRAGEN maintained the highest concordance within WES replicates, while Control-FREEC and HATCHet demonstrated the lowest concordance for losses. Cross-platform comparisons revealed moderate concordance for gains and more variable, lower concordance for losses across all callers (Additional file 1: Fig. S1D, E). CNVkit and DRAGEN displayed the highest concordance for both gains and losses between WGS and WES.

Additionally, we measured concordance at the segment, gene, and exon levels using the Jaccard index for CNV gains and losses detected from WGS replicates. Intra-caller concordance surpassed inter-caller concordance overall. Notably, the heatmap for loss calls highlighted HATCHet as distinct from the other callers, likely because it generated more unique loss calls. The distribution patterns within and across sequencing centers were similar, suggesting minimal impact on CNV calling when the same WGS library preparation protocol is followed (Additional file 1: Fig. S2F, G).
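For readers less familiar with this metric, a region-level Jaccard index between two call sets is the total base pairs called by both, divided by the total base pairs called by either. A minimal sketch, assuming half-open (start, end) intervals on a single chromosome (not the authors' implementation):

```python
def merge(intervals):
    """Merge overlapping or touching (start, end) intervals."""
    out = []
    for s, e in sorted(intervals):
        if out and s <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], e))
        else:
            out.append((s, e))
    return out

def intersect_bp(a, b):
    """Total overlap length (bp) between two merged interval lists."""
    i = j = total = 0
    while i < len(a) and j < len(b):
        s, e = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if s < e:
            total += e - s
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return total

def jaccard(calls_a, calls_b):
    """Jaccard index of genomic coverage between two CNV call sets."""
    a, b = merge(calls_a), merge(calls_b)
    inter = intersect_bp(a, b)
    union = sum(e - s for s, e in a) + sum(e - s for s, e in b) - inter
    return inter / union if union else 0.0
```

For example, `jaccard([(0, 100)], [(50, 150)])` gives 50/150 ≈ 0.33; in practice the same computation would be run per chromosome and per CNV class (gain or loss).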

Inconsistency between replicates due to inaccurate assessment of genome ploidy

We then sought to understand the inconsistent CNV calling observed in the 21 WGS datasets for FACETS and HATCHet. These inconsistencies were likely due to an inaccurate assessment of genome ploidy by these callers, resulting in excessive gain or loss calls (Additional file 1: Fig. S3). Genome ploidy significantly influences how CNV callers identify gains or losses. Some callers assume the genome median coverage to be copy-neutral (CN = 2), identifying segments with higher coverage than the median as gains and those with lower coverage as losses. However, this assumption fails when the genome median coverage deviates significantly from the copy-neutral level, particularly in samples with numerous alterations.

The cell line used in this study, with an overall genome ploidy of 2.85 reported in the literature [23], reflects the challenge posed by deviations from the typical diploid level in somatic samples [24]. CNVkit was adjusted with manual re-centering based on this known genome ploidy. Manual re-centering becomes necessary when the overall genome ploidy differs significantly from the diploid level (CN = 2) assumed by CNVkit. Without this adjustment, CNVkit calls gains and losses relative to the median coverage, which can be inaccurate when the median coverage is far from the copy-neutral coverage. An instance was noted on chr21, where CNVkit without manual re-centering incorrectly identified a deletion (DEL) in a region with a B-allele frequency close to 50%, consistent with an even copy number and no LOH. Because the region's coverage was lower than the genome median (which approximates the overall genome ploidy of 2.85), CNVkit without manual re-centering interpreted it as a loss.
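The arithmetic behind re-centering can be sketched as shifting median-centered log2 copy ratios by log2(ploidy/2), so that a truly copy-neutral (CN = 2) segment lands at a log2 ratio of 0. This is an illustrative sketch with hypothetical thresholds, not CNVkit's actual code:

```python
import math

def recenter_log2(log2_ratios, tumor_ploidy):
    """Shift median-centered log2 copy ratios so CN = 2 segments land at 0.

    In a hyperdiploid tumor (e.g., ploidy 2.85), the median coverage reflects
    the overall ploidy, so a CN = 2 segment sits below the median and would
    otherwise be scored as a loss.
    """
    shift = math.log2(tumor_ploidy / 2.0)
    return [r + shift for r in log2_ratios]

def segment_call(log2_ratio, gain_thr=0.2, loss_thr=-0.2):
    """Classify a segment from its log2 ratio (hypothetical cutoffs)."""
    if log2_ratio >= gain_thr:
        return "gain"
    if log2_ratio <= loss_thr:
        return "loss"
    return "neutral"
```

In the chr21 scenario described above, a CN = 2 segment in a ploidy-2.85 genome has a median-centered log2 ratio of log2(2/2.85) ≈ −0.51 and is mislabeled a loss; after re-centering it moves to 0 and is called neutral.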

We also examined ploidy based on Control-FREEC calling results (see “Methods”). The seven WGS runs with excessive gain calls (depicted in Fig. 2A) consistently showed a ploidy estimate of 5 (Additional file 1: Table S1). These discrepancies underscore the critical importance of understanding and adjusting for genome ploidy variations in CNV calling methodologies, to prevent misinterpretation when a genome departs from the expected diploid level.

Establishment of CNV reference call set for the cancer cell line (HCC1395)

From the analysis of the six CNV callers applied to the 21 WGS replicates dataset, we devised an integration strategy to construct a consensus CNV call set (Additional file 1: Fig. S4A). The process involved segmenting the data based on overlapping call sets to identify distinct segments (Additional file 1: Fig. S4B).
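The segmentation step described above amounts to collecting every breakpoint reported by any call set and splitting the genome at those positions, so each resulting segment is either wholly inside or wholly outside each caller's calls. A minimal sketch under that assumption (half-open intervals, one chromosome):

```python
def distinct_segments(call_sets):
    """Split a chromosome into minimal segments bounded by every breakpoint
    observed in any caller's call set.

    call_sets: list of call sets, each a list of (start, end) intervals.
    Returns consecutive (start, end) segments; gap segments are included
    and can be filtered afterwards by overlap with the original calls.
    """
    points = sorted({p for calls in call_sets
                       for seg in calls
                       for p in seg})
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]
```

For instance, two call sets `[(0, 100)]` and `[(50, 150)]` yield the distinct segments (0, 50), (50, 100), and (100, 150), which can then each be scored for support by every replicate and caller.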

We established a scoring system to assess the confidence level of each CNV interval by evaluating the consistency of CNV calls across replicates and among different callers. The initial scoring assessed reproducibility across the 21 replicates for each of the six callers (Additional file 1: Fig. S5A). This score ranged from 0 to 3, with higher scores indicating stronger consistency within the same caller. The sum of these scores across all callers determined the confidence level of each CNV interval, categorized as strong, medium, weak, or neutral (Additional file 1: Fig. S5B). Higher confidence levels indicated stronger support from multiple sequencing centers and callers, reinforcing the reliability of the identified CNV intervals.
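The two-stage scoring can be sketched as follows; the replicate-fraction cutoffs and total-score boundaries here are hypothetical stand-ins (the actual thresholds are defined in Additional file 1: Fig. S5), but the structure — a 0–3 per-caller reproducibility score summed into a four-level confidence label — mirrors the description above:

```python
def replicate_score(n_supporting, n_replicates=21):
    """Per-caller reproducibility score (0-3) for one CNV interval.
    Cutoffs on the supporting fraction are illustrative, not the paper's."""
    frac = n_supporting / n_replicates
    if frac >= 0.9:
        return 3
    if frac >= 0.5:
        return 2
    if frac > 0:
        return 1
    return 0

# Hypothetical total-score boundaries for the four confidence levels
CONFIDENCE_CUTOFFS = [(15, "strong"), (9, "medium"), (4, "weak"), (0, "neutral")]

def confidence_level(support_by_caller):
    """Sum per-caller scores and map the total to a confidence label.
    support_by_caller: {caller_name: number of supporting replicates}."""
    total = sum(replicate_score(n) for n in support_by_caller.values())
    for cutoff, label in CONFIDENCE_CUTOFFS:
        if total >= cutoff:
            return label
```

An interval supported by all 21 replicates of all six callers scores 3 × 6 = 18 and is labeled "strong"; one seen in a single replicate of a single caller scores 1 and falls to "neutral" under these illustrative cutoffs.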

Upon examination, most CNV calls for gains, losses, and LOH intervals received a group score of 3, signifying reproducibility within and across groups using the same caller. However, inconsistencies were observed for specific calls between replicates: Control-FREEC and FACETS for CNV gains (Fig. 3A), HATCHet for CNV losses (Fig. 3C), and Control-FREEC and HATCHet for LOH calls (Fig. 3E). Consequently, more CNV regions received lower scores (0 and 1) in these instances.

Fig. 3

Genomic regions of CNV with site scores and confidence levels. Genomic region of copy number gain called by each caller with site scores (A) and confidence levels (B), copy number loss (C and D), and LOH (E and F)

Further refinement utilized consensus evidence across all six callers to categorize CNV intervals into four confidence levels. Strong-evidence CNV intervals were widely supported by most replicates and callers, while weaker confidence levels had support from fewer replicates or specific callers. Neutral-evidence CNV intervals were only substantiated by specific sequencing centers or CNV callers. The resulting analysis showcased 1518.5 Mb of genome regions strongly supported for copy gains (Fig. 3B) and 1456.1 Mb for LOH regions (Fig. 3F). However, for loss intervals, the consensus evidence shifted away from strong support, with more regions associated with neutral evidence (1498.9 Mb), primarily backed by unique calls from HATCHet (Fig. 3D).

Validation of CNV call sets with orthogonal methods

The consensus calls from the 126 WGS datasets were validated against CNV calls derived from three different orthogonal methods: Affymetrix CytoScan, Illumina BeadChip, and Bionano. This validation process involved comparing CNVs identified by these orthogonal methods with those in the consensus call set, categorized by confidence levels for both gain and loss clusters (Additional file 1: Fig. S4C).

Strong-evidence gain calls exhibited a high validation rate of 88% when supported by at least one of the three orthogonal technologies (Table 1). Similarly, all strong-evidence loss calls were validated by at least one orthogonal technology. However, as the confidence level decreased, the validation rate decreased accordingly. For CNV gains and LOH, the NGS strong-evidence regions were well supported by all three orthogonal technologies (Fig. 4 and Additional file 1: Fig. S7). Regarding CNV losses, Illumina BeadChip and Bionano corroborated the NGS strong-evidence regions, while Affymetrix CytoScan aligned more with the NGS neutral-evidence regions (Additional file 1: Fig. S8). Nevertheless, evidence from Affymetrix CytoScan alone was not sufficient to support these CNV calls.

Table 1 Confirmation of CNV benchmark sets by three orthogonal methods (based on regions, Mb)

Fig. 4

Consistency of collapsed gain clusters in comparison to three orthogonal technologies. A Chromosome view of gain intervals in comparison to three orthogonal technologies. B Pairwise concordance (Jaccard index) of gain intervals with different confidence levels in comparison to three orthogonal technologies. C Upset plot of concordance of gain intervals with different confidence levels in comparison to three orthogonal technologies. Intersections of different NGS confidence levels were not included

The CNV benchmark sets comprised strong-evidence calls and medium/weak-evidence calls supported by at least two of the three orthogonal technologies. Neutral-evidence calls were excluded from the benchmark set. Conflicting CNV regions, where gain and loss were identified simultaneously, were handled carefully: if a strong-evidence NGS CNV gain region was identified as a loss by orthogonal technologies and not supported as a gain by any of them, it was removed from the benchmark set. However, if any orthogonal technology detected a gain in a strong-evidence NGS CNV gain region, the region was retained in the benchmark set even if other orthogonal technologies identified it as a loss.
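The inclusion rules above can be condensed into a small decision function. This is a sketch of the stated logic for gain regions (the function name and argument forms are ours, not the authors'):

```python
def in_benchmark(confidence, n_orthogonal_support, conflicting_loss_support=False):
    """Decide whether a consensus CNV gain interval enters the benchmark set.

    confidence: "strong", "medium", "weak", or "neutral" (NGS consensus level)
    n_orthogonal_support: how many of the 3 orthogonal technologies support the gain
    conflicting_loss_support: whether any orthogonal technology calls this region a loss
    """
    if confidence == "neutral":
        # Neutral-evidence calls are excluded outright
        return False
    if confidence == "strong":
        # Removed only when orthogonal methods call the opposite state
        # and none of them support the gain
        return not (conflicting_loss_support and n_orthogonal_support == 0)
    # Medium/weak-evidence calls need support from >= 2 of 3 orthogonal technologies
    return n_orthogonal_support >= 2
```

So a strong-evidence gain contradicted by an orthogonal loss call survives as long as at least one orthogonal technology also sees the gain, while a medium-evidence gain with a single orthogonal supporter does not.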

Furthermore, breakpoints were trimmed based on evidence from orthogonal technologies. If a gain end and a loss end overlapped and the gain end was supported by any orthogonal technologies, the gain end was retained, while the loss end was trimmed.

This rigorous validation process resulted in CNV benchmark sets for HCC1395, comprising 346 high-confidence gain calls spanning 1525.6 Mb, 33 high-confidence loss calls covering 87.9 Mb, and 320 high-confidence LOH calls encompassing 1490.4 Mb (Additional file 1: Table S2, Fig. S9).

Effect of tumor content/FFPE/DNA inputs/library prep protocol on calling results

To evaluate the influence of non-analytical factors on the performance of each CNV caller, we considered three datasets: FFPE, tumor sample purity (SPP), and DNA input/library preparation (LBP). First, to assess the impact of FFPE, we applied all callers to the FFPE dataset, which included four fixing-time points (1, 2, 6, and 24 h), each sequenced with both WGS and WES. CNV results from WGS and WES of the FFPE samples (FFG and FFX, respectively), along with CNV results from WGS and WES of fresh samples, were evaluated by calculating precision and recall, using the high-confidence call set as the truth. The accuracy of callers varied across both WGS/WES and FFPE/fresh datasets, as shown by the precision and recall of the six callers on fresh or FFPE DNA in WGS/WES replicates (Fig. 5A, Additional file 1: Fig. S10). Overall, we observed an impact of DNA damage from the FFPE process on the precision and recall of CNV calling by all six callers (Additional file 1: Fig. S10). In the WGS dataset, the precision of loss calls dropped significantly (p-value of 1.49e − 07, one-tailed t-test, Additional file 1: Fig. S10A). Even though CNV calling by ascatNgs, CNVkit, and DRAGEN was also impacted in the FFPE datasets, their calling reproducibility remained consistent across replicates, particularly for the WGS platform (Additional file 1: Fig. S10B-C). The analysis indicates that increasing FFPE fixing time does not significantly impact the accuracy of CNV calls (Additional file 1: Fig. S11). Furthermore, the accuracy of CNV calls derived from FFPE samples was higher in WGS datasets than in WES datasets and depended primarily on the specific caller employed. CNVkit demonstrated relatively higher performance in detecting copy number gain, whereas HATCHet and Control-FREEC exhibited comparatively lower performance. DRAGEN achieved high performance in detecting both copy number gain and copy number loss, except for the FFPE WES datasets.
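Precision and recall here are naturally computed over base pairs of genomic regions rather than over discrete calls. A minimal self-contained sketch of that evaluation against a truth set (half-open intervals, one chromosome; not the authors' evaluation code):

```python
def _merge(intervals):
    """Merge overlapping or touching (start, end) intervals."""
    out = []
    for s, e in sorted(intervals):
        if out and s <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], e))
        else:
            out.append((s, e))
    return out

def _overlap_bp(a, b):
    """Total overlap (bp) between two merged interval lists."""
    i = j = total = 0
    while i < len(a) and j < len(b):
        s, e = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if s < e:
            total += e - s
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return total

def region_prf(called, truth):
    """Base-pair precision, recall, and F1 of called CNV regions vs. a truth set."""
    called, truth = _merge(called), _merge(truth)
    tp = _overlap_bp(called, truth)
    called_bp = sum(e - s for s, e in called)
    truth_bp = sum(e - s for s, e in truth)
    precision = tp / called_bp if called_bp else 0.0
    recall = tp / truth_bp if truth_bp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, a caller reporting a 100-bp gain of which only the first 50 bp lie in the truth set achieves precision 0.5, recall 1.0, and F1 ≈ 0.67; the same computation, run per CNV class and condition, underlies the F1 scores in Fig. 5.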

Fig. 5

The accuracy of copy number gain and loss calls per genome region from the Fresh/FFPE, WGS/WES, tumor purity, and library preparation datasets. F1 score of gain and loss calls by six callers on Fresh/FFPE and WGS/WES (A), the tumor purity dataset (B), and the library preparation dataset (C). FFG: FFPE on WGS, FFX: FFPE on WES

To assess the effect of different tumor purities and read coverages on CNV calling, we analyzed the SPP dataset, which included samples with different combinations of tumor purity (5%, 10%, 20%, 50%, 75%, 100%) and read coverage (10X, 30X, 50X, 80X, 100X, 200X, 300X) (Fig. 5B). For ascatNgs and DRAGEN, the calling accuracy of both gains and losses dropped drastically at tumor purities of 20% or less, and performance was not significantly impacted by increasing read coverage beyond 30X. At low tumor content (< 20%), ascatNgs, CNVkit, Control-FREEC, and HATCHet failed to make any loss calls.

To assess the impact of DNA input amount and library preparation method, we used the LBP dataset, which consisted of samples with different combinations of DNA input (1, 10, 100, and 250 ng) and library kit (TruSeq, TruSeq-nano, Nextera). There were no significant differences across DNA input amounts and library preparation methods for ascatNgs, CNVkit, and DRAGEN (Fig. 5C); their performance was consistent based on F1 scores, and even low DNA input amounts achieved high performance for copy number gain and loss calls. Inconsistent results were observed for Control-FREEC and HATCHet. FACETS achieved consistent results across all inputs with Nextera and at 10 and 100 ng input with TruSeq-nano, but did not produce any loss calls for data derived from 1 ng with the TruSeq-nano protocol (Fig. 5C).
