3 Graph structure-based data augmentation method

3.1 Datasets and models description

The experiments are carried out on three open databases: The China Physiological Signal Challenge 2018 (CPSC) [16], SHAOXING [17], and Physikalisch-Technische Bundesanstalt-XL (PTBXL) [18], comprising 7,166, 10,615, and 21,837 recordings, respectively. Each dataset includes labels for different types of arrhythmia. The CPSC and PTBXL datasets contain labels suited to multi-label tasks: atrial fibrillation, first-degree atrioventricular block, left bundle branch block, right bundle branch block, premature atrial contraction, premature ventricular contraction, ST-segment depression, and ST-segment elevation for CPSC; and normal ECG, myocardial infarction, ST/T change, conduction disturbance, and additionally hypertrophy for PTBXL. The SHAOXING dataset contains labels suited to a multi-class classification task over seven classes: sinus bradycardia, sinus rhythm, atrial fibrillation, sinus tachycardia, atrial flutter, sinus irregularity, and supraventricular tachycardia. Each dataset is split into training, validation, and test sets at 70%, 15%, and 15%, respectively.
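For illustration, a minimal sketch of the 70%/15%/15% split, assuming the recordings and labels are already loaded as arrays; the use of scikit-learn, the dummy shapes, and the random seed are our assumptions, not stated in the source:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the loaded ECG recordings and labels
# (records x leads x samples; values are placeholders only).
X = np.random.randn(1000, 12, 5000)
y = np.random.randint(0, 7, size=1000)

# 70% train, then split the remaining 30% in half: 15% validation, 15% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)
```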

To investigate the efficacy of the proposed method, we tested it on three well-known models: Residual Network (ResNet) [19], Efficient Network (EfficientNet) [20], and Densely Connected Network (DenseNet) [21]. Since these networks were originally designed for low-resolution image classification, we modified them to operate on 1D signals. For ResNet, we implemented a modification that is popularly used for handling ECG data [22]; to adjust for the signal length, the pooling size of the last block is changed from 2 to 1. For EfficientNet, we implemented a modified version of EfficientNet-B0: the 2-dimensional \(N \times N\) filters are replaced with 1-dimensional filters of length \(N^2\), and a stride operation is added at the fifth stage. For DenseNet, we set the growth rate to 12 and the filter size to 16 throughout the network, with each block consisting of 6 bottleneck layers. The networks are trained with the Adam optimizer at a learning rate of 0.001, using dropout regularization with \(p=0.1\) and a batch size of 32. \(l_2\) regularization with a coefficient of \(10^\) is applied.
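A minimal PyTorch sketch of the kind of 1D adaptation described above; the block layout, kernel sizes, channel counts, and output dimension are our assumptions (the paper's modified ResNet follows [22]), and the `weight_decay` value is a placeholder since the \(l_2\) coefficient's exponent is not recoverable from the text:

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """A basic 1D residual block: Conv1d replaces the 2D convolutions."""
    def __init__(self, channels, kernel_size=15, dropout=0.1):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Dropout(dropout),  # dropout with p = 0.1, as in the text
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)  # skip connection

model = nn.Sequential(
    nn.Conv1d(12, 64, kernel_size=15, padding=7),  # 12-lead ECG input
    ResidualBlock1D(64),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 9),  # output size depends on the dataset's label set
)

# Adam with lr = 0.001 as in the text; weight_decay carries the l2 penalty
# (1e-4 is a placeholder assumption, not the paper's value).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```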

3.2 Performance evaluation and comparison

To evaluate the performance of our augmentation method, we first compared the performance gain of Graph Augmentation against no augmentation as well as against the normal augmentations. We also show that Graph Augmentation can be applied on top of the normal augmentations, with an additive effect on performance. The results of these experiments are summarized in Table 1, where each entry is an F1 score averaged over multiple repeated experiments. Although the gain from the normal augmentations depends on an optimal parameter combination tuned by RandAugment, Graph Augmentation and the normal augmentations show similar average gains across every model and dataset. Moreover, when Graph Augmentation and the normal augmentations are applied at the same time, an additional performance gain is observed across every dataset and model. From these experiments we draw two conclusions. First, Graph Augmentation alone has an impact on network performance similar to that of not just one but multiple well-tuned normal augmentation methods. Second, the gain from Graph Augmentation is orthogonal to that of the normal augmentations, resulting in an additive effect on performance.
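For reference, a sketch of how the reported F1 scores could be computed for the two task types; macro-averaging and the 0.5 decision threshold are our assumptions, as the source does not state them:

```python
import numpy as np
from sklearn.metrics import f1_score

# Multi-label case (CPSC, PTBXL): threshold per-class probabilities.
probs = np.random.rand(100, 9)               # placeholder network outputs
y_true = np.random.randint(0, 2, (100, 9))   # placeholder binary labels
y_pred = (probs > 0.5).astype(int)
f1_multilabel = f1_score(y_true, y_pred, average="macro")

# Multi-class case (SHAOXING): take the argmax over the 7 classes.
logits = np.random.rand(100, 7)
labels = np.random.randint(0, 7, 100)
f1_multiclass = f1_score(labels, logits.argmax(axis=1), average="macro")
```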

Table 1 Performance of neural network models for each dataset trained according to different augmentation schemes

3.3 Augmentation parameters optimization

Depending on how the augmentation parameters are tuned, they can have a critical effect on the performance of the trained networks. For maximal performance gain, we tuned these parameters for both the normal augmentations and Graph Augmentation using RandAugment. For the normal augmentations, we observed that the RandAugment scheme clearly outperforms hand-searched parameter combinations.

We tried six different augmentation parameter combinations, measuring the performance gain achieved by each, as shown in the sketch below. As the summary of performance gains for ResNet in Fig. 3 shows, the gain achieved by augmentation methods is usually quite marginal. In addition, applying multiple augmentations on top of each other does not guarantee a performance gain [11]. However, since the gain from Graph Augmentation is on par with RandAugment, we conclude that it provides a considerable performance gain compared to other single augmentation methods.
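A minimal sketch of a RandAugment-style scheme for 1D signals, parameterized by the number of operations and a shared intensity as in the grid search of Fig. 3; the operation pool and the grid values (chosen here to give six combinations) are our assumptions:

```python
import random
import numpy as np

def scale(x, m):      # amplitude scaling
    return x * (1.0 + 0.1 * m * random.uniform(-1, 1))

def shift(x, m):      # temporal shift along the time axis
    return np.roll(x, int(0.01 * m * x.shape[-1]), axis=-1)

def add_noise(x, m):  # additive Gaussian noise
    return x + 0.01 * m * np.random.randn(*x.shape)

OPS = [scale, shift, add_noise]

def rand_augment_1d(x, number, intensity):
    """Apply `number` randomly chosen ops, each at strength `intensity`."""
    for op in random.choices(OPS, k=number):
        x = op(x, intensity)
    return x

# Grid search over the two RandAugment inputs, as in Fig. 3
# (placeholder grid yielding six combinations).
for number in (1, 2, 3):
    for intensity in (1, 2):
        # train a model with rand_augment_1d applied to each batch,
        # then record the F1 gain for this (number, intensity) pair
        ...
```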

Fig. 3

The performance gain, in terms of the F1 score, of ResNet trained with six different augmentation parameter combinations for RandAugment. The number and intensity parameters are the inputs to RandAugment and are explored by grid search

3.4 Robustness effect of graph augmentation

We hypothesized that the main reason for the performance gain achieved by Graph Augmentation lies in the network's robustness to graph structure-induced perturbations. To elaborate, if the network trained with Graph Augmentation is robust to these perturbations, it will make more correct predictions on perturbed data samples at test time. Demonstrating this robustness requires a test set with large graph structure-based perturbations. An ideal way to build such a test set would be to introduce lead-wise positional perturbations at the measurement level. Since this is intractable, we instead constructed perturbed data samples using an adversarial attack method [23], which finds a (sub)optimal perturbation that changes the predicted label of the data as effectively as possible. Therefore, by demonstrating adversarial robustness, we believe sufficient evidence is provided to explain the performance gain achieved by Graph Augmentation. In addition, we believe that Graph Augmentation and the normal augmentation schemes have an additive effect on performance for the following reason. In contrast to Graph Augmentation, the input data generated by normal augmentations contain perturbations limited to each waveform rather than the relations between waveforms. As a result, normal augmentation reduces overfitting to noise within each waveform signal, which is independent of the effect of Graph Augmentation. For this reason, the two augmentation methods build up robustness along orthogonal directions and have additive effects when applied at the same time.
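As a concrete illustration of constructing such perturbed samples, the sketch below uses a single-step gradient-sign (FGSM-style) attack as a stand-in; whether the attack in [23] is of this form is our assumption, and the cross-entropy loss is a single-label simplification:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step gradient-sign perturbation bounded by epsilon (FGSM-style)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # single-label simplification
    loss.backward()
    # Move each sample in the direction that most increases the loss,
    # i.e., the perturbation that best pushes the prediction off the label.
    return (x + epsilon * x.grad.sign()).detach()
```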

In the robustness experiments, we used a ResNet trained on the CPSC dataset and evaluated F1 scores as a function of the perturbation strength \(\epsilon\). We assessed the effect of Graph Augmentation by comparing the F1-score decay of a network trained with RandAugment alone against that of a network trained with Graph Augmentation added on top. As shown in Fig. 4, the network trained with both augmentations is more robust by a significant margin for every \(\epsilon\) tested. Thus, we conclude that Graph Augmentation offers robustness to adversarially constructed perturbations.
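A sketch of the F1-decay evaluation over increasing \(\epsilon\), reusing the `fgsm_attack` helper above; the \(\epsilon\) grid, the macro averaging, and the `model`/`test_loader` objects are our assumptions:

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_under_attack(model, loader, epsilon):
    """Attack each test batch, then score predictions against true labels."""
    preds, trues = [], []
    model.eval()
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        preds.append(model(x_adv).argmax(dim=1).cpu().numpy())
        trues.append(y.cpu().numpy())
    return f1_score(np.concatenate(trues), np.concatenate(preds), average="macro")

# Sweep the perturbation strength to trace the F1 decay curve (placeholder grid).
for epsilon in (0.0, 0.01, 0.02, 0.05):
    print(epsilon, f1_under_attack(model, test_loader, epsilon))
```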

Fig. 4

The adversarial robustness of ResNets trained using RandAugment alone and using both RandAugment and Graph Augmentation, according to different perturbation strengths \(\epsilon\)
