Radiological complexity of nuclear facilities: an information complexity approach to workplace monitoring

Nuclear energy is crucial for achieving net-zero carbon emissions. A major challenge in the nuclear sector is ensuring the safety of radiation workers and the environment while remaining cost-effective. Workplace monitoring is key to protecting workers from the risks of ionising radiation. Traditional monitoring involves radiological surveillance via installed radiation monitors, which continuously record measurements such as radiation fields and airborne particulate radioactivity concentrations, especially where sudden radiation changes could significantly impact workers. However, this approach struggles to detect incremental changes in the facility's radiological measurements over long periods of time. To address this limitation, we propose abstracting a nuclear facility as a complex system. We then quantify the information complexity of the facility's radiological measurements using an entropic metric. Our findings indicate that the inferences and interpretations from our abstraction have a firm radiological basis and can enhance current workplace monitoring systems. We suggest the implementation of a radiological complexity-based alarm system to complement existing radiation level-based systems. The abstraction synthesised here is independent of the type of nuclear facility, and hence is a general approach to workplace monitoring at a nuclear facility.

During the 26th Conference of the Parties (COP26) to the United Nations Framework Convention on Climate Change in 2021, India set forth its ambitious target of achieving net-zero carbon emissions by 2070 [1]. A pivotal aspect of realising this goal lies in the development of low-carbon electricity systems that are both cost-effective and scalable. Nuclear energy, with its low carbon footprint, aligns well with this imperative [2]. However, a significant challenge in the nuclear energy sector is the delicate balance between cost-effectiveness and the management of risks from ionising radiation. It is crucial to assess and mitigate the risks associated with ionising radiation without overly compromising the contributions of nuclear energy to sustainable development.

The International Atomic Energy Agency (IAEA) is the international statutory body dedicated to establishing safety standards for radiological protection and minimising the risks posed by ionising radiation. Occupational exposure to ionising radiation can occur across a spectrum of sectors including industries, medical institutions, educational and research establishments, and nuclear fuel cycle facilities. Ensuring appropriate levels of radiation protection for workers is fundamental for the safe and justified use of radiation, radioactive material, and nuclear energy.

Over the years, with improvements in radiation protection practices, the radiation exposures, and consequently the risk from ionising radiation to workers and the public, have become well-controlled. For perspective, the average annual effective dose to occupational workers at nuclear reactors across the world between 2010 and 2014 was 0.5 mSv, compared to 4.1 mSv between 1975 and 1979 [3].

Workplace monitoring forms an integral part of a radiation protection programme to control exposures to workers at a nuclear facility. According to general safety guidelines on occupational radiation protection [4], the frequency of routine monitoring at a workplace should be commensurate with the occupancy factor of radiation workers in the location and the anticipated changes in the radiation environment. In scenarios where a sudden, unexpected increase in radiation exposure might result in a significant radiation dose being received by a worker, provisions for continuous monitoring should be in place. The set of continuous monitors, which includes both radiation field and airborne contamination monitors, is collectively referred to as the radiation monitoring systems (RMS).

Merely recording RMS measurement data as part of workplace monitoring is not adequate. It is imperative to analyse the trends in the measurement data to identify any deterioration of engineered safety controls or deviations from safe operating procedures (SOPs). This analysis can be initially performed by plotting the RMS data as multiple time series to observe their trends and by calculating statistics such as the mean and variance. While it is relatively straightforward to identify non-decreasing trends in RMS measurement data, pinpointing gradual changes is non-trivial. Gradual changes are incremental changes in measurement data over a long period of time. These changes (which might stem from faults in shielding doors, deviations from safe operating protocols by human operators, or natural wear and tear of radioactive handling equipment) do not leave prominent signatures in the radiation measurement timeseries trends.

These gradual changes are influenced by multiple factors or components within a nuclear facility. Ideally, we could model the contribution of each of these factors to the change in radiological measurands, i.e. define a set of equations whose solution would describe the evolution of radiological parameters over time. These evolution functions $f_x(t)$ could aid in identifying gradual changes in radiological measurands over time. However, the interplay of multiple components within the nuclear facility may render this exercise intractable.

To address this challenge, we propose an abstraction (viewing complex entities by focusing on their essential characteristics, rather than detailed, operational aspects) of a nuclear facility as a complex system. A complex system, broadly speaking, consists of a collection of interacting elements/components which exhibit behaviour that cannot be simply expressed as the sum of the behaviours of these interacting elements [5]. In the context of our abstraction, we have defined radiological complexity as the set of externalities which affect the radiological measurands in a nuclear facility (the system). These externalities (components) can be a set of processes through which a nuclear facility cycles during the course of its normal operations; fluctuations in ventilation patterns; or changes in the quality of shielding and other engineered safety measures which might lead to unintended escalation of radiation sources. The complexity of this abstracted system can be quantified using a suitable information-theoretic entropy metric. In our abstraction, this entropy metric encapsulates the amount of surprise that a radiological measurand has, given that its history is known. The greater the surprise, the higher the entropy.

In this paper, we explore the abstraction of a nuclear facility as a complex system, and investigate if the study of radiological complexity of a nuclear facility can give us insights and interpretations relevant to radiological safety of occupational radiation workers.

Complex systems are everywhere, from ecosystems to economies to the internet, but defining what exactly makes a system 'complex' has proven to be elusive. In systems theory, a complex system is defined as a system consisting of many parts which have varying degrees of interaction with each other. These parts display collective behaviour which is not reducible to the individual parts [6]. This is also called emergent behaviour. Notable necessary conditions for complexity are time-dependence, interactions between parts, and unexpected behaviour [7]. Complex systems exhibit phenomena that are at once efficient, robust, and fragile, arising from co-evolving networks; they are adaptive, with path-dependent dynamics and statistics far from equilibrium [8, 9].

We propose the abstraction of a nuclear facility as a complex system. In our abstraction, we define 'the system' as the collection of radiological measurands and the radioactive handling equipment (e.g. pumps, transfer routes) of a location $l_i$ of the facility. 'The system' could also have been used to refer to an entire facility; however, doing so might lead to a loss of finer details.

Aligning with the key attributes of a complex system, we define the radiological complexity of a location $l_i$, $H_{l_i}$, within a nuclear facility as the set of externalities that affect the radiological measurands of the location. Radiological complexity is characterised by multiple interacting components, non-linear relationships, emergent behaviours, and adaptability with time. Since these are the general characteristics of a complex system, for our proposed abstraction to be on firmer radiological grounds, we align these general characteristics with the peculiar features of a nuclear facility.

2.1. Components and interactions

In our abstraction, components of radiological complexity refer to the various factors which affect the radiation environment of a location in a nuclear facility. These components interact with each other and lead to changes in the radiological parameters, i.e. radiation levels and airborne particulate radioactive (APR) concentrations. These components include, but are not limited to:

- Movement of radioactive material—Movement of radioactivity can be due to physical transfer of radioactive solution or material from one location of the facility to another. It can also refer to changes in the concentration/content of radioactivity in a location due to different process cycles, operations, or maintenance of equipment in the facility. An example would be the dilution of process quality control samples of a radio-chemical process (like PUREX) and the transfer of the diluted samples to laboratories for further analysis.
- Shielding alterations—These can be due to natural wear and tear of shielding material, malfunction of a shielding mechanism (e.g. shielding doors not closing as designed or left open due to operator error), or alterations in operational procedures (e.g. deployment of mobile shielding during specific operations).
- Changes in ventilation rates and patterns—Ventilation rates, specifically the number of air changes per hour in controlled areas of a facility, are maintained as per the relevant safety guides. However, these rates might change due to malfunctions of exhaust/ventilation fans or increases in the differential pressure across filter beds. Changes in ventilation patterns could be due to unintended variation between two different exhaust fans in a location, leading to changes in the path of radioactive air. These variations will lead to changes in APR concentrations at a location.
- Human behaviour—Radioactive work is usually carried out under radiation work permits, which have well-defined SOPs for the radiation workers. However, there is a human tendency to shorten the work time by deviating from the SOP. Individually, these deviations may have trivial radiological effects; over time, however, they may lead to cascading changes in the radioactive workflow, which in turn might create escalating sources of radioactivity. An example would be not following the recommended frequency of decontamination of a radioactive area: over time, radioactivity may accumulate in the area, changing the radiological measurands at the location.

2.2. Non-linearity and emergence

The components of a nuclear facility do not interact with each other linearly. A small change in one of the processes could lead to significant changes in the radiological parameters, thus affecting the overall radiological landscape, i.e. the radiation levels and APR concentrations. For example, a shielding alteration along the transfer route of a radioactive solution will change the radiological measurands non-linearly during the movement of the solution.

2.3. Robustness and fragility

The radiological parameters of a location may not change significantly over time i.e. they are robust. However, the smallest of changes in the system may lead to significant changes in the radiological measurands i.e. the system is fragile. For example, a diaphragm of a radioactive solution transfer pump may not be delivering the required flow-rate, increasing the residence time of radioactive solution in a location beyond the normal levels. The radiological changes due to this reduced flow rate will not be significant i.e. radiological measurands are robust. However, the eventual rupture in the diaphragm and spillage of radioactive solution will lead to a non-trivial change in the radiation environment i.e. radiological measurands are fragile.

2.4. Hierarchical structure of the system

The radiological parameters at a location (i.e. a system) are not independent, i.e. there is a hierarchy of systems. Take, for example, radioactive solution being transferred from one part of a location to another. These transfers may not be direct, and the solution may traverse other locations, with the concentration of radioactivity and the corresponding radiological parameters changing in the process, making each of these solution transfers a complex subsystem. Thus, changes that occur in one location will have a cascading effect on subsequent locations.

Different metrics of complexity are used for different types of complex systems. The best metric to use depends on the specific system being studied. For our abstraction, we will be studying the information complexity of the system. Information complexity refers to the amount of information or knowledge contained within a system or its measurands. We will quantify this using information entropy. There is a large literature on the use of information entropy as a measure of the complexity of physical systems across disciplines.

3.1. Literature review

Domerçant [10] explored the use of information entropy as a measure of complexity in engineered systems during design. They showed how measures of architecture and design complexity based on information entropy can enable a trade-off analysis between complexity and other attributes. Grassberger et al [11] provided an overview of how information entropy can be used to measure complexity in dynamical systems. They discussed how to estimate dynamical entropies and attractor dimensions. Mays et al [12] applied information entropy to specifically measure the complexity of unsaturated flow in heterogeneous porous media. Majumdar et al [13] analysed information entropy, Fisher information, and Fisher–Shannon complexity in diatomic molecules modelled by the generalised Kratzer potential. Cho et al [14] developed a model using information entropy to measure the complexity of manufacturing systems. Their model accounts for interactions between resources and finds that systems with evenly distributed interactions are more complex. Gao et al [15] gave an overview of information entropy and its role in complexity theories like chaos theory and fractal theory.

Information and uncertainty of an observation, or a series of observations, are intimately linked: the information gained from an observation equals the uncertainty it removes. This was the basis of Shannon's definition of entropy, H(X), for information systems [16]:

$$H(X) = -\sum_{i} p(x_i) \log_b p(x_i) \qquad (1)$$

where:

H(X) is the entropy of the discrete random variable X,

$p(x_i)$ is the probability mass function of X at $x_i$,

b is the base of the logarithm, usually 2 or 10.

Since this seminal definition of entropy, it has been defined in many different fields and interpreted in various ways. A bibliometric survey of research on entropy in the past two decades can be found in [17].

While developing our abstraction, it was important to work with existing radiological data acquisition systems at nuclear facilities. This would lead to easier and more widespread adoption of our proposed approach to workplace monitoring. The data used in this paper have been acquired through a Central Radiation Protection Console (CRPC). The CRPC can be thought of as a dashboard which collates the radiation readings from all installed radiation monitors in a nuclear facility. The CRPC data are, thus, essentially timeseries of readings from radiation monitors installed in various locations of the facility, with timestamps separated by a constant interval. This collection of observations, recorded sequentially in time, forms a representation of the radiological landscape.

Analysing the inherent structure and behaviour of these radiation readings timeseries was essential for understanding the complexity of the system. Information entropy can be used for analysing the complexity and randomness of time series data. Darbellay and Wuertz [18] showed how entropy can be used to detect statistical dependencies in financial time series and discussed how to account for long-range dependence and non-stationarity. Lee et al [19] developed the minimum entropy density method to detect the scale with the least uncertainty and applied it to stock market data, finding that the efficient market hypothesis does not always hold. Rostaghi and Azami [20] introduced dispersion entropy as a fast, amplitude-sensitive irregularity measure and showed that it outperforms permutation entropy on several real-world datasets. Guzman-Vargas et al [21] used multiscale entropy to analyse geoelectric time series and observed the complex changes around an earthquake.

3.1.1. Sample entropy

Sample entropy (SampEn) [22] is a metric which is sensitive to the degree of irregularity or unpredictability present in a time series. Unlike traditional statistical measures such as the standard deviation or autocorrelation, sample entropy provides a robust framework for assessing the complexity of timeseries, even in the presence of noise and non-linearity, which we expect in a radiation environment. A comprehensive definition of sample entropy can be found in [23, 24].

The calculation of SampEn involves the comparison of subseries within the time series, whereby the algorithm quantifies the likelihood of similar subsequences remaining similar when an additional data point is appended. This iterative process yields a metric that reflects the underlying regularity or irregularity of the data.

SampEn is the negative natural logarithm of the probability that if two sets of contiguous data points of length m have distance $\lt r$, then two sets of contiguous data points of length m + 1 also have distance $\lt r$. We start with a template vector of length m, $X_m(i) = \{x_i, x_{i+1}, \ldots, x_{i+m-1}\}$, and define a distance function $d(X_m(i), X_m(j))$. This can be, for example, the Chebyshev distance [25].

Then, we define sample entropy as:

$$\mathrm{SampEn} = -\ln \frac{A}{B} \qquad (2)$$

where A = number of template vector pairs of length m + 1 such that $d(X_{m+1}(i), X_{m+1}(j)) \lt r$, and B = number of template vector pairs of length m such that $d(X_m(i), X_m(j)) \lt r$.

Generally, m = 2 and r = 0.2σ, where σ is the standard deviation of the timeseries.
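To make the definition concrete, the following is a minimal NumPy sketch of equation (2) with the Chebyshev distance. This is a direct, unoptimised illustration, not the implementation used in this paper (we later use the EntropyHub library), and the signal in the usage example is synthetic.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Direct estimate of SampEn per equation (2), using the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)  # the conventional tolerance r = 0.2*sigma
    N = len(x)

    def count_similar_pairs(length):
        # All template vectors of the given length, one per row.
        templates = np.array([x[i:i + length] for i in range(N - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates
            # (pairs with i < j only, which excludes self-matches).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    B = count_similar_pairs(m)      # similar pairs at length m
    A = count_similar_pairs(m + 1)  # pairs that remain similar at length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A signal with repeating patterns has lower SampEn than pure noise:
rng = np.random.default_rng(0)
t = np.arange(2000)
periodic = np.sin(2 * np.pi * t / 96) + 0.05 * rng.standard_normal(t.size)
print(sample_entropy(periodic))                     # low: patterns repeat
print(sample_entropy(rng.standard_normal(t.size)))  # high: few repeated patterns
```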

3.1.2. Multiscale sample entropy

Multiscale sample entropy (MSE) [26] is a variation of SampEn that incorporates multiple scales to analyse the complexity of a time series signal. SampEn is a widely used method for quantifying the regularity and complexity of time series data [27]. However, SampEn has limitations when applied to signals with multiple timescales or when the scale of interest is not known in advance.

MSE addresses these limitations by considering multiple timescales simultaneously. It calculates the entropy at different timescales and provides a more comprehensive analysis of the complexity of the signal. This multiscale approach allows for the detection of patterns and irregularities at different levels of detail, capturing both local and global characteristics of the signal.

The calculation of MSE involves two main steps. First, the time series signal is coarse-grained into multiple scales using a coarse-graining procedure. This procedure involves dividing the signal into non-overlapping segments of equal length and replacing each segment with a single data point representing the average value of the segment. The coarse-grained time series is then used to calculate the SampEn at each scale. The second step involves analysing the relationship between the entropy values at different scales to extract meaningful information about the complexity of the signal [28].
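As a minimal sketch of these two steps, reusing the sample_entropy function from the sketch above (the scale range and defaults are illustrative, not those of any particular study):

```python
import numpy as np

def coarse_grain(x, scale):
    """MSE step 1: average non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def multiscale_entropy(x, max_scale=20, m=2):
    """MSE step 2: SampEn of the coarse-grained series at each scale."""
    # In the canonical formulation [26], the tolerance r is fixed from the
    # original series (0.2*sigma) rather than recomputed at every scale.
    r = 0.2 * np.std(np.asarray(x, dtype=float))
    return [sample_entropy(coarse_grain(x, s), m=m, r=r)
            for s in range(1, max_scale + 1)]
```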

MSE has been applied in various fields and has shown its usefulness in different applications. In the field of physiology, MSE has been used to study physiological timeseries such as photoplethysmography and electromyography signals, providing insights into the complexity and dynamics of these timeseries data [29]. MSE has also been used for the early fault detection and diagnosis in rotating machinery and rolling bearings [17, 30, 31]. MSE has also been applied in the analysis of financial stock markets, traffic time series, and structural health monitoring systems [32–34].

The efficacy of MSE lies in its ability to capture the complexity and dynamics of signals at multiple timescales. MSE provides a more comprehensive analysis of the signal, allowing for the detection of patterns and irregularities that may not be apparent at a single scale. This can be particularly valuable in applications where the timescale of interest is not known in advance or when signals exhibit multiple scales of complexity. MSE can provide valuable insights into the underlying dynamics and complexity of the signal, aiding in the understanding, analysis, and interpretation of the data.

For our abstraction of a nuclear facility as a complex system, we use sample entropy and multiscale sample entropy to quantify the radiological complexity.

4.1. Data acquisition

In order to investigate the efficacy of our abstraction of a nuclear facility as a complex system, we used the RMS data from a particular nuclear facility. This data formed the basis of our numerical experiments. The data used for this paper are the radiation readings from the installed Area Gamma Monitors (AGMs) of a spent nuclear fuel reprocessing facility. All the installed AGMs in the facility are connected to a CRPC. The data from each AGM are in the form of a time series, i.e. a timestamp and the radiation level reading at that timestamp (in mR h−1; 1 mR h−1 ≈ 10 µGy h−1). Radiation reading data for all AGMs from 01 January 2019 to 31 December 2020 (a two-year period) have been used in this study.

The CRPC collects and stores data every second. However, our dataset consists of radiation readings at 15 min intervals. A 15 min interval was chosen because of the limitations of the legacy data export capabilities of the existing CRPC. Only monthly exports were possible at the 1 s interval, and a manual export spanning 24 months for each of the installed monitors was considered intractable. Hence, instead of reducing the overall period of the study, we chose to make the time interval coarser. It is important to note here that our approach can afford coarser data, because we are studying gradual changes in radiation reading patterns and not sudden temporary rises in radiation readings. A more recent time period has not been considered, since this particular facility has been under shutdown from March 2021 until the time of writing this paper. It was necessary to work with the radiological data from a period when the facility was fully operational and study the interpretability of a complexity abstraction in the context of radiological safety.

For studying the radiological complexity of the facility, only the AGM data have been used. It is acknowledged that a more complete radiological landscape should also include the APR concentrations, which can be estimated from the timeseries data of the installed continuous air monitors (CAMs). However, the timeseries data from the CAMs for this facility would have included contributions from short-lived radionuclides, since the installed CAMs in the nuclear facility measure the gross alpha and beta activities deposited on a sampling medium. As this is the first attempt to abstract a nuclear facility as a complex system, the complications from the contribution of short-lived radionuclides were avoided.

The installed gamma monitors at the facility can be divided into two broad categories: area monitors and process monitors. The area monitors are installed with the express intent of measuring the external radiation levels in the location, whereas the process monitors are installed with the intent of identifying deviations in the process (e.g. presence of radioactivity in process cooling water). In this study, only area monitors have been considered. In total, data from 48 AGMs over the two-year period have been considered.

The facility has been divided into 26 different locations, labelled as $l_i$, where $i \in \{1, 2, \ldots, 26\}$. Each location $l_i$ has n AGMs installed. Some locations in the facility have n = 1, but for the majority of $l_i$, n > 1. Each of the AGMs has a label number x and is labelled AGMx. For a better understanding of the relative spatial positions of $l_i$, a schematic of the facility is shown in figure 1. The specific AGMs installed in these locations $l_i$ are shown in figure 2.

Figure 1. Schematic of the facility for which the dataset for analysis was acquired. The radiological complexity framework does not depend on the nature of the radiation facility; however, this schematic will aid in a better understanding of where the labelled locations are relative to each other.


Figure 2. A graphic showing which AGMs are located in which location ($l_i$) of the facility. Each of the 48 AGMs in the dataset is assigned to a single location; however, most locations $l_i$ in the facility have more than one AGM. Multiple AGMs in a location are installed by design, as part of the traditional approach to workplace monitoring, to provide radiological surveillance to large locations with multiple radioactive systems.


For an AGMx in our dataset, we have a timeseries $a_x = \{(t_i, y_i)\}$, where $y_i$ is the AGM reading at time $t_i$. Each AGM has a unique location label $l_i$. We had 48 such timeseries, with roughly 70 000 observations in each timeseries, and 26 distinct locations $l_i$.

Since our objective is to present an abstraction of any nuclear or radiation facility, we have refrained from going into the details of the reprocessing processes in the facility. But, wherever deemed necessary, we have mentioned the radiological relevance of a location $l_i$.

4.2. Pre-processing data

4.2.1. Fault labels

In the two-year period, the installed AGMs exhibited certain types of faults. Such faults were expected during the operating cycle of an AGM. In order to take these faults into account, all the readings in each of the AGM timeseries were labelled by a fault variable. This variable was essential since, while carrying out sample entropy calculations based on AGM readings, only the non-faulty readings were to be considered. The new variable was named fault_label and it takes integer values between −2 and 2 based on the conditions given in table 1.

Table 1. Fault labels for AGMs.

Fault label | Description of fault
−2 | The detector has no power supply. An AGM reading was labelled −2 when the reading was near −25 mR h−1. This condition usually occurs when the AGM detector is turned off for maintenance.
−1 | An AGM reading was labelled −1 when the reading was above the calibrated value (116 mR h−1). This fault occurs when there is failure of the AGM detector.
0 | The reading was normal and had no fault.
1 | No value was recorded by the Central Radiation Protection Console (CRPC). This can occur due to loss of communication between the AGM and the CRPC. The corresponding reading for this fault label would be null.
2 | An AGM reading was labelled 2 when the reading was negative but close to zero. This fault occurs due to current fluctuations during the transfer of the signal from the instrument to the reading recorder.

Only readings with fault_label = 0 were considered for entropy calculations. The rest of the readings were set to null. A sketch of this labelling logic is shown below.
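A minimal sketch of the labelling rules of table 1, assuming the readings of one AGM sit in a pandas Series indexed by timestamp. The −20 mR h−1 cutoff separating 'near −25' from 'negative but close to zero' is our assumption; the exact operational thresholds are not stated here.

```python
import numpy as np
import pandas as pd

def assign_fault_labels(readings: pd.Series) -> pd.Series:
    """Apply the fault-label rules of table 1 to AGM readings (mR/h)."""
    labels = pd.Series(0, index=readings.index)    # 0: normal reading
    labels[readings.isna()] = 1                    # 1: no value recorded by the CRPC
    labels[readings > 116] = -1                    # -1: above the calibrated value
    labels[(readings < 0) & (readings > -20)] = 2  # 2: negative but close to zero
    labels[readings <= -20] = -2                   # -2: no power (reading near -25)
    return labels

# Keep only non-faulty readings; everything else becomes null:
# agm["fault_label"] = assign_fault_labels(agm["reading"])  # hypothetical frame
# agm.loc[agm["fault_label"] != 0, "reading"] = np.nan
```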

4.2.2. Removing outliers and imputing missing values

Removing outliers from the dataset was important to ensure the accuracy and reliability of subsequent analyses. One commonly used method for outlier detection is the standard deviation method, where data points deviating from the mean by more than a certain multiple of the standard deviation are considered outliers [35]. A comprehensive review of outlier and anomaly detection in time series data can be found in [36]. Readings of an AGM were identified as outliers if they were 3σ or more away from the mean. Outlier readings were removed from the dataset.
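A minimal sketch of this 3σ rule on a pandas Series; the flagged readings become nulls, to be imputed in the next step:

```python
import pandas as pd

def remove_outliers(readings: pd.Series, n_sigma: float = 3.0) -> pd.Series:
    """Null out readings lying n_sigma or more standard deviations from the mean."""
    mu, sigma = readings.mean(), readings.std()
    return readings.mask((readings - mu).abs() >= n_sigma * sigma)
```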

Using the fault labels to choose only those readings with fault_label = 0 and removing outliers led to the creation of missing data (this was in addition to the readings which had fault_label = 1). The percentage of null values in our dataset was less than 1%; however, since we were analysing about $3.5 \times 10^6$ datapoints, even 1% was a fairly large number of null values. We could have simply deleted these values from our dataset, but that would have affected the MSE analysis to which the dataset was later subjected.

In order to have timeseries with only non-null values, it was necessary to impute the missing values using a suitable interpolation. For this dataset, the missing values were imputed using interpolation based on derivatives [37]. This method was particularly useful for our timeseries data because it can capture the underlying trends and patterns in the data. In this method, a piecewise polynomial is fitted to the data based on their derivatives. This allows for a smooth interpolation of missing values, taking into account the local behaviour of the data. By using derivatives to estimate the missing values, it was ensured that the imputed data align with the overall trend of the timeseries. Compared to other imputation methods, such as median imputation or zero imputation, this method takes into account the local behaviour of the data and captures the underlying trends, resulting in a more realistic and reliable imputation. This made it especially suitable for radiation readings data.
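pandas exposes a derivative-based piecewise-polynomial interpolation ('from_derivatives', which delegates to scipy.interpolate.BPoly.from_derivatives). We assume this corresponds to the derivative-based scheme of [37]; a minimal sketch:

```python
import pandas as pd

def impute_missing(readings: pd.Series) -> pd.Series:
    """Fill nulls with a piecewise polynomial fitted from local derivatives."""
    # 'from_derivatives' delegates to scipy.interpolate.BPoly.from_derivatives
    # (requires SciPy); assumed here to match the method of [37].
    return readings.interpolate(method="from_derivatives")
```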

The last step in pre-processing the raw data was normalisation, i.e. scaling the data. This was done to ensure that the entropy calculations were not affected by the scale or unit of the readings. The readings of AGMs at the facility are generated in the legacy mR h−1 units (1 mR h−1 ≈ 10 µGy h−1). However, there may be nuclear facilities with monitors whose readings are in other units. Hence, it was important to scale the readings. Scaling also ensures that the data are transformed to a consistent range, which improves the performance of numerical algorithms. There are several popular methods for scaling timeseries data in the context of physical processes. One commonly used method is min-max scaling, which rescales the data to a specific range, typically between 0 and 1. This method is particularly useful when the absolute values of the features are not as important as their relative positions within the range.

Another popular method for scaling timeseries data is standardisation, also known as z-score normalisation. This method transforms the data to have a mean of 0 and a standard deviation of 1. Standardisation is useful when the distribution of the data is expected to be Gaussian or when the algorithm used for analysis assumes standardised data. It can be achieved by subtracting the mean and dividing by the standard deviation of the timeseries. For the radiation readings, the distribution of the data is not expected to be Gaussian. Hence, the data were not standardised. Instead, we normalised the radiation readings from the AGMs using min-max scaling [38], as sketched below.
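A minimal sketch of the min-max scaling [38] applied to each AGM timeseries:

```python
import pandas as pd

def min_max_scale(readings: pd.Series) -> pd.Series:
    """Rescale a timeseries to the [0, 1] range (min-max normalisation)."""
    lo, hi = readings.min(), readings.max()
    return (readings - lo) / (hi - lo)
```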

4.3. Exploratory data analysis of AGM timeseries

Exploratory data analysis (EDA) of the AGM timeseries data involved calculating summary statistics and plotting the AGM readings for the period. For better visual understanding of the graphs, we have refrained from plotting all the available readings of an AGM. Instead, wherever we show plots of AGM radiation readings, we plot the daily moving averages. This leads to graphs which show normalised observations, but whose highest point is not equal to 1: the min-max scaling was done over the entire dataset, whereas the plots of the AGM timeseries are daily moving averages.

Because of the locational similarities of some AGMs, it was more prudent to show the average radiation levels in the 26 locations, rather than timeseries plots for all 48 AGMs. The location-wise radiation levels (in µGy h−1) are given in figure 3.

Figure 3. Mean radiation levels at different locations of the facility. In order to calculate the average radiation level at a location $l_i$, the average of the radiation readings of its n AGMs was calculated, n being the number of AGMs in that location. The y-axis has unscaled radiation readings, since we wanted to show the relative radiation levels at different locations in the facility. The error bars show the 1σ deviations from the mean radiation levels in the location.

4.3.1. Sharp rises in radiation levels for AGMs

Some of the monitors have low mean radiation readings, as can be seen from figure 3; most of the sharp rises in their readings are artefacts of electrical fluctuations and are not radiologically significant. However, some of these spikes are due to the movement of radioactive material in and out of the location.

At L-4, AGM11 is located near the drums in which Category I and Category II solid radioactive wastes are stored. New waste drums are stored and the older ones are routinely transferred away from AGM11 to a waste management facility, based on an operational schedule. This leads to rises and drops in the radiation readings of the monitor (figure 4). AGM12, however, is located at the entry to L-4. It has a much lower mean radiation level compared to AGM11, but the spikes in its radiation readings are due to the same reason.

Figure 4. Timeseries of radiation readings of the AGMs in L-4. AGM11 is located near the drums in which Category I and Category II solid radioactive wastes are stored. AGM12, however, is located at the entry to L-4. The radiation levels have been scaled by their respective maximum values in the timeseries. Hence, the y-axis is unitless.


The EDA of the radiation readings and the graphs of AGM readings provide useful information for studying the radiological landscape of the facility, especially when viewed over a long period of time. However, this approach has limitations. The radiation readings have patterns, since there are scheduled operations in the facility which change the radiation environment in the vicinity of the AGMs. Gradual changes in the operations of the facility will lead to incremental changes in the patterns of these radiation readings. However, it is not easy to discern these pattern changes merely from the graphs and statistical moments. While we can estimate the mean radiation levels of a location with a low standard error of the mean, merely plotting and calculating statistical moments of the raw readings does not provide information pertaining to changes in the patterns of radiological measurands.

Our abstraction of a nuclear facility as a complex system affords a more nuanced approach. Specifically, estimating the sample entropy of an AGM timeseries gives us a metric to quantify the repetition of patterns in the radiation readings. Deviations from normalcy will reduce the number of such repeated patterns, and hence will increase the sample entropy of that timeseries. This will thus be an indicator of these gradual changes in the radiation environment. In the context of our abstraction, we define the radiological complexity index of each location $l_i$ within a nuclear facility. We estimate the complexity indices $H_{l_i}$ of location $l_i$ by first calculating the sample entropy of the AGM timeseries of the n AGMs in the location.

5.1. Sample entropy of AGM timeseries

The sample entropies of all 48 AGM timeseries were calculated. The sample entropy calculations were done using an open-source Python library, EntropyHub [39]. It was important to use an open-source and peer-reviewed code library to ensure the transparency and reliability of the results. The calculation of SampEn in this package is based on the definition in [22]. A sketch of the per-AGM computation is given below.
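A sketch of how the per-AGM computation might look. We believe EntropyHub's SampEn takes the series and the embedding dimension m and returns entropy estimates for dimensions 0 to m together with the match counts; the exact signature and defaults (including r = 0.2σ) should be checked against the EntropyHub documentation.

```python
import numpy as np
import EntropyHub as EH

def agm_sample_entropy(readings: np.ndarray, m: int = 2) -> float:
    """SampEn (m = 2, r = 0.2*SD) of one pre-processed AGM timeseries."""
    # EntropyHub returns SampEn estimates for embedding dimensions 0..m,
    # so the m = 2 estimate is the last element of the returned vector.
    Samp, A, B = EH.SampEn(readings, m=m)
    return float(Samp[-1])

# Hypothetical usage over all 48 pre-processed timeseries:
# entropies = {agm_id: agm_sample_entropy(ts) for agm_id, ts in series.items()}
```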

The scatter plot of the sample entropies of all AGMs, grouped by their locations, is shown in figure 5. There were marked differences in entropies across different AGMs. Notably, the highest entropy (2.24) was for AGM50 at L-23. Interestingly, during the EDA of the AGM timeseries, AGM50 did not make much of an impression (the radiation readings of AGM50 are shown in figure 6). It had a mean radiation level of only 3.1 ± 0.01 µGy h−1. The variation in the daily moving averages of AGM50 and the variance in the mean radiation levels were small (refer to figure 3).

Figure 5. Scatter plot of the sample entropies of all AGM timeseries in the facility. The scatter points are coloured as per the location grouping of the AGMs. AGM timeseries with higher entropy values have fewer repeating patterns than those with lower entropy values.


Figure 6. Radiation readings for AGM50 at L-23. Notably, AGM50 has the highest entropy among the AGMs in the facility. The radiation readings have been scaled by the timeseries maximum values. Hence, the y-axis is unitless.
