Trapezoidal back projection for positron emission tomography reconstruction

This section presents the TBP procedure. First, we describe the PET scanner geometry assumed by the method (Sect. Geometry of the PET scanner). After that, we move on to theoretical calculations (Sect. Revisiting the back projection) and propose a sampling method (Sect. Defining detector sample points that cover every LOR). Then, we present the look-up table based acceleration (Sect. Look-Up Table (LUT) based overlap area calculation). Finally, we give an overview of the algorithm (Sect. Algorithm overview).

Geometry of the PET scanner

For scanner geometry, we use the nanoScan PET/CT system developed by Mediso Medical Imaging Systems [20] (Fig. 1). In this model, the detector ring consists of \(N_m\) planar modules of \(N_\mathrm{t} \times N_\mathrm{a}\) (tangential \(\times\) axial) detector crystals, the inner surfaces of which form a rectangular matrix. The reconstructed volume is located at the center and is divided into a 3D grid of voxels.

Fig. 1

Overview of the scanner geometry and the coordinate system used in this study. The detector ring consists of \(N_m\) modules, each of which in turn consists of \(N_\mathrm{t} \times N_\mathrm{a}\) detector crystals. The measured volume is located at the center of the ring and is divided into homogeneous voxels

In Tera-Tomo™ PET reconstruction, crystals of the same module are managed together, so LOR formation takes place between pairs of modules. The range of modules considered to be “opposite” to an arbitrarily selected one (i.e., the range of modules it can be paired with) is determined by the coincidence mode. For example, if the coincidence mode is 1:3, LORs can be formed between crystals of any module and crystals of the three modules opposite to it; if the coincidence mode is 1:5, five opposite modules are considered. This can also be viewed as each module having 3 or 5 pairs with which it can form LORs. During reconstruction, the pairs of detector modules are processed consecutively. That is, coincidence events between the two modules of the first pair are calculated first, then those of the second module pair, and so on until the last pair in EM or the last pair of a specific subset in ordered subset expectation maximization (OSEM) [21]. A sketch of this pairing is given below.
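The following sketch enumerates the module pairs processed for a 1:k coincidence mode. It is illustrative only: the function name, the 0-based module indexing, and the assumption that the k partner modules are the ones centered directly across the ring are ours, not part of the Tera-Tomo implementation.

```python
# Illustrative sketch: enumerate detector-module pairs for a 1:k coincidence
# mode, assuming N_m modules indexed 0..N_m-1 around the ring and that each
# module is paired with the k modules facing it across the ring.
def module_pairs(n_modules: int, k: int):
    """Yield each unordered module pair (a, b) exactly once."""
    half = n_modules // 2
    offsets = range(half - k // 2, half + k // 2 + 1)  # k "opposite" offsets
    seen = set()
    for a in range(n_modules):
        for off in offsets:
            b = (a + off) % n_modules
            pair = (min(a, b), max(a, b))
            if pair not in seen:
                seen.add(pair)
                yield pair

# Example: a 12-module ring in 1:3 coincidence mode gives 12 * 3 / 2 = 18 pairs.
pairs = list(module_pairs(12, 3))
```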

Revisiting the back projection

In the ML-EM reconstruction, back projection updates the positron emission density \(\mathbf{x}\) based on the ratio of the measured and computed numbers of LOR hits, \(\mathbf{y}\) and \(\tilde{\mathbf{y}}\):

$$\begin{aligned} \mathbf{x}^{(n+1)} = \mathbf{x}^{(n)}\; \frac{\mathbf{A}^T \cdot \frac{\mathbf{y}}{\tilde{\mathbf{y}}}}{\mathbf{A}^T \cdot \mathbf{1}}, \end{aligned}$$

(3)

where \(\mathbf{A}\) is the \(N_\mathrm{LOR} \times N_\mathrm{voxel}\) system matrix, \(\mathbf{1}\) is the all-ones vector of length \(N_\mathrm{LOR}\), and the elements \(\mathbf{A}_{LV}\) of the system matrix describe the probability that a positron born in voxel V generates an event in LOR L:

$$\begin{aligned} \mathbf{A}_{LV} = \int\limits_{\vec{x} \in \mathcal{V}_V} \mathcal{S}(\vec{x} \rightarrow L)\, \mathrm{d}\vec{x}, \end{aligned}$$

(4)

where \(\vec{x}\) is a point of the volume \(\mathcal{V}_V\) of voxel V, and \(\mathcal{S}(\vec{x} \rightarrow L)\) is the system sensitivity denoting the probability that a positron formed at \(\vec{x}\) gets detected in LOR L.
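For reference, a dense-matrix sketch of the update in Eq. 3 is given below. It is purely illustrative: in practice the system matrix is far too large to store explicitly, and the method presented here computes its elements on the fly during back projection; the variable names and the `eps` guard are our own.

```python
import numpy as np

def mlem_update(x, A, y, eps=1e-12):
    """One ML-EM iteration (Eq. 3): x <- x * (A^T (y / y_tilde)) / (A^T 1).
    A is the (N_LOR x N_voxel) system matrix, x the voxel activities,
    y the measured LOR hit counts."""
    y_tilde = A @ x                        # forward projection
    ratio = y / np.maximum(y_tilde, eps)   # measured / computed LOR hits
    numer = A.T @ ratio                    # back projection of the ratio
    denom = A.T @ np.ones_like(y)          # normalization (sensitivity) term
    return x * numer / np.maximum(denom, eps)
```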

Fig. 2

Path of the two photons formed at \(\vec{x}\) as they hit the two crystal surfaces at \(\vec{z}_1, \vec{z}_2\), thus contributing to LOR L

Fig. 3

The surface of the detector module highlighted in green is sampled by random points \(\{\vec{z}^{(i)}_1\}_i\). Each \(\vec{z}^{(i)}_1\) is connected to the emission point \(\vec{x}\), and the line segments defined in this way are extended into straight lines. The intersections of these lines with the plane of the red module form the points \(\{\vec{z}^{(i)}_2\}_i\). However, only points that fall inside the red area get accepted; if a line intersects another module or does not intersect any module at all, the sample is discarded

Considering only direct geometric factors and ignoring scattering, absorption, positron range, and detector sensitivity, the two photons formed at \(\vec{x}\) travel in opposite directions \(\vec{\omega}\) and \(-\vec{\omega}\), and contribute to LOR L if the line defined by point \(\vec{x}\) and direction \(\vec{\omega}\) crosses the surfaces of the two detector crystals that make up LOR L (Fig. 2). Let \(\vec{z}_1, \vec{z}_2\) denote the intersection points of this line and the detector surfaces. Direction \(\vec{\omega}\) is uniformly distributed on the half sphere \(\Omega_H\) of solid angle \(2\pi\), thus the scanner sensitivity can be obtained as

$$\begin{aligned} \mathcal{S}(\vec{x} \rightarrow L) = \int\limits_{\vec{\omega} \in \Omega_H} \frac{1}{2\pi}\, \xi_L(\vec{z}_1, \vec{z}_2)\, \mathrm{d}\vec{\omega}, \end{aligned}$$

(5)

where \(\xi_L\) is the indicator function that is 1 if intersection points \(\vec{z}_1, \vec{z}_2\) are on the detector crystals of LOR L and 0 otherwise. Substituting Eq. 5 into Eq. 4, we get the final form of \(\mathbf{A}_{LV}\):

$$\begin{aligned} \mathbf{A}_{LV} = \iiint\limits_{\vec{x} \in \mathcal{V}_V} \;\iint\limits_{\vec{\omega} \in \Omega_H} \frac{1}{2\pi}\, \xi_L(\vec{z}_1, \vec{z}_2)\; \mathrm{d}\vec{\omega}\, \mathrm{d}\vec{x}. \end{aligned}$$

(6)

Due to computing capacity limitations, the volumetric integral is usually estimated from a single position sample \(\vec{x}\) taken from voxel V (e.g., its center), whereas the directional integral is approximated from a finite number of \(\vec{\omega}\) samples. More specifically, one of the detector modules in the module pair currently being processed is sampled and the sample points \(\vec{z}^{(i)}_1\) are connected by straight lines to the emission point \(\vec{x}\) to obtain directions \(\vec{\omega}^{(i)}\). For a given sample, the system matrix element is nonzero for that LOR \(L(d_1, d_2)\) where \(d_1\) is the detector crystal to which \(\vec{z}^{(i)}_1\) belongs and \(d_2\) is the detector crystal intersected by the line of \(\vec{z}^{(i)}_1\) and \(\vec{x}\) on the opposite module (Fig. 3). (Note that if \(\vec{z}^{(i)}_1\) is sampled randomly, it is possible that the line defined by \(\vec{z}^{(i)}_1\) and \(\vec{x}\) does not intersect any module, i.e., misses all detectors. An aim of the algorithm presented in this paper is to avoid such samples.) In this case, the system matrix element can be approximated as

$$\begin{aligned} \mathbf{A}_{LV} \approx \frac{\Delta\omega_{d_1 d_2}}{2\pi}, \end{aligned}$$

(7)

where \(\Delta\omega_{d_1 d_2}\) is the solid angle subtended by the two crystal surfaces at voxel V (represented by volumetric position \(\vec{x}\)):

$$\begin{aligned} \Delta\omega_{d_1 d_2}\approx & \ \Delta\omega_{d_1} \int\limits_{\vec{z}_1 \in d_1} \frac{\xi(\vec{z}_1, \vec{x} \rightarrow d_2)}{D_1}\; \mathrm{d}\vec{z}_1 \end{aligned}$$

(8)

$$\begin{aligned}\approx & \ \Delta\omega_{d_1}\, \frac{1}{N_s} \sum\limits_{i=1}^{N_s} \xi(\vec{z}^{(i)}_1, \vec{x} \rightarrow d_2), \end{aligned}$$

(9)

where \(N_s\) is the number of sample points \(\vec{z}^{(i)}_1\) on the surface of detector crystal \(d_1\), \(\Delta\omega_{d_1}\) is the solid angle in which \(d_1\) is seen from \(\vec{x}\), whereas \(\xi(\vec{z}^{(i)}_1, \vec{x} \rightarrow d_2)\) is the indicator function that is 1 if the line defined by \(\vec{z}^{(i)}_1\) and \(\vec{x}\) intersects the surface of detector crystal \(d_2\) and 0 otherwise.

Denoting the surface area of detector crystal \(d_1\) by \(D_1\) and the angle between \(d_1\)’s unit normal vector \(\vec{n}_1\) and the direction \(\vec{z}^{(i)}_1 \rightarrow \vec{x}\) by \(\theta_{\vec{z}^{(i)}_1}\) (Fig. 4), solid angle \(\Delta\omega_{d_1}\) is approximately

$$\begin{aligned} \Delta\omega_{d_1} \approx \frac{D_1 \cos\theta_{\vec{z}^{(i)}_1}}{|\vec{x} - \vec{z}^{(i)}_1|^2} = D_1\, \frac{\vec{n}_1 \cdot (\vec{x} - \vec{z}^{(i)}_1)}{|\vec{x} - \vec{z}^{(i)}_1|^3}. \end{aligned}$$

(10)

Note that surface area \(D_1\) does not have to be given in absolute (world) units; it is sufficient to specify it in detector space. Since the system matrix appears in both the numerator and the denominator of the back projection formula, the coordinate transformation matrix could be omitted in a single simplification step.
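A direct transcription of Eq. 10 could look as follows; the variable names are ours, with `n1` the unit normal of crystal \(d_1\), `z1` a point on its surface, and `D1` its surface area in detector space.

```python
import numpy as np

def solid_angle_d1(x, z1, n1, D1):
    """Approximate solid angle in which crystal d1 is seen from x (Eq. 10):
    D1 * cos(theta) / |x - z1|^2  =  D1 * n1.(x - z1) / |x - z1|^3."""
    d = np.asarray(x, float) - np.asarray(z1, float)
    r = np.linalg.norm(d)
    return D1 * float(np.dot(n1, d)) / r**3
```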

Fig. 4

Given a detector crystal \(d_1\) and a sample point \(\vec{z}_1\) on its surface, the angle between \(d_1\)’s unit normal vector \(\vec{n}_1\) and the direction \(\vec{z}_1 \rightarrow \vec{x}\) is denoted by \(\theta_{\vec{z}_1}\)

In the algorithm we propose, every LOR \(L(d_1, d_2)\) that crosses voxel V is sampled exactly once and no other samples are taken. That is, detector points \(\vec{z}^{(i)}_1\) are generated such that

if \(\vec{z}^{(i)}_1\) and \(\vec{z}^{(j)}_1\; (1 \le i, j \le N_s,\; i \ne j)\) are from the same detector crystal \(d_1\), then the lines obtained by connecting them with \(\vec{x}\) intersect different detector crystals \(d_2^{(i)}, d_2^{(j)}\) on the opposite module,

there is no crystal \(d_2\) on the opposite module that could be reached from \(d_1\) through \(\vec{x}\) but is not reached because no appropriate line starting point \(\vec{z}^{(i)}_1\) is generated on the surface of \(d_1\),

there is no such point sample \(\vec{z}^{(i)}_1\) that, if connected with \(\vec{x}\), defines a line that does not intersect any detector crystal \(d_2\) on the opposite module.

In addition to the effective selection of the sample points, we also strive to accurately calculate the integral

$$\begin{aligned} \int\limits_{\vec{z}_1 \in d_1} \frac{\xi(\vec{z}_1, \vec{x} \rightarrow d_2)}{D_1}\; \mathrm{d}\vec{z}_1 \end{aligned}$$

in Eq. 8 instead of approximating it by finite sampling as in Eq. 9. More specifically, we combine this integral with surface area \(D_1\) into a new variable \(D_{d_1 d_2}\) as

$$\begin{aligned} D_{d_1 d_2}= & \ D_1 \int\limits_{\vec{z}_1 \in d_1} \frac{\xi(\vec{z}_1, \vec{x} \rightarrow d_2)}{D_1}\; \mathrm{d}\vec{z}_1 \end{aligned}$$

(11)

$$\begin{aligned}= & \int\limits_{\vec{z}_1 \in d_1} \xi(\vec{z}_1, \vec{x} \rightarrow d_2)\; \mathrm{d}\vec{z}_1, \end{aligned}$$

(12)

which specifies the part of the area of detector crystal \(d_1\) from which, taking any sample point \(\vec{z}_1\) and connecting it with \(\vec{x}\), the resulting line intersects detector crystal \(d_2\) on the opposite module. Substituting back into Eq. 7, we get the final form as

$$\begin{aligned} \mathbf{A}_{LV} \approx \frac{D_{d_1 d_2}}{2\pi}\; \frac{\vec{n}_1 \cdot (\vec{x} - \vec{z}_1(d_1, d_2))}{|\vec{x} - \vec{z}_1(d_1, d_2)|^3}. \end{aligned}$$

(13)

Note that \(\vec{z}^{(i)}_1\) was replaced by \(\vec{z}_1(d_1, d_2)\), as now only one sample \(\vec{z}_1\) is taken for every detector crystal pair \((d_1, d_2)\) and its selection is deterministic. The task is to define these \(\vec{z}_1(d_1, d_2)\) functions and to calculate the exact area \(D_{d_1 d_2}\).
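A minimal sketch of Eq. 13 with the deterministic sample is shown below; whether the constant \(1/(2\pi)\) is kept or dropped does not change the EM update, since it cancels between the numerator and denominator of Eq. 3. All names are illustrative.

```python
import numpy as np

def system_matrix_element(x, z1, n1, D_d1d2):
    """Eq. 13: A_LV ~ (D_{d1 d2} / 2pi) * n1.(x - z1) / |x - z1|^3,
    where z1 = z1(d1, d2) is the single deterministic sample of the crystal pair."""
    d = np.asarray(x, float) - np.asarray(z1, float)
    r = np.linalg.norm(d)
    return D_d1d2 / (2.0 * np.pi) * float(np.dot(n1, d)) / r**3
```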

Defining detector sample points that cover every LOR

Fig. 5

An overview of the task to identify the end crystals of LORs that go through a given voxel \(V\)

The detector modules of the PET scanner ring are organized into pairs according to a pre-defined coincidence mode, and processed one pair after the other. In each turn, one of the detector modules is selected as primary, whereas the other one is called secondary. The primary module is sampled by surface points \(\vec{z}^{(i)}_1\), each of which is connected with \(\vec{x}\) to define a straight line, which intersects the secondary module in surface point \(\vec{z}^{(i)}_2\). The detector crystal on which the sample point \(\vec{z}^{(i)}_1\) lies is \(d_1\), whereas the detector crystal of \(\vec{z}^{(i)}_2\) is \(d_2\). The two crystals together specify LOR \(L(d_1, d_2)\), for which the system matrix element \(\mathbf{A}_{LV}\) is calculated according to Eq. 13.

Intersection points \(\vec{z}^{(i)}_2\) can be obtained by connecting the primary sample points \(\vec{z}^{(i)}_1\) with the emission point \(\vec{x}\) by straight lines and determining the intersections of these lines with the plane of the secondary detector module. That is, the primary detector module is centrally projected onto the plane of the secondary detector module, using \(\vec{x}\) as the center of projection. However, we do not just want to determine the sample points \((\vec{z}^{(i)}_1, \vec{z}^{(i)}_2)\), but also the surface area \(D_{d_1 d_2}\), which corresponds to the part of \(d_1\) whose central projection falls on \(d_2\). In this case, it is advisable to reverse the procedure: secondary detector crystal \(d_2\) is projected onto the plane of the primary detector module, and \(D_{d_1 d_2}\) can be obtained as the area of the intersection of \(d_2\)’s projection and the surface of \(d_1\).

Recall that the detector crystals on the modules are arranged in a matrix-like manner, so their surfaces form a 2D grid, where each grid cell is the surface of a detector crystal. In our algorithm, the grid of the secondary module is projected through \(\vec{x}\) onto the plane of the primary module, and our aim is to determine the cells of the two grids that overlap each other, together with the area of their overlap (Fig. 5).

Central projection is a projective transformation (linear in homogeneous coordinates) that maps straight lines to straight lines, and in this geometry the rectangle of the secondary detector module is always transformed into a trapezoid. The shape of this projection depends on the angle formed by the planes of the primary and secondary modules:

When the planes are parallel (i.e., when the modules are directly opposite each other), the projection is a rectangle similar to the secondary detector surface (Fig. 6). The size of this rectangle varies based on the position of \(\vec{x}\): if \(\vec{x}\) is closer to the primary module, the projection will appear smaller; if it is closer to the secondary module, the projection will be larger; and if \(\vec{x}\) is equidistant, the projection will be congruent to the secondary detector surface.

When the planes are not parallel, the projection results in a trapezoid, where one pair of sides remains parallel along the axial direction of the detectors (Fig. 7).

Note that a rectangle is a specific type of trapezoid, so in all cases, the projection of the secondary detector module onto the primary module plane will be trapezoidal. The task is then to determine the overlapping cells between the two grids: the rectangular grid of the primary detector module and the trapezoidal grid of the secondary detector module. Both grids are now on the same plane and in the same local coordinate system, allowing us to analyze these overlaps precisely.

First, the four vertices of the secondary module’s projection are calculated. The trapezoid has a bottom-left, a top-left, a top-right, and a bottom-right vertex, which are specified in the coordinate system of the primary detector module’s grid. The two vertices on the left share the same x (tangential) coordinate, as do the two on the right; i.e., the trapezoid has two vertical edges on its two sides.
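The vertex computation amounts to a ray–plane intersection; the helper below is an assumption-level illustration in which the primary module’s plane is given by a point and a normal, and the results are still 3D world coordinates that would subsequently be expressed in the primary grid’s local system.

```python
import numpy as np

def project_through_point(x, q, plane_point, plane_normal):
    """Centrally project point q through x onto the plane
    {p : (p - plane_point) . plane_normal = 0}; assumes the ray x -> q is
    not parallel to the plane."""
    x, q = np.asarray(x, float), np.asarray(q, float)
    d = q - x                                   # direction of the ray x -> q
    t = np.dot(np.asarray(plane_point, float) - x, plane_normal) / np.dot(d, plane_normal)
    return x + t * d

# The trapezoid's four vertices are the projections of the secondary module's
# four corner points through the emission point x, e.g.:
# trapezoid = [project_through_point(x, c, primary_origin, primary_normal)
#              for c in secondary_corners]
```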

The trapezoid itself is also a grid of resolution \(N_\mathrm{t} \times N_\mathrm{a}\), where the borders of each cell can be obtained by linearly interpolating between the four vertices. Using this knowledge, we iterate through each trapezoidal cell to identify which cells of the primary module it overlaps.

For example, if the left edge of a trapezoidal cell lies at the tangential coordinate \(x_\mathrm{left}\) and the right edge at \(x_\mathrm{right}\), then the overlapping primary module cells can have tangential coordinates

$$\begin{aligned} \left( \left\lfloor x_\mathrm{left}\right\rfloor,\ \left\lfloor x_\mathrm{left}\right\rfloor + 1,\ \ldots,\ \left\lfloor x_\mathrm{right}\right\rfloor \right), \end{aligned}$$

where \(\lfloor \cdot \rfloor\) denotes the floor function. For the primary detector module grid, cell boundaries are always located at integer coordinates.

Similarly, in the axial direction, if the lowest vertex of the trapezoidal cell is at the axial coordinate \(y_\mathrm{min}\) and the highest vertex is at \(y_\mathrm{max}\), then the overlapping primary module cells may have axial coordinates

$$\begin{aligned} \left( \left\lfloor y_\mathrm{min}\right\rfloor,\ \left\lfloor y_\mathrm{min}\right\rfloor + 1,\ \ldots,\ \left\lfloor y_\mathrm{max}\right\rfloor \right). \end{aligned}$$

If both the x and y coordinates of a primary module cell meet the above requirements, then that cell is partially or completely covered by the trapezoidal (secondary module) cell. In this case, the detector crystals corresponding to the two cells form a valid LOR that is guaranteed to pass through emission point \(\vec{x}\). This implies that back projection requires the contribution of this LOR to update the activity of voxel V. Specifically, the numerator term \(\mathbf{A}_{LV}\, y_L / \tilde{y}_L\) and the denominator term \(\mathbf{A}_{LV}\) in Eq. 2 need to be calculated for this LOR L and summed with the contributions of other valid LORs.
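The two floor-based ranges above translate into a pair of nested loops; the sketch below assumes, as stated, that the primary grid’s cell boundaries lie at integer coordinates, so that cell \((X, Y)\) occupies the unit square \([X, X+1) \times [Y, Y+1)\). The function name is ours.

```python
import math

def overlapped_primary_cells(x_left, x_right, y_min, y_max):
    """Yield the (X, Y) indices of primary-module cells that a trapezoidal
    cell spanning [x_left, x_right] x [y_min, y_max] may overlap."""
    for X in range(math.floor(x_left), math.floor(x_right) + 1):
        for Y in range(math.floor(y_min), math.floor(y_max) + 1):
            yield X, Y
```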

As the measured and forward-projected LOR events \(y_L\) and \(\tilde{y}_L\) are known at the time of back projection, the only value that needs to be computed is the system matrix element \(\mathbf{A}_{LV}\). To obtain \(\mathbf{A}_{LV}\), we use the form presented in Eq. 13, where the two unknowns are the area of overlap \(D_{d_1 d_2}\) and the primary module sample \(\vec{z}_1(d_1, d_2)\).

A simple, low-computation heuristic for determining the sample point on the primary module is to designate the center of the overlap of the two cells as \(\vec{z}_1(d_1, d_2)\) (Fig. 8). Then, only the area of overlap \(D_{d_1 d_2}\) remains to be calculated. In the following section, we present a look-up table based solution for efficiently approximating \(D_{d_1 d_2}\).

Fig. 6

If the two detector planes are parallel (which occurs when they are directly opposite each other), the projection (yellow) is a rectangle

Fig. 7

If the primary (green) and the secondary (red) module are not parallel, the projection (yellow) will be a trapezoid with one pair of parallel sides in the axial direction

Fig. 8

Sample points \(\vec{z}_1(d_1, d_2)\) are simply set to the center of overlap of the two cells. Note that the detector crystals are numbered from left to right, from bottom to top

Look-Up Table (LUT) based overlap area calculation

The intersection of the trapezoidal cell of the secondary module’s projection and the square cell of the primary module is a convex polygon (Fig. 10), the area of which is the \(D_{d_1 d_2}\) that we want to calculate. If the vertices \(\mathbf{v}_i\;(i = 1, 2, \ldots, K, \text{ with } K \ge 3)\) of this polygon are known, the area can be obtained according to the shoelace formula [22]:

$$\begin{aligned} \text{Area} = \frac{1}{2} \left|\sum\limits_{i=1}^{K} \left(\mathbf{v}_i^x \mathbf{v}_{i+1}^y - \mathbf{v}_{i+1}^x \mathbf{v}_i^y\right) \right|, \qquad \mathbf{v}_{K+1} \equiv \mathbf{v}_1. \end{aligned}$$

(14)

However, this method is computationally expensive, since it requires identifying all vertices of the polygon and then sorting them in a pre-determined direction (clockwise or counter-clockwise). Moreover, as these steps are predominantly sequential, the architecture of the GPU cannot be fully exploited.
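For completeness, a reference implementation of Eq. 14 is sketched below (this is the baseline that the LUT-based approach avoids); it assumes the vertices have already been identified and ordered clockwise or counter-clockwise.

```python
def shoelace_area(vertices):
    """Area of a simple polygon given its vertices in CW or CCW order (Eq. 14)."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around: v_{K+1} = v_1
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```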

Let us go back to the definition of \(D_{d_1 d_2}\) and break down the calculation of the trapezoid–square intersection into two steps:

1.

The \(1\times 1\) square cell of the primary module is intersected by the vertical lines of the left and right sides of the trapezoidal cell, \(x=x_\mathrm{left}\) and \(x=x_\mathrm{right}\). This results in a rectangle that is \((x_\mathrm{right} - x_\mathrm{left})\) units wide and 1 unit high.

2.

The rectangle is intersected by the lines of the bottom and top sides of the trapezoidal cell, and the area between the two yields \(D_{d_1 d_2}\).

The first step is a simple cropping that is straightforward to perform. As for the second step, we propose a method that builds on the work of Ha and Mueller [23], who developed a look-up table based ray integration framework for CT. More precisely, we have adapted their 2D ray–voxel area calculation method designed for fan-beam CT geometry so that it now works for general rectangles, not just squares.

The basis of our method

In fan-beam CT, the detector is a linear array on a circular trajectory and the ith projection data corresponds to the area integral over the fan-beam segment that is defined by connecting the X-ray source with the two corners of the ith detector bin. The intersection of a fan-beam with a given (square) voxel is a 2D polygon, whose area is needed for the integral. Instead of using the shoelace formula (as in Eq. 14), Ha and Mueller proposed the construction of an area look-up table (\(\mathrm{LUT}_\mathrm{area}\)), which returns the area of the part of a voxel that lies above a given directed line. Each fan-beam can be represented by two such lines, connecting the X-ray source to the two corners of the detector bin. The area of intersection between the fan-beam and the voxel can be calculated as the difference between the area above the bottom line and the area above the top line (see Fig. 9). This solution requires only two memory reads and one subtraction, making it extremely computationally efficient.
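A small sketch of the idea follows. It is not the authors’ actual table layout: here the “area above a directed line” for a unit square centered at the origin is computed by half-plane clipping, and the table would simply tabulate this function over a \((d_\mathrm{line}, \theta_\mathrm{line})\) grid so that, at run time, a fan-beam/voxel overlap is obtained as the difference of two lookups.

```python
import numpy as np

def area_above_line(d, theta_deg):
    """Area of the part of a unit square centered at the origin lying on the
    side {p : n.p >= d} of a line with unit normal n = (-sin t, cos t) at
    signed distance d from the square's center."""
    t = np.radians(theta_deg)
    n = np.array([-np.sin(t), np.cos(t)])
    square = [np.array(p) for p in ((-.5, -.5), (.5, -.5), (.5, .5), (-.5, .5))]
    clipped = []
    for i in range(4):                          # clip square by the half-plane
        a, b = square[i], square[(i + 1) % 4]
        da, db = np.dot(n, a) - d, np.dot(n, b) - d
        if da >= 0:
            clipped.append(a)
        if da * db < 0:                         # edge crosses the line
            clipped.append(a + (b - a) * (da / (da - db)))
    if len(clipped) < 3:
        return 0.0
    x, y = np.array(clipped).T                  # shoelace formula (Eq. 14)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))

# Pre-tabulate LUT[i, j] = area_above_line(d_i, theta_j); at run time the
# fan-beam/voxel overlap is LUT(bottom line) - LUT(top line).
```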

Fig. 9

In the CT framework that is the basis of our method, the intersection area of the fan-beam and the voxel is obtained as the difference between the area above the bottom line (v1–v3–v5–v6) and the area above the top line (v2–v3–v4). The lines are identified by the perpendicular distance from the voxel center, \(d_\mathrm{line}\), and the angle from the x-axis, \(\theta_\mathrm{line}\)

The first dimension of \(\mathrm{LUT}_\mathrm{area}\) is indexed by \(d_\mathrm{line}\), the signed perpendicular distance from the voxel center to the line, whereas the second dimension is indexed by \(\theta_\mathrm{line}\), the directed angle formed by a reference axis (e.g., the x-axis) and the line. Each entry \(\left( d_\mathrm{line}, \theta_\mathrm{line} \right)\) of the LUT stores the area above the line, pre-calculated according to Eq. 14. Note that by utilizing the symmetry properties of the voxels (the square shape), it is possible to decrease the number of entries in the LUT. For example, \(d_\mathrm{line}\) may take only nonnegative values, because for negative distances, the corresponding area can be calculated as \(S - \mathrm{LUT}_\mathrm{area}(|d_\mathrm{line}|, \theta_\mathrm{line})\), where S is the total area of the voxel, i.e., \(\Delta x \times \Delta y\). The maximum value for \(d_\mathrm{line}\) is

$$\begin{aligned} d_\mathrm{line}^\mathrm{max} = \frac{\sqrt{\Delta x^2 + \Delta y^2}}{2}. \end{aligned}$$

(15)

This is a voxel’s half diagonal, since the intersection area converges either to zero or to the total area beyond this length. Similarly, the value of \(\theta_\mathrm{line}\) can be set between \(0^\circ\) and \(45^\circ\), since for angles larger than \(45^\circ\), \(\theta_\mathrm{line}\) can be replaced by a suitable \(\theta_\mathrm{line}' \in [0^\circ, 45^\circ]\) angle (e.g., \(\theta_\mathrm{line}' = |90^\circ - \theta_\mathrm{line}|\)) [23].

Our method

The left and right sides of the trapezoidal cell are vertical, whereas the bottom and top ones can have any angle \(\theta_\mathrm{line} \in [0^\circ, 90^\circ)\). If they are both horizontal (\(\theta_\mathrm{bottom} = \theta_\mathrm{top} = 0^\circ\)), the trapezoid is a rectangle and the calculation of the areas of intersections with the square cells of the primary detector module is straightforward. However, if either is not horizontal (\(0^\circ < \theta_\mathrm{line}\)), then the extensions of their lines will intersect somewhere (Fig. 10). The intersection point can be interpreted as the “X-ray source” and the method of Ha and Mueller [23] can be applied almost directly. Note that an entry for the both-horizontal case (\(\theta_\mathrm{bottom} = \theta_\mathrm{top} = 0^\circ\)) can be added to the area LUT, eliminating the need for angle checks.

Fig. 10

The intersection of the trapezoidal cell (A–D) and the square cell is a convex polygon (v1–v2–v4–v5–v6), the area of which is obtained similarly to Ha and Mueller’s area calculation (Fig. 9). The extensions of the trapezoidal cell’s bottom and top lines intersect in one point that can be interpreted as the “X-ray source”, and the opposite side of the trapezoid can be interpreted as the “detector bin”

The difference compared to the task of Ha and Mueller is that while they always have a square-shaped voxel that is intersected by the two straight lines, in our case it may happen that only a rectangular part of a “voxel” is examined, because we discard regions that are not in the minimum bounding box of the trapezoidal cell. To address this problem, we multiply by the ratio of the rectangle’s sides when calculating \(\theta_\mathrm{line}\):

$$\begin{aligned} \theta_\mathrm{line} = \arctan \left( \frac{\Delta x}{\Delta y}\, m_\mathrm{line} \right), \end{aligned}$$

(16)

where \(\Delta x\) is the width and \(\Delta y\) is the height of the rectangle, and \(m_\mathrm{line}\) is the slope of the line intersecting it.

To obtain the perpendicular distance \(d_\mathrm{line}\), we first calculate the y coordinate of the line at the rectangle’s tangential center, \(x = 0\). Then, we divide it by \(\Delta y\) to transform our rectangle to a height of 1. The obtained coordinate is the length of the hypotenuse of a right triangle in which the adjacent side’s length is \(d_\mathrm{line}\), which can be calculated by multiplying the hypotenuse length by \(\cos(\theta_\mathrm{line})\):

$$\begin{aligned} d_\mathrm{line} = \left( \left. y_\mathrm{line}\right|_{x = 0} / \Delta y\right) \cos(\theta_\mathrm{line}). \end{aligned}$$

(17)

Effectively, by adjusting the values of \(\theta_\mathrm{line}\) and \(d_\mathrm{line}\), we transform our \(\Delta x \times \Delta y\) rectangle into a \(1 \times 1\) square on which the area LUT method of Ha and Mueller can be applied.
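The rectangle-to-unit-square adjustment of Eqs. 16 and 17 can be sketched as follows; the names are ours, and we assume that the line’s y coordinate at \(x = 0\) is measured from the rectangle’s vertical center, so that the resulting \(d_\mathrm{line}\) is a signed distance from the center of the transformed unit square.

```python
import numpy as np

def line_to_lut_params(m_line, y_at_center, dx, dy):
    """Map a line of slope m_line crossing a dx-by-dy rectangle (tangentially
    centered at x = 0) to unit-square LUT parameters.
    Eq. 16: theta_line = arctan((dx / dy) * m_line)
    Eq. 17: d_line     = (y_at_center / dy) * cos(theta_line)"""
    theta = np.arctan((dx / dy) * m_line)
    d = (y_at_center / dy) * np.cos(theta)
    return d, np.degrees(theta)
```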

As a result of working with rectangles, the symmetry properties of our solution also change. For example, \(\theta_\mathrm{line}\) has a smaller range (\([0^\circ, 90^\circ)\)), so exploiting symmetry is simpler:

$$\begin{aligned} \theta_\mathrm{line}' = {\left\{ \begin{array}{ll} |90^\circ - \theta_\mathrm{line}| & \text{if } \theta_\mathrm{line} > 45^\circ,\\ \theta_\mathrm{line} & \text{otherwise.} \end{array}\right. } \end{aligned}$$

(18)

Furthermore, the maximum distance is \(d_\mathrm{line}^\mathrm{max} = \frac{\sqrt{2}}{2}\), since now \(\Delta x = \Delta y = 1\).

Algorithm overview

Let us put together the complete algorithm, which we call trapezoidal back projection (TBP). The goal of the procedure is to calculate the back projection update of a given voxel V according to Eq. 2. The numerator and the denominator of the update formula (\(\sum_L \mathbf{A}_{LV}\, y_L/\tilde{y}_L\) and \(\sum_L \mathbf{A}_{LV}\), respectively) are summed in separate variables over the LORs.

To find the LORs passing through voxel V, all detector module pairs valid in the selected coincidence mode are examined one after the other. To save on computational resources, voxel V is represented by a single emission point \(\vec{x}\). In our solution, \(\vec{x}\) is simply set to the center of the voxel.

As proposed in Sect. 2.3, the secondary module is centrally projected through emission point \(\vec{x}\) onto the plane of the primary module. The resulting projection is a trapezoid with vertical sides. The next task is to determine which cells of the secondary module’s projection overlap which cells of the primary module. To do this, we iterate over all cells of the secondary module with two nested loops: one for the axial index of the cell, \(Y_\mathrm{sec}\), and another one for the tangential index, \(X_\mathrm{sec}\). Then, we determine the position of the secondary module cell \((X_\mathrm{sec}, Y_\mathrm{sec})\) in the coordinate system of the primary module.

Let us start with the tangential direction. In the trapezoidal grid, the vertical cell borders are not equally spaced: they are closer together on the shorter (smaller \(\Delta y\)) side and further apart on the longer (larger \(\Delta y\)) side, just as in perspective drawing. To obtain exactly how far along the x-axis the cell lies, we calculate the intersection of two line segments (Fig. 11). The first segment is defined by connecting the world position of the cell \((X_\mathrm{sec}, 0)\) on the secondary module with \(\vec{x}'\), the projection of emission point \(\vec{x}\) onto the \(z = 0\) plane. The second segment is defined by connecting the world positions of the primary module cells (0, 0) and \((N_\mathrm{t}, 0)\), i.e., the (unnormalized) tangential direction vector of the primary module. (Note that these two line segments fall on the same 2D plane of 3D space: the \(z=0\) plane.) The ray parameter \(t_\mathrm{left}\) of the intersection along the tangential vector is obtained as proposed by LaMothe in [24] (Algorithm 1). The ray parameter \(t_\mathrm{right}\) of the right side is calculated similarly, but using the secondary module cell \((X_\mathrm{sec} + 1, 0)\).
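A generic parametric formulation of this segment intersection is sketched below; LaMothe’s exact routine is reproduced in Algorithm 1, so this is only an equivalent 2D helper with our own naming, returning the parameters \(t\) and \(u\) along the two segments.

```python
def line_intersection_params(p0, p1, q0, q1, eps=1e-12):
    """Parameters (t, u) such that p0 + t*(p1 - p0) == q0 + u*(q1 - q0) for
    the 2D lines through segments p0->p1 and q0->q1; None if parallel.
    Callers needing a true segment intersection also check 0 <= t, u <= 1."""
    rx, ry = p1[0] - p0[0], p1[1] - p0[1]
    sx, sy = q1[0] - q0[0], q1[1] - q0[1]
    denom = rx * sy - ry * sx
    if abs(denom) < eps:
        return None                       # parallel (or degenerate) segments
    qpx, qpy = q0[0] - p0[0], q0[1] - p0[1]
    t = (qpx * sy - qpy * sx) / denom     # parameter along p0 -> p1
    u = (qpx * ry - qpy * rx) / denom     # parameter along q0 -> q1
    return t, u

# Here, t_left is the t parameter of the primary segment (0,0)->(N_t,0)
# against the segment from x' to the secondary cell (X_sec, 0); t_right is
# obtained with the cell (X_sec + 1, 0) instead.
```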

Fig. 11

Intersection of line segments (0, 0)–\((N_\mathrm{t}, 0)\) and \(\vec{x}'\)–\((X_\mathrm{sec}, 0)\) yields parameter \(t_\mathrm{left}\) used for linear interpolation. Point \(\vec{x}'\) is the projection of the emission point \(\vec{x}\) onto the \(z=0\) plane

Algorithm 1

Algorithm for calculating the intersection of two line segments, as proposed by LaMothe [24]

Moving on to the axial direction, first, the y coordinates of the two endpoints of the bottom and top lines of the trapezoidal cell’s row need to be obtained (Fig. 12). The axial boundaries of the trapezoidal cell are then calculated using linear interpolation with parameters \(t_\mathrm{left}\) and \(t_\mathrm{right}\).
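One possible reading of this interpolation step is sketched below (the exact parameterization is detailed in Fig. 12): given the y coordinates of the row’s bottom and top edges at the trapezoid’s left and right sides, the cell’s corner heights are obtained by linear interpolation at \(t_\mathrm{left}\) and \(t_\mathrm{right}\). We assume here that the parameters have already been expressed with respect to the endpoints being interpolated; the function and argument names are ours.

```python
def axial_cell_bounds(t_left, t_right,
                      y_bot_left, y_bot_right, y_top_left, y_top_right):
    """Corner y (axial) coordinates of one trapezoidal cell, obtained by
    linearly interpolating the row's bottom and top edges at t_left and
    t_right. Returns ((y_bottom_at_left, y_bottom_at_right),
                      (y_top_at_left, y_top_at_right))."""
    lerp = lambda a, b, t: a + (b - a) * t
    return ((lerp(y_bot_left, y_bot_right, t_left),
             lerp(y_bot_left, y_bot_right, t_right)),
            (lerp(y_top_left, y_top_right, t_left),
             lerp(y_top_left, y_top_right, t_right)))
```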

Fig. 12

To obtain the y coordinates of the
