Energy Calibration
In the introductory section on the gamma spectrum, we briefly discussed how an energy calibration is generally performed (for a recap, click here).
Here, we want to deepen that knowledge.
Gamma radiation hitting a detector deposits some or all of its energy as electric charge in the detector. This charge is processed further in the downstream measurement chain and is finally recorded in a multichannel analyzer, where the current value of the channel corresponding to this amount of charge is increased by 1. Over the course of the measurement, a histogram builds up, known as the pulse-height spectrum, in which each channel holds the frequency with which a specific amount of charge was detected. Depending on the detector type, measuring electronics, and settings, this distribution typically spans 1024, 2048, 8192, or 16384 channels. The width of a channel, i.e., the charge range registered in it, can be set via the parameters of the measurement chain.
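The accumulation described above amounts to a simple histogram update. The following sketch illustrates this with hypothetical digitized pulse heights and an assumed 8192-channel analyzer:

```python
import numpy as np

N_CHANNELS = 8192  # one typical MCA resolution (assumption for this sketch)

# Hypothetical digitized pulse heights (channel indices) delivered by the ADC
pulse_heights = np.array([1170, 1171, 1170, 4096, 1172, 4096, 1170])

# Each registered pulse increments its corresponding channel by 1
spectrum = np.zeros(N_CHANNELS, dtype=int)
for channel in pulse_heights:
    spectrum[channel] += 1

# Equivalent vectorized form:
# spectrum = np.bincount(pulse_heights, minlength=N_CHANNELS)
```

The resulting `spectrum` array is the pulse-height spectrum: index = channel, value = number of registered pulses in that channel.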
The further evaluation of gamma spectrometry measurements is generally based on energies, i.e., each channel must first be assigned its corresponding (mean) energy; this assignment is the energy calibration. Only the correlation between channel position and energy allows a spectrum to be interpreted unambiguously.
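Once calibration coefficients have been determined, the channel-to-energy assignment is just an evaluation of the calibration function. A minimal sketch for a first-order (linear) calibration, with hypothetical coefficients of the kind a real fit would produce:

```python
def channel_to_energy(channel, intercept=-0.3, slope=0.285):
    """Map a channel number to its (mean) energy in keV.

    intercept and slope are hypothetical example values;
    real coefficients come from the calibration fit."""
    return intercept + slope * channel

# With these assumed coefficients, channel 4096 corresponds to about 1167 keV
energy = channel_to_energy(4096)
```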
Energy calibration is closely tied to the chosen settings of the detector system. At the beginning of a calibration, one should therefore always consider the goal being pursued, i.e., which energy range one wants to capture or expects in the measurement. Nothing is more frustrating than investing significant time in calibration and measurement only to find that too small an energy range was chosen, so that the peaks of the target lines lie outside the measurement range. Conversely, choosing a (much) too large energy range can lead to extremely long measurement times, since the counts of a peak (which has a certain width) are then spread across many channels.
An energy calibration can be carried out in various ways:
- The calibration measurement is performed with calibration sources before the planned actual measurement of a sample.
- For the calibration measurement, the natural background present at the measurement site can, under certain circumstances, be used. The lines of the following nuclides are suitable here:
  - 214Bi (609.3 keV) from the 238U decay series (uranium-radium series)
  - 40K (1460.8 keV)
  - 208Tl (2614.5 keV) from the 232Th decay series (thorium series)
- The energy calibration can be determined from the characteristic lines of the measured sample spectrum itself. However, the sample must contain at least two characteristic lines that allow an unambiguous assignment and are clearly visible in the measured spectrum.
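A minimal two-point calibration from two unambiguously assigned lines, e.g. 40K (1460.8 keV) and 208Tl (2614.5 keV) from the background, can be sketched as follows. The channel positions used here are hypothetical:

```python
# Hypothetical peak positions found in the measured spectrum
ch1, e1 = 4174, 1460.8   # assumed channel of the 40K line (energy in keV)
ch2, e2 = 7470, 2614.5   # assumed channel of the 208Tl line (energy in keV)

# Two points determine a first-order (linear) calibration exactly
slope = (e2 - e1) / (ch2 - ch1)
intercept = e1 - slope * ch1

def energy(channel):
    """Energy in keV assigned to a channel by this two-point calibration."""
    return intercept + slope * channel
```

By construction, the calibration function reproduces both reference lines exactly; its quality between and beyond them depends on how linear the system really is.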
Tip:
If there are no special specifications for the energy range to be considered, then at least 2800 keV should be chosen as the upper limit. If a characteristic line above the background level is found at 2614.5 keV (from 208Tl), this is a clear indication of the presence of 228Th and its decay products.
Whether the natural background can be used for energy calibration also depends on the shielding of the detector system. If the detector is housed in a lead shield, which is designed precisely to minimize the contribution of the background present at the measurement site, two problems arise: on the one hand, extremely long measurement times are required to accumulate sufficient statistics; on the other hand, the low-energy background lines are largely absorbed in the shielding and thus cannot contribute to the energy calibration.
One advantage of using the actual measurement spectrum for energy calibration is that no additional calibration measurement is required. However, this method only works if one is relatively sure that the spectrum will contain usable (known, i.e., unambiguously assignable) lines. In the worst case, a supplementary calibration measurement with calibration nuclides can still be performed after the measurement. This procedure can also be used for routine monitoring of an existing energy calibration by examining the deviations between the measured and actual energies. If these deviations are too large, this is a reason to redo the energy calibration.
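The routine check mentioned above, comparing measured peak energies against reference energies, can be sketched like this. The tolerance and the "measured" values are assumptions chosen for illustration:

```python
# Reference energies (keV) of well-known background lines
REFERENCE = {"214Bi": 609.3, "40K": 1460.8, "208Tl": 2614.5}

# Hypothetical peak energies obtained from the current spectrum
# with the existing calibration
measured = {"214Bi": 609.6, "40K": 1461.5, "208Tl": 2616.9}

TOLERANCE_KEV = 1.0  # assumed acceptance limit for the deviation

def calibration_ok(measured, reference, tol=TOLERANCE_KEV):
    """Return True if every line lies within the allowed deviation."""
    return all(abs(measured[n] - reference[n]) <= tol for n in reference)

# In this hypothetical example the 208Tl line deviates by 2.4 keV,
# so the check fails and a recalibration is due
needs_recalibration = not calibration_ok(measured, REFERENCE)
```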
The most elaborate but also most precise method is the use of dedicated calibration sources in the measurements. The data obtained here can often also be used for the efficiency calibration, which puts the increased effort of this measurement into perspective.
Note:
Most modern gamma spectrometry systems have a nearly linear relationship between channel number and energy. In principle, a two-point calibration would therefore be sufficient. However, with multiple calibration points, you can usually further increase accuracy.
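With more than two calibration points, the coefficients can be determined by a least-squares fit. A sketch using numpy, with hypothetical (channel, energy) pairs based on commonly used calibration lines:

```python
import numpy as np

# Hypothetical channel positions of known calibration lines
channels = np.array([207, 1247, 2319, 4115, 4674])
# Corresponding line energies in keV (241Am, 133Ba, 137Cs, 60Co, 60Co)
energies = np.array([59.5, 356.0, 661.7, 1173.2, 1332.5])

# A first-order polynomial is usually sufficient for modern systems;
# np.polyfit returns coefficients from highest power down
slope, intercept = np.polyfit(channels, energies, deg=1)

def energy(channel):
    """Energy in keV assigned to a channel by the fitted calibration."""
    return intercept + slope * channel
```

With multiple points, no single point is reproduced exactly; instead, the squared residuals over all points are minimized, which averages out the uncertainty of the individual peak positions.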
The following calibration tool allows you to explore the effect on the calibration curve using realistic test data or data you enter yourself. The fit to the data can optionally use a polynomial of 1st, 2nd, or 3rd order. The coefficient of determination R² describes the quality of the fit (0 to 1, with 1 being a perfect fit), and the RMSE is the root mean square error.
Further Information:
For the approximation of data, R² and RMSE provide two metrics to evaluate the quality of an approximation.
R² is a relative measure that describes how well the approximating function (the fit) explains the variance of the data. An R² value close to 1 means that the chosen fit reflects the structure of the data well but says nothing about the absolute size of the errors.
RMSE measures the average distance between the approximated and actual data points, and it is an absolute measure, given in the same units as the data themselves.
When interpreting these metrics, note that RMSE is expressed in the units of the data and therefore scales with them: if the data cover a larger range, RMSE will often be larger even though the relative quality of the approximation is unchanged or has even improved. RMSE should therefore always be interpreted in the context of the scale and amount of the data, while R² is a relative measure that always lies between 0 and 1.
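Both metrics are straightforward to compute from the fit residuals. A sketch with hypothetical fitted values:

```python
import numpy as np

def r_squared(y, y_fit):
    """Coefficient of determination: share of the variance explained by the fit."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y, y_fit):
    """Root mean square error, in the same units as the data."""
    return np.sqrt(np.mean((y - y_fit) ** 2))

# Hypothetical calibration energies and the values predicted by a fit (keV)
y     = np.array([661.7, 1173.2, 1332.5, 1460.8])
y_fit = np.array([661.5, 1173.6, 1332.1, 1461.0])
```

For these example values, R² is very close to 1 while RMSE is about 0.3 keV, illustrating that the two metrics answer different questions: relative explained variance versus absolute deviation.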
Did you use the example data? Did you notice anything when performing the calculations?
If you perform a curve fit, you will find that none of the possible polynomial orders yields a perfect fit! On closer examination of the data points, you will notice that the two rightmost points seem to "fall out of line." Have the fit calculated for a 1st-order polynomial and note the R² value; this is a measure of how well the fit matches the data points, and a perfect fit would yield R² = 1. Now delete the last two data points (those at 1173.2 keV and 1332.5 keV) and recalculate the fit: the R² value should now be much closer to the optimal value of 1. Then re-enter the two data points with the following modified values: (1173.2 keV, channel 4114), (1332.5 keV, channel 4674), and perform the 1st-order fit again. Does it fit better now?
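The effect described above can also be reproduced numerically. The data points here are hypothetical stand-ins for the tool's example data, chosen so that the last two deviate from the linear trend:

```python
import numpy as np

def fit_r2(x, y, deg=1):
    """Fit a polynomial y(x) by least squares and return its R² value."""
    coeffs = np.polyfit(x, y, deg)
    residuals = y - np.polyval(coeffs, x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical calibration points: energy (keV) vs. channel number
energies = np.array([356.0, 661.7, 1173.2, 1332.5])
outlier_channels   = np.array([1249, 2321, 4300, 4950])  # last two off the trend
corrected_channels = np.array([1249, 2321, 4114, 4674])  # on the linear trend

r2_outliers  = fit_r2(energies, outlier_channels)
r2_corrected = fit_r2(energies, corrected_channels)
```

With the deviating points, R² drops noticeably below 1; with the values on the linear trend, it comes much closer to 1, mirroring the behavior you can observe in the tool.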
A warning:
By changing the data, we simply wanted to illustrate that the data must always be questioned critically. Since we know that modern detector systems typically show a linear relationship between channel number and energy, we should ask ourselves why the last two values deviate from this linear relationship. What reasons could there be? If we do not find satisfactory answers to these questions, we should not simply delete these data points; they may actually reflect reality! Arbitrarily inserting "better" data points or deleting "bad" ones (as we did above for illustration) is never acceptable!