Operational Characteristics and Quality Control (QC) of a Scintillation Camera








A number of parameters of an imaging device play a major role in the delineation of a radioactive distribution. Of these, two, spatial resolution and sensitivity, are the most important. For scintillation cameras, two other operational characteristics, uniformity and high-count-rate performance, are also considered. These and some routine QC procedures constitute the subject matter of this chapter. Operational characteristics of Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) are discussed in Chapters 14 and 15, respectively.


Quantitative Parameters for Measuring Spatial Resolution

Spatial resolution is defined as the ability of an imager to reproduce the details of radionuclidic distribution. The finer the details an imaging device reproduces, the better the spatial resolution it has. How is spatial resolution measured quantitatively? Two parameters—full-width at half-maximum (FWHM) of a point-spread function (PSF) and modulation transfer function (MTF)—are used to measure spatial resolution of an imaging device. Sometimes “bar phantoms” are used as a semiquantitative measure of spatial resolution. These are discussed later in this chapter.

PSF and FWHM as Measures of Spatial Resolution, R. If we image a single point source and plot the intensity profile across its center, a bell-shaped curve similar to that shown in Figure 12.1 results. This curve is known as the PSF(x) of an imager. The FWHM of this curve can be used to measure the spatial resolution R quantitatively. The unit of resolution is a unit of length, generally centimeters or millimeters. A narrower FWHM implies a better spatial resolution of the imaging device. Thus, an imaging device with a 2-mm FWHM is a better imaging device than one with a 5-mm FWHM. Measurement of FWHM is quite involved and requires special computer software that may or may not be available with a scintillation camera.
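As a rough illustration of how the FWHM is extracted from a measured profile (with an assumed Gaussian profile standing in for real camera data), the half-maximum crossings on either side of the peak can be located by linear interpolation:

import numpy as np

# Hypothetical point-spread profile sampled every 0.1 mm.
x = np.linspace(-20.0, 20.0, 401)             # position in mm
psf = np.exp(-x**2 / (2 * 3.0**2))            # illustrative Gaussian PSF, sigma = 3 mm

half_max = psf.max() / 2.0
above = np.where(psf >= half_max)[0]          # samples at or above half-maximum
left, right = above[0], above[-1]

def crossing(i_lo, i_hi):
    # Linearly interpolate the position where the profile crosses half-maximum.
    return x[i_lo] + (half_max - psf[i_lo]) * (x[i_hi] - x[i_lo]) / (psf[i_hi] - psf[i_lo])

fwhm = crossing(right, right + 1) - crossing(left - 1, left)
print(f"FWHM = {fwhm:.2f} mm")                # about 7.1 mm for sigma = 3 mm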

FWHM is a useful parameter for expressing the relationship of spatial resolution to various parameters of an imaging device, such as the size, shape, and length of the holes of the collimator of a scintillation camera. Its main drawback is that it does not measure spatial resolution under varying object-contrast conditions. As a result, it is possible to design two imaging devices that have equal spatial resolutions according to this definition, although in practice one performs better than the other.






Figure 12.1. Intensity profile through the center of a point-source image taken with a scintillation camera. The width or narrowness of such a profile is a good measure of spatial resolution: the narrower the curve, the better the spatial resolution of the imager. The full width of this curve at the two points where the response has decreased to one-half of the maximum (FWHM) is customarily used as a quantitative parameter for measuring the spatial resolution of an imaging device.

MTF. The MTF gives a complete characterization of the spatial resolution of an imaging device, provided the response of the imaging device is linear. Although the latter condition is not strictly met for imaging devices, the MTF is still useful in their evaluation. Its main drawback is that its relationship to the various parameters of an imaging device cannot be expressed in simple and understandable terms.

To understand this parameter fully, knowledge of Fourier analysis is essential. However, to comprehend it conceptually, an analogy with sound is helpful. Any sound—the ding-dong of a bell or the pretty voice of a singer—is made up of a number of sound waves of different frequencies. Once the component frequencies and their strengths (amplitudes) are known for a given sound, it can then be resynthesized (in a laboratory) by proper superimposition of these frequencies. In a similar fashion, any spatial distribution (object) can be broken down into a number of spatial frequencies, and the original distribution (object) can then be resynthesized by proper superimposition of these spatial frequencies.

How does this breaking up of a spatial distribution into component spatial frequencies aid in evaluating the imaging device? Not per se, but if we measure the degradation produced by an imaging device as a function of spatial frequency, the resulting function provides the information needed to characterize the imaging system completely. The degradation M produced by an imaging device for a spatial frequency ν is measured as the ratio of the contrast (amplitude of the wave) of that frequency in the image to its contrast in the object (Fig. 12.2).
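Written symbolically (a standard statement of the definition just given, not an equation reproduced from this chapter), the modulation for a spatial frequency ν is

M(\nu) = \frac{A_{\text{image}}(\nu)}{A_{\text{object}}(\nu)}

and the MTF is simply M(ν) plotted as a function of ν.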






Figure 12.2. Modulation transfer function. When a spatial frequency ν of an object is imaged, its amplitude may change. The ratio of the amplitude in the image of a spatial frequency to that in the object is known as modulation (M). Measurement of M as a function of ν yields the MTF of an imaging device.

Measurement of M as a function of ν, then, produces the MTF of the imaging device. When the value of M for a particular spatial frequency is 1, this indicates no degradation of contrast for that frequency. If M equals 0, the imaging device is unable to reproduce this particular spatial frequency; therefore, a value of 0 represents the maximum degradation. Values of M between 1 and 0 represent the extent of degradation for a given frequency. An ideal imager (which produces an exact image of an object) has a value of M = 1 for all the spatial frequencies.

The use of the MTF to compare the spatial resolutions of two different imaging devices can be appreciated easily by examining Figure 12.3, which depicts the MTFs of three imaging devices, A, B, and C. Here, the MTF of imaging device A is higher at all spatial frequencies than the MTFs of imaging devices B and C; therefore, imaging device A possesses the best spatial resolution of the three. The choice between B and C is more difficult. At low spatial frequencies, B is superior to C, whereas at higher frequencies C is superior to B. The selection of the particular imaging device in this case depends on the type of objects to be imaged. If an object is dominated by high frequencies, device C would be the better choice; if an object contains primarily low frequencies, device B would be more advantageous. The MTF of an imaging device is difficult to measure directly. Instead, it is calculated from the line-spread function (LSF) (discussed in the following) of an imager, which can be easily measured.







Figure 12.3. MTF of three imaging devices. Modulation (M) for imaging device A is higher than that of B and C at all spatial frequencies. Therefore, it possesses the best spatial resolution of the three. The choice between B and C is difficult because at low spatial frequencies, B is superior to C, whereas at high spatial frequencies, C is superior to B.

Resolution of an Imaging Chain. Quite often an imaging device can be broken into components, with each component contributing to the final or system resolution of the imaging device; for example, in the case of a scintillation camera, the final or system resolution is derived from two components, the collimator and the x, y localization mechanism. How does one combine these component resolutions to give the system resolution? If the resolution is expressed using FWHM and R1, R2, … are the component resolutions, respectively, then the system resolution RS is given as follows:

R_S = \sqrt{R_1^2 + R_2^2 + \cdots}    (12.1)


For the MTF representation of resolution, the system MTF, MTF_S, is given by the following expression:

MTF_S(\nu) = MTF_1(\nu) \times MTF_2(\nu) \times \cdots



Quantitative Parameters for Measuring Sensitivity

In addition to spatial resolution, the other important parameter of an imaging device is sensitivity, which is assessed on an annual basis. Sensitivity can be defined as the ability of an imaging device to use efficiently all the photons available from an object within a given unit of time. Three parameters—point sensitivity, line sensitivity, and plane sensitivity—have been used to measure the sensitivity of an imaging device. Each has relative advantages and disadvantages.






Figure 12.4. LSF of an imaging device. With a gamma camera, the LSF can be obtained directly from the image of a line source, provided the camera is interfaced with a computer system.

Point Sensitivity, Sp. This parameter is defined as the fraction of γ-rays detected per unit of time for a point source of radioactivity. In a scintillation camera, Sp is more or less constant in the field of view of the collimator.

Line Sensitivity, SL. This parameter is defined as the fraction of γ-rays detected per unit of time per unit of length of a very long line source of uniform radioactivity. The count profile of a line source, as determined by an imaging device along a direction perpendicular to the line source, is known as the LSF (Fig. 12.4). LSF(x) is primarily used in the calculation of the MTF of an imager as follows:

MTF(\nu) = \frac{\left| \int \mathrm{LSF}(x)\, e^{-i 2\pi \nu x}\, dx \right|}{\int \mathrm{LSF}(x)\, dx}
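As a rough numerical sketch of this calculation (using a hypothetical Gaussian LSF in place of real camera data), the MTF can be obtained from a sampled LSF with a discrete Fourier transform:

import numpy as np

# Hypothetical sampled LSF: counts versus position across the line source.
dx = 0.1                                  # sample spacing in mm (assumed)
x = np.arange(-30.0, 30.0, dx)
lsf = np.exp(-x**2 / (2 * 3.0**2))        # illustrative Gaussian LSF, sigma = 3 mm

# The MTF is the magnitude of the Fourier transform of the LSF,
# normalized so that MTF(0) = 1.
ft = np.abs(np.fft.rfft(lsf))
mtf = ft / ft[0]
freqs = np.fft.rfftfreq(lsf.size, d=dx)   # spatial frequencies in cycles/mm

for nu, m in zip(freqs[:6], mtf[:6]):
    print(f"nu = {nu:.3f} cycles/mm -> MTF = {m:.3f}")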


Plane Sensitivity, SA. Plane sensitivity is defined as the fraction of γ-rays detected per unit of time per unit of area of a large plane source of uniform radioactivity. This parameter is commonly used to compare the sensitivities of two imaging devices. The principal advantage of SA is the ease with which it can be measured. Plane sensitivity does not vary with the distance of the plane source from the collimator as long as the area of the plane source is larger than the field of view of the collimator at that distance.


Factors Affecting Spatial Resolution and Sensitivity of an Imager

The spatial resolution and sensitivity of an imaging device depend on a number of variables described in the following. Theoretically, the exact relationships of these variables to spatial resolution and sensitivity are difficult to obtain in the general case. However, by making certain assumptions, these relationships can be expressed in simplified mathematical form. Using these formulas, an approximation of the dependence of spatial resolution and sensitivity on a given variable can be easily deduced. In the following discussion, we use these simpler formulas. In addition, we use R (FWHM of a PSF) and SA as measures of the spatial resolution and sensitivity, respectively, of an imaging device, assuming that there is no septal penetration by γ-rays in the collimator and no scattering of γ-rays in the radionuclide source.


Scintillation Camera

System Resolution. The system spatial resolution, RS, of a scintillation camera comprises R1 and R2, where R1 is the intrinsic spatial resolution of a scintillation camera and R2 is the spatial resolution of the collimator used with the scintillation camera. Spatial resolution RS is approximately related to R1 and R2 by equation (12.1).
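As an illustration with assumed (not measured) values, an intrinsic resolution of R1 = 3.5 mm combined with a collimator resolution of R2 = 8 mm gives

R_S = \sqrt{(3.5\ \text{mm})^2 + (8\ \text{mm})^2} \approx 8.7\ \text{mm}

showing that the poorer of the two components, usually the collimator, dominates the system resolution.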

Intrinsic Resolution, R1. The intrinsic spatial resolution R1, which is a measure of the uncertainty in the localization of the point where light is produced in the crystal, is degraded with an increase in the thickness of the NaI(Tl) crystal and is improved with an increase in γ-ray energy. The improvement in the intrinsic spatial resolution of a camera with the γ-ray energy is shown in Figure 12.5.
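One common way to rationalize this energy dependence (a standard statistical argument, not an equation taken from this chapter) is that the uncertainty in locating the scintillation is governed by fluctuations in the number N of light photons collected per event, and N grows roughly in proportion to the deposited γ-ray energy:

R_1 \propto \frac{1}{\sqrt{N}} \propto \frac{1}{\sqrt{E_\gamma}}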

Collimator Resolution, R2. Spatial resolution R2 depends on various collimator parameters, such as the collimator length L and the diameter of the holes d. We limit our discussion here to a parallel-hole collimator (Fig. 12.6), although similar considerations apply for a converging or diverging collimator. R2 in this case depends on the hole diameter d, the length of the collimator L, the distance C from the back surface of the collimator to the detection plane, and the distance F of the source from the collimator face. R2 is given by the following expression:
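A commonly quoted geometric form of this relationship for a parallel-hole collimator, ignoring septal penetration, is the standard textbook approximation (which may differ in detail from the exact expression intended here):

R_2 \approx \frac{d\,(L + F + C)}{L}

so that collimator resolution degrades linearly with the source distance F and improves as the holes are made longer (larger L) or narrower (smaller d).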
