Basic Physics of Digital Radiography/The Image

The image is the means by which the patient derives a health benefit from the radiation exposure. The generation of high quality images at low patient dose can therefore be considered a key objective of Diagnostic Radiography. Digital radiography images are generally viewed on specialised computer screens for diagnostic reporting. Technical features of such screens are considered in this chapter and objective measures of physical image quality are described. Various methods for evaluating image quality are also discussed.

Image Display


CRT displays were once the only devices which could be used for displaying digital radiographic images. This situation changed in the early 21st century following the development of Liquid Crystal Displays (LCDs) for use in digital television as well as in home and business computers. Medical imaging has benefitted from these developments. However it should be appreciated that digital radiography displays require superior performance because of the relatively large number of pixels in CR and DR images[1].

An LCD is a two-dimensional, electro-optical light modulator that is mounted in front of a back-light - see Figure 6.1. The light is modulated for each pixel by applying an electric field to a thin layer of nematic liquid crystal mounted between two polarising films. Active Matrix Liquid Crystal Displays (AMLCDs) apply the electric field to each pixel using a large array of Thin Film Transistor (TFT) switches made from amorphous silicon (a-Si) deposited on a glass substrate.

 
Fig. 6.1: Illustration of the sandwich of materials used to form an LCD.

Note that the development of such large TFT arrays has also led to the subsequent development of DR image receptors.

The size of the monitor screens used in digital radiography is generally sufficient for 35 cm x 43 cm radiographs to be displayed at a resolution of 2,048 x 2,560 pixels (i.e. 5 megapixels) in portrait mode, for instance, although 3 megapixel displays are also in use. Conventional computer monitors have been found to be inadequate for primary radiographic diagnosis, but can be used subsequently for display in conjunction with a radiological report. Display luminance is also high, >200 cd/m² typically, so as to achieve an image brightness adequate for diagnostic purposes. In addition, images are generally displayed following corrections for human visual perception. The Grayscale Standard Display Function (GSDF), for instance, is widely applied in an attempt to generate consistent rendition of images irrespective of the actual display device used.

 
Fig. 6.1.5: Deconstruction of a medical-grade 3-Megapixel display.

Components of a medical-grade LCD are shown in Figure 6.1.5. Here the LCD sandwich is shown in the top right photo, the back light consisting of fluorescent strips in the bottom left (note that the diffusing layer has been removed) and the LCD active layer in the bottom right. The device contains a small photodetector at the faceplate (not shown) which is used to maintain the brightness of the displayed image, and an ambient light sensor.

Veiling glare degrades image quality as a result of light scattering within the display’s faceplate and electronic cross-talk. This results in a low-frequency, image-dependent degradation of subtle features especially in dark regions with nearby bright areas. Veiling glare must therefore be minimised in the design of medical diagnostic displays. Luminance uniformity is also an important feature.

The ambient lighting in the reporting room has been found to be critical. This is mainly because of light reflections onto the screen. Such reflections are generally of two types: specular (where spatial features of the reflecting objects can be seen in the reflection) and diffuse (which adds a relatively uniform luminance to the displayed image). Both types need to be minimised through appropriate room design.

Given the demanding applications for which these displays are used, many have features for remote performance monitoring and calibration for quality assurance purposes.

Image Contrast

 
Fig. 6.2: Transfer characteristic for three types of video camera.

Subject contrast, developed by exposing the patient to an X-ray beam, is converted to image contrast on the display monitor. In general, this conversion is expressed by the Transfer Characteristic of the image receptor and can be obtained by plotting the radiation exposures necessary to generate a broad range of image receptor outputs. The transfer characteristic is generally linear in the case of CR and DR imagers, but this is not always so with XII-video systems. The XII itself generally has a linear characteristic, but some video cameras, e.g. the vidicon, may use a transfer characteristic which selectively boosts low exposure regions of images - see Figure 6.2. This type of behaviour is usually characterised by a power function which is expressed by what is called the camera's Gamma, γ, i.e.

I = k L^γ,

where I is the signal current developed by the video camera when L is the light exposure at its input and k is the electronic gain. In the case of the Plumbicon and Chalnicon, the gamma is close to unity, while for the vidicon it is of the order of 0.7.
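The effect of gamma can be sketched numerically. The following is a minimal illustration with an arbitrary gain and hypothetical exposure values (none of these numbers come from a specific camera):

```python
import numpy as np

# Transfer characteristic I = k * L**gamma, with hypothetical values:
# k is the electronic gain and L the relative light exposure at the camera input.
k = 1.0
L = np.array([0.01, 0.1, 0.5, 1.0])

for gamma, camera in [(1.0, "Plumbicon/Chalnicon"), (0.7, "vidicon")]:
    I = k * L**gamma  # signal current for each exposure level
    print(f"{camera} (gamma={gamma}):", np.round(I, 3))
```

A gamma below unity selectively boosts the dark end: at L = 0.01 the vidicon output is about 0.04, four times that of a unity-gamma camera at the same exposure.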

Image contrast is also affected by scattering processes within the image receptor. In the case of the XII, with its multi-stage image transduction, these processes include:

  • X-ray scattering (Compton effects) in the entrance window,
  • X-ray scattering (Compton effects) in the input phosphor,
  • Lateral diffusion of light in the input phosphor,
  • Lateral diffusion of light in the output phosphor,
  • Electron scattering within the body of the XII tube, and
  • Retrograde emission of visible light by the output phosphor, which causes electrons to be ejected from the photocathode and in turn gives rise to further light emission from the output phosphor.
 
Fig. 6.3: The transfer characteristic of CR technology showing excellent linearity and broad dynamic range, properties not shared by the traditional film-screen technology (see the red curve).

These scattering effects are collectively referred to as Veiling Glare. They are substantially lower with CR and DR image receptors, at below 10% as opposed to 30% or more in the XII, but they can nevertheless manifest themselves in a manner similar to scattered X-rays in the detected radiation beam and cause similar reductions in contrast.

Image contrast, in the context of our earlier consideration, can be given simply by an expression derived from the subject contrast in scatter conditions, i.e.

C = k [ln(I_A + S + G) - ln(I_B + S + G)]

where k is the gain of the image receptor and G is the veiling glare signal. Once again, note that G is assumed to be the same in both bone and tissue regions in the above expression, when in fact it can be expected to vary slowly throughout an image.
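For illustration, the contrast reduction caused by scatter and glare can be computed directly from this expression. The intensity, scatter and glare values below are assumptions chosen for the sketch, not measured data:

```python
import math

I_A, I_B = 100.0, 50.0   # hypothetical primary intensities behind tissue and bone
S = 30.0                 # hypothetical scattered radiation signal
G = 10.0                 # hypothetical veiling glare signal
k = 1.0                  # image receptor gain

C_ideal = k * (math.log(I_A) - math.log(I_B))                  # no scatter or glare
C_real = k * (math.log(I_A + S + G) - math.log(I_B + S + G))   # with scatter and glare

print(round(C_ideal, 2), round(C_real, 2))  # contrast drops from 0.69 to 0.44
```

The additive terms S and G compress the logarithmic difference, which is exactly the contrast reduction described in the text.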

 
Fig. 6.4: Images of a pelvic phantom demonstrating the dynamic range of CR. The image on the left is taken using 2 mAs and the one on the right using 160 mAs exposure. The major discernible difference is a marked increase in variability of grey levels in the lower exposure image (see the arrowed region, for example).

A second parameter to be derived from the transfer characteristic is the dynamic range, which expresses the range of input signals over which the image receptor is sensitive. In the case of CR, it is about four orders of magnitude - see Figure 6.3, which shows its transfer characteristic in comparison with the traditional film/screen technology. The result is that under-exposure and over-exposure of regions, traditionally seen in radiographs, are much less of an issue in clinical imaging. This feature of CR is illustrated by the radiographs in Figure 6.4.

In fluoroscopy, automatic brightness control (ABC) and automatic dose rate control (ADRC) can be used to adjust the X-ray exposure factors to match the patient's anatomy being examined, as discussed in an earlier chapter. This can be achieved using a photodetector at the XII output to sense image brightness, for instance, so as to feed a signal back to the HV generator to automatically adjust the kV and/or the mA. It can also be achieved using the image signals themselves, sampled from the central portion of images, for example.

Spatial Resolution


Spatial resolution refers to the ability of a radiographic imaging system to record fine detail. Obviously, detail is a pre-requisite for clinical images of excellent quality. However, it should be appreciated that not all image receptors demonstrate the same performance in this regard.

 
Fig. 6.5: A spatial resolution test object.

The maximum spatial resolution of an imaging system can be readily obtained by imaging a resolution test object - an example of which is shown in Figure 6.5, panel (a). The test object consists of narrow parallel slits in a lead sheet at spacings which decrease to beyond the maximum resolution of the image receptor. The finest spacing resolved in the image is called the Limiting Spatial Resolution, which can be determined from the figure to be about 3.5 line pairs/mm.

Note that the width of each slit in the test object is the same as that of the adjacent piece of lead, so that the radiation intensity transmitted through the test object can be considered in profile to be represented by a square wave - see panel (b). A Spatial Period (usually measured in mm) can be used to characterize this square wave and is equal to the width of one line pair, i.e. the width of a slit plus its adjacent piece of lead. Its reciprocal is called the Spatial Frequency, which is generally expressed in line pairs/mm (LP/mm) or cycles/mm.
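The reciprocal relationship between spatial period and spatial frequency amounts to a one-line calculation; the slit width here is a hypothetical example:

```python
slit_width = 0.1                 # mm (hypothetical)
spatial_period = 2 * slit_width  # one line pair: slit plus adjacent lead strip
spatial_frequency = 1 / spatial_period

print(spatial_frequency)         # 5.0 line pairs/mm
```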

 
Fig. 6.6: Profile (in blue) through a radiograph of a lead bar test object.

An amplitude profile through an image of the test object allows the modulation at each spatial frequency to be determined - see Figure 6.6 - and can be used to provide more complete information than the limiting resolution on its own.

Here, the modulation is obtained from the difference between the maximum and minimum pixel value at each spatial frequency and expressed in the form of a Square Wave Response (SWR) as shown in Figure 6.7. The modulation is seen to be relatively constant at low spatial frequencies and then to decrease rapidly towards zero. The SWR allows the spatial imaging capabilities to be expressed for both broad, relatively uniform objects, i.e. those with low spatial frequencies, and fine detail, i.e. those with high spatial frequencies, as well as features with intermediate frequencies.

 
Fig. 6.7: A square wave response.

A more complete and elegant approach to the assessment of spatial resolution is provided by Fourier methods. These computations can be used in the mathematical analysis of factors which contribute to and detract from the generation of images with excellent spatial resolution.

 
Fig. 6.8: The Line Spread Function (LSF) and its origin for an X-ray intensifying screen.

Fourier methods can be used to analyse the response of an imaging system to a square wave input using a narrow slit in a sheet of lead, for instance. Remember that a square wave is the equivalent of the sum of an infinite number of sine waves. The imaging of such a slit is illustrated in Figure 6.8 where the transmitted radiation is seen to excite fluorescence in an intensifying screen. The fluorescent light is emitted in all directions and the image of the slit therefore becomes spread out over a broader area than is ideal. The effect is seen in the illuminance profile which consists of a central peak, as expected, with tails extending around it. This type of profile is called the Line Spread Function (LSF). The effect on the slit’s image as a result is seen as a slight tinge of greyness around the slit’s edges to an extent given by the tails of the LSF. Better performance can therefore be seen as a narrowing in the LSF and a suppression of its tails.

The same type of data can be obtained in two dimensions using a small hole in a sheet of lead and is called the Point Spread Function (PSF).

 
Fig. 6.9: The modulation transfer function (MTF).

When the Fourier Transform of an LSF is calculated, then the imaging system’s response to sine waves of all spatial frequencies is obtained. This response is called the Modulation Transfer Function (MTF) - see Figure 6.9. It can be seen that the modulation falls off with increasing spatial frequency, as was seen with the Square Wave Response, but as a continuous curve representing all, and not just discrete, spatial frequencies.

The response of an ideal imaging system is also shown in the figure. Its constant value of 1.0 at all spatial frequencies implies that all details in the patient will be imaged perfectly, unlike our real intensifying screen whose modulation drops by 20% at spatial frequency, A, and by 90% at frequency, B, for example - whatever their absolute values might be.
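The Fourier route from LSF to MTF can be sketched as follows, using a hypothetical Gaussian LSF (the width chosen is illustrative only, not that of any particular screen):

```python
import numpy as np

dx = 0.01                                # sampling interval, mm
x = np.arange(-5, 5, dx)                 # spatial axis, mm
sigma = 0.15                             # mm; hypothetical width of the LSF
lsf = np.exp(-x**2 / (2 * sigma**2))     # Gaussian line spread function

mtf = np.abs(np.fft.rfft(lsf))           # magnitude of the Fourier transform
mtf /= mtf[0]                            # normalise to 1.0 at zero frequency
freqs = np.fft.rfftfreq(len(x), d=dx)    # spatial frequencies, cycles/mm

# For a Gaussian LSF the MTF is itself Gaussian: exp(-2 * (pi * sigma * f)**2)
print(round(float(np.interp(1.0, freqs, mtf)), 2))  # modulation at 1 LP/mm, ~0.64
```

Narrowing the LSF (smaller sigma) stretches the MTF to higher frequencies, mirroring the statement above that better performance corresponds to a narrower LSF with suppressed tails.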

 
Fig. 6.10: MTFs of two hypothetical image receptors.

Spatial frequency, B, in Figure 6.9 can be considered to be approaching the extreme of the resolving capability of an imaging system. The limiting spatial resolution is sometimes defined as the frequency where the modulation drops to 4%. When the resolving ability of two different image receptors is compared, as in Figure 6.10 for instance, we could infer from measuring the limiting resolution alone that system B was superior to system A, at 8 compared with 5 LP/mm. An MTF comparison would reveal, however, that system A in fact provides superior quality at frequencies less than about 3 LP/mm, which is where many features of clinical interest are said to lie.

 
Fig. 6.11: MTFs for phosphor plates used in CR. Curves are for high resolution (HR) and standard (ST) plates. The MTF for a 400-speed film/screen receptor is also shown for comparative purposes.

The MTF performance of film/screen radiography is compared with Computed Radiography (CR) in Figure 6.11. It can be seen that the standard resolution CR system (ST) is about equivalent to a regular film/screen receptor across the frequency range. It can also be seen that the HR computed radiography system provides an improvement of ~20% at the intermediate spatial frequencies, while it approaches the performance of the other two receptors above 4 LP/mm.

 
Fig. 6.12: The MTFs of components of an X-ray image intensifier.

A major advantage of the MTF concept is that for an image receptor with a number of image transduction stages, the overall MTF can be obtained from the product of the individual component MTFs. This feature is demonstrated in Figure 6.12, which shows the component MTFs for an X-ray image intensifier. It can be seen that the contrast at high spatial frequencies is limited by the behaviour of the input phosphor and not the output phosphor in this hypothetical case. Thus, from a design point of view, a reduction in veiling glare in the input phosphor should improve the overall MTF of the XII. As an example of the multiplicative property of the MTFs, with reference to the figure, note that at a spatial frequency of 3 LP/mm the MTF of the image intensifier is:

Modulation = 0.78 x 0.55 x 0.48 ≈ 0.21.
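This multiplicative cascade is trivial to express in code; the component values below are the ones read off Figure 6.12 at 3 LP/mm:

```python
# MTF values of the XII transduction stages at 3 LP/mm (from Figure 6.12)
component_mtfs = [0.78, 0.55, 0.48]

overall_mtf = 1.0
for m in component_mtfs:
    overall_mtf *= m  # the overall MTF is the product of the component MTFs

print(round(overall_mtf, 2))  # 0.21, matching the text
```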
 
Fig. 6.13: Profile (in green) through a radiograph of a resolution test object imaged in scatter conditions. The no scatter case is shown in blue for comparison.

In the presence of scatter, we can expect the reduction in contrast to give rise to reduced modulation at all spatial frequencies and a reduced ability to discriminate fine detail. This is illustrated in Figure 6.13 where modulation reduction is quite evident.

 
Fig. 6.14: The absolute SWR in scatter and no scatter conditions is shown on the left, while a normalised plot is shown on the right.

The impact on the square wave response is shown in Figure 6.14. It can be seen that scatter reduces the amplitude of the SWR and eliminates the modulation at frequencies above 2 LP/mm, in this case, so that they can no longer be resolved.

Note that a large reduction in modulation at very low spatial frequencies can be inferred from the figure. This phenomenon is generally referred to as the Low Frequency Drop and could conceivably be used as an indicator of scatter (and veiling glare) levels.

Image Noise


Mottle can typically be seen in radiographic images as minute random fluctuations in the greyness of the anatomical details portrayed - see the images in Figure 6.4, for an example. When these fluctuations are large enough, they may obscure subtle changes in image contrast and render details invisible on the image. We will consider the major source of mottle below.

 
Fig. 6.15: Impact of noise on the ability to image objects of varying contrast. As the object contrast decreases, its discrimination becomes masked by the noise.

The imaging of three objects of the same size but differing contrasts is considered in Figure 6.15. Here, profiles through each object can be seen to decrease in amplitude until they are just above the background gray level. In the presence of fluctuations in signal amplitude (generally called Noise), the ability to discern the low contrast object is severely compromised. The noise is seen to add a randomness to both the object and its background.

 
Fig. 6.16: When an area of an image receptor is subdivided into small elements each element will absorb different numbers of X-ray photons because of statistical fluctuations in photon flux. In the left hand matrix the average photon flux is 100 per unit area with a standard deviation of 10 leading to quantum mottle of 10%. By increasing the photon flux by a factor of 100 the mottle can be decreased to 1% as shown in the right hand matrix.

Let’s continue by considering two equally-sized small areas of a hypothetical digital image containing nine pixels. And let’s consider that the number of X-ray photons collected by each pixel is given by the numbers in Figure 6.16. We can estimate that for the area on the left, the average number of photons detected is 100, while that on the right, possibly as a result of lower attenuation, collects an average of 10,000 photons. The variation in the individual pixels within each area can be estimated statistically from the standard deviation of the number of detected photons. The standard deviation can be calculated from the square root of the mean when the photon number is assumed to follow a Poisson distribution as is the situation in radiography. The variation arises because of the random nature of X-ray emission that occurs within the anode of the X-ray tube and gives rise to what is called Quantum Noise.

Quantum noise is generally expressed as plus or minus one standard deviation about the mean. The noise in our panel on the left is therefore ±10 photons, while that on the right is considerably higher at ±100 photons. It can therefore be inferred that the absolute noise increases with the number of X-rays detected, and hence with radiation exposure, i.e. mAs.

The mottle is given by the ratio of the noise to the mean number of photons and is therefore 10% for the left panel and just 1% for that on the right. The mottle as a result is more apparent in the left panel than in the right. In other words, mottle decreases with increasing radiation exposure, so that subtle contrasts become more conspicuous.

The Signal-to-Noise Ratio (SNR) is a more general concept applied in this form of image analysis and is given by the ratio of the mean signal to the noise. It is therefore 10:1 on the left, while it is considerably higher at 100:1 for the right hand panel. We can infer therefore that the SNR increases with increasing radiation exposure, implying improved image quality at higher mAs and for lower attenuating tissues.
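These relationships follow directly from Poisson statistics and can be checked with a short simulation of the two panels in Figure 6.16 (the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate the two 3x3 pixel areas of Figure 6.16: mean counts of 100 and 10,000
for mean in (100, 10_000):
    pixels = rng.poisson(mean, size=(3, 3))  # a patch of detected photon counts
    noise = mean ** 0.5                      # Poisson: standard deviation = sqrt(mean)
    mottle = 100 * noise / mean              # relative noise, %
    snr = mean / noise                       # signal-to-noise ratio
    print(f"mean={mean}: noise=±{noise:.0f}, mottle={mottle:.0f}%, SNR={snr:.0f}:1")
```

A hundred-fold increase in detected photons raises the absolute noise ten-fold but improves the SNR ten-fold, reproducing the 10% versus 1% mottle figures of the text.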

Note that an assumption underlying the above discussion is that random X-ray emission is the sole source of random variations in the grey level of images. In reality, noise also arises from electronic components within the digital image receptor. This electronic source is sometimes called System Noise and should generally be much less than the Quantum Noise. Note however that should the system noise increase above that due to quantum fluctuations, because of faulty electronics, for example, then increases in radiation exposure (i.e. mAs) to offset the appearance of mottle are unlikely to have any major effect on image quality.

A final point to note is that the dependence of image noise on spatial frequency can be analysed using Fourier methods giving rise to the so-called Wiener Spectrum of an image receptor.

Detective Quantum Efficiency (DQE)

 
Fig. 6.17: The image reception process and the origin of the input and output SNR ratios.

The DQE combines the effects on modulation, spatial frequency and noise of an image receptor and can be used to compare different receptors in a more general manner than the MTF alone - see Figure 6.17. The parameter relates the signal to noise ratio (SNR) of images displayed by the imaging system, SNRout, to the SNR of the incident X-ray intensity pattern, SNRin, i.e.

DQE(f) = (SNRout/SNRin)²
 
Fig. 6.18: Illustrative comparison of the DQE for four possible image receptors: direct digital radiography (a-Se), indirect digital radiography (CsI), computed radiography (CR) and the traditional 400-speed screen/film (400 S/F).

where the DQE is defined as a function of spatial frequency. The imaging performance of different image receptors is illustrated in Figure 6.18. Note that a perfect image receptor would have a DQE of 1.0 at all spatial frequencies. In the cases illustrated, note that the DQE performance of CR is similar to that of a traditional regular screen/film combination. Note also that the digital technologies illustrated show substantially superior DQE at all spatial frequencies, heralding potential dose reductions relative to those necessary with traditional and CR image receptors. In addition, the DQE of indirect image receptors is about 10-15% greater than that of XII-video technology.

DQE measurements are generally used to compare the physical image quality of different image receptor technologies. Measurement methods have therefore been developed which are specified in standards such as those of the International Electrotechnical Commission (IEC). This has led to the commercial development of devices which can conveniently perform the necessary measurements, e.g. the DQEpro.
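As a numerical sketch of the definition, with hypothetical SNR values (not taken from any particular receptor):

```python
snr_in = 100.0   # SNR of the incident X-ray intensity pattern (hypothetical)
snr_out = 70.0   # SNR of the displayed image at some spatial frequency (hypothetical)

dqe = (snr_out / snr_in) ** 2
print(round(dqe, 2))  # 0.49: the receptor behaves as if only 49% of the quanta were used
```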

Temporal Resolution


Temporal resolution expresses how quickly an image receptor can respond to a change in exposure and is mainly of relevance to fluoroscopic imaging. Lag, or image persistence (e.g. arising from incomplete readout of image signals), is an undesirable property of many photoconductive cameras. It can be seen as a blurring of the image whenever there is relative movement between the patient and the XII. It can also be seen as the finite time it takes for an image to build up following initiation of the X-ray exposure and to decay following its termination.

 
Fig. 6.19: Timing diagram for pulsed progressive readout of a video camera.

Lag does have the advantage of averaging statistical fluctuations, thereby minimising the effects of mottle that normally occur with low dose fluoroscopy. Internal body structures can move with a velocity of 10-30 mm/s in X-ray examinations of the gastrointestinal tract due to peristalsis. In cardiac angiography the mean velocity of a coronary vessel is typically about 50 mm/s, while the peak velocity may exceed 100 mm/s. A typical vidicon camera tube might have 20% of the image signal remaining after three video frames and 5% remaining after as many as ten frames - which would be totally inadequate in angiography applications, for instance. By contrast, the lower lag of Plumbicon cameras makes them much more suitable for such applications. The magnitude of the lag can also be quite extensive in DR, and approaches such as detector back-lighting can be applied to reduce its effects.
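A simple way to appreciate these figures is a single-exponential lag model, in which a fixed fraction of the signal is assumed to persist from one frame to the next. The retention fraction below is a hypothetical value chosen to reproduce the 20%-after-three-frames figure; it then predicts well under 5% at ten frames, a reminder that real camera lag is not a simple exponential:

```python
retained_per_frame = 0.585  # hypothetical per-frame retention: 0.585**3 ≈ 0.20

residual = 1.0
for frame in range(1, 11):
    residual *= retained_per_frame  # geometric decay frame by frame
    if frame in (3, 10):
        print(f"after frame {frame}: {100 * residual:.1f}% of the signal remains")
```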

Pulsed progressive readout (PPR) of the video camera target can be used to minimise persistence effects. Here the conventional interlaced scanning of the target is replaced by a progressive scan where each video line is read sequentially. The sequence of events, see Figure 6.19, is as follows:

  • First, the exposure pulse is produced and the image accumulates on the target of the video camera. The target is blanked during this period, i.e. it is not scanned by the electron beam.
  • Next, the target is scanned in a progressive fashion in a standard frame period and most of the image information is read off. This image information is fed to the digital image processor for subsequent processing.
  • Finally, the target is read again in order to discharge the target of any residual image signals (lag) and to prepare the target for the next image. This latter scan is commonly referred to as a Scrub Frame.

The PPR mode of operation is also used because it allows video scanning to occur with independence from the duration of the radiation exposure. Such independence allows the application of exposure pulses of variable length - which gives some flexibility in the design of X-ray generators.

Independence from image persistence effects allows the utilisation of the information from virtually all of the radiation exposure without any contributions from image build-up and decay effects. In addition, images of better spatial resolution have been obtained using the PPR mode. A disadvantage of the approach however is that, since two video frame periods are required for each image, the maximum frame rate for PPR exposures is about half that of continuous interlaced raster scanning.

Image Quality Assessment


Image quality is a capacious term which can mean different things to different people. For instance, a radiologist when viewing a radiograph may be interested primarily in the diagnostic value of an image, while a radiographer may focus on how well the image represents the anatomy and a physicist may be interested in the contrast, resolution and noise properties[2]. All have an interest nonetheless in keeping the patient's absorbed dose as low as reasonably achievable.

A physics perspective provides objective measures of image contrast, resolution and noise which can be used to compare the performance of different image receptors and different exposure techniques, for instance. However, it is apparent that such measurements on their own provide little information of direct clinical value. Furthermore, it is apparent that an absorbed dose measurement on its own provides little information regarding the value of an examination or the physical image quality of the radiographs.

Given that there is a stochastic health risk from all X-ray examinations, it is reasonable to conclude that image quality should be assessed in conjunction with absorbed dose measurements so that the likely benefit to the patient can be determined relative to the health risk from the radiation exposure.

Various approaches are used for the assessment of image quality which include:

  • Physical measures of image quality as described above. These represent objective measurements and by their nature do not include the observer’s performance in the processes of image perception and analysis. As such, they indicate what can be achieved in ideal situations without the influence of psychophysical factors of the observer and the detectability of lesions in complex projected anatomical backgrounds.
 
Fig. 6.20: An image of a contrast-detail test phantom acquired using a fluoroscopy system.
  • Contrast-detail evaluation combines physical indices of image quality with observer detection ability. Contrast-detail phantoms contain test objects of different sizes and subject contrast mounted on a plastic plate that is radiographed under specific exposure conditions - see Figure 6.20. Contrast-detail plots are derived on the basis of the borderline visibility of test objects in the image. A disadvantage of this approach however is the introduction of bias as a result of the observer's prior knowledge of the size, shape and location of the low-contrast objects. The link between this type of evaluation and clinical imaging performance is therefore difficult to establish.
  • Anthropomorphic phantoms consist of anatomical models manufactured from synthetic materials representing various parts of the human body. The synthetic materials have X-ray attenuation characteristics similar to that experienced clinically and subtle lesions can be simulated. Images provide the ability to compare the influence of exposure factors and the use of different image receptors. They have the advantage of being able to simulate clinical imaging conditions without irradiating patients. Phantoms can also be used for radiation dose measurements by placing small TLDs within their volume. Their disadvantage however is that they lack the variations in patient anatomy experienced in the clinical environment with respect to body composition and anatomical backgrounds.
  • Detailed evaluation of patient images offers the most realistic method of image quality assessment. Patient images have the advantage of representing real world conditions. However there are numerous ethical considerations with the use of patients and their images in medical experiments. Besides this, there is often no reference standard to compare patient images against and, in many cases, no validation mechanism for the actual presence or absence of a lesion. In addition, it is apparent that additional patient exposure is likely to be required in comparison studies. Furthermore, specific patient cohorts generally do not provide a broad variation in lesion conspicuity, nor in the demands that body habitus places on the imaging system. Large numbers of patients and images have therefore to be included in such studies for statistical reasons.
Image quality assessment through subjective scoring is widely used because of these constraints. In visual grading analysis, for instance, observers can subjectively grade the rendition of certain structures in images and score certain parameters on a decision scale with, for example, three or five levels. Predefined decision levels are used, such as:
  • the anatomical feature is detectable, but details are not fully reproduced,
  • the details of anatomical structures are visible but not necessarily clearly defined, and
  • the anatomical detail is clearly defined in all respects.
 
Fig. 6.21: An interpretation of a Receiver Operating Characteristic (ROC) curve as the degree of separation between two Gaussian distributions, where the perpendicular line marks the decision point. Positive diagnoses lie to the right of this line, while negative diagnoses lie to the left. The decision point defines a point on the ROC curve. See text for further details.
Image quality can also be assessed using descriptive statistics, e.g. accuracy, sensitivity and specificity, with large patient cohorts and validated diagnoses. Here, a truth table is constructed which details the number of true and false, positive and negative radiology interpretations - see Figure 6.21. Note that this perspective is similar to that adopted in the famous Known Knowns remark used in US politics!
Receiver Operating Characteristic (ROC) curves can be derived from such cohort data. This method is based on signal detection theory, where human observers score their level of confidence in their interpretations of the image data. ROC curves are obtained by plotting the probability of true positive results against that of false positives. A diagonal line is obtained in the case of random decisions and actual curves lie above this line. Decision levels towards the extreme top right of the curve are referred to as Over-Reading, while those close to the origin are referred to as Under-Reading. A major advantage of such data analysis is that it yields a single number - the area under the ROC curve - which describes the overall performance of a diagnostic system. It should be appreciated, however, that ROC analysis requires the availability of a superior standard of truth, e.g. biopsy results, an adequate number of diseased and control subjects, as well as a number of qualified observers.
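The Gaussian interpretation of Figure 6.21 can be sketched numerically. Assuming (hypothetically) unit-variance distributions whose means are separated by d′, sweeping the decision point traces out the ROC curve, and the area under it has a closed form for this model:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

d_prime = 1.5  # hypothetical separation between the two Gaussian distributions

# Sweep the decision point: each threshold yields one (FPF, TPF) point on the curve
thresholds = [t / 10 for t in range(-30, 41)]
roc = [(1 - norm_cdf(t), 1 - norm_cdf(t - d_prime)) for t in thresholds]

# For this two-Gaussian model the area under the ROC curve is Phi(d'/sqrt(2))
auc = norm_cdf(d_prime / math.sqrt(2))
print(round(auc, 3))  # about 0.86
```

With d′ = 0 the curve collapses onto the diagonal (random decisions); larger separations push it towards the top-left corner and an area of 1.0.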

Given this state of affairs, image quality studies that combine objective measurements with contrast-detail and/or visual grading analysis are used in many clinical evaluations. Note that these evaluations do not take the whole diagnostic process into account, and additional factors related to human perception, for instance, should also be considered[3]. For objective measurements, it is apparent that DQE and dose data provide information about the physical imaging performance of image receptors and that this is just one contributor to the complete expression of image quality. It is also apparent that visual-based image interpretation is subject to considerable sources of variability. These visually-derived indicators can be considered to result from at least two separate sources, one being the visual perception capability of the observer and the other being the clinical interpretation ability of that observer, in addition to the exact radiographic presentation of the patient's anatomy. On this basis, it is clear that physical indicators of image quality provide an ability to compare the performance of different image receptors and exposure techniques in the lab, and that the results of visual-based studies should be interpreted only as the performance of a particular diagnostic team in a specific clinical environment.

References

  1. Samei E, Ranger NT & Delong DM, 2008. A comparative contrast-detail study of five medical displays. Med Phys, 35:1358-64.
  2. ICRU, 1996. Medical Imaging - The Assessment of Image Quality. Report No. 54.
  3. Krupinski EA, 2010. Current perspectives in medical image perception. Attention, Perception, & Psychophysics, 72:1205-1217.