23 Quantitative data from microscopic specimens




Introduction


The practice of histology and histopathology has traditionally relied upon the subjective interpretation of microscopic preparations by a highly trained individual. The accuracy of such interpretations is the foundation of histology and histopathology. It is important to note, however, that these interpretations are based on pattern recognition: that is, on the overall arrangement of elements within the specimen, a task for which the human visual system is well suited. The human visual system is not well suited to quantitative tasks, such as the assessment of linear measurements, areas, or density of stain. The human eye is a remarkable sensor, but a highly adaptable one: it alters its sensitivity depending on the brightness of the object being viewed. The eye is also a non-linear sensor, with a response to brightness that is closer to logarithmic than linear. Together, these two characteristics preclude accurate assessment of the density of specimens viewed through a microscope.


Human observers do not accurately estimate physical distances and areas of specimens. The eye is reasonably good at comparisons, and most microscopists will ‘estimate’ sizes based on some internal specimen object, such as the diameter of red blood cells. Even with such comparisons, length and size estimates made by microscopists are neither accurate nor highly repeatable. It is the purpose of image quantitation to eliminate observer-to-observer variation, and produce evaluations that are accurate and repeatable.


In recognition of the problem of accurately describing physical measurements in microscopic specimens, manufacturers of microscopes have included various calibration devices. For spatial measurements within the object, an eyepiece (ocular) reticule is used. These reticules typically consist of either a single line, or crossed lines (like the ‘+’ symbol) that are marked off in even increments. Reticules are also available in the form of a grid. For a reticule to be useful for measurement, it must be calibrated for each magnification at which it is used. This is done by use of a stage micrometer, which is a microscope slide with an accurate scale etched or photographically applied to the slide. Typically, these stage micrometers have divisions of 0.1 and 0.01 millimeters. After calibrating the reticule with a stage micrometer, the reticule can be used directly to measure linear dimensions of microscopic objects.
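The arithmetic of this calibration is simple enough to sketch. The following minimal Python example (the readings are hypothetical, chosen only for illustration) derives a calibration factor from a stage-micrometer reading and applies it to a specimen measurement:

```python
# Calibrating an eyepiece reticule against a stage micrometer.
# All numeric readings below are hypothetical examples.

# At a given objective, 50 reticule divisions are observed to span
# 0.20 mm (200 um) on the stage micrometer scale.
micrometer_span_um = 200.0
reticule_divisions_spanned = 50

um_per_division = micrometer_span_um / reticule_divisions_spanned  # 4.0

# A cell measured with the same objective spans 8.5 reticule divisions.
cell_divisions = 8.5
cell_length_um = cell_divisions * um_per_division

print(f"Calibration: {um_per_division:.2f} um/division")
print(f"Cell length: {cell_length_um:.1f} um")  # 34.0 um
```

As the text notes, the calibration factor is valid only for the magnification at which it was determined, and must be re-derived for each objective.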


Microscopes also are calibrated in the ‘z’ axis, which is the axis that controls stage (or nosepiece) movement. This calibration is found around the focus control knob, and is generally calibrated in microns. The z-axis calibration can be used to estimate the thickness of a microscopic specimen, assuming that one can accurately determine the ‘top’ and ‘bottom’ of the focal plane through the object of interest. Accuracy can be improved by using a high numerical aperture, shallow depth of field objective, since this assists in finding the top and bottom focal plane of the object. Modern usage of z-axis movement is more commonly associated with collection of a series of images at various focal planes (an image stack) that can be used subsequently to construct three-dimensional representations of the specimen.


Morphometry is the general term used to describe the measurement of size parameters of a specimen. Size is here defined as length, height, and area of an object of interest. These basic measurements can be combined to provide additional measurements, such as perimeter, smoothness, centers, etc. For some of these additional measurement parameters, it is important to understand the specific mathematical formula (algorithm) used, as there may be more than one definition of a particular parameter, and two different implementations of what appears (by algorithm name) to be an identical measurement may not be the same.
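The caution about algorithm definitions can be made concrete. The sketch below (a minimal numpy example, not drawn from any particular package) measures the ‘perimeter’ of the same binary object under two plausible definitions, counting exposed pixel edges versus counting boundary pixels, and the two results disagree:

```python
import numpy as np

# A small binary image containing a filled disc (1 = object).
yy, xx = np.mgrid[0:21, 0:21]
img = ((yy - 10) ** 2 + (xx - 10) ** 2 <= 36).astype(int)  # radius ~6 px

padded = np.pad(img, 1)

# Definition 1: perimeter = number of exposed pixel edges
# (4-neighbourhood object/background transitions).
perim_edges = 0
for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    shifted = np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
    perim_edges += np.sum((padded == 1) & (shifted == 0))

# Definition 2: perimeter = number of boundary pixels (object pixels
# with at least one background neighbour in the 8-neighbourhood).
core = padded == 1
bg_neighbour = np.zeros_like(core)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        if dy or dx:
            bg_neighbour |= np.roll(np.roll(padded, dy, 0), dx, 1) == 0
boundary_pixels = np.sum(core & bg_neighbour)

# The two 'perimeters' disagree with each other, and neither equals
# the true circumference of a radius-6 circle (~37.7).
print(perim_edges, boundary_pixels)
```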



Traditional approaches


The history of development of the microscope is filled with clever devices designed to assist in performing morphometry of specimens. One such device is the camera lucida, which is an optical system that projects an image of the specimen onto a surface adjacent to the microscope. This projected image can be used to draw the specimen, or to measure portions of the image. Accurate measurements within these projected images require calibration of the projection, in a manner identical to that used to calibrate reticules.


Photographic and computer-aided imaging approaches have eliminated the use of the camera lucida in many laboratories, as convenient cameras have become universally available for microscopes. As with projections, a photographic system must be calibrated, using a stage micrometer. In addition to the calibration of the photographic negative, the enlarging process must also be calibrated for accurate measurement. For both camera lucida drawings and photographs, areas are generally determined using a device called a planimeter. This is a mechanical device that is used manually to trace the outline of objects of interest. Using a set of ‘x’ and ‘y’ calibrated wheels, the total area of the object is determined. For the planimeter data to be accurate, a standard area at the magnification of the specimen must be determined, and this then becomes a calibration factor used to interpret the planimeter data.


Stereology is a technique developed for the analysis of metals and minerals, where the properties being measured generally relate to the number, size, and distribution of some particle in the sample. It is based on geometry and probability theory, and its statistical mathematics makes specific assumptions about the object being analyzed. A general discussion of the theoretical basis of stereology can be found in DeHoff and Rhines (1968) and in Underwood (1970). Stereological techniques have been applied to many biological images, both light and electron microscopical; general principles and applications can be found in Weibel (1979, 1980, 1990), in Elias and Hyde (1983), and in Elias et al. (1978).

Although there is a long history of use of stereology in histology and histopathology, the technique makes assumptions about the specimen that may not be applicable. Since the foundation of stereology is statistical, the distribution of whatever is being measured must be describable by some statistic. This condition may be met under specific circumstances, such as examining the distribution of chromatin ‘clumps’ within a cell nucleus, where the only object being examined is a single nucleus. For highly ordered structures, such as gland elements within an organ, the organization of the structure implies that there is no statistical distribution. Stereology can estimate some parameters of specimens, such as the fraction of a total image occupied by a particular component; note that this is an estimate. The use of stereology to derive measures of the three-dimensional structure of cell and tissue specimens may provide misleading information, since the probabilities used in the mathematics assume that the entire volume of the specimen is accurately reflected in the portion measured. Due to the polarization of cell organelles, and the arrangement of tissues and organs, this is generally not the case.
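For readers unfamiliar with the statistical character of such estimates, a minimal sketch of classical point counting (on a hypothetical synthetic image, with an arbitrarily chosen grid spacing) shows a stereological area-fraction estimate alongside the directly measured value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary image: 1 marks a hypothetical tissue component.
yy, xx = np.mgrid[0:512, 0:512]
img = np.zeros((512, 512), dtype=int)
for cy, cx in rng.integers(30, 482, size=(40, 2)):
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 15 ** 2] = 1

true_fraction = img.mean()  # direct measurement: actual area fraction

# Stereological point count: a regular grid with a random offset;
# the fraction of grid points hitting the component estimates its
# area fraction, and fluctuates with the grid placement.
oy, ox = rng.integers(0, 16, size=2)
estimated_fraction = img[oy::16, ox::16].mean()

print(f"direct measurement  : {true_fraction:.4f}")
print(f"point-count estimate: {estimated_fraction:.4f}")
```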


While one cannot disagree that stereology has provided many useful insights into microscopic specimens, modern techniques of measurement can provide real measures of the specimen without any assumptions of the distribution pattern. The development of newer forms of microscopy (confocal) has extended this direct measurement capability to the third dimension. With the speed of modern image analysis systems, there is little justification for performing an estimate of a cell or tissue parameter when the actual parameter can be accurately measured, often in less time than is required for the stereological approach. An in-depth review of stereology can be found in the fourth edition of this book.



Electronic light microscopy


Electronic measurement of light transmitted through microscopic specimens has a long history, roughly paralleling the development of photometers, spectrophotometers, and light-detecting devices. Until the 1980s, these devices simply detected light, and did not produce images. To produce images with these early devices, the portion of the specimen visible to the light detector had to be restricted, and the specimen or image moved across this restricted area to build up an actual image. Many mechanisms were developed to acquire images using such techniques (Wied 1966; Wied & Bahr 1970). These mechanisms tended to be expensive, since they required high precision, and were also slow, as the image had to be acquired a small area at a time and then ‘put together’ or reconstructed into a recognizable image. Most of the literature on quantitative light microscopy using electronic light measurement therefore relates to photometric and spectrophotometric studies, rather than to analysis of images as currently defined.



Microscope photometry


A detailed theoretical account of microscope photometry can be found in Piller (1977). The hardware systems for obtaining data described in Piller have been superseded by modern devices, but the theoretical foundations are sound. The fundamental requirement for photometry is described by the Beer-Lambert law:



A = εct



where A is the absorbance, t is the path length of the absorbing material, c is the concentration of the absorbing material, and ε is the absorptivity.


In practical terms, for microscopic images, the path length (t) is approximately a constant, and for a given material being measured (usually a dye bound to the specimen) the absorptivity (ε) is also a constant. Therefore, the absorbance of the specimen is directly proportional to the concentration of the absorbing material.


Absorbance is defined as A = log(1/T), where T is the transmittance. Transmittance is defined as the fraction of light that is transmitted through the specimen (light through specimen / light through blank).
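Applied per pixel to a digital image, these definitions give an absorbance (optical density) image directly. A minimal numpy sketch, with hypothetical intensity values:

```python
import numpy as np

# Hypothetical 8-bit intensities: light through a blank field, and
# through the same field with a stained specimen in place.
blank = np.full((3, 3), 240.0)
specimen = np.array([[216.0, 120.0, 216.0],
                     [120.0,  24.0, 120.0],
                     [216.0, 120.0, 216.0]])

transmittance = specimen / blank            # T, per pixel
absorbance = np.log10(1.0 / transmittance)  # A = log(1/T)

# Under Beer-Lambert conditions A is proportional to the local dye
# concentration; summing A over an object gives its integrated
# optical density (IOD).
print(absorbance.round(2))
print(f"IOD = {absorbance.sum():.2f}")
```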


A requirement of the Beer-Lambert law is that the material being measured is homogeneous (there are a few other restrictions, concerning maximum absorptivities, but those limits are not ordinarily reached in transmitted light microscopy). The homogeneity requirement is a significant one: we routinely examine specimens with a microscope precisely to observe structure, and structure is by definition non-homogeneous.


The measurement of non-homogeneous materials using photometry gives rise to distributional error. While these errors can be illustrated precisely with mathematical equations, microscopists can grasp the concept intuitively from a simple, classical example. Suppose you have a chamber filled with water, such as an aquarium, and you shine light through the aquarium and measure the light traversing the chamber. Now place a capped bottle of ink in the water of the aquarium. Depending on the size of the bottle, you may note a small effect on the amount of light that passes through the chamber. If you now uncap the ink bottle and let the ink diffuse through the aquarium contents, the amount of light that passes through the chamber will be significantly reduced. Note that the amount of ink in the aquarium is identical in the two cases, capped and uncapped; the difference is the distribution of the ink. For this reason, the error that results from photometry of non-homogeneous materials is referred to as distributional error, and with certain distributions it can be as large as 50%.
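The aquarium example translates directly into numbers. In the following sketch (arbitrary values, chosen only for illustration), the same total quantity of absorber is ‘measured’ twice with a whole-field photometer, once concentrated in 10% of the field and once spread uniformly; the apparent absorbances differ severalfold:

```python
import numpy as np

n_pixels = 10_000
total_absorbance = 2000.0  # same total amount of 'ink' in both cases

# Case 1: 'capped bottle' - all dye concentrated in 10% of the field.
conc = np.zeros(n_pixels)
conc[: n_pixels // 10] = total_absorbance / (n_pixels // 10)

# Case 2: 'uncapped bottle' - the same dye spread evenly.
uniform = np.full(n_pixels, total_absorbance / n_pixels)

def apparent_absorbance(local_A):
    """Whole-field photometer reading: average the transmitted
    light over the entire field, then take the log."""
    T = 10.0 ** (-local_A)       # per-pixel transmittance
    return -np.log10(T.mean())   # one reading for the whole field

print(f"capped  : {apparent_absorbance(conc):.3f}")     # ~0.045
print(f"uncapped: {apparent_absorbance(uniform):.3f}")  # 0.200
# Identical dye content, very different readings: distributional error.
```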


Recognition of distributional error was the reason that quantitative microscopes employed various devices to restrict the area of the specimen seen by the detector to a small spot. This strategy was successful because of the optics of the microscope itself. The resolution of a lens is defined as its ability to separate two points (point resolution); if a lens cannot separate two points, they appear as a single object. If a light detector sees an area of the specimen that is smaller than the point resolution of the lens combination in use, then by definition that area is homogeneous, since the lens cannot resolve any structure within it. For microscope lenses of 40× magnification and above, typical ‘area examined’ sizes for photometry range from 0.25 to 0.5 µm diameter spots. Such spot sizes generally avoid significant distributional error. When such spots are generated by mechanical ‘stops’ or pinholes in the optical path of the microscope, they may introduce other significant sources of error, such as edge diffraction. These sources of error, as well as many others, such as glare within the optical system, are discussed in detail in Piller (1977). The above discussion applies to absorbing images only: that is, microscope images obtained with transmitted light microscopy. Fluorescence emission microscopy and particle reflectance (autoradiography) are based on different physical principles, and require different optics and sensor configurations.
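The quoted spot sizes follow from standard diffraction theory. As a sketch, using the Rayleigh criterion (the usual formula for point resolution, which the text does not spell out; the wavelength and numerical apertures below are typical assumed values):

```python
# Rayleigh criterion for the point resolution of an objective:
#   d = 0.61 * wavelength / NA
# (standard formula; the values below are typical, not from the text).

wavelength_um = 0.55  # green light, ~550 nm

for name, na in (("40x / NA 0.75", 0.75), ("100x / NA 1.30", 1.30)):
    d = 0.61 * wavelength_um / na
    print(f"{name}: point resolution ~ {d:.2f} um")

# ~0.45 um and ~0.26 um: an 'area examined' spot of 0.25-0.5 um is
# therefore at or below the resolution limit, i.e. homogeneous as
# far as the optics are concerned.
```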



Image acquisition


As mentioned earlier, images can be acquired with systems that examine specimens a small spot at a time. Using mechanical systems such as scanning stages or Nipkow discs, an image similar to that seen through the microscope can be acquired. With a scanning stage device, the time required to acquire such an image may range into hours. In all mechanical scanning systems, the precision and repeatability required for high-quality images translate to slow, expensive, and difficult-to-use systems.


The development of television cameras provided hope for acquiring images through microscopes. Early television cameras were vacuum tube devices, and were not suitable for quantitative microscope image use. Images collected with these devices were of low resolution, and suffered from a variety of geometric and photometric distortions related to the electronics used to operate the scanning circuits of the tube, and to read out the resulting image signals.


During the 1980s, a variety of solid-state sensors were developed for television (video) purposes. One technology in particular, the CCD (charge-coupled device) camera, matured into a significantly useful device for microscopic imaging work. CCD cameras continue to evolve, and are the technology of choice for most photometric and imaging microscopic studies. More recently, a second solid-state technology has emerged: the CMOS (complementary metal oxide semiconductor) camera. These devices promise rapid image acquisition, low cost, and the potential for some image manipulation within the camera detector itself. At this time, CMOS cameras still lag behind CCD cameras in suitability for microscope photometry, but this may change in the near future.

Solid-state cameras have a number of characteristics that make them ideal for microscope imaging. The detecting element consists of individual sensors (pixels) arranged in a square or rectangular array. The physical size of the individual pixels is on the order of a few microns, with 6 to 10 µm square pixels being common (solid-state cameras with rectangular pixels are available, but these are not suitable for quantitative image use). The technique used to manufacture solid-state detectors is similar to that used to produce electronic chips such as those used in microcomputers. Sensor chips can therefore be manufactured with one million or more individual pixels, each with similar characteristics, such as response to light (gain and linearity). Most solid-state detectors have a reasonably linear response over a wide range of light intensities, which implies that each pixel within the sensor is also linear. Within limits, each pixel in the array also generates a similar signal for a given light input (identical gain per pixel).


The camera detector consists of many individual but essentially identical detectors (the pixels): hence, each individual pixel can be considered to be a photometric detector. To use a solid-state camera detector for photometry, the image that the individual detectors (pixels) see must meet the requirements of the Beer-Lambert law; i.e., they must see a homogeneous portion of the specimen. By calibrating the microscope lens system used, and deriving the area of the specimen in square microns seen by each pixel of the solid-state camera chip, an appropriate magnification can be selected which permits accurate photometry. For modern cameras with an array size of 1024 × 1024 pixels, a microscope objective lens of 20× magnification will achieve approximately 0.50 µm per pixel, and a 40× objective will be in the range of 0.25 µm per pixel. Both of these values are close to the resolving power of the respective lenses, and therefore meet the homogeneity requirement of the Beer-Lambert law. As yet, cameras with high enough pixel density to perform photometry at lower magnifications are not commonly available, and those that do exist are both expensive and slow. With the rapid advancements in camera technology, this is expected to change in the near future.
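The per-pixel figures quoted above follow from the sensor geometry, although in practice the system should be verified against a stage micrometer. A minimal sketch, assuming a hypothetical 10 µm sensor pixel pitch and a 1× camera coupler:

```python
# Specimen distance seen by one camera pixel (geometric estimate;
# in practice this should be verified with a stage micrometer).

sensor_pixel_um = 10.0  # physical pixel pitch on the chip (assumed)
coupler_mag = 1.0       # camera adapter magnification (assumed)

for objective_mag in (20, 40):
    total_mag = objective_mag * coupler_mag
    specimen_um_per_pixel = sensor_pixel_um / total_mag
    print(f"{objective_mag}x objective: "
          f"{specimen_um_per_pixel:.2f} um/pixel")

# 20x -> 0.50 um/pixel, 40x -> 0.25 um/pixel, matching the values
# quoted in the text for a 1024 x 1024 sensor.
```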


Solid-state cameras, whether CCD or CMOS, are available in either monochrome or color versions. Color cameras may use two different techniques to generate a color image. In one approach, there are three separate detector arrays, each with a color filter in front of the array. A prism or mirror system is used to split the image coming from the microscope into three separate but identical images, so each detector sees the same image. The color filters are red, green, and blue, since a red image, a green image, and a blue image can be combined to create a full color image. This type of camera is called a three-chip color camera. The second approach to color cameras uses a single detector chip, and places a pattern of color dots over the individual pixels. Again, these color dots are red, blue, and green. The most common pattern for these dots is the Bayer pattern (Fig. 23.1). Note that in the Bayer pattern there are actually four dots per ‘repeat’, since for each red and each blue dot there are two green dots. This type of camera is called a single-chip color camera.



The three-chip camera has three individual detectors, and a beam-splitting system to divide the image; these cameras are therefore more expensive than a single-chip camera. Essentially, a three-chip camera is three separate cameras in one. The advantage of the three-chip camera is that every pixel is ‘real’: that is, it is generating a true signal. The disadvantage is that there may be differences in sensitivity of a ‘red’ pixel and a ‘green’ pixel that are seeing the exact same spot of an image. Use of a three-chip camera for photometry where various colors are examined requires careful calibration and correction of any variation in output between the separate detector chips.


The single-chip camera can produce excellent color images, but must be used carefully for quantitative work, and is unsuitable for photometry. This is because, for red and blue, only one pixel in four (two in four for green) actually samples the specimen in that color. The values for the remaining pixels in the Bayer pattern are approximated by interpolation, assigning them the value of the one real ‘red’ (or ‘blue’) pixel in each repeat of the pattern. In addition to this approximation of the true signal for a given color, the Bayer pattern causes a real loss of resolution at the sensor level. Since only one of every four pixels actually sees a red or blue portion of the specimen, the true color resolution of the single-chip camera is one-fourth the total number of pixels in the array. In practical terms this means that if a single pixel sees an area of the specimen 0.5 µm square, the true color resolution of the Bayer-pattern single-chip camera is 2.0 µm per pixel. The camera does retain its full pixel count as spatial resolution, but for colored specimens the effective resolution may be reduced by the distribution of color within the specimen.
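The sampling argument can be seen directly from the mosaic itself. The sketch below builds a small Bayer mask (the common RGGB phase is assumed; individual cameras differ) and counts the fraction of true samples per color:

```python
import numpy as np

# Bayer mosaic for an 8 x 8 sensor patch, RGGB arrangement
# (one common phase; individual cameras differ).
rows, cols = 8, 8
mask = np.empty((rows, cols), dtype="<U1")
mask[0::2, 0::2] = "R"
mask[0::2, 1::2] = "G"
mask[1::2, 0::2] = "G"
mask[1::2, 1::2] = "B"
print(mask)

for colour in "RGB":
    frac = np.mean(mask == colour)
    print(f"{colour}: {frac:.0%} of pixels")

# R: 25%, G: 50%, B: 25% - only one pixel in four carries a true
# red (or blue) sample; the rest are interpolated by the camera.
```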


The three-chip color camera is essentially three monochrome cameras, each with a different colored filter in front of its detector; software then combines the three separate images into a full-color image. This suggests that a monochrome camera can also be used to capture full-color images, and a number of cameras provide a mechanism for doing so. Within the camera itself there is either an electronic filter or a filter wheel carrying glass filters; three sequential images are taken, each through a different colored filter, and these are combined to produce the full-color image. The same thing can be done with a simple monochrome camera: place a red filter in the light path of the microscope and capture a ‘red’ image, then repeat for ‘green’ and for ‘blue’. The result is three separate images of the same specimen, in different colors, and when these three color planes are combined using software, the result is a full-color image.
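Combining the three sequentially captured planes is a simple array operation. A minimal sketch with synthetic stand-ins for the three filtered captures (in practice the specimen and microscope must not move between exposures, so that the planes stay in register):

```python
import numpy as np

# Three sequential monochrome captures of the same field, taken
# through red, green, and blue filters (synthetic data here).
h, w = 480, 640
red_plane = np.random.default_rng(1).integers(0, 256, (h, w), np.uint8)
green_plane = np.random.default_rng(2).integers(0, 256, (h, w), np.uint8)
blue_plane = np.random.default_rng(3).integers(0, 256, (h, w), np.uint8)

# Stack the three planes into one H x W x 3 full-colour image.
rgb = np.dstack([red_plane, green_plane, blue_plane])
print(rgb.shape, rgb.dtype)  # (480, 640, 3) uint8
```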


Cameras used for imaging are also described in terms of signal resolution per individual pixel. This signal resolution is commonly described as bit depth or gray levels (for monochrome cameras). The signal resolution is a specification that describes the number of divisions of the signal between zero (no signal) and maximum signal. A common value is 256 levels, and these divisions are often also described as gray levels. They are based on the digital progression by powers of two, and therefore a 256-level signal corresponds to 8 bits of resolution (2 to the eighth power). Many modern cameras provide 10- or 12-bit signal resolutions. With a 12-bit camera, 4096 gray levels can be obtained. As the signal resolution increases, the susceptibility of the signal to perturbation increases. In particular, sources of electronic noise, such as internally generated heat within the detector itself, may become a problem. With high bit-depth cameras, it is common to find cooling systems which lower the detector temperature, and thereby reduce electronic noise. Such cooling systems also translate to higher prices for cameras so equipped and, if the cooling system contains a fan, may introduce vibration to the microscope.


It is important to note the differences between cameras used to capture images through the microscope and the same image viewed with the human eye. The human eye is a remarkable detector of light and of color; however, it is a non-linear, highly adaptive sensor. In addition, the resolution of the eye's detector (the retina) varies across its surface, being highest in the fovea. Under ideal conditions, most individuals with excellent eyesight can distinguish between 30 and 35 brightness levels (gray levels). This is a far cry from the 256 or more levels resolved by a digital camera. A solid-state camera can therefore detect intensity variations that are invisible to the human observer, which translates to the ability to detect finer detail within an image than can be resolved by a human observer.


The human eye adapts to light intensity, so the 30 or 35 gray levels that are detected vary, depending on the intensity of the light, and the immediately preceding light exposure of the eye. This is one of the reasons why individuals must ‘dark adapt’ prior to doing fluorescence microscopy. The same phenomenon occurs in bright-field microscopy, but is seldom recognized. If an individual is asked to assess the density or ‘darkness’ of a stain, the assessment will vary depending on whether the individual has been in a dim environment or a bright environment just prior to performing the assessment.


Color capture is another area in which a camera differs from the human eye. While there is much that is still unknown as to the way in which the eye-brain combination processes color, the camera provides a fixed model. The construction of the camera itself is based on the RGB (red, green, blue) model of color. There are many other models of color, and those that incorporate intensity and saturation information appear more intuitive to human users of image systems. One common model that employs such a system is the HSI (hue, saturation, intensity) model. Software programs are available that permit images to be converted from one type of color space model to another, and often such conversions are useful when one works with full-color images.
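As an illustration of such a conversion, here is a minimal sketch of one common RGB-to-HSI formulation (several variants of HSI exist; this geometric definition is one standard choice, and is not taken from any particular package):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """One common RGB -> HSI conversion (inputs in 0..1).
    Returns hue in degrees, saturation and intensity in 0..1."""
    i = (r + g + b) / 3.0                       # intensity
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.degrees(np.arccos(num / den)) if den > 0 else 0.0
    h = theta if b <= g else 360.0 - theta      # hue
    return h, s, i

# A desaturated magenta-ish pixel:
print(rgb_to_hsi(0.8, 0.2, 0.6))  # hue ~319 deg, s ~0.63, i ~0.53
```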


Photographic color film is balanced for the type of light used to illuminate the scene. The type of light is specified by a ‘color temperature’ number. Film sold for routine color photography is generally balanced for correct color in ‘sunlight’, which is specified as a color temperature of 5000 K. Specialty films intended for photomicrography may be balanced for ‘tungsten’ illumination, with a color temperature of 3200 K. Because of the narrow limits of intensity for which the film is balanced, photomicrography requires the microscope illuminator to be set to a specified level (generally bright) prior to taking a photograph. The ‘color temperature’ of a light source is actually a measure of the intensity of the various components of the light source, in the red, green, and blue regions of the spectrum. Photographic film records all of these components simultaneously, and there is little opportunity to ‘correct’ values, other than limited adjustment during processing.

With a solid-state camera and capture software, the situation is different. Each of the image components (red, green, and blue) is available as an individual image, and these are combined to produce the final image. Since the individual components (color planes) are available, it is possible to ‘color correct’ the image. This is generally done in the capture software, or in the camera itself. The result is that ‘white’ is a true white (defined as a particular level of R, G, and B), regardless of the ‘color temperature’ of the microscope illuminator. This eliminates the requirement for presetting the brightness of the microscope prior to taking a picture with a solid-state camera. However, it does mean that each time the microscope is adjusted, in either magnification or illumination intensity, the user may have to recalibrate the ‘white balance’ of the system. Note that these same software techniques can be used to correct or modify any color image that can be converted into electronic form.
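In software terms, white balancing reduces to applying a per-channel gain chosen so that a captured blank field comes out neutral. A minimal sketch, with hypothetical channel means for a ‘warm’ illuminator:

```python
import numpy as np

# Mean R, G, B of a blank (white) field captured under the current
# illuminator setting -- hypothetical values for a 'warm' lamp.
blank_means = np.array([230.0, 200.0, 150.0])  # reddish cast

target_white = 200.0                 # desired neutral level
gains = target_white / blank_means   # per-channel correction factors

def white_balance(rgb_image):
    """Apply per-channel gains; clip back to the 8-bit range."""
    return np.clip(rgb_image * gains, 0, 255).astype(np.uint8)

# The blank field itself now reads as neutral:
print(white_balance(blank_means.reshape(1, 1, 3)))  # [[[200 200 200]]]
```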


Solid-state cameras are generally coupled to the microscope in a manner that optimizes the portion of the visible field (as seen by a user looking through the eyepieces) that is captured. As with photographic cameras, the solid-state camera captures a rectangular (or, in a few cases, square) portion of the circular image displayed in the microscope. The camera sensor pixels are, as previously described, quite uniform in their response to light intensity. Microscope lens systems, however, even when carefully aligned, deliver a higher intensity along the optical axis (the center of the image) than in the periphery of the field. To an adaptive sensor such as the eye, with its limited number of intensity step discriminations, the field of view of a carefully aligned microscope appears quite uniform. That this is not the case is amply illustrated in many lectures that display photomicrographs, where a common flaw is dark corners. In the case of solid-state cameras, their greater intensity sensitivity compared with the eye accentuates this problem. Since all solid-state cameras use some type of software, either within the capture system or on the capturing computer, this software frequently contains some type of ‘field flattening’ or ‘background subtraction’ mechanism to correct this variation in intensity from the center to the edge of the image. Additional details of the requirements for acquiring images through microscopes using electronic cameras can be found in Shotton (1993).
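A common implementation of such a correction divides each captured image by a blank-field image recorded under identical conditions (a dark frame is often subtracted from both first; that step is omitted here). A minimal sketch with synthetic shading:

```python
import numpy as np

# Synthetic shading: intensity falls off away from the optical axis.
yy, xx = np.mgrid[0:256, 0:256]
shading = 1.0 - 0.3 * np.hypot(yy - 128, xx - 128) / 181.0

blank = 240.0 * shading           # blank field: shading only
specimen = 240.0 * shading * 0.6  # specimen image, uniform 60% T

# Flat-field correction: divide out the blank, rescale to its mean.
corrected = specimen / blank * blank.mean()

print(corrected.std())  # ~0: the center-to-edge shading is gone
```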
