Month: January 2010

  • Diagnostic Procedures that Complement and Supplement Laboratory Tests

    The clinical pathologist frequently encounters situations in which laboratory tests alone are not sufficient to provide a diagnosis. If this happens, certain diagnostic procedures may be suggested to provide additional information. These procedures are noted together with the laboratory tests that they complement or supplement. Nevertheless, it seems useful to summarize some basic information about these techniques and some data that, for various reasons, are not included elsewhere.

    Diagnostic ultrasound

    Ultrasound is based on the familiar principle of radar, differing primarily in the frequency of the sound waves. Very high-frequency (1-10 MHz) sound emissions are directed toward an object, are reflected (echo production) by the target, and return to the detector, with a time delay proportional to the distance traveled. Differences in tissue or substance density result in a series of echoes produced by the surfaces of the various tissues or substances that lie in the path of the sound beam.
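
    The arithmetic behind this echo-ranging principle can be illustrated with a short sketch. It assumes a nominal average speed of sound in soft tissue of about 1,540 m/second (a figure not given in the text); the factor of 2 accounts for the round trip of the pulse to the reflecting interface and back.

        # Illustrative sketch: convert an ultrasound echo delay into reflector depth.
        # Assumes a nominal speed of sound in soft tissue (~1540 m/s), which is not
        # stated in the text; actual scanners calibrate for this internally.

        SPEED_OF_SOUND_TISSUE_M_PER_S = 1540.0  # assumed average for soft tissue

        def echo_depth_cm(delay_microseconds: float) -> float:
            """Depth (cm) of the reflecting interface for a given echo delay.
            The pulse travels to the interface and back, so the one-way distance
            is half of (speed x total travel time)."""
            delay_s = delay_microseconds * 1e-6
            one_way_distance_m = SPEED_OF_SOUND_TISSUE_M_PER_S * delay_s / 2.0
            return one_way_distance_m * 100.0

        # Example: an echo returning 130 microseconds after the pulse corresponds
        # to an interface roughly 10 cm deep.
        print(round(echo_depth_cm(130), 1))  # ~10.0 cm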

    In A-mode (amplitude) readout, the echo signals are seen as spikes (similar to an electrocardiogram [ECG] tracing format) with the height of the spike corresponding to the intensity of the echo and the distance between spikes depending on the distance between the various interfaces (boundaries) of substances in the path of the sound beam. The A-mode technique is infrequently used today, but early in the development of ultrasound it was often used to examine the brain, since the skull prevented adequate B-mode (brightness-modulated, bistable) visualization.

    In B-mode readout, the sonic generator (transducer) is moved in a line across an area while the echoes are depicted as tiny dots corresponding to the location (origin) of the echo. This produces a pattern of dots, which gives a visual image of the shape and degree of homogeneity of material in the path of the sound beam (the visual result is a tomographic slice or thin cross-section of the target, with a dot pattern form somewhat analogous to that of a nuclear medicine scan).

    Gray-scale mode is a refinement of B-mode scan readout in which changes in amplitude (intensity) of the sonic beam produced by differential absorption through different substances in the path of the beam are converted to shades of gray in the dot pattern. This helps the observer recognize smaller changes in tissue density (somewhat analogous to an x-ray).

    B-mode ultrasound (including gray-scale) is now the basic technique for most routine work. Limitations include problems with very dense materials that act as a barrier both to signal and to echo (e.g., bone or x-ray barium), and air, which is a poor transmitter of high-frequency sound (lungs, air in distended bowel or stomach, etc.).

    In M-mode (motion) readout, the sonic generator and detector remain temporarily in one location, and each echo is depicted as a small dot relative to original echo location; this is similar to A mode, but it uses a single dot instead of a spike. However, a moving recorder shows changes in the echo pattern that occur if any structures in the sonic beam path move; changes in location of the echo dot are seen in the areas that move but not in the areas that are stationary. The result is a series of parallel lines, each line corresponding to the continuous record of one echo dot; stationary dots produce straight lines and moving dots become a wavy or ECG-like line. In fact, the technique and readout are somewhat analogous to those of the ECG, if each area of the heart were to produce its own ECG tracing and all were displayed together as a series of parallel tracings. The M-mode technique is used primarily in studies of the heart (echocardiography), particularly aortic and mitral valve function.

    Real-time ultrasound is designed to provide a picture similar to B-mode ultrasound but that is obtained rapidly enough to capture motion changes. Theoretically, real-time means that the system is able to evaluate information as soon as it is received rather than storing or accumulating any data. This is analogous to a fluoroscope x-ray image compared to a conventional x-ray. M-mode ultrasound produces single-dimension outlines of a moving structure as it moves or changes shape, whereas real-time two-dimensional ultrasound produces an image very similar to that produced by a static B-mode scanner but much more rapidly (15-50 frames/second)—fast enough to provide the impression of motion when the various images obtained are viewed rapidly one after the other (either by direct viewing as they are obtained on a cathode ray tube (CRT) screen or as they are recorded and played back from magnetic tape or similar recording device). At present the equipment available to accomplish this exists in two forms: a linear array of crystals, the crystals being activated in sequence with electronic steering of the sound beam; and so-called small contact area (sector) scanners, having either an electronically phased crystal array or several crystals that are rotated mechanically. A triangular wedge-shaped image is obtained with the sector scanner and a somewhat more rectangular image with the linear-array scanner, both of which basically resemble images produced by a static B-mode scanner. On sector scanning, the apex of the triangle represents the ultrasound transducer (the sound wave generator and receiving crystal). The field of view (size of the triangle) of a typical real-time sector scanner is smaller than that of a standard static B-mode scanner (although the size differential is being decreased by new technology). Real-time image quality originally was inferior to that of static equipment, but this too has changed. Real-time equipment in general is less expensive, more compact, and more portable than static equipment; the ultrasound transducer is usually small and hand held and is generally designed to permit rapid changes in position to scan different areas rapidly using different planes of orientation. Many ultrasonographers now use real-time ultrasound as their primary ultrasound technique.

    Uses of diagnostic ultrasound. With continuing improvements in equipment, capabilities of ultrasound are changing rapidly. A major advantage is that ultrasound is completely noninvasive; in addition, no radiation is administered, and no acute or chronic ill effects have yet been substantiated in either tissues or genetic apparatus. The following sections describe some of the major areas in which ultrasound may be helpful.

    Differentiation of solid from cystic structures. This is helpful in the diagnosis of renal space-occupying lesions, nonfunctioning thyroid nodules, pancreatic pseudocyst, pelvic masses, and so on. When a structure is ultrasonically interpreted as a cyst, accuracy should be 90%-95%. Ultrasound is the best method for diagnosis of pancreatic pseudocyst.

    Abscess detection. In the abdomen, reported accuracy (in a few small series) varies between 60% and 90%, with 80% probably a reasonable present-day expectation. Abscess within organs such as the liver may be seen and differentiated from a cyst or solid tumor. Obvious factors affecting accuracy are size and location of the abscess, as well as interference from air or barium in overlying bowel loops.

    Differentiation of intrahepatic from extrahepatic biliary obstruction. This is based on attempted visualization of common bile duct dilatation in extrahepatic obstruction. Current accuracy is probably about 80%-90%.

    Ultrasound may be useful in demonstrating a dilated gallbladder when cholecystography is not possible or suggests nonfunction. It may also be helpful in diagnosis of cholecystitis. Reports indicate about 90%-95% accuracy in detection of gallbladder calculi or other significant abnormalities. Some medical centers advocate a protocol in which single-dose oral cholecystography is done first; if the gallbladder fails to visualize, ultrasonography is performed. Detection of stones would make double-dose oral cholecystography unnecessary. Some are now using ultrasound as the primary method of gallbladder examination.

    Diagnosis of pancreatic carcinoma. Although islet cell tumors are too small to be seen, acinar carcinoma can be detected in approximately 75%-80% of instances. Pancreatic carcinoma cannot always be differentiated from pancreatitis. In the majority of institutions where computerized tomography (CT) is available, CT is preferred to ultrasound. CT generally provides better results in obese patients, and ultrasound usually provides better results in very thin patients.

    Guidance of biopsy needles. Ultrasound is helpful in biopsies of organs such as the kidney.

    Placental localization. Ultrasound is the procedure of choice for visualization of the fetus (fetal position, determination of fetal age by fetal measurements, detection of fetal anomalies, detection of fetal growth retardation), visualization of intrauterine or ectopic pregnancy, and diagnosis of hydatidiform mole. Ultrasound is the preferred method for direct visualization in obstetrics to avoid irradiation of mother or fetus.

    Detection and delineation of abdominal aortic aneurysms. Ultrasound is the current method of choice for these aneurysms. Clot in the lumen, which causes problems for aortography, does not interfere with ultrasound. For dissecting abdominal aneurysms, however, ultrasound is much less reliable than aortography. Thoracic aneurysms are difficult to visualize by ultrasound with present techniques; esophageal transducers may help.

    Detection of periaortic and retroperitoneal masses of enlarged lymph nodes. Current accuracy of ultrasound is reported to be 80%-90%. However, CT has equal or better accuracy and is preferred in many institutions because it visualizes the entire abdomen.

    Ocular examinations. Although special equipment is needed, ultrasound has proved useful for detection of intraocular foreign bodies and tumors, as well as certain other conditions. This technique is especially helpful when opacity prevents adequate visual examination.

    Cardiac diagnosis. Ultrasound using M-mode technique is the most sensitive and accurate method for detection of pericardial effusion, capable of detecting as little as 50 ml of fluid. A minor drawback is difficulty in finding loculated effusions. Mitral stenosis can be diagnosed accurately, and useful information can be obtained about other types of mitral dysfunction. Ultrasound can also provide information about aortic and tricuspid function, although not to the same degree as mitral valve studies. Entities such as hypertrophic subaortic stenosis and left atrial myxoma can frequently be identified. The thickness of the left ventricle can be estimated. Finally, vegetations of endocarditis may be detected on mitral, aortic, or tricuspid valves in more than one half of patients. Two-dimensional echocardiography is real-time ultrasound. It can perform most of the same functions as M-mode ultrasound, but in addition it provides more complete visualization of congenital heart defects and is able to demonstrate left ventricular wall motion or structural abnormalities in about 70%-80% of patients.

    Doppler ultrasound is a special variant that can image blood flow. The Doppler effect is the change in ultrasound wave frequency produced when ultrasonic pulses are scattered by RBCs moving within a blood vessel. By moving the transducer along the path of a blood vessel, data can be obtained about the velocity of flow in areas over which the transducer moves. Most current Doppler equipment combines Doppler signals with B-mode ultrasonic imaging (“duplex scanning”). The B-mode component provides a picture of the vessel, whereas the Doppler component obtains flow data in that segment of the vessel. This combination is used to demonstrate areas of narrowing, obstruction, or blood flow turbulence in the vessel.
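
    The relationship between flow velocity and frequency shift can be illustrated with the standard Doppler equation, which is not spelled out in the text. The sketch below assumes a nominal speed of sound in tissue of about 1,540 m/second and purely illustrative transducer settings.

        # Illustrative sketch of the standard Doppler ultrasound relationship
        # (not given in the text): the frequency shift produced by moving RBCs is
        # proportional to flow velocity and the cosine of the angle between the
        # beam and the direction of flow.

        from math import cos, radians

        SPEED_OF_SOUND_TISSUE_M_PER_S = 1540.0  # assumed nominal value

        def doppler_shift_hz(transmit_freq_mhz: float, velocity_cm_s: float,
                             angle_degrees: float) -> float:
            """Frequency shift (Hz) for RBCs moving at velocity_cm_s,
            insonated at angle_degrees to the direction of flow."""
            f0_hz = transmit_freq_mhz * 1e6
            v_m_s = velocity_cm_s / 100.0
            return 2.0 * f0_hz * v_m_s * cos(radians(angle_degrees)) / SPEED_OF_SOUND_TISSUE_M_PER_S

        # Example: a 5-MHz beam at 60 degrees to flow of 50 cm/second gives a
        # shift of roughly 1.6 kHz.
        print(round(doppler_shift_hz(5.0, 50.0, 60.0)))  # ~1623 Hz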

    Computerized tomography (CT)

    Originally known as computerized axial tomography (CAT), CT combines radiologic x-ray emission with nuclear medicine-type radiation detectors (rather than direct x-ray exposure of photographic film in the manner of ordinary radiology). Tissue density of the various components of the object or body part being scanned determines how much of the x-ray beam reaches the detector assembly, similar to conventional radiology. The original machines used a pencil-like x-ray beam that had to go back and forth over the scanning area, with each track being next to the previous one. Current equipment is of two basic types. Some manufacturers use a fan-shaped (triangular) beam with multiple gas-filled tube detectors on the opposite side of the object to be scanned (corresponding to the base of the x-ray beam triangle). The beam source and the multiple detector segment move at the same time and speed in a complete 360-degree circle around the object to be scanned. Other manufacturers use a single x-ray source emitting a fan-shaped beam that travels in a circle around the object to be scanned while outside of the x-ray source path is a complete circle of nonmoving detectors. In all cases a computer secures tissue density measurements from the detector as this is going on and eventually constructs a composite tissue density image similar in many aspects to those seen in ordinary x-rays. The image corresponds to a thin cross-section slice through the object (3-15 mm thick), in other words, a tissue cross-section slice viewed at a right angle (90 degrees) to the direction of the x-ray beam.

    CT scan times necessary for each tissue slice vary with different manufacturers and with different models from the same manufacturer. The original CT units took more than 30 seconds per slice, second-generation CT units took about 20 seconds per slice, whereas current models can operate at less than 5 seconds per slice.

    CT is currently the procedure of choice in detection of space-occupying lesions of the CNS. It is also very important (the procedure of choice for some) in detecting and delineating mass lesions of the abdomen (tumor, abscess, hemorrhage, etc.), mass lesions of organs (e.g., lung, adrenals or pancreas) and retroperitoneal adenopathy. It has also been advocated for differentiation of extrahepatic versus intrahepatic jaundice (using the criterion of a dilated common bile duct), but ultrasound is still more commonly used for this purpose due to lower cost, ease of performance, and scheduling considerations.

    Nuclear medicine scanning

    Nuclear medicine organ scans involve certain compounds that selectively localize in the organs of interest when administered to the patient. The compound is first made radioactive by tagging with a radioactive element. An exception is iodine used in thyroid diagnosis, which is already an element; in this case a radioactive isotope of iodine can be used. An isotope is a different form of the same element with the same chemical properties as the stable element form but physically unstable due to differences in the number of neutrons in the nucleus, this difference producing nuclear instability and leading to emission of radioactivity. After the radioactive compound is administered and sufficient uptake by the organ of interest is achieved, the organ is “scanned” with a radiation detector. This is usually a sodium iodide crystal. Radioactivity is transmuted into tiny flashes of light within the crystal. The location of the light flashes corresponds to the locations within the organ from which radioactivity is being emitted; the intensity of a light flash is proportional to the quantity of radiation detected. The detection device surveys (scans) the organ and produces an overall pattern of radioactivity (both the concentration and the distribution of activity), which it translates into a visual picture of light and dark areas.

    Rectilinear scanners focus on one small area; the detector traverses the organ in a series of parallel lines to produce a complete (composite) picture. A “camera” device has a large-diameter crystal and remains stationary, with the field of view size dependent on the size of the crystal. The various organ scans are discussed in chapters that include biochemical function tests referable to the same organ.

    The camera detectors are able to perform rapid-sequence imaging not possible on a rectilinear apparatus, and this can be used for “dynamic flow” studies. A bolus of radioactive material can be injected into the bloodstream and followed through major vessels and organs by data storage equipment or rapid (1- to 3-second) serial photographs. Although the image does not have a degree of resolution comparable to that of contrast medium angiography, major abnormalities in major blood vessels can be identified, and the uptake and early distribution of blood supply in specific tissues or organs can be visualized.

    Data on radionuclide procedures are included in areas of laboratory test discussion when this seems appropriate.

    Magnetic resonance imaging (MR or MRI)

    Magnetic resonance (MR; originally called nuclear magnetic resonance) is the newest imaging process. This is based on the fact that nuclei of many chemical elements (notably those with an odd number of protons or neutrons, such as 1H or 31P) spin (“precess”) around a central axis. If a magnetic field is brought close by (using an electromagnet), the nuclei, still spinning, line up in the direction of the magnetic field. A new rate of spin (resonant frequency) will be proportional to the characteristics of the nucleus, the chemical environment, and the strength of the magnetic field. If the nuclei are then bombarded with an energy beam having the frequency of radio waves at a 90-degree angle to the electromagnetic field, the nuclei are pushed momentarily a little out of line. When the exciting radiofrequency energy is terminated, the nuclei return to their position in the magnetic field, giving up some energy. The energy may be transmitted to their immediate environment (called the “lattice,” the time required to give up the energy and return to position being called the “spin-lattice relaxation time,” or T1), or may be transmitted to adjacent nuclei of the same element, thus providing a realignment response of many nuclei (called “spin-spin relaxation time,” or T2). The absorption of radiofrequency energy can be detected by a spectrometer of special design. Besides differences in relaxation time, differences in proton density can also be detected and measured. MR proton density or relaxation time differs for different tissues and is affected by different disease processes and possibly by exogenous chemical manipulation. The instrumentation can produce computer-generated two-dimensional cross-section images of the nuclear changes that look like CT scans of tissue. Thus, MR can detect anatomical structural abnormality and changes in normal tissue and potentially can detect cellular dysfunction at the molecular level. Several manufacturers are producing MR instruments, which differ in the type and magnetic field strength of electromagnets used, the method of inducing disruptive energy into the magnetic field, and the method of detection and processing of results. Unlike CT, no radiation is given to the patient.
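
    The T1 and T2 values mentioned above are commonly modeled as simple exponential recovery and decay processes. The sketch below uses that textbook model with purely illustrative relaxation times; neither the equations nor the numbers come from this passage.

        # Minimal sketch of the standard exponential relaxation models underlying
        # the T1 and T2 values described above. All numeric values are illustrative.

        from math import exp

        def longitudinal_recovery(t_ms: float, t1_ms: float) -> float:
            """Fraction of longitudinal (spin-lattice) magnetization recovered
            at time t after the radiofrequency pulse: 1 - e^(-t/T1)."""
            return 1.0 - exp(-t_ms / t1_ms)

        def transverse_decay(t_ms: float, t2_ms: float) -> float:
            """Fraction of transverse (spin-spin) magnetization remaining
            at time t after the pulse: e^(-t/T2)."""
            return exp(-t_ms / t2_ms)

        # Two hypothetical tissues with different T1/T2 values give different
        # signal at the same measurement time, which is the basis of image contrast.
        print(round(longitudinal_recovery(500, t1_ms=800), 2))  # ~0.46
        print(round(transverse_decay(80, t2_ms=100), 2))        # ~0.45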

  • Laboratory Tests in Psychiatry

    Until recently, the laboratory had relatively little to offer in psychiatry. Laboratory tests were used mainly to diagnose or exclude organic illness. For example, in one study about 5% of patients with dementia had organic diseases such as hyponatremia, hypothyroidism, hypoglycemia, and hypercalcemia; about 4% were caused by alcohol; and about 10% were due to toxic effects of drugs. A few psychiatric drug blood level assays were available, of which lithium was the most important. In the 1970s, important work was done suggesting that the neuroendocrine system is involved in some way with certain major psychiatric illnesses. Thus far, melancholia (endogenous psychiatric depression or primary depression) is the illness in which neuroendocrine abnormality has been most extensively documented. It was found that many such patients had abnormal cortisol blood levels that were very similar to those seen in Cushing’s syndrome (as described in the chapter on adrenal function) without having the typical signs and symptoms of Cushing’s syndrome. There often was blunting or abolition of normal cortisol circadian rhythm, elevated urine free cortisol excretion levels, and resistance to normally expected suppression of cortisol blood levels after a low dose of dexamethasone.

    Because of these observations, the low-dose overnight dexamethasone test, used to screen for Cushing’s syndrome, has been modified to screen for melancholia. One milligram of oral dexamethasone is given at 11 P.M., and blood is drawn for cortisol assay on the following day at 4 P.M. and 11 P.M. Normally, serum cortisol levels should be suppressed to less than 5 µg/100 ml (138 nmol/L) in both specimens. An abnormal result consists of failure to suppress in at least one of the two specimens (about 20% of melancholia patients demonstrate normal suppression in the 4 P.M. specimen but no suppression in the 11 P.M. specimen, and about the same number of patients fail to suppress in the 4 P.M. specimen but have normal suppression in the 11 P.M. sample). The psychiatric dexamethasone test is different from the dexamethasone test for Cushing’s syndrome, because in the Cushing protocol a single specimen is drawn at 8 A.M. on the morning after dexamethasone administration.
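
    A minimal sketch of the two-specimen interpretation just described is shown below. The unit conversion factor for cortisol (1 µg/100 ml is approximately 27.6 nmol/L) is an added detail, included only to reproduce the 138 nmol/L figure quoted above.

        # Sketch of the two-specimen psychiatric dexamethasone protocol described
        # above. The conversion factor is a standard cortisol unit conversion and
        # is not taken from this passage.

        CUTOFF_UG_PER_100ML = 5.0
        UG_PER_100ML_TO_NMOL_PER_L = 27.59  # conversion factor for cortisol

        def to_nmol_per_l(cortisol_ug_per_100ml: float) -> float:
            return cortisol_ug_per_100ml * UG_PER_100ML_TO_NMOL_PER_L

        def dexamethasone_test_abnormal(cortisol_4pm: float, cortisol_11pm: float) -> bool:
            """True if either post-dexamethasone specimen (ug/100 ml) fails to
            suppress below the 5 ug/100 ml cutoff."""
            return cortisol_4pm >= CUTOFF_UG_PER_100ML or cortisol_11pm >= CUTOFF_UG_PER_100ML

        # Example: suppression at 4 P.M. but not at 11 P.M. is still abnormal.
        print(dexamethasone_test_abnormal(2.1, 7.4))  # True
        print(round(to_nmol_per_l(5.0)))              # ~138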

    The Cushing’s disease protocol is reported to detect only about 25% of patients with melancholia, in contrast to the modified two-specimen psychiatric protocol, which is reported to detect up to 58%. Various investigators using various doses of dexamethasone and collection times have reported a detection rate of about 45% (literature range, 24%-100%). False positive rates using the two-specimen protocol are reported to be less than 5%. Since some patients with Cushing’s syndrome may exhibit symptoms of psychiatric depression, differentiation of melancholia from Cushing’s syndrome becomes necessary if test results show nonsuppression of serum cortisol. The patient is given appropriate antidepressant therapy and the test is repeated. If the test result becomes normal, Cushing’s syndrome is virtually excluded.

    Various conditions not associated with either Cushing’s syndrome or melancholia can affect cortisol secretion patterns. Conditions that must be excluded to obtain a reliable result include severe major organic illness of any type, recent electroshock therapy, trauma, severe weight loss, malnutrition, alcoholic withdrawal, pregnancy, Addison’s disease, and pituitary deficiency. Certain medications such as phenobarbital, phenytoin (Dilantin), steroid therapy, or estrogens may produce falsely abnormal results.

    At present, there is considerable controversy regarding the usefulness of the modified low-dose dexamethasone test for melancholia, since the test has a sensitivity no greater than 50% and significant potential for false positive results.

    Besides the overnight modified low-dose dexamethasone test, the thyrotropin-releasing hormone (TRH) test has been reported to be abnormal in about 60% of patients with primary (unipolar) depression. Abnormality consists of a blunted (decreased) thyroid-stimulating hormone (TSH) response to administration of TRH, similar to the result obtained in hyperthyroidism or hypopituitarism. However, occasionally patients with melancholia have hypothyroidism, which produces an exaggerated response in the TRH test rather than a blunted (decreased) response.

    One investigator found that about 30% of patients with melancholia had abnormal results on both the TRH and the modified dexamethasone tests. About 30% of the patients had abnormal TRH results but normal dexamethasone responses, and about 20% had abnormal dexamethasone responses but normal TRH responses. The TRH test has not been investigated as extensively as the modified dexamethasone test.

    A more controversial area is measurement of 3-methoxy-4-hydroxyphenylglycol (MHPG) in patients with depression. One theory links depression to a functional deficiency of norepinephrine in the central nervous system (CNS). 3-Methoxy-4-hydroxyphenylglycol is a major metabolite of norepinephrine. It is thought that a significant part of urinary MHPG is derived from CNS sources (20%-63% in different studies). Some studies indicated that depressed patients had lower urinary (24-hour) excretion of MHPG than other patients, and that patients in the manic phase of bipolar (manic-depressive) illness had increased MHPG levels. There was also some evidence that depressed patients with subnormal urinary MHPG levels responded better to tricyclic antidepressants such as imipramine than did patients with normal urine MHPG levels. However, these findings have been somewhat controversial and have not been universally accepted.

  • Tests for Allergy

    The atopic diseases were originally defined as sensitization based on hereditary predisposition (thus differentiating affected persons from nonaffected persons exposed to the same commonly found antigens) and characterized by immediate urticarial skin reaction to offending antigen and by the Prausnitz-Küstner reaction. Prausnitz and Küstner demonstrated in 1921 that serum from a sensitized person, when injected into the skin of a nonsensitized person, would produce a cutaneous reaction on challenge with appropriate antigen (cutaneous passive transfer). The serum factor responsible was known as reagin (skin-sensitizing antibody). In 1966, reagin was found to be IgE, which has subsequently been shown to trigger immediate local hypersensitivity reactions by causing release of histamines and vasoactive substances from mast cells, which, in turn, produce local anaphylaxis in skin or mucous membranes. The IgE system thus mediates atopic dermatitis, allergic rhinitis, and many cases of asthma. In patients with rhinitis, nasal itching is the most suggestive symptom of IgE-associated allergy. Allergens may come from the environment (pollens, foods, allergenic dust, molds), certain chronic infections (fungus, parasites), medications (penicillin), or industrial sources (cosmetics, chemicals). Sometimes there is a strong hereditary component; sometimes none is discernible. Discovery that IgE is the key substance in these reactions has led to measurement of serum IgE levels as a test for presence of atopic allergy sensitization.

    Total immunoglobulin E levels

    Serum total IgE levels are currently measured by some type of immunoassay technique. The most common method is a paper-based radioimmunosorbent test procedure. Values are age dependent until adulthood. Considerably elevated values are characteristically found in persons with allergic disorders, such as atopic dermatitis and allergic asthma, and also in certain parasitic infections and Aspergillus-associated asthma. Values above reference range, however, may be found in some clinically nonallergic persons and therefore are not specific for allergy. On the other hand, many patients with allergy have total IgE levels within normal population range. It has been reported, however, that total IgE values less than 20 international units/ml suggest small probability of detectable specific IgE. Besides IgE, there is some evidence that IgG4 antibodies may have some role in atopic disorders.

    Specific immunoglobulin E levels

    Specific serum IgE (IgE directed against specific antigens) can be measured rather than total IgE. This is being employed to investigate etiology of asthma and atopic dermatitis. The current system is called the radioallergosorbent test (RAST). Specific antigen is bound to a carrier substance and allowed to react with specific IgE antibody. The amount of IgE antibody bound is estimated by adding radioactive anti-IgE antibody and quantitating the amount of labeled anti-IgE attached to the IgE-antigen complex. The type of antigen, the degree and duration of stimulation, and current exposure to antigen all influence IgE levels to any particular antigen at any point in time. Studies thus far indicate that RAST has an 80%-85% correlation with results of skin testing using the subcutaneous injection method (range, 35%-100%, depending on the investigator and the antigen used). It seems a little less sensitive than the intradermal skin test method, but some claim that it predicts the results of therapy better (in other words, it is possibly more specific). Since only a limited number of antigens are available for use in the RAST system, each antigen to be tested for must be listed by the physician. Some advise obtaining a serum total IgE assay in addition to RAST; if results of the RAST panel are negative and the serum IgE level is high, this raises the question of allergy to antigens not included in the RAST panel. Total serum IgE values can be normal, however, even if the findings of one or more antigens on the RAST panel are positive. There is some cross-reaction between certain antigens in the RAST system. The RAST profile is more expensive than skin testing with the same antigens. However, the skin test is uncomfortable, and in a few hyperallergic patients it may even produce anaphylactic shock. Modifications of the RAST technique that are simpler and easier to perform are being introduced, and a dipstick method with a limited number of selected antigens is now commercially available.

    Eosinophilia

    Peripheral blood eosinophilia is frequently present in persons with active allergic disorders, although a rather large minority of these patients do not display abnormal skin tests. Correlation is said to be better in persons less than 50 years old. Unfortunately, there are many possible causes for peripheral blood eosinophilia (see Chapter 6), which makes interpretation more difficult. Presence of more than an occasional eosinophil in sputum suggests an allergic pulmonary condition.

    In some patients with nasopharyngeal symptoms, a nasal smear for eosinophils may be helpful. The specimen can be collected with a calcium alginate swab and thin smears prepared on glass slides, which are air-dried and stained (preferably) with Hansel’s stain or Wright’s stain. If more than a few eosinophils are present but not neutrophils, this suggests allergy without infection. If neutrophils outnumber eosinophils, this is considered nondiagnostic (neither confirming nor excluding allergy).

  • Selected Tests of Interest in Pediatrics

    Neonatal immunoglobulin levels. Maternal IgG can cross the placenta, but IgA or IgM cannot. Chronic infections involving the fetus, such as congenital syphilis, toxoplasmosis, rubella, and cytomegalic inclusion disease, induce IgM production by the fetus. Increased IgM levels in cord blood at birth or in neonatal blood during the first few days of life suggest chronic intrauterine infection. Infection near term or subsequent to birth results in an IgM increase beginning 6-7 days postpartum. Unfortunately, there are pitfalls when such data are interpreted. Many cord blood samples become contaminated with maternal blood, thus falsely raising IgM values. Normal values are controversial; 20 mg/dl is the most widely accepted upper limit. Various techniques have different reliabilities and sensitivities. Finally, some investigators state that fewer than 40% of rubella or cytomegalovirus infections during pregnancy produce elevated IgM levels before birth.

    Agammaglobulinemia. This condition may lead to frequent infections. Electrophoresis displays decreased gamma-globulin levels, which can be confirmed by quantitative measurement of IgG, IgA, and IgM. There are several methods available to quantitatively measure IgG, IgA, and IgM such as radial immunodiffusion, immunonephelometry, and immunoassay. Immunoelectrophoresis provides only semiquantitative estimations of the immunoglobulins and should not be requested if quantitative values for IgG, IgA, or IgM are desired.

    Nitroblue tetrazolium test. Chronic granulomatous disease of childhood is a rare hereditary disorder of the white blood cells (WBCs) that is manifested by repeated infections and that ends in death before puberty. Inheritance is sex-linked in 50% of cases and autosomal recessive in about 50%. Polymorphonuclear leukocytes are able to attack high-virulence organisms, such as streptococci and pneumococci, which do not produce the enzyme catalase, but are unable to destroy staphylococci and certain organisms of lower virulence such as the gram-negative rods, which are catalase producers. Normal blood granulocytes are able to phagocytize yellow nitroblue tetrazolium (NBT) dye particles and then precipitate and convert (reduce) this substance to a dark blue. The test is reported as the percentage of granulocytes containing blue dye particles. Monocytes also ingest NBT, but they are not counted when performing the test. Granulocytes from patients with chronic granulomatous disease are able to phagocytize but not convert the dye particles, so that the NBT result will be very low or zero, and the NBT test is used to screen for this disorder. In addition, because neutrophils increase their phagocytic activity during acute bacterial infection, the nitroblue tetrazolium test has been used to separate persons with bacterial infection from persons with leukocytosis of other etiologies. In general, acute bacterial infection increases the NBT count, whereas viral or tuberculous infections do not. It has also been advocated as a screening test for infection when the WBC count is normal and as a means to differentiate bacterial and viral infection in febrile patients. Except for chronic granulomatous disease there is a great divergence of opinion in the literature on the merits of the NBT test, apportioned about equally between those who find it useful and those who believe that it is not reliable because of unacceptable degrees of overlap among patients in various diagnostic categories. Many modifications of the original technique have been proposed that add to the confusion, including variations in anticoagulants, incubation temperature, smear thickness, method of calculating data, and use of phagocytosis “stimulants,” all of which may affect test results.

    Some conditions other than bacterial infection that may elevate the NBT score (false positives) include normal infants aged less than 2 months, echovirus infection, malignant lymphomas (especially Hodgkin’s disease), hemophilia A, malaria, certain parasitic infestations, Candida albicans and Nocardia infections, and possibly the use of oral contraceptives. Certain conditions may (to varying degree) induce normal scores in the presence of bacterial infection (false negatives); these include antibiotic therapy, localized infection, systemic lupus erythematosus, sickle cell anemia, diabetes mellitus, agammaglobulinemia, and certain antiinflammatory medications (corticosteroids, phenylbutazone).

  • Fat Embolization

    Fat embolization is most often associated with severe bone trauma, but may also occur in fatty liver, diabetes, and other conditions. Symptoms may be immediate or delayed. If they are immediate, shock is frequent. Delayed symptoms occur 2-3 days after injury, and pulmonary or cerebral manifestations are most prominent. Frequent signs are fever, tachycardia, tachypnea, upper body petechiae (50% of patients), and decreased hemoglobin values. Laboratory diagnosis includes urine examination for free fat, results of which are positive in 50% of cases during the first 3 days; and serum lipase, results of which are elevated in nearly 50% of patients from about day 3 to day 7. Fat in sputum is unreliable; there are many false positive and false negative results. Chest x-ray films sometimes demonstrate diffuse tiny infiltrates, occasionally coalescing, described in the literature as having a “snowstorm” appearance. Some patients have a laboratory picture suggestive of disseminated intravascular coagulation. One report has indicated that diagnosis by cryostat frozen section of peripheral blood clot is sensitive and specific, but adequate confirmation of this method is not yet available. The most sensitive test for fat embolism is said to be a decrease in arterial PO2, frequently to levels less than 60 mm Hg. However, patients with chronic lung disease may already have decreased PO2.

  • C-Reactive Protein

    C-reactive protein (CRP) is a glycoprotein produced during acute inflammation or tissue destruction. The protein gets its name from its ability to react (or cross-react) with Pneumococcus somatic C-polysaccharide and precipitate it. The CRP level is not influenced by anemia or plasma protein changes. It begins to rise about 4-6 hours after onset of inflammation and has a half-life of 5-7 hours, less than one-fourth that of most other proteins that react to acute inflammation. For many years the standard technique was a slide or tube precipitation method, with the degree of reaction estimated visually and reported semiquantitatively. The test never enjoyed the same popularity as the ESR because the result was not quantitative and the end point was difficult to standardize due to subjective visual estimations. Recently, new methods such as rate reaction nephelometry and fluorescent immunoassay have enabled true quantitative CRP measurement. CRP determination using the new quantitative methods offers several important advantages over the ESR, including lack of interference by anemia or serum protein changes, fewer technical problems, and greater sensitivity to acute inflammation because of shorter half-life of the protein being measured. Many now consider quantitative CRP measurements the procedure of choice to detect and monitor acute inflammation and acute tissue destruction. ESR determination is preferred, however, in chronic inflammation. There is some evidence that CRP levels are useful in evaluation of postoperative recovery. Normally, CRP reaches a peak value 48-72 hours after surgery and then begins to fall, entering the reference range 5-7 days after operation. Failure to decrease significantly after 3 days postoperatively or a decrease followed by an increase suggests postoperative infection or tissue necrosis. For maximal information and easier interpretation of the data, a preoperative CRP level should be obtained with serial postoperative CRP determinations.
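
    The practical effect of the short half-life quoted above can be illustrated with simple first-order decay arithmetic. The sketch below is a simplification that assumes production has stopped and ignores ongoing synthesis.

        # Illustrative half-life arithmetic only: with the 5- to 7-hour half-life
        # quoted above, CRP falls off much faster than most other acute-phase
        # proteins once the stimulus stops. Simple first-order decay is assumed.

        from math import exp, log

        def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
            """Fraction of the peak CRP level remaining after hours_elapsed,
            assuming first-order decay and no ongoing production."""
            decay_constant = log(2) / half_life_hours
            return exp(-decay_constant * hours_elapsed)

        # Example: 24 hours after synthesis stops, with a 6-hour half-life,
        # only about 6% of the peak level remains.
        print(round(fraction_remaining(24, 6), 2))  # ~0.06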

    General clinical indications for CRP are essentially the same as those listed for the ESR. A growing number of investigators feel that the true quantitative CRP is superior in many ways to the ESR.

  • Erythrocyte Sedimentation Rate

    The erythrocyte sedimentation rate (ESR) is determined by filling a calibrated tube of standard diameter with anticoagulated whole blood and measuring the rate of red blood cell (RBC) sedimentation during a specified period, usually 1 hour. When the RBCs settle toward the bottom of the tube, they leave an increasingly large zone of clear plasma, which is the area measured. Most changes in RBC sedimentation rate are caused by alterations in plasma proteins, mainly fibrinogen, with a much smaller contribution from alpha-2 globulins. Fibrinogen increases 12-24 hours after onset of an acute inflammatory process or acute tissue injury. Many conditions cause abnormally great RBC sedimentation (rate of fall in the tube system). These include acute and chronic infection, tissue necrosis and infarction, well-established malignancy, rheumatoid-collagen diseases, abnormal serum proteins, and certain physiologic stress situations such as pregnancy or marked obesity. The ESR is frequently increased in patients with chronic renal failure, with or without dialysis. In one study, 75% had Westergren ESRs more than 30 mm/hour, and some had marked elevations. One study found elevated ESR in 50% of patients with symptomatic moderate or severe congestive heart failure (CHF), with elevation correlating directly with plasma fibrinogen values. Low ESR was found in 10% of the patients and was associated with severe CHF. Marked elevation of the Westergren ESR (defined as a value > 100 mm/hour) was reported in one study to be caused by infectious diseases, neoplasia, noninfectious inflammatory conditions, and chronic renal disease. This degree of ESR abnormality was found in about 4% of patients who had ESR determined.

    ESR determination has three major uses: (1) as an aid in detection and diagnosis of inflammatory conditions or to help exclude the possibility of such conditions, (2) as a means of following the activity, clinical course, or therapy of diseases with an inflammatory component, such as rheumatoid arthritis, acute rheumatic fever or acute glomerulonephritis, and (3) to demonstrate or confirm the presence of occult organic disease, either when the patient has symptoms but no definite physical or laboratory evidence of organic disease or when the patient is completely asymptomatic.

    The ESR has three main limitations: (1) it is a very nonspecific test, (2) it is sometimes normal in diseases where usually it is abnormal, and (3) technical factors may considerably influence the results. The tubes must be absolutely vertical; even small degrees of tilt have great effect on degree of sedimentation. Most types of anemia falsely increase the ESR as determined by the Wintrobe method. The Wintrobe method may be “corrected” for anemia by using a nomogram, but this is not accurate. The Westergren method has a widespread reputation for being immune to the effects of anemia, but studies have shown that anemia does have a significant effect on the Westergren method (although not quite as much as the Wintrobe method). Although no well-accepted method is available to correct the Westergren ESR for effect of anemia, one report included a formula that is easy to use and provides a reasonable degree of correction: Corrected (Westergren) ESR = ESR – [(Std. Ht – Actual Ht) x 1.75], where Std. Ht (standard hematocrit) is 45 for males and 42 for females.
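
    The correction formula quoted above translates directly into a short calculation. The standard hematocrit values of 45 (men) and 42 (women) are those given in the text; the numeric example is illustrative only.

        # Direct transcription of the anemia-correction formula quoted above:
        # Corrected (Westergren) ESR = ESR - [(Std. Hct - Actual Hct) x 1.75].

        def corrected_westergren_esr(esr_mm_hr: float, actual_hct: float, male: bool) -> float:
            std_hct = 45.0 if male else 42.0  # standard hematocrits given in the text
            return esr_mm_hr - (std_hct - actual_hct) * 1.75

        # Example: a measured ESR of 40 mm/hour in a man with a hematocrit of 30
        # corrects to about 14 mm/hour.
        print(round(corrected_westergren_esr(40, 30, male=True)))  # ~14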

    Besides anemia and changes in fibrinogen and alpha-2 globulins, other factors affect the ESR. Changes in serum proteins that alter plasma viscosity influence RBC sedimentation. A classic example is the marked increase in ESR seen with the abnormal globulins of myeloma. Certain diseases such as sickle cell anemia and polycythemia falsely decrease the ESR. The majority of (but not all) investigators report that normal values are age related; at least 10 mm/hour should be added to young adult values after age 60. Some use a formula for the Westergren upper limit: for men, age in years ÷ 2; for women, (age in years + 10) ÷ 2.
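
    The age-adjusted upper limits described above can be computed as follows. The sketch simply restates the formula and is not offered as a validated reference range.

        # Sketch of the age-adjusted Westergren upper limits described above
        # (men: age / 2; women: (age + 10) / 2), in mm/hour.

        def westergren_upper_limit(age_years: int, male: bool) -> float:
            return age_years / 2.0 if male else (age_years + 10) / 2.0

        # Example: the suggested upper limit is 35 mm/hour for a 70-year-old man
        # and 40 mm/hour for a 70-year-old woman.
        print(westergren_upper_limit(70, male=True))   # 35.0
        print(westergren_upper_limit(70, male=False))  # 40.0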

  • Sarcoidosis

    This disease, of as yet unknown etiology, is manifested by noncaseating granulomatous lesions in many organ systems, most commonly in the lungs and thoracic lymph nodes. The disease is much more common in African Americans. Laboratory results are variable and nonspecific. Anemia is not frequent but appears in about 5% of cases. Splenomegaly is present in 10%-30% of cases. Leukopenia is found in approximately 30%. Eosinophilia is reported in 10%-60%, averaging 25% of cases. Thrombocytopenia is very uncommon, reported in less than 2% of patients in several large series. Serum protein abnormalities are common, with polyclonal hyperglobulinemia in nearly 50% of patients and with albumin levels frequently decreased. Hypercalcemia is reported in about 10%-20% of cases, with a range in the literature of 2%-63%. Uncommonly, primary hyperparathyroidism and sarcoidosis coexist. Alkaline phosphatase (ALP) levels are elevated in nearly 35% of cases, which probably reflects either liver or bone involvement.

    The major diagnostic tests that have been used include the Kveim skin test, biopsy (usually of lymph nodes), and assay of angiotensin-converting enzyme (ACE).

    The Kveim test consists of intradermal inoculation of an antigen composed of human sarcoidal tissue. A positive reaction is indicated by development of a papule in 4-6 weeks, which, on biopsy, yields the typical noncaseating granulomas of sarcoidosis. The test is highly reliable, yielding less than 3% false positives. The main difficulty is inadequate supplies of sufficiently potent antigen. For this reason, few laboratories are equipped to do the Kveim test. Between 40% and 80% of cases give positive results, depending on the particular lot of antigen and the duration of disease. In chronic sarcoidosis (duration more than 6 months after onset of illness), the patient is less likely to have a positive result on the Kveim test. Steroid treatment depresses the Kveim reaction and may produce a negative test result. The value of the Kveim test is especially great when no enlarged lymph nodes are available for biopsy, when granulomas obtained from biopsy are nonspecific, or when diagnosis on an outpatient basis is necessary. One report has challenged the specificity of the Kveim test, suggesting that a positive test is related more to chronic lymphadenopathy than to any specific disease.

    Biopsy is the most widely used diagnostic procedure at present. Peripheral lymph nodes are involved in 60%-95% of cases, although often they are small. The liver is said to show involvement in 75% of cases, although it is palpable in 20% or less. Difficulties with biopsy come primarily from the fact that the granuloma of sarcoidosis, although characteristic, is nonspecific. Other diseases that sometimes or often produce a similar histologic pattern are early miliary tuberculosis, histoplasmosis, some fungal diseases, some pneumoconioses, and the so-called pseudosarcoid reaction sometimes found in lymph nodes draining areas of carcinoma.

    Angiotensin-converting enzyme (ACE) is found in lung epithelial cells and converts angiotensin I (derived from inactive plasma angiotensinogen in a reaction catalyzed by renin) to the vasoconstrictor angiotensin II. It has been found that serum ACE values are elevated in approximately 75%-80% of patients with active sarcoidosis (literature range, 45%-86%). Sensitivity is much less in patients with inactive sarcoidosis (11% in one report) or in patients undergoing therapy. Unfortunately, 5%-10% of ACE elevations are not due to sarcoidosis (literature range, 1%-33%). The highest incidence of ACE abnormality in diseases other than sarcoidosis is seen in Gaucher’s disease, leprosy, active histoplasmosis, and alcoholic cirrhosis. Other conditions reported include tuberculosis, non-Hodgkin’s lymphoma, Hodgkin’s disease, scleroderma, hyperthyroidism, myeloma, pulmonary embolization, nonalcoholic cirrhosis, and idiopathic pulmonary fibrosis. Usually patients with these diseases (and normal persons) have a less than 5% incidence of elevated ACE values. However, either normal or increased ACE levels must be interpreted with caution. ACE levels are useful to follow a patient’s response to therapy. Certain conditions such as adult respiratory distress syndrome, diabetes, hypothyroidism, and any severe illness may decrease ACE levels.

  • Pulmonary Embolism

    Pulmonary emboli are often difficult both to diagnose and to confirm. Sudden dyspnea is the most common symptom, but clinically there may be any combination of chest pain, dyspnea, and possibly hemoptysis. Diseases that must also be considered are acute myocardial infarction (MI) and pneumonia. Pulmonary embolism is often associated with chronic congestive heart failure, cor pulmonale, postoperative complications of major surgery, and fractures of the pelvis or lower extremities, all situations in which MI itself is more likely. The classic x-ray finding of a wedge-shaped lung shadow is often absent or late in developing, because not all cases of embolism develop actual pulmonary infarction, even when the embolus is large.

    Laboratory tests in pulmonary embolism have not been very helpful. Initial reports of a characteristic test triad (elevated total bilirubin and lactic dehydrogenase [LDH] values with normal aspartate aminotransferase [AST]) proved disappointing, because only 20%-25% of patients display this combination. Reports that LDH values are elevated in 80% of patients are probably optimistic. In addition, LDH values may be elevated in MI or liver passive congestion, conditions that could mimic or be associated with embolism. Theoretically, LDH isoenzyme fractionation should help, since the classic isoenzyme pattern of pulmonary embolism is a fraction 3 increase. Unfortunately this technique also has proved disappointing, since a variety of patterns have been found in embolization (some due to complication of embolization, such as liver congestion), and fraction 3 may be normal. Total creatine phosphokinase (CK) initially was advocated to differentiate embolization (normal CK) from MI (elevated CK value), but later reports indicate that the total CK value may become elevated in some patients with embolism. The CK isoenzymes, however, are reliable in confirming MI, and normal CK-MB values plus normal LDH-1/LDH-2 ratios (obtained at proper times) are also reliable in ruling out MI.

    Arterial oxygen measurement (PO2) has been proposed as a screening test for pulmonary embolism, since most patients with embolism develop arterial PO2 values less than 80 mm Hg. However, 15%-20% (range, 10%-26%) of patients with pulmonary embolization have a PO2 greater than 80 mm Hg, and 5%-6% have values greater than 90 mm Hg. Conversely, many patients have chronic lung disease or other reasons for decreased PO2, so that in many patients one would need a previous normal test result to interpret the value after a possible embolism.

    The most useful screening procedure for pulmonary embolism is the lung scan. Serum albumin is tagged with a radioisotope, and the tagged albumin molecules are treated in such a way as to cause aggregation into larger molecular groups (50-100 µm). This material is injected into a vein, passes through the right side of the heart, and is sent into the pulmonary artery. The molecules then are trapped in small arterioles of the pulmonary artery circulation, so that a radiation detector scan of the lungs shows a diffuse radioactive uptake throughout both lungs from these trapped radioactive molecules. A scan is a visual chart of the radioactivity counts over a specified area that are received by the radiation detector. The isotope solution is too dilute to cause any difficulty by its partial occlusion of the pulmonary circulation; only a small percentage of the arterioles are affected, and the albumin is metabolized in 3-4 hours. If a part of the pulmonary artery circulation is already occluded by a thrombus, the isotope does not reach that part of the lung, and the portion of lung affected does not show any uptake on the scan (abnormal scan).

    The lung scan becomes abnormal immediately after total occlusion of the pulmonary artery or any branches of the pulmonary artery that are of significant size. There does not have to be actual pulmonary infarction, since the scan results do not depend on tissue necrosis, only on mechanical vessel occlusion. However, in conditions that temporarily or permanently occlude or cut down lung vascularity, there will be varying degrees of abnormality on lung scan; these conditions include cysts, abscesses, many cases of carcinoma, scars, and a considerable number of pneumonias, especially when necrotizing. However, many of these conditions may be at least tentatively ruled out by comparison of the scan results with a chest x-ray film. A chest x-ray film should therefore be obtained with the lung scan.

    Asthma in the acute phase may also produce focal perfusion defects due to bronchial obstruction. These disappear after treatment and therefore can mimic emboli. Congestive heart failure or pulmonary emphysema often causes multiple perfusion abnormalities on the lung scan. This is a major problem in the elderly, since dyspnea is one of the symptoms associated with embolization, and it may be a source of confusion when emphysema and emboli coexist. Emphysema abnormality can be differentiated from that of embolization by a follow-up lung scan after 6-8 days. Defects due to emphysema persist unaltered, whereas those due to emboli tend to change configuration. The repeat study could be performed earlier, but with increased risk of insufficient time lapse to permit diagnostic changes.

    The lung scan, like the chest x-ray, is nonspecific; that is, a variety of conditions produce abnormality. Certain findings increase the probability of embolization and serial studies provide the best information. In some cases, a xenon isotope lung ventilation study may help differentiate emboli from other etiologies of perfusion defect; but when congestive heart failure is present, when the defect is small, and when embolization is superimposed on severe emphysema, the xenon study may not be reliable. Pulmonary artery angiography provides a more definitive answer than the lung scan, but it is a relatively complicated invasive procedure, entails some risk, and may miss small peripheral clots. The lung scan, therefore, is more useful than angiography as a screening procedure. A normal lung scan effectively rules out pulmonary embolization. A minimum of four lung scan views (anterior, posterior, and both lateral projections) is required to constitute an adequate perfusion lung scan study.

  • Toxicology

    This section includes a selected list of conditions that seem especially important in drug detection, overdose, or poisoning. Treatment of drug overdose by dialysis or other means can often be assisted with the objective information derived from drug levels. In some cases, drug screening of urine and serum may reveal additional drugs or substances, such as alcohol, which affect management or clinical response.

    Lead. Lead exposure in adults is most often due to occupational hazard (e.g., exposure to lead in manufacture or use of gasoline additives and in smelting) or to homemade “moonshine” whiskey distilled in lead-containing equipment. When children are severely affected, it is usually from eating old lead-containing paint chips. One group found some indications of chronic lead exposure in about one half of those persons examined who had lived for more than 5 years near a busy automobile expressway in a major city. Fertilization of crops with city sewage sludge is reported to increase the lead content of the crops. Several studies report that parental cigarette smoking is a risk factor for increased blood lead values in children. Living in houses built before 1960 is another risk factor because lead-based paint was used before it was banned. Renovating these houses may spread fragments or powder from the lead-containing paint. Living near factories manufacturing lead batteries is another risk factor.

    Symptoms. Acute lead poisoning is uncommon. Symptoms may include “lead colic” (crampy abdominal pain, constipation, occasional bloody diarrhea) and, in 50% of patients, hypertensive encephalopathy. Chronic poisoning is more common. Its varying symptoms may include lead colic, constipation with anorexia (85% of patients), and peripheral neuritis (wrist drop) in adults and lead encephalopathy (headache, convulsions) in children. A “lead line” is frequently present just below the epiphyses (in approximately 70% of patients with clinical symptoms and 20%-40% of persons with abnormal exposure but no symptoms).

    Hematologic findings. Most patients develop slight to moderate anemia, usually hypochromic but sometimes normochromic. Basophilic stippling of RBCs is the most characteristic peripheral blood finding. Some authors claim stippling is invariably present; others report that stippling is present in only 20%-30% of cases. Normal persons may have as many as 500 stippled cells/1 million RBCs. The reticulocyte count is usually greater than 4%.

    Delta-aminolevulinic acid dehydrase. Body intake of lead produces biochemical effects on heme synthesis (see Fig. 34-1). The level of delta-aminolevulinic acid dehydrase (ALA-D), which converts ALA to porphobilinogen, is decreased as early as the fourth day after exposure begins. Once the ALA-D level is reduced, persistence of abnormality correlates with the amount of lead in body tissues (body burden), so that the ALA-D level remains reduced as long as significant quantities of lead remain. Therefore, after chronic lead exposure, low ALA-D values may persist for years even though exposure has ceased. The level of ALA-D is also a very sensitive indicator of lead toxicity and is usually reduced to 50% or less of normal activity when blood lead values are in the 30-50 µg/100 ml (1.4-2.4 µmol/L) range. Unfortunately, the ALA-D level reaches a plateau when marked reduction takes place, so it cannot be used to quantitate degree of exposure. In addition, this enzyme must be assayed within 24 hours after the blood specimen is secured. Relatively few laboratories perform the test, although it has only a moderate degree of technical difficulty.

    Blood lead assay. Intake of lead ordinarily results in rapid urinary lead excretion. If excessive lead exposure continues, lead is stored in bone. If bone storage capacity is exceeded, lead accumulates in soft tissues. Blood lead levels depend on the relationship between intake, storage, and excretion. The blood lead level is primarily an indication of acute (current) exposure but is also influenced by previous storage. According to 1991 Centers for Disease Control (CDC) guidelines, whole blood lead values over 10 µg/100 ml (0.48 µmol/L) are considered abnormal in children less than 6 years old. Values higher than 25 µg/100 ml (1.21 µmol/L) are considered abnormal in children over age 6 years and in adolescents. Values more than 40 µg/100 ml (1.93 µmol/L) are generally considered abnormal in adults, although the cutoff point for children may also be valid for adults. Symptoms of lead poisoning are associated with levels higher than 80 µg/100 ml (3.86 µmol/L), although mild symptoms may occur at 50 µg/100 ml (2.41 µmol/L) in children. Blood lead assay takes considerable experience and dedication to perform accurately. Contamination is a major problem: in drawing the specimen, in sample tubes, in laboratory glassware, and in the assay procedure itself. Special Vacutainer-type tubes for trace metal determination are commercially available and are strongly recommended.
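    The age-based cutoffs quoted above lend themselves to a simple decision rule. The following sketch (Python; the function and parameter names are illustrative, not part of any standard) merely encodes the whole blood cutoffs cited in this paragraph and the approximate conversion of 1 µg/100 ml to about 0.048 µmol/L.

```python
def interpret_blood_lead(lead_ug_per_dl, age_years):
    """Classify a whole blood lead value (ug/100 ml) using the age-based
    cutoffs quoted in the text (1991 CDC-derived figures); illustrative only."""
    if age_years < 6:
        cutoff = 10    # children under 6 years
    elif age_years < 18:
        cutoff = 25    # older children and adolescents
    else:
        cutoff = 40    # adults (the pediatric cutoff may also be valid here)
    umol_per_l = lead_ug_per_dl * 0.0483   # approximate SI conversion
    status = "abnormal" if lead_ug_per_dl > cutoff else "within guideline limits"
    return status, round(umol_per_l, 2)

# Example: a 4-year-old with 14 ug/100 ml (about 0.68 umol/L) is flagged as abnormal.
print(interpret_blood_lead(14, 4))
```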

    Urine d-aminolevulinic acid (ALA) assay. Another procedure frequently used is urine ALA assay. Blood and urine ALA levels increase when the blood ALA-D level is considerably reduced. Therefore, ALA also becomes an indicator of body lead burden, and urine ALA begins to increase when blood lead values are higher than 40 µg/100 ml (1.93 µmol/L). Disadvantages of urine ALA assay are difficulties with 24-hour urine collection or, if random specimens are used, the effects of urine concentration or dilution on apparent ALA concentration. In addition, at least one investigator found that the urine ALA level was normal in a significant number of cases when the blood lead level was in the 40-80 µg/100 ml (1.93-3.86 µmol/L) (mildly to moderately abnormal) range. Light, room temperature, and alkaline pH all decrease ALA levels. If ALA determination is not done immediately, the specimen must be refrigerated and kept in the dark (the collection bottle wrapped in paper or foil) with the specimen acidified, using glacial acetic or tartaric acid.

    Detecting lead exposure. If a patient is subjected to continuous lead exposure of sufficient magnitude, blood lead level, urine lead excretion, ALA-D level, and urine ALA level all correlate well. If the exposure ceases before laboratory tests are made, blood lead level (and sometimes even urine lead level) may decrease relative to ALA-D or urine ALA. Assay of ALA-D is the most sensitive of these tests. In fact, certain patients whose urine ALA and blood lead levels are within normal limits may display a mild to moderate decrease in ALA-D levels. It remains to be determined whether this reflects previous toxicity in all cases or simply means that ALA-D levels between 50% and 100% of normal are too easily produced to mean truly abnormal lead exposure.

    Urine lead excretion has also been employed as an index of exposure, since blood lead values change more rapidly than urine lead excretion. However, excretion values depend on 24-hour urine specimens, with the usual difficulty in complete collection. A further problem is that excretion values may be normal in borderline cases or in cases of previous exposure. Urine lead has been measured after administration of a chelating agent such as ethylenediamine tetraacetic acid (EDTA), which mobilizes body stores of lead. This is a more satisfactory technique than ordinary urine excretion for determining body burden (i.e., previous exposure). Abnormal exposure is suggested when the 24-hour urine lead excretion is greater than 1 µg for each milligram of calcium-EDTA administered. Disadvantages are those of incomplete urine collection, difficulty in accurate lead measurement, and occasional cases of EDTA toxicity.
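    As a worked illustration of the chelation criterion just described, the sketch below (Python; hypothetical function and variable names) flags a mobilization test as suggestive of abnormal exposure when 24-hour urine lead exceeds 1 µg per milligram of calcium-EDTA administered.

```python
def edta_mobilization_suggestive(urine_lead_ug_24hr, edta_dose_mg):
    """Return True when 24-hour urine lead excretion exceeds 1 ug of lead
    per mg of calcium-EDTA given, the criterion quoted in the text."""
    return (urine_lead_ug_24hr / edta_dose_mg) > 1.0

# Example: 1,200 ug of urine lead after a 1,000 mg EDTA dose suggests abnormal exposure.
print(edta_mobilization_suggestive(1200, 1000))
```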

    Erythrocyte protoporphyrin (zinc protoporphyrin, or ZPP) is still another indicator of lead exposure. Lead inhibits ferrochelatase (heme synthetase), an enzyme that incorporates iron into protoporphyrin IX (erythrocyte protoporphyrin) to form heme. Decreased erythrocyte protoporphyrin conversion leads to increased erythrocyte protoporphyrin levels. The standard assay for erythrocyte protoporphyrin involved extraction of a mixture of porphyrins, including protoporphyrin IX, from blood, and measurement of protoporphyrin using fluorescent wavelengths. In normal persons, protoporphyrin IX is not complexed to metal ions. Under the conditions of measurement it was thought that the protoporphyrin being measured was metal free, since iron-complexed protoporphyrin did not fluoresce, so that what was being measured was called “free erythrocyte protoporphyrin.” However, in lead poisoning, protoporphyrin IX becomes complexed to zinc; hence, the term ZPP. The protoporphyrin-zinc complex will fluoresce although the protoporphyrin-iron complex will not. Therefore, most laboratory analytic techniques for ZPP involve fluorescent methods. In fact, some have used visual RBC fluorescence (in a heparinized wet preparation using a microscope equipped with ultraviolet light) as a rapid screening test for lead poisoning. Zinc protoporphyrin levels are elevated in about 50%-75% of those who have a subclinical increase in blood lead levels (40-60 µg/100 ml) and are almost always elevated in symptomatic lead poisoning. However, the method is not sensitive enough for childhood lead screening (10 µg/100 ml or 0.48 µmol/L). An instrument called the “hematofluorometer” is available from several manufacturers and can analyze a single drop of whole blood for ZPP. The reading is affected by the number of RBCs present and must be corrected for hematocrit level. The ZPP test results are abnormal in chronic iron deficiency and hemolytic anemia as well as in lead poisoning. The ZPP level is also elevated in erythropoietic protoporphyria (a rare congenital porphyria variant) and in chronic febrile illness. An increased serum bilirubin level falsely increases ZPP readings, and fluorescing drugs or other substances in plasma may interfere.

    Urinary coproporphyrin III excretion is usually, although not invariably, increased in clinically evident lead poisoning. Since this compound fluoresces under Wood’s light, simple screening tests based on fluorescence of coproporphyrin III in urine specimens under ultraviolet light have been devised.

    Diagnosis of lead poisoning. The question arises as to which test should be used to detect or diagnose lead poisoning. The ALA-D assay is the most sensitive current test, and ALA-D levels may be abnormal (decreased) when all other test results are still normal. Disadvantages are the long-term persistence of abnormality once it is established, which may represent past instead of recent exposure. The specimen is unstable, and few laboratories perform the test. Zinc protoporphyrin is sensitive for lead poisoning and detects 50%-70% of cases of subclinical lead exposures in adults but is not sensitive enough to detect mandated levels of subclinical exposure in young children. There would be a problem in differentiating acute from chronic exposure because of the irreversible change induced in the RBCs, which remains throughout the 120-day life span of the RBCs. Thus, ZPP represents biologic effects of lead averaged over 3-4 months’ time. Also, the test is not specific for lead exposure. Blood lead assay is considered the best diagnostic test for actual lead poisoning. Blood lead indicates either acute or current exposure; levels in single short exposures rise and fall fairly quickly. However, small elevations (in the 40-60 µg/100 ml range), especially in single determinations, may be difficult to interpret because of laboratory variation in the assay. Some investigators recommend assay of blood lead together with ZPP, since elevation of ZPP values would suggest that exposure to lead must have been more than a few days’ duration.

    Heavy metals. Mercury, arsenic, bismuth, and antimony are included. Urine samples are preferred to blood samples. Hair and nails are useful for detection or documentation of long-term exposure to arsenic or mercury.

    Organic phosphates (cholinesterase inhibitors). Certain insecticides such as parathion and the less powerful malathion are inhibitors of the enzyme acetylcholinesterase. Acetylcholinesterase inactivates excess acetylcholine at nerve endings. Inhibition or inactivation of acetylcholinesterase permits excess acetylcholine to accumulate at nerve-muscle junctions. Symptoms include muscle twitching, cramps, and weakness; parasympathetic effects such as pinpoint pupils, nausea, sweating, diarrhea, and salivation; and various CNS aberrations. Organic phosphate poisons inactivate not only acetylcholinesterase (which is found in RBCs as well as at nerve endings) but also pseudocholinesterase, which is found in plasma. Therefore, laboratory diagnosis of organophosphate poisoning is based on finding decreased acetylcholinesterase levels in RBCs or decreased pseudocholinesterase levels in serum (these two cholinesterase types are frequently referred to simply as “cholinesterase”). Levels in RBCs reflect chronic poisoning more accurately than serum values, since RBC levels take longer to decrease than serum pseudocholinesterase and take longer to return to normal after exposure. Also, serum levels are reduced by many conditions and drugs. However, plasma measurement is much easier, so screening tests are generally based on plasma measurement. In acute poisoning, RBC or serum cholinesterase activity is less than 50% of normal. In most cases, a normal result rules out severe acute anticholinesterase toxicity. However, the population reference range is fairly wide, so a person with a preexposure value in the upper end of the population range might have his or her value decreased 50% and still be within the population reference range. Therefore, low-normal values do not exclude the possibility of organophosphate toxicity. It is strongly recommended that persons who may be occupationally exposed to organophosphates have their baseline serum cholinesterase (pseudocholinesterase) value established. Once this is done, periodic monitoring can be used to detect subclinical toxicity. It may take up to 6 weeks for serum pseudocholinesterase to return to normal after the end of exposure. Severe acute or chronic liver disease or pregnancy can decrease cholinesterase levels.
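    Because the population reference range is wide, comparison against the worker’s own baseline is more informative than comparison against population limits. The minimal sketch below (Python; hypothetical names; the 50% figure is the one quoted above) simply expresses a current result as a percentage of the individual baseline.

```python
def percent_of_baseline(current_activity, baseline_activity):
    """Express current serum pseudocholinesterase activity as a percentage
    of the same person's pre-exposure baseline (units cancel out)."""
    return 100.0 * current_activity / baseline_activity

# Example: a fall from 9.0 to 4.0 U/ml is about 44% of baseline, below the
# roughly 50% level the text associates with acute poisoning, even though
# 4.0 might still lie inside a wide population reference range.
print(round(percent_of_baseline(4.0, 9.0)))
```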

    Barbiturates and glutethimide. Barbiturates and glutethimide (Doriden) are the most common vehicles of drug-overdose suicide. In testing, either anticoagulated (not heparinized) whole blood or urine can be used; blood is preferred. TLC is used both for screening and to identify the individual substance involved. Chemical screening tests are also available. It is preferable to secure both blood and urine specimens plus gastric contents, if available. Many of the larger laboratories can perform quantitative assay of serum phenobarbital.

    Phenothiazine tranquilizers. Urine can be screened with the dipstick Phenistix or the ferric chloride procedure. TLC, GC, and other techniques are available for detection and quantitation.

    Acetaminophen. Acetaminophen (Paracetamol; many different brand names) has been replacing aspirin for headache and minor pain because of the gastric irritant and anticoagulant side effects of aspirin. With greater use of acetaminophen have come occasional cases of overdose. Acetaminophen is rapidly absorbed from the small intestine. Peak serum concentration is reached in 0.5-2 hours, and the serum half-life is 1-4 hours. About 15%-50% is bound to serum albumin. Acetaminophen is 80%-90% metabolized by the liver microsome pathway, 4%-14% is excreted unchanged by the kidneys, and a small amount is degraded by other mechanisms.

    Liver toxicity. The usual adult dose is 0.5 gm every 3-4 hours. In adults, liver toxicity is unlikely to occur if the ingested dose is less than 10 gm at one time, and death is unlikely if less than 15 gm is ingested. However, 10 gm or more at one time may produce liver damage, and 25 gm can be fatal. Children under age 5 years are less likely to develop liver injury. The toxic symptoms of overdose usually subside within 24 hours after the overdose, even in persons who subsequently develop liver injury. Liver function test results are typical of acute hepatocellular injury, with AST (SGOT) levels similar to those of acute viral hepatitis. The peak of AST elevation most often occurs 4-6 days after onset. The liver recovers completely in about 3 months if the patient survives.

    Laboratory evaluation. Serum acetaminophen levels are helpful in estimating the likelihood of hepatic damage and are used as a guide to continuing or discontinuing therapy. Peak acetaminophen levels provide the best correlation with toxicity. Current recommendations are that the assay specimen be drawn 4 hours after ingestion of the dose (not earlier) to be certain that the peak has been reached. A serum level greater than 200 µg/ml (1,320 µmol/L) at 4 hours is considered potentially toxic, and a level less than 150 µg/ml (990 µmol/L) is considered nontoxic. The assay should be repeated 12 hours after ingestion; at that time a value greater than 50 µg/ml (330 µmol/L) is considered toxic, and a value less than 35 µg/ml (230 µmol/L) is considered nontoxic. Colorimetric assay methods are available in kit form that are technically simple and reasonably accurate. However, inexperienced persons can obtain misleading results, so the amount of drug ingested and other factors should also be considered before therapy is terminated. Salicylates, ketones, and ascorbic acid (vitamin C) in high concentration interfere with some assay methods. Either AST or alanine aminotransferase levels should be determined daily for at least 4 days as a further check on liver function.
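    The two recommended sampling times and their cutoffs can be expressed as a simple lookup, as in the sketch below (Python; names and structure are illustrative only, and the cutoffs are simply those quoted above, not a full treatment nomogram).

```python
# Cutoffs in ug/ml quoted in the text for the two recommended sampling times.
# Each entry is (potentially_toxic_above, nontoxic_below).
CUTOFFS = {4: (200, 150), 12: (50, 35)}

def acetaminophen_risk(level_ug_per_ml, hours_post_ingestion):
    """Classify a serum acetaminophen level drawn at 4 or 12 hours
    after ingestion, using only the thresholds cited in the text."""
    toxic_above, nontoxic_below = CUTOFFS[hours_post_ingestion]
    if level_ug_per_ml > toxic_above:
        return "potentially toxic"
    if level_ug_per_ml < nontoxic_below:
        return "nontoxic"
    return "indeterminate: interpret with clinical information"

print(acetaminophen_risk(220, 4))   # potentially toxic
print(acetaminophen_risk(30, 12))   # nontoxic
```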

    Acetylsalicylic acid (aspirin). Absorption of aspirin takes place in the stomach and small intestine. Absorption is influenced by the rate of tablet dissolution and by pH (acid pH assists absorption and alkaline pH retards it). Peak plasma concentration from ordinary aspirin doses is reached in about 2 hours (1-4 hours), with the peak from enteric-coated aspirin about 4-6 hours later. In some cases of overdose, serum values from ordinary aspirin may take several hours to reach their maximum level because of pylorospasm. Aspirin is rapidly metabolized to salicylic acid in GI tract mucosa and liver; it is further metabolized by the liver to metabolically inactive salicyluric acid. Of the original dose, 10%-80% is excreted by the kidneys as salicylate, 5%-15% as salicylic acid, and 15%-40% as salicyluric acid. The half-life of salicylate or its active metabolites in serum at usual drug doses is 2-4.5 hours. The half-life is dose dependent, since the degradation pathways can be saturated. At high doses the half-life may be 15-30 hours. Also, steady-state serum concentration is not linear with respect to dose; relatively small increments in dose can produce disproportionately large increases in serum concentration.

    Laboratory tests. Mild toxicity (tinnitus, visual disturbances, GI tract disturbances) correlates with serum salicylate levels greater than 30 mg/100 ml (300 µg/ml), and severe toxicity (CNS symptoms) is associated with levels greater than 50 mg/100 ml (500 µg/ml). In younger children, severe toxicity is often associated with ketosis and metabolic acidosis, whereas in older children and adults, respiratory alkalosis or mixed acidosis-alkalosis is more frequent. Peak serum salicylate values correlate best with toxicity. It is recommended that these be drawn at least 6 hours after the overdose to avoid serum values falsely below peak levels due to delayed absorption. Enteric-coated aspirin delays absorption an additional 4-6 hours. Screening tests for salicylates include urine testing with a ferric chloride reagent or Phenistix (both of these tests are also used in the diagnosis of phenylketonuria). The most commonly used quantitative test is a colorimetric procedure based on ferric chloride. Ketone bodies and phenothiazine tranquilizers can interfere.
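    A minimal sketch of these thresholds follows (Python; hypothetical names). It grades a serum salicylate result using only the two cutoffs quoted above and notes the simple unit relationship 1 mg/100 ml = 10 µg/ml.

```python
def salicylate_severity(level_mg_per_dl):
    """Grade a serum salicylate value (mg/100 ml) with the two thresholds
    cited in the text: >30 for mild toxicity, >50 for severe toxicity."""
    if level_mg_per_dl > 50:
        return "severe toxicity range (CNS symptoms)"
    if level_mg_per_dl > 30:
        return "mild toxicity range (tinnitus, visual and GI disturbances)"
    return "below the quoted toxic range"

# 1 mg/100 ml equals 10 ug/ml, so 42 mg/100 ml is the same level as 420 ug/ml.
print(salicylate_severity(42))
```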

    Carbon monoxide. Carbon monoxide combines with hemoglobin to form carboxyhemoglobin. In doing so it occupies oxygen-binding sites and also alters the hemoglobin molecule so that the remaining oxygen is bound more tightly, leaving less available for tissue cell respiration. Headache, fatigue, and lightheadedness are the most frequent symptoms.

    Laboratory diagnosis. Carbon monoxide poisoning is detected by hemoglobin analysis for carboxyhemoglobin. This is most readily done on an instrument called a CO-Oximeter. A 30%-40% carboxyhemoglobin content is associated with severe symptoms, and more than 50% is associated with coma. Cigarette smoking may produce levels as high as 10%-15%. Carboxyhemoglobin is stable for more than 1 week at room temperature in EDTA anticoagulant. The specimen should be drawn as soon as possible after exposure, since carbon monoxide is rapidly cleared from hemoglobin by breathing normal air.
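    The figures just quoted can be summarized in a rough interpretive sketch (Python; illustrative names only, and the cutoffs are solely the ones given above).

```python
def interpret_carboxyhemoglobin(percent_cohb, smoker=False):
    """Rough interpretation of a carboxyhemoglobin result (% of total
    hemoglobin) using the figures quoted in the text."""
    if percent_cohb > 50:
        return "level associated with coma"
    if percent_cohb >= 30:
        return "level associated with severe symptoms"
    if smoker and percent_cohb <= 15:
        return "could be explained by heavy cigarette smoking alone"
    return "below the quoted severe-toxicity range"

print(interpret_carboxyhemoglobin(35))        # severe symptoms expected
print(interpret_carboxyhemoglobin(12, True))  # compatible with smoking alone
```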

    Carbon monoxide poisoning can be suspected from an arterial blood gas specimen when a measured oxygen saturation (percent O2 saturation of hemoglobin) value is found to be significantly below what would be expected if oxygen saturation were calculated from the PO2 and pH values. For this screening procedure to be valid, the O2 saturation must be measured directly, not calculated. Some blood gas machines measure O2 saturation, but the majority calculate it from the PO2 and pH values.

    Ethyl alcohol (ethanol). Ethanol is absorbed from the small intestine and, to a lesser extent, from the stomach. Factors that influence absorption are (1) whether food is also ingested, since food delays absorption, and if so, the amount and kind of food; (2) the rate of gastric emptying; and (3) the type of alcoholic beverage ingested. Without food, the absorptive phase (time period during which alcohol is being absorbed until the peak blood value is reached) may be as short as 15 minutes or as long as 2 hours. In one study, peak values occurred at 30 minutes after ingestion in nearly 50% of experimental subjects and in about 75% by 1 hour, but 6% peaked as late as 2 hours. With food, absorption is delayed to varying degrees. Once absorbed, ethanol rapidly equilibrates throughout most body tissues. The liver metabolizes about 75% of absorbed ethanol. The predominant liver cell metabolic pathway of ethanol is the alcohol dehydrogenase enzyme system, whose product is acetaldehyde. Acetaldehyde, in turn, is metabolized by the hepatic microsome system. About 10%-15% of absorbed ethanol is excreted unchanged through the kidneys and through the lungs.

    Ethanol measurement. There are several methods for patient alcohol measurement. The legal system generally recognizes whole blood as the gold standard specimen. Arterial blood ethanol is somewhat higher than venous blood levels, especially in the active absorption phase. Capillary blood (fingerstick or ear lobe blood) is about 70%-85% of the arterial concentration. The major problems with whole blood are that values are influenced by the hematocrit and that most current chemistry analyzers must use serum. A serum value is about 18%-20% higher than a whole blood value obtained on the same specimen, whereas the blood levels that the law correlates with degrees of physical and mental impairment are defined in terms of whole blood assay. Serum values theoretically can be converted to equivalent whole blood values by means of a serum/whole blood (S/WB) conversion ratio. Most laboratories apparently use an S/WB conversion ratio of 1.20. Unfortunately, there is significant disagreement in the literature on which ratio to use; different investigators report S/WB ratios varying between 1.03 and 1.35. Based on the work of Rainey (1993), the median S/WB conversion ratio is 1.15 (rather than 1.20), the range of ratios included in 95% certainty is 0.95-1.40, and the range of ratios included in 99% certainty is 0.90-1.49. Whole blood values can be obtained directly by using serum analytic methods on a protein-free filtrate from a whole blood specimen.
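    To make the S/WB conversion concrete, the sketch below (Python; illustrative names) applies the median Rainey ratio and also shows how wide the single-person uncertainty becomes when the 95% range of ratios is used.

```python
# Serum/whole blood (S/WB) ratios quoted in the text, after Rainey (1993).
MEDIAN_RATIO = 1.15
RATIO_95_RANGE = (0.95, 1.40)   # ratios covering 95% of individuals

def serum_to_whole_blood(serum_mg_per_dl, ratio=MEDIAN_RATIO):
    """Convert a serum ethanol value (mg/100 ml) to an equivalent
    whole blood value by dividing by the S/WB ratio."""
    return serum_mg_per_dl / ratio

# A serum ethanol of 115 mg/100 ml corresponds to about 100 mg/100 ml whole
# blood at the median ratio, but could represent roughly 82-121 mg/100 ml
# when the full 95% range of ratios is considered.
print(round(serum_to_whole_blood(115)))
print(round(serum_to_whole_blood(115, RATIO_95_RANGE[1])),
      round(serum_to_whole_blood(115, RATIO_95_RANGE[0])))
```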

    Enzymatic methods using alcohol dehydrogenase or alcohol oxidase are replacing the classic potassium dichromate methods. There is some dispute in the literature about whether alcohol dehydrogenase methods are affected by isopropanol (commonly used for venipuncture skin cleansing). In experiments performed in my laboratory, no cross-reaction was found at concentrations much higher than those that should be encountered from skin cleansing. Nevertheless, because of legal considerations, specimens for ethanol should be drawn without using any type of alcohol as a skin-cleansing agent. Increased blood ketones, as found in diabetic ketoacidosis, can falsely elevate either blood or breath alcohol test results.

    Urine is not recommended for analysis to estimate degree of alcohol effect because the blood/urine ratio is highly variable and there may be stasis of the specimen in the bladder. However, urine can be used to screen for the presence of alcohol. Breath analyzers are the assay method most commonly used for police work since the measurement can be done wherever or whenever it is desirable. Breath analyzers measure the ethanol content at the end of expiration following a deep inspiration. The measurement is then correlated to whole blood by multiplying the measured breath ethanol level by the factor 2,100. On the average, breath alcohol concentration correlates reasonably well with whole blood alcohol concentration using this factor. However, there is significant variation between correlation factors reported in different individuals and average factors in different groups, so that use of any single “universal” factor will underestimate the blood ethanol concentration in some persons and overestimate it in others. Also, correlation with blood ethanol levels is better when breath ethanol is measured in the postabsorptive state than in the absorptive state. When breath analyzers are used, it is important that there be a period of at least 15 minutes before testing during which no alcohol ingestion, smoking, food or drink consumption, or vomiting has taken place to avoid contamination of the breath specimen by alcohol in the mouth. Some alcohol-containing mouthwashes may produce legally significant breath alcohol levels at 2 minutes after applying the mouthwash, but not at 10 minutes after use. Ketone bodies in patients with diabetic acidosis may interfere with breath ethanol measurement. One further advantage of breath testing in the field is the usefulness of a negative test for ethanol in a person whose behavior suggests effects of alcohol; this result could mean a serious acute medical problem that needs immediate attention.
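    The breath-to-blood relationship described above is a simple multiplication, sketched below (Python; hypothetical names). Only the 2,100:1 factor quoted in the text is used; as noted, any single factor will misestimate the true blood level in some individuals.

```python
BREATH_TO_BLOOD_FACTOR = 2100   # factor quoted in the text

def breath_to_whole_blood(breath_ethanol_g_per_ml):
    """Estimate whole blood ethanol (g/ml) from an end-expiratory breath
    ethanol concentration (g/ml of breath) using the 2,100:1 factor."""
    return breath_ethanol_g_per_ml * BREATH_TO_BLOOD_FACTOR

# Example: a breath concentration of 4.76e-7 g/ml corresponds to roughly
# 0.001 g/ml of blood, i.e., about 100 mg/100 ml (0.10%) whole blood ethanol.
blood_g_per_ml = breath_to_whole_blood(4.76e-7)
print(round(blood_g_per_ml * 1e5), "mg/100 ml")
```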

    Legal use of blood alcohol assay. Most courts of law follow the recommendations of the National Safety Council on alcohol and drugs (the following ethanol values are whole blood values):

    • Below 0.05% (50 mg/100 ml): No influence by alcohol within the meaning of the law.
    • Between 0.05% and 0.10% (50-100 mg/100 ml): A liberal, wide zone in which alcohol influence usually is present, but courts of law are advised to consider the person’s behavior and circumstances leading to the arrest in making their decision.
    • Above 0.10% (100 mg/100 ml): Definite evidence of being “under the influence,” since most persons with this concentration will have lost, to a measurable extent, some of the clearness of intellect and self-control they would normally possess.

    Based on the work of Rainey, the minimal serum alcohol level that would correspond to a whole blood alcohol level of 0.10% (100 mg/100 ml, w/v) with 95% certainty is 140 mg/100 ml (30.4 mmol/L) and at 99% certainty is 149 mg/100 ml (32.3 mmol/L).
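    The three zones listed above, together with the Rainey serum equivalents, can be captured in a short sketch (Python; illustrative names; values in mg/100 ml whole blood, as in the list).

```python
def nsc_zone(whole_blood_ethanol_mg_per_dl):
    """Place a whole blood ethanol value (mg/100 ml) into the three
    National Safety Council zones listed in the text."""
    if whole_blood_ethanol_mg_per_dl < 50:
        return "no influence within the meaning of the law"
    if whole_blood_ethanol_mg_per_dl <= 100:
        return "alcohol influence usually present; weigh behavior and circumstances"
    return "definite evidence of being under the influence"

# Per the Rainey figures above, a serum level of at least 140 mg/100 ml is
# needed to assert, with 95% certainty, that whole blood exceeds 100 mg/100 ml.
print(nsc_zone(105))
```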

    Some organizations, including the American Medical Association (AMA) Council on Scientific Affairs (1986), suggest adopting 0.05% blood alcohol content as per se evidence of alcohol-impaired driving.

    Estimating previous blood alcohol levels. In certain situations it would be desirable to estimate the blood alcohol level at some previous time from the results of a later measurement. The usual method for this is the Widmark equation: P = A + (F × T), where P is the concentration of blood alcohol (in milligrams per liter) at the previous time, A is the measured concentration of blood alcohol (in milligrams per liter), F is a factor (or constant) whose value is 130 (in milligrams per liter per hour), and T is the time (in hours) elapsed between the previous time of interest and the time the blood alcohol was actually measured.

    There is considerable controversy regarding the usefulness of the Widmark equation. The equation is valid for a person only in the postabsorptive state (i.e., after the peak blood alcohol level is reached). Time to peak is most often considered to be 0.5-2.0 hours, so that the blood specimen must be drawn no earlier than 2 hours after the beginning of alcohol intake. The Widmark equation is based on kinetics of alcohol taken during fasting. Food increases alcohol elimination, so that food would cause the Widmark equation to overestimate the previous alcohol level. The factor (constant) of 130 is not necessarily applicable to any individual person, since a range of experimentally measured individual values from 100-340 has been reported.
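    A minimal sketch of the back-extrapolation follows (Python; hypothetical names), assuming the post-absorptive state and the constant factor of 130 mg/L per hour discussed above; as noted, that factor varies widely among individuals.

```python
def widmark_back_extrapolate(measured_mg_per_l, hours_elapsed, factor=130):
    """Estimate an earlier blood alcohol concentration (mg/L) from a later
    measurement, using the Widmark relation P = A + (F x T)."""
    return measured_mg_per_l + factor * hours_elapsed

# Example: 800 mg/L (0.080%) measured 2 hours after the time of interest
# back-extrapolates to about 1,060 mg/L (0.106%) with the factor of 130,
# but reported individual factors range from roughly 100 to 340.
print(widmark_back_extrapolate(800, 2))
```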

    Clinical and laboratory effects of alcohol. Alcohol has a considerable number of metabolic and toxic effects that may directly or indirectly involve the clinical laboratory. Liver manifestations include gamma-glutamyltransferase (GGT; formerly gamma-glutamyl transpeptidase) elevation, fatty liver, acute alcoholic hepatitis (“active cirrhosis”), or Laennec’s cirrhosis, and may lead indirectly to bleeding from esophageal varices or to cytopenia, either from hypersplenism or through other mechanisms. RBC macrocytosis is frequently associated with chronic alcoholism, and nutritional anemia such as that due to folic acid deficiency may be present. Other frequently associated conditions include acute pancreatitis, hypertriglyceridemia, alcoholic gastritis, alcoholic hypoglycemia, various neurologic abnormalities, and subdural hematoma. The chronic alcoholic is more susceptible to infection. Finally, alcohol interacts with a variety of medications. It potentiates many of the CNS depressants, such as various sedatives, narcotics, hypnotics, and tranquilizers (especially chlordiazepoxide and diazepam). Alcohol is a factor in many cases of overdose, even when the patient has no history of alcohol intake or denies intake. The presence of alcohol should be suspected when toxicity symptoms from barbiturates or other medications are associated with blood levels that normally would be considered safe. Alcohol may antagonize the action of various other medications, such as coumarin and phenytoin. Alcohol intake in pregnancy has been reported to produce increased rates of stillbirth and infant growth deficiency as well as a specific “fetal alcohol syndrome.” Fetal alcohol syndrome includes a particular pattern of facial appearance, postnatal growth deficiency with normal bone age, various skeletal and organ malformations, and various neurologic abnormalities (including average IQ below normal).

    Ethanol is one of a number of substances (other alcohols, lactic acid, etc.) that elevate serum osmolality measured by freezing point depression methods. This creates a gap between measured osmolality and calculated osmolality. Osmolality measured with vapor pressure instruments is not affected by ethanol.
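    The calculated osmolality referred to here is usually derived from sodium, glucose, and BUN. The sketch below (Python; hypothetical names) uses the commonly cited approximation for calculated osmolality, which is an assumption on my part and not a formula given in this section, to illustrate how an osmolal gap is obtained.

```python
def osmolal_gap(measured_mosm_per_kg, sodium_mmol_per_l, glucose_mg_per_dl, bun_mg_per_dl):
    """Measured minus calculated osmolality, using the commonly cited
    approximation 2 x Na + glucose/18 + BUN/2.8 (an assumption here,
    not a formula given in this text)."""
    calculated = 2 * sodium_mmol_per_l + glucose_mg_per_dl / 18 + bun_mg_per_dl / 2.8
    return measured_mosm_per_kg - calculated

# Invented example: measured 310 mOsm/kg with Na 140, glucose 90, BUN 14
# gives a gap of about 20, roughly what 100 mg/100 ml of ethanol would add.
print(round(osmolal_gap(310, 140, 90, 14)))
```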

    Laboratory screening for alcoholism. Various tests have been used to screen for chronic alcoholism. Of these, the most commonly advocated is the GGT. This test and the AST reflect the effect of alcohol on the liver. A third possibility is the mean corpuscular volume (MCV), the average size of the patient’s RBCs, which reflects macrocytosis induced by liver disease and possibly also by folic acid deficiency in alcoholics. In heavy drinkers or alcoholics, GGT has a sensitivity of about 70% (literature range, 63%-81%), MCV detects about 60% (26%-90%), and the AST value is elevated in about 50% (27%-77%). Most (but not all) reports indicate some correlation in likelihood and degree of GGT elevation with the amount and frequency of alcohol consumption. Thus, heavy drinkers are more likely to have GGT elevations, and these elevations are (on the average) higher than those of less heavy drinkers. However, there are many exceptions. There is some disagreement as to whether so-called social drinkers have a significant incidence of elevated GGT levels; the majority of investigators seem to believe that they do not.

    Other biochemical abnormalities associated with alcoholism (but found in <40% of cases) include hypophosphatemia, hypomagnesemia, hyponatremia, hypertriglyceridemia, and hyperuricemia.

    An isoform of transferrin that contains fewer sialic acid molecules than normal transferrin, and thus is called carbohydrate-deficient transferrin, has been advocated by some researchers as a marker of chronic alcohol abuse. A few studies claim that in alcoholism its levels become elevated more often than those of GGT and that it is a more specific indicator of alcohol abuse. One study claimed 55% sensitivity in detecting moderate alcohol intake and nearly 100% sensitivity in heavy chronic drinkers. A commercial assay kit is available. However, at present the test would most likely have to be performed in a large reference laboratory.

    Tests for Tobacco Use. In some instances, such as in smoking-cessation clinics, life insurance company examinations, and tests to determine degree of passive exposure to tobacco smoke, it is desirable to detect and quantitate tobacco exposure. The modalities that have been investigated are carboxyhemoglobin (based on effect of carbon monoxide generated by tobacco combustion), thiocyanate (a metabolite of cyanide derived from tobacco tar), and cotinine (a metabolite of nicotine). Most current tests are based on thiocyanate or cotinine. Thiocyanate is absorbed in the lungs and has a biological half-life of about 14 days. Ingestion of certain vegetables can falsely elevate serum thiocyanate levels. Cotinine is specific for nicotine, is not affected by diet, has a serum within-day variation of about 15%-20%, and has a biological half-life of about 19 hours. Thus, cotinine tests become negative after tobacco abstinence of a week or less, whereas thiocyanate requires considerably longer before becoming nondetectable. Also, thiocyanate can be assayed chemically and less expensively than cotinine, which is done by immunoassay. Nevertheless, because cotinine is specific for nicotine and is affected only by active or passive exposure to tobacco, cotinine seems to be favored by investigators. Cotinine can be assayed in serum, saliva, or urine; the levels are higher in urine.