Tag: tests

  • Tests for Allergy

    The atopic diseases were originally defined as sensitization based on hereditary predisposition (thus differentiating affected persons from nonaffected persons exposed to the same commonly found antigens) and characterized by immediate urticarial skin reaction to offending antigen and by the Prausnitz-Küstner reaction. Prausnitz and Küstner demonstrated in 1921 that serum from a sensitized person, when injected into the skin of a nonsensitized person, would produce a cutaneous reaction on challenge with appropriate antigen (cutaneous passive transfer). The serum factor responsible was known as reagin (skin-sensitizing antibody). In 1966, reagin was found to be IgE, which has subsequently been shown to trigger immediate local hypersensitivity reactions by causing release of histamine and other vasoactive substances from mast cells, which, in turn, produce local anaphylaxis in skin or mucous membranes. The IgE system thus mediates atopic dermatitis, allergic rhinitis, and many cases of asthma. In patients with rhinitis, nasal itching is the most suggestive symptom of IgE-associated allergy. Allergens may come from the environment (pollens, foods, allergenic dust, molds), certain chronic infections (fungi, parasites), medications (penicillin), or industrial sources (cosmetics, chemicals). Sometimes there is a strong hereditary component; sometimes none is discernible. Discovery that IgE is the key substance in these reactions has led to measurement of serum IgE levels as a test for the presence of atopic allergy sensitization.

    Total immunoglobulin E levels

    Serum total IgE levels are currently measured by some type of immunoassay technique. The most common method is a paper-based radioimmunosorbent test procedure. Values are age dependent until adulthood. Considerably elevated values are characteristically found in persons with allergic disorders, such as atopic dermatitis and allergic asthma, and also in certain parasitic infections and Aspergillus-associated asthma. Values above reference range, however, may be found in some clinically nonallergic persons and therefore are not specific for allergy. On the other hand, many patients with allergy have total IgE levels within normal population range. It has been reported, however, that total IgE values less than 20 international units/ml suggest a small probability of detectable specific IgE. Besides IgE, there is some evidence that IgG4 antibodies may have some role in atopic disorders.

    Specific immunoglobulin E levels

    Specific serum IgE (IgE directed against specific antigens) can be measured rather than total IgE. This is being employed to investigate the etiology of asthma and atopic dermatitis. The current system is called the radioallergosorbent test (RAST). Specific antigen is bound to a carrier substance and allowed to react with specific IgE antibody. The amount of IgE antibody bound is estimated by adding radioactive anti-IgE antibody and quantitating the amount of labeled anti-IgE attached to the IgE-antigen complex. The type of antigen, the degree and duration of stimulation, and current exposure to antigen all influence IgE levels to any particular antigen at any point in time. Studies thus far indicate that RAST has an 80%-85% correlation with results of skin testing using the subcutaneous injection method (range, 35%-100%, depending on the investigator and the antigen used). It seems a little less sensitive than the intradermal skin test method, but some claim that it predicts the results of therapy better (in other words, it is possibly more specific). Since only a limited number of antigens are available for use in the RAST system, each antigen to be tested for must be listed by the physician. Some advise obtaining a serum total IgE assay in addition to RAST; if results of the RAST panel are negative and the serum IgE level is high, this raises the question of allergy to antigens not included in the RAST panel. Total serum IgE values can be normal, however, even if the findings of one or more antigens on the RAST panel are positive. There is some cross-reaction between certain antigens in the RAST system. The RAST profile is more expensive than skin testing with the same antigens. However, the skin test is uncomfortable, and in a few hyperallergic patients it may even produce anaphylactic shock. Modifications of the RAST technique that are simpler and easier to perform are being introduced, and a dipstick method with a limited number of selected antigens is now commercially available.

    Eosinophilia

    Peripheral blood eosinophilia is frequently present in persons with active allergic disorders, although a rather large minority of these patients do not display abnormal skin tests. Correlation is said to be better in persons less than 50 years old. Unfortunately, there are many possible causes for peripheral blood eosinophilia (see Chapter 6), which makes interpretation more difficult. Presence of more than an occasional eosinophil in sputum suggests an allergic pulmonary condition.

    In some patients with nasopharyngeal symptoms, a nasal smear for eosinophils may be helpful. The specimen can be collected with a calcium alginate swab and thin smears prepared on glass slides, which are air-dried and stained (preferably) with Hansel’s stain or Wright’s stain. If more than a few eosinophils are present but no neutrophils, this suggests allergy without infection. If neutrophils outnumber eosinophils, this is considered nondiagnostic (neither confirming nor excluding allergy).

  • Selected Tests of Interest in Pediatrics

    Neonatal immunoglobulin levels. Maternal IgG can cross the placenta, but IgA and IgM cannot. Chronic infections involving the fetus, such as congenital syphilis, toxoplasmosis, rubella, and cytomegalic inclusion disease, induce IgM production by the fetus. Increased IgM levels in cord blood at birth or in neonatal blood during the first few days of life suggest chronic intrauterine infection. Infection near term or subsequent to birth results in an IgM increase beginning 6-7 days postpartum. Unfortunately, there are pitfalls when such data are interpreted. Many cord blood samples become contaminated with maternal blood, thus falsely raising IgM values. Normal values are controversial; 20 mg/dl is the most widely accepted upper limit. Various techniques have different reliabilities and sensitivities. Finally, some investigators state that fewer than 40% of rubella or cytomegalovirus infections during pregnancy produce elevated IgM levels before birth.

    Agammaglobulinemia. This condition may lead to frequent infections. Electrophoresis displays decreased gamma-globulin levels, which can be confirmed by quantitative measurement of IgG, IgA, and IgM. There are several methods available to quantitatively measure IgG, IgA, and IgM such as radial immunodiffusion, immunonephelometry, and immunoassay. Immunoelectrophoresis provides only semiquantitative estimations of the immunoglobulins and should not be requested if quantitative values for IgG, IgA, or IgM are desired.

    Nitroblue tetrazolium test. Chronic granulomatous disease of childhood is a rare hereditary disorder of the white blood cells (WBCs) that is manifested by repeated infections and that ends in death before puberty. Inheritance is sex-linked in 50% of cases and autosomal recessive in about 50%. Polymorphonuclear leukocytes are able to attack high-virulence organisms, such as streptococci and pneumococci, which do not produce the enzyme catalase, but are unable to destroy staphylococci and certain organisms of lower virulence such as the gram-negative rods, which are catalase producers. Normal blood granulocytes are able to phagocytize yellow nitroblue tetrazolium (NBT) dye particles and then precipitate and convert (reduce) this substance to a dark blue. The test is reported as the percentage of granulocytes containing blue dye particles. Monocytes also ingest NBT, but they are not counted when performing the test. Granulocytes from patients with chronic granulomatous disease are able to phagocytize but not convert the dye particles, so that the NBT result will be very low or zero, and the NBT test is used to screen for this disorder. In addition, because neutrophils increase their phagocytic activity during acute bacterial infection, the nitroblue tetrazolium test has been used to separate persons with bacterial infection from persons with leukocytosis of other etiologies. In general, acute bacterial infection increases the NBT count, whereas viral or tuberculous infections do not. It has also been advocated as a screening test for infection when the WBC count is normal and as a means to differentiate bacterial and viral infection in febrile patients. Except for chronic granulomatous disease there is a great divergence of opinion in the literature on the merits of the NBT test, apportioned about equally between those who find it useful and those who believe that it is not reliable because of unacceptable degrees of overlap among patients in various diagnostic categories. Many modifications of the original technique have been proposed that add to the confusion, including variations in anticoagulants, incubation temperature, smear thickness, method of calculating data, and use of phagocytosis “stimulants,” all of which may affect test results.

    Some conditions other than bacterial infection that may elevate the NBT score (false positives) include normal infants aged less than 2 months, echovirus infection, malignant lymphomas (especially Hodgkin’s disease), hemophilia A, malaria, certain parasitic infestations, Candida albicans and Nocardia infections, and possibly the use of oral contraceptives. Certain conditions may (to varying degree) induce normal scores in the presence of bacterial infection (false negatives); these include antibiotic therapy, localized infection, systemic lupus erythematosus, sickle cell anemia, diabetes mellitus, agammaglobulinemia, and certain antiinflammatory medications (corticosteroids, phenylbutazone).

  • Toxicology

    This section includes a selected list of conditions that seem especially important in drug detection, overdose, or poisoning. Treatment of drug overdose by dialysis or other means can often be assisted with the objective information derived from drug levels. In some cases, drug screening of urine and serum may reveal additional drugs or substances, such as alcohol, which affect management or clinical response.

    Lead. Lead exposure in adults is most often due to occupational hazard (e.g., exposure to lead in manufacture or use of gasoline additives and in smelting) or to homemade “moonshine” whiskey distilled in lead-containing equipment. When children are severely affected, it is usually from eating old lead-containing paint chips. One group found some indications of chronic lead exposure in about one half of those persons examined who had lived for more than 5 years near a busy automobile expressway in a major city. Fertilization of crops with city sewage sludge is reported to increase the lead content of the crops. Several studies report that parental cigarette smoking is a risk factor for increased blood lead values in children. Living in houses built before 1960 is another risk factor because lead-based paint was used before it was banned. Renovating these houses may spread fragments or powder from the lead-containing paint. Living near factories manufacturing lead batteries is another risk factor.

    Symptoms. Acute lead poisoning is uncommon. Symptoms may include “lead colic” (crampy abdominal pain, constipation, occasional bloody diarrhea) and, in 50% of patients, hypertensive encephalopathy. Chronic poisoning is more common. Its varying symptoms may include lead colic, constipation with anorexia (85% of patients), and peripheral neuritis (wrist drop) in adults and lead encephalopathy (headache, convulsions) in children. A “lead line” is frequently present just below the epiphyses (in approximately 70% of patients with clinical symptoms and 20%-40% of persons with abnormal exposure but no symptoms).

    Hematologic findings. Most patients develop slight to moderate anemia, usually hypochromic but sometimes normochromic. RBCs with basophilic stippling are the most characteristic peripheral blood finding. Some authors claim stippling is invariably present; others report that stippling is present in only 20%-30% of cases. Normal persons may have as many as 500 stippled cells/1 million RBCs. The reticulocyte count is usually greater than 4%.

    Delta-aminolevulinic acid dehydrase. Body intake of lead produces biochemical effects on heme synthesis (see Fig. 34-1). The level of delta-aminolevulinic acid dehydrase (ALA-D), which converts ALA to porphobilinogen, is decreased as early as the fourth day after exposure begins. Once the ALA-D level is reduced, persistence of abnormality correlates with the amount of lead in body tissues (body burden), so that the ALA-D level remains reduced as long as significant quantities of lead remain. Therefore, after chronic lead exposure, low ALA-D values may persist for years even though exposure has ceased. The level of ALA-D is also a very sensitive indicator of lead toxicity and is usually reduced to 50% or less of normal activity when blood lead values are in the 30-50 µg/100 ml (1.4-2.4 µmol/L) range. Unfortunately, the ALA-D level reaches a plateau when marked reduction takes place, so it cannot be used to quantitate degree of exposure. In addition, this enzyme must be assayed within 24 hours after the blood specimen is secured. Relatively few laboratories perform the test, although it has only a moderate degree of technical difficulty.

    Blood lead assay. Intake of lead ordinarily results in rapid urinary lead excretion. If excessive lead exposure continues, lead is stored in bone. If bone storage capacity is exceeded, lead accumulates in soft tissues. Blood lead levels depend on the relationship between intake, storage, and excretion. The blood lead level is primarily an indication of acute (current) exposure but is also influenced by previous storage. According to 1991 Centers for Disease Control (CDC) guidelines, whole blood lead values over 10 µg/100 ml (0.48 µmol/L) are considered abnormal in children less than 6 years old. Values higher than 25 µg/100 ml (1.21 µmol/L) are considered abnormal in children over age 6 years and in adolescents. Values more than 40 µg/100 ml (1.93 µmol/L) are generally considered abnormal in adults, although the cutoff point for children may also be valid for adults. Symptoms of lead poisoning are associated with levels higher than 80 µg/100 ml (3.86 µmol/L), although mild symptoms may occur at 50 µg/100 ml (2.41 µmol/L) in children. Blood lead assay takes considerable experience and dedication to perform accurately. Contamination is a major problem: in drawing the specimen, in sample tubes, in laboratory glassware, and in the assay procedure itself. Special Vacutainer-type tubes for trace metal determination are commercially available and are strongly recommended.
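
    The whole blood lead cutoffs above lend themselves to a small worked example. The following sketch (Python; function names and thresholds are illustrative only, not a clinical tool) applies the unit conversion implied by the paired values in the text (1 µg/100 ml is about 0.048 µmol/L) and the age-based cutoffs just quoted.

    ```python
    # Illustrative sketch only. The conversion factor is implied by the paired
    # values quoted above (e.g., 10 ug/100 ml = 0.48 umol/L); cutoffs follow the
    # 1991 CDC guidelines and the adult values cited in the text.

    UG_PER_100ML_TO_UMOL_PER_L = 0.0483

    def lead_ug100ml_to_umol_l(value_ug_100ml: float) -> float:
        """Convert a whole blood lead value from ug/100 ml to umol/L."""
        return value_ug_100ml * UG_PER_100ML_TO_UMOL_PER_L

    def lead_interpretation(value_ug_100ml: float, age_years: float) -> str:
        """Classify a whole blood lead value against the cutoffs cited in the text."""
        if value_ug_100ml >= 80:
            return "level usually associated with symptoms of lead poisoning"
        if age_years < 6:
            return "abnormal (1991 CDC guideline)" if value_ug_100ml > 10 else "not abnormal"
        if age_years < 18:
            return "abnormal" if value_ug_100ml > 25 else "not abnormal"
        return "abnormal" if value_ug_100ml > 40 else "not abnormal"

    print(round(lead_ug100ml_to_umol_l(10), 2))   # ~0.48 umol/L
    print(lead_interpretation(30, age_years=4))   # abnormal for a child under 6 years
    ```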

    Urine delta-aminolevulinic acid (ALA) assay. Another procedure frequently used is urine ALA assay. Blood and urine ALA levels increase when the blood ALA-D level is considerably reduced. Therefore, ALA also becomes an indicator of body lead burden, and urine ALA begins to increase when blood lead values are higher than 40 µg/100 ml (1.93 µmol/L). Disadvantages of urine ALA assay are difficulties with 24-hour urine collection or, if random specimens are used, the effects of urine concentration or dilution on apparent ALA concentration. In addition, at least one investigator found that the urine ALA level was normal in a significant number of cases when the blood lead level was in the 40-80 µg/100 ml (1.93-3.86 µmol/L) (mildly to moderately abnormal) range. Light, room temperature, and alkaline pH all decrease ALA levels. If ALA determination is not done immediately, the specimen must be refrigerated and kept in the dark (the collection bottle wrapped in paper or foil), and the specimen acidified using glacial acetic or tartaric acid.

    Detecting lead exposure. If a patient is subjected to continuous lead exposure of sufficient magnitude, blood lead level, urine lead excretion, ALA-D level, and urine ALA level all correlate well. If the exposure ceases before laboratory tests are made, blood lead level (and sometimes even urine lead level) may decrease relative to ALA-D or urine ALA. Assay of ALA-D is the most sensitive of these tests. In fact, certain patients whose urine ALA and blood lead levels are within normal limits may display a mild to moderate decrease in ALA-D levels. It remains to be determined whether this reflects previous toxicity in all cases or simply means that ALA-D levels between 50% and 100% of normal are too easily produced to mean truly abnormal lead exposure.

    Urine lead excretion has also been employed as an index of exposure, since blood lead values change more rapidly than urine lead excretion. However, excretion values depend on 24-hour urine specimens, with the usual difficulty in complete collection. A further problem is that excretion values may be normal in borderline cases or in cases of previous exposure. Urine lead has been measured after administration of a chelating agent such as ethylenediamine tetraacetic acid (EDTA), which mobilizes body stores of lead. This is a more satisfactory technique than ordinary urine excretion for determining body burden (i.e., previous exposure). Abnormal exposure is suggested when the 24-hour urine lead excretion is greater than 1 µg for each milligram of calcium-EDTA administered. Disadvantages are those of incomplete urine collection, difficulty in accurate lead measurement, and occasional cases of EDTA toxicity.
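
    As a minimal sketch of the mobilization criterion just described (illustrative only; the function name is hypothetical), the comparison is simply the 24-hour urine lead output divided by the dose of calcium-EDTA given:

    ```python
    # Abnormal exposure is suggested when 24-hour urine lead exceeds 1 ug of lead
    # per mg of calcium-EDTA administered (criterion quoted above). Illustrative only.

    def edta_mobilization_abnormal(urine_lead_ug_24h: float, ca_edta_dose_mg: float) -> bool:
        return (urine_lead_ug_24h / ca_edta_dose_mg) > 1.0

    print(edta_mobilization_abnormal(urine_lead_ug_24h=1200, ca_edta_dose_mg=1000))  # True
    ```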

    Erythrocyte protoporphyrin (zinc protoporphyrin, or ZPP) is still another indicator of lead exposure. Lead inhibits ferrochelatase (heme synthetase), an enzyme that incorporates iron into protoporphyrin IX (erythrocyte protoporphyrin) to form heme. Decreased erythrocyte protoporphyrin conversion leads to increased erythrocyte protoporphyrin levels. The standard assay for erythrocyte protoporphyrin involved extraction of a mixture of porphyrins, including protoporphyrin IX, from blood, and measurement of protoporphyrin using fluorescent wavelengths. In normal persons, protoporphyrin IX is not complexed to metal ions. Under the conditions of measurement it was thought that the protoporphyrin being measured was metal free, since iron-complexed protoporphyrin did not fluoresce, so that what was being measured was called “free erythrocyte protoporphyrin.” However, in lead poisoning, protoporphyrin IX becomes complexed to zinc; hence, the term ZPP. The protoporphyrin-zinc complex will fluoresce although the protoporphyrin-iron complex will not. Therefore, most laboratory analytic techniques for ZPP involve fluorescent methods. In fact, some have used visual RBC fluorescence (in a heparinized wet preparation using a microscope equipped with ultraviolet light) as a rapid screening test for lead poisoning. Zinc protoporphyrin levels are elevated in about 50%-75% of those who have a subclinical increase in blood lead levels (40-60 µg/100 ml) and are almost always elevated in symptomatic lead poisoning. However, the method is not sensitive enough for childhood lead screening (10 µg/100 ml or 0.48 µmol/L). An instrument called the “hematofluorometer” is available from several manufacturers and can analyze a single drop of whole blood for ZPP. The reading is affected by the number of RBCs present and must be corrected for hematocrit level. The ZPP test results are abnormal in chronic iron deficiency and hemolytic anemia as well as in lead poisoning. The ZPP level is also elevated in erythropoietic protoporphyria (a rare congenital porphyria variant) and in chronic febrile illness. An increased serum bilirubin level falsely increases ZPP readings, and fluorescing drugs or other substances in plasma may interfere.

    Urinary coproporphyrin III excretion is usually, although not invariably, increased in clinically evident lead poisoning. Since this compound fluoresces under Wood’s light, simple screening tests based on fluorescence of coproporphyrin III in urine specimens under ultraviolet light have been devised.

    Diagnosis of lead poisoning. The question arises as to which test should be used to detect or diagnose lead poisoning. The ALA-D assay is the most sensitive current test, and ALA-D levels may be abnormal (decreased) when all other test results are still normal. Disadvantages are the long-term persistence of abnormality once it is established, which may represent past instead of recent exposure. The specimen is unstable, and few laboratories perform the test. Zinc protoporphyrin is sensitive for lead poisoning and detects 50%-70% of cases of subclinical lead exposures in adults but is not sensitive enough to detect mandated levels of subclinical exposure in young children. There would be a problem in differentiating acute from chronic exposure because of the irreversible change induced in the RBCs, which remains throughout the 120-day life span of the RBCs. Thus, ZPP represents biologic effects of lead averaged over 3-4 months’ time. Also, the test is not specific for lead exposure. Blood lead assay is considered the best diagnostic test for actual lead poisoning. Blood lead indicates either acute or current exposure; levels in single short exposures rise and fall fairly quickly. However, small elevations (in the 40-60 µg/100 ml range), especially in single determinations, may be difficult to interpret because of laboratory variation in the assay. Some investigators recommend assay of blood lead together with ZPP, since elevation of ZPP values would suggest that exposure to lead must have been more than a few days’ duration.

    Heavy metals. Mercury, arsenic, bismuth, and antimony are included. Urine samples are preferred to blood samples. Hair and nails are useful for detection or documentation of long-term exposure to arsenic or mercury.

    Organic phosphates (cholinesterase inhibitors). Certain insecticides such as parathion and the less powerful malathion are inhibitors of the enzyme acetylcholinesterase. Acetylcholinesterase inactivates excess acetylcholine at nerve endings. Inhibition or inactivation of acetylcholinesterase permits accumulation of excess acetylcholine at nerve-muscle junctions. Symptoms include muscle twitching, cramps, and weakness; parasympathetic effects such as pinpoint pupils, nausea, sweating, diarrhea, and salivation; and various CNS aberrations. Organic phosphate poisons inactivate not only acetylcholinesterase (which is found in RBCs as well as at nerve endings) but also pseudocholinesterase, which is found in plasma. Therefore, laboratory diagnosis of organophosphate poisoning is based on finding decreased acetylcholinesterase levels in RBCs or decreased pseudocholinesterase levels in serum (these two cholinesterase types are frequently referred to simply as “cholinesterase”). Levels in RBCs reflect chronic poisoning more accurately than serum values since RBC levels take longer to decrease than serum pseudocholinesterase and take longer to return to normal after exposure. Also, serum levels are reduced by many conditions and drugs. However, plasma measurement is much easier, so that screening tests are generally based on plasma measurement. In acute poisoning, RBC or serum cholinesterase activity is less than 50% of normal. In most cases, a normal result rules out severe acute anticholinesterase toxicity. However, the population reference range is fairly wide, so that a person with a preexposure value in the upper end of the population range might have his or her value decreased 50% and still be within the population reference range. Therefore, low-normal values do not exclude the possibility of organophosphate toxicity. It is strongly recommended that persons who may be occupationally exposed to the organophosphates should have their baseline serum cholinesterase (pseudocholinesterase) value established. Once this is done, periodic monitoring could be done to detect subclinical toxicity. It may take up to 6 weeks for serum pseudocholinesterase to return to normal after the end of exposure. Severe acute or chronic liver disease or pregnancy can decrease cholinesterase levels.
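
    The point about baseline values can be made concrete with a small sketch (Python; the activity units and population range below are hypothetical). A worker whose pre-exposure value sat near the top of the population range can lose roughly half of that activity, meet the less-than-50%-of-normal criterion, and yet still fall inside the population reference range:

    ```python
    # Hypothetical pseudocholinesterase activity units and population reference range.
    POPULATION_RANGE = (3_000, 8_000)

    def interpret(current: float, baseline: float) -> str:
        """Compare a current activity value with the person's own pre-exposure baseline."""
        pct = 100.0 * current / baseline
        in_range = POPULATION_RANGE[0] <= current <= POPULATION_RANGE[1]
        status = "suggests acute organophosphate toxicity" if pct < 50 else "no significant depression"
        return f"{pct:.0f}% of personal baseline ({status}); within population range: {in_range}"

    # A worker whose baseline sat near the top of the range loses about half of
    # his or her activity yet remains inside the population reference range.
    print(interpret(current=3_900, baseline=8_000))
    ```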

    Barbiturates and glutethimide. Barbiturates and glutethimide (Doriden) are the most common vehicles of drug-overdose suicide. In testing, either anticoagulated (not heparinized) whole blood or urine can be used; blood is preferred. Thin-layer chromatography (TLC) is used both for screening and to identify the individual substance involved. Chemical screening tests are also available. It is preferable to secure both blood and urine specimens plus gastric contents, if available. Many of the larger laboratories can perform quantitative assay of serum phenobarbital.

    Phenothiazine tranquilizers. Urine can be screened with the dipstick Phenistix or the ferric chloride procedure. TLC, gas chromatography (GC), and other techniques are available for detection and quantitation.

    Acetaminophen. Acetaminophen (Paracetamol; many different brand names) has been replacing aspirin for headache and minor pain because of the gastric irritant and anticoagulant side effects of aspirin. With greater use of acetaminophen has come occasional cases of overdose. Acetaminophen is rapidly absorbed from the small intestine. Peak serum concentration is reached in 0.5-2 hours, and the serum half-life is 1-4 hours. About 15%-50% is bound to serum albumin. Acetaminophen is 80%-90% metabolized by the liver microsome pathway, 4%-14% is excreted unchanged by the kidneys, and a small amount is degraded by other mechanisms.

    Liver toxicity. The usual adult dose is 0.5 gm every 3-4 hours. In adults, liver toxicity is unlikely to occur if the ingested dose is less than 10 gm at one time, and death is unlikely if less than 15 gm is ingested. However, 10 gm or more at one time may produce liver damage, and 25 gm can be fatal. Children under age 5 years are less likely to develop liver injury. The toxic symptoms of overdose usually subside within 24 hours after the overdose, even in persons who subsequently develop liver injury. Liver function test results are typical of acute hepatocellular injury, with AST (SGOT) levels similar to those of acute viral hepatitis. The peak of AST elevation most often occurs 4-6 days after onset. The liver completely recovers in about 3 months if the patient survives.

    Laboratory evaluation. Serum acetaminophen levels are helpful to estimate the likelihood of hepatic damage. These levels are used as a guide to continue or discontinue therapy. Peak acetaminophen levels provide the best correlation with toxicity. Current recommendations are that the assay specimen should be drawn 4 hours after ingestion of the dose—not earlier—to be certain that the peak has been reached. A serum level greater than 200 µg/ml (13.2 µmol/L) at 4 hours is considered potentially toxic, and a level less than 150 µg/ml (9.9 µmol/L) is considered nontoxic. The assay should be repeated 12 hours after ingestion. A value greater than 50 µg/ml (3.3 µmol/L) is considered toxic, and a value less than 35 µg/ml (2.3 µmol/L) is considered nontoxic. Colorimetric assay methods are available in kit form that are technically simple and reasonably accurate. However, inexperienced persons can obtain misleading results, so the amount of drug ingested and other factors should also be considered before therapy is terminated. Salicylates, ketones, and ascorbic acid (vitamin C) in high concentration interfere in some assay methods. Either AST or alanine aminotransferase levels should be determined daily for at least 4 days as a further check on liver function.
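
    The cutoffs quoted above can be summarized in a small sketch (Python; illustrative only, not a treatment nomogram, and function names are hypothetical). Levels falling between the toxic and nontoxic cutoffs are left indeterminate:

    ```python
    # Interpret a serum acetaminophen level (ug/ml) drawn at one of the two
    # post-ingestion time points named in the text (4 hours or 12 hours).

    def acetaminophen_interpretation(level_ug_ml: float, time_point_hr: int) -> str:
        cutoffs = {4: (200.0, 150.0), 12: (50.0, 35.0)}  # (potentially toxic, nontoxic)
        toxic, nontoxic = cutoffs[time_point_hr]
        if level_ug_ml > toxic:
            return "potentially toxic"
        if level_ug_ml < nontoxic:
            return "considered nontoxic"
        return "indeterminate; consider dose ingested and repeat the assay"

    print(acetaminophen_interpretation(220, time_point_hr=4))   # potentially toxic
    print(acetaminophen_interpretation(30, time_point_hr=12))   # considered nontoxic
    ```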

    Acetylsalicylic acid (aspirin). Absorption of aspirin takes place in the stomach and small intestine. Absorption is influenced by rate of tablet dissolution and by pH (acid pH assists absorption and alkaline pH retards it). Peak plasma concentration from ordinary aspirin doses is reached in about 2 hours (1-4 hours), with the peak from enteric-coated aspirin about 4-6 hours later. In some cases of overdose serum values from ordinary aspirin may take several hours to reach their maximum level due to pylorospasm. Aspirin is rapidly metabolized to salicylic acid in GI tract mucosa and liver; it is further metabolized by the liver to metabolically inactive salicyluric acid. Of the original dose, 10%-80% is excreted by the kidneys as salicylate, 5%-15% as salicylic acid, and 15%-40% as salicyluric acid. The half-life of salicylate or its active metabolites in serum at usual drug doses is 2-4.5 hours. The half-life is dose dependent, since the degradation pathways can be saturated. At high doses the half-life may be 15-30 hours. Also, steady-state serum concentration is not linear with respect to dose, with disproportionately large increases in serum levels produced by relatively small increments in dose.

    Laboratory tests. Mild toxicity (tinnitus, visual disturbances, GI tract disturbances) correlates with serum salicylate levels more than 30 mg/100 ml (300 µg/ml), and severe toxicity (CNS symptoms) is associated with levels more than 50 mg/100 ml (500 µg/ml). In younger children, severe toxicity is often associated with ketosis and metabolic acidosis, whereas in older children and adults, respiratory alkalosis or mixed acidosis-alkalosis is more frequent. Peak serum salicylate values correlate best with toxicity. It is recommended that these be drawn at least 6 hours after the overdose to avoid serum values falsely below peak levels due to delayed absorption. Enteric-coated aspirin delays absorption an additional 4-6 hours. Screening tests for salicylates include urine testing with a ferric chloride reagent or Phenistix (both of these tests are also used in the diagnosis of phenylketonuria). The most commonly used quantitative test is a colorimetric procedure based on ferric chloride. Ketone bodies and phenothiazine tranquilizers can interfere.
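
    A brief sketch of the unit relationship and toxicity levels quoted above (Python; illustrative only, with hypothetical function names). Note that 30 mg/100 ml is the same concentration as 300 µg/ml:

    ```python
    # Salicylate unit conversion and the toxicity levels cited in the text.

    def mg_per_100ml_to_ug_per_ml(value: float) -> float:
        return value * 10.0

    def salicylate_interpretation(level_mg_100ml: float) -> str:
        if level_mg_100ml > 50:
            return "associated with severe toxicity (CNS symptoms)"
        if level_mg_100ml > 30:
            return "associated with mild toxicity (tinnitus, visual and GI disturbances)"
        return "below the toxicity levels quoted in the text"

    print(mg_per_100ml_to_ug_per_ml(30))   # 300 ug/ml
    print(salicylate_interpretation(55))   # severe toxicity range
    ```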

    Carbon monoxide. Carbon monoxide combines with hemoglobin to form carboxyhemoglobin. In doing so it occupies oxygen-binding sites and also produces a change in the hemoglobin molecule that causes the remaining oxygen to be bound more tightly, so that less is available for tissue cell respiration. Headache, fatigue, and lightheadedness are the most frequent symptoms.

    Laboratory diagnosis. Carbon monoxide poisoning is detected by hemoglobin analysis for carboxyhemoglobin. This is most readily done on an instrument called a CO-Oximeter. A 30%-40% carboxyhemoglobin content is associated with severe symptoms, and more than 50% is associated with coma. Cigarette smoking may produce levels as high as 10%-15%. Carboxyhemoglobin is stable for more than 1 week at room temperature in EDTA anticoagulant. The specimen should be drawn as soon as possible after exposure, since carbon monoxide is rapidly cleared from hemoglobin by breathing normal air.
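
    The carboxyhemoglobin percentages cited above are easily summarized (Python; illustrative thresholds taken directly from the text):

    ```python
    # Classify a carboxyhemoglobin result (% of total hemoglobin) using the
    # approximate levels quoted in the text.

    def cohb_interpretation(cohb_percent: float) -> str:
        if cohb_percent > 50:
            return "associated with coma"
        if cohb_percent >= 30:
            return "associated with severe symptoms"
        if cohb_percent >= 10:
            return "may be seen in heavy cigarette smokers (about 10%-15%)"
        return "below the symptomatic levels cited in the text"

    print(cohb_interpretation(35))  # severe symptoms
    ```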

    Carbon monoxide poisoning can be suspected from an arterial blood gas specimen when a measured oxygen saturation (percent O2 saturation of hemoglobin) value is found to be significantly below what would be expected if oxygen saturation were calculated from the PO2 and pH values. For this screening procedure to be valid, the O2 saturation must be measured directly, not calculated. Some blood gas machines measure O2 saturation, but the majority calculate it from the PO2 and pH values.

    Ethyl alcohol (ethanol). Ethanol is absorbed from the small intestine and, to a lesser extent, from the stomach. Factors that influence absorption are (1) whether food is also ingested, since food delays absorption, and if so, the amount and kind of food; (2) the rate of gastric emptying; and (3) the type of alcoholic beverage ingested. Without food, the absorptive phase (time period during which alcohol is being absorbed until the peak blood value is reached) may be as short as 15 minutes or as long as 2 hours. In one study, peak values occurred at 30 minutes after ingestion in nearly 50% of experimental subjects and in about 75% by 1 hour, but 6% peaked as late as 2 hours. With food, absorption is delayed to varying degrees. Once absorbed, ethanol rapidly equilibrates throughout most body tissues. The liver metabolizes about 75% of absorbed ethanol. The predominant liver cell metabolic pathway of ethanol is the alcohol dehydrogenase enzyme system, whose product is acetaldehyde. Acetaldehyde, in turn, is metabolized by the hepatic microsome system. About 10%-15% of absorbed ethanol is excreted unchanged through the kidneys and through the lungs.

    Ethanol measurement. There are several methods for patient alcohol measurement. The legal system generally recognizes whole blood as the gold standard specimen. Arterial blood ethanol is somewhat higher than venous blood levels, especially in the active absorption phase. Capillary blood (fingerstick or ear lobe blood) is about 70%-85% of the arterial concentration. The major problems with whole blood are that values are influenced by the hematocrit, and most current chemistry analyzers must use serum. A serum value is about 18%-20% higher than a whole blood value obtained on the same specimen, whereas blood levels correlated by law to degrees of physical and mental impairment are defined by whole blood assay. Serum values theoretically can be converted to equivalent whole blood values by means of a serum/whole blood (S/WB) conversion ratio. Most laboratories apparently use a S/WB conversion ratio of 1.20. Unfortunately, there is significant disagreement in the literature on which ratio to use; different investigators report S/WB ratios varying between 1.03 and 1.35. Based on the work of Rainey (1993), the median S/WB conversion ratio is 1.15 (rather than 1.20), the range of ratios included in 95% certainty is 0.95-1.40, and the range of ratios included in 99% certainty is 0.90-1.49. Whole blood values can be obtained directly by using serum analytic methods on a protein-free filtrate from a whole blood specimen.
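
    The serum-to-whole-blood arithmetic can be illustrated with a short sketch (Python; illustrative only). The ratio itself is the disputed quantity: 1.20 is the ratio many laboratories apparently use, 1.15 is Rainey's median, and 0.95-1.40 is the 95% certainty range quoted above.

    ```python
    # Convert a serum ethanol value to an equivalent whole blood value by dividing
    # by a serum/whole blood ratio; several candidate ratios are shown.

    def serum_to_whole_blood_ethanol(serum_mg_100ml: float, ratio: float) -> float:
        return serum_mg_100ml / ratio

    serum_value = 140  # mg/100 ml
    for ratio in (1.20, 1.15, 0.95, 1.40):  # common lab ratio, Rainey median, 95% extremes
        print(ratio, round(serum_to_whole_blood_ethanol(serum_value, ratio), 1))
    # At the upper 95% limit (1.40), 140 mg/100 ml of serum corresponds to
    # 100 mg/100 ml of whole blood, matching the Rainey figure cited below.
    ```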

    Enzymatic methods using alcohol dehydrogenase or alcohol oxidase are replacing the classic potassium dichromate methods. There is some dispute in the literature as to whether alcohol dehydrogenase methods are affected by isopropanol (commonly used for venipuncture skin cleansing). In experiments performed in my laboratory, no cross-reaction was found in concentrations much stronger than what should be encountered from skin cleansing. Nevertheless, because of legal considerations, specimens for ethanol should be drawn without using any type of alcohol as a skin-cleansing agent. Increased blood ketones, as found in diabetic ketoacidosis, can falsely elevate either blood or breath alcohol test results.

    Urine is not recommended for analysis to estimate degree of alcohol effect because the blood/urine ratio is highly variable and there may be stasis of the specimen in the bladder. However, urine can be used to screen for the presence of alcohol. Breath analyzers are the assay method most commonly used for police work since the measurement can be done wherever or whenever it is desirable. Breath analyzers measure the ethanol content at the end of expiration following a deep inspiration. The measurement is then correlated to whole blood by multiplying the measured breath ethanol level by the factor 2,100. On the average, breath alcohol concentration correlates reasonably well with whole blood alcohol concentration using this factor. However, there is significant variation between correlation factors reported in different individuals and average factors in different groups, so that use of any single “universal” factor will underestimate the blood ethanol concentration in some persons and overestimate it in others. Also, correlation with blood ethanol levels is better when breath ethanol is measured in the postabsorptive state than in the absorptive state. When breath analyzers are used, it is important that there be a period of at least 15 minutes before testing during which no alcohol ingestion, smoking, food or drink consumption, or vomiting has taken place to avoid contamination of the breath specimen by alcohol in the mouth. Some alcohol-containing mouthwashes may produce legally significant breath alcohol levels at 2 minutes after applying the mouthwash, but not at 10 minutes after use. Ketone bodies in patients with diabetic acidosis may interfere with breath ethanol measurement. One further advantage of breath testing in the field is the usefulness of a negative test for ethanol in a person whose behavior suggests effects of alcohol; this result could mean a serious acute medical problem that needs immediate attention.
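
    The breath-to-blood conversion mentioned above is a single multiplication (Python sketch; illustrative only, and the caveat stands that any single factor will misestimate some individuals):

    ```python
    BREATH_TO_BLOOD_FACTOR = 2_100  # conventional blood/breath partition factor

    def breath_to_blood_ethanol(breath_mg_per_l: float) -> float:
        """Estimate whole blood ethanol (mg/L) from breath ethanol (mg/L of breath)."""
        return breath_mg_per_l * BREATH_TO_BLOOD_FACTOR

    # A breath ethanol of about 0.476 mg/L corresponds to roughly 1,000 mg/L
    # (100 mg/100 ml) of whole blood using the 2,100:1 factor.
    print(round(breath_to_blood_ethanol(0.476)))
    ```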

    Legal use of blood alcohol assay. Most courts of law follow the recommendations of the National Safety Council on alcohol and drugs (the following ethanol values are whole blood values):

    • Below 0.05% (50 mg/100 ml): No influence by alcohol within the meaning of the law.
    • Between 0.05% and 0.10% (50-100 mg/100 ml): A liberal, wide zone in which alcohol influence usually is present, but courts of law are advised to consider the person’s behavior and circumstances leading to the arrest in making their decision.
    • Above 0.10% (100 mg/100 ml): Definite evidence of being “under the influence,” since most persons with this concentration will have lost, to a measurable extent, some of the clearness of intellect and self-control they would normally possess.

    Based on the work of Rainey, the minimal serum alcohol level that would correspond to a whole blood alcohol level of 0.10% (100 mg/100 ml, w/v) with 95% certainty is 140 mg/100 ml (30.4 mmol/L) and at 99% certainty is 149 mg/100 ml (32.3 mmol/L).

    Some organizations, including the American Medical Association (AMA) Council on Scientific Affairs (1986), suggest adopting 0.05% blood alcohol content as per se evidence of alcohol-impaired driving.

    Estimating previous blood alcohol levels. In certain situations it would be desirable to estimate the blood alcohol level at some previous time from the results of a subsequent alcohol level. The usual method for this is the Widmark equation: P = A + (F × T), where P is the concentration of blood alcohol (in milligrams per liter) at the previous time, A is the concentration of blood alcohol (in milligrams per liter) when it was measured, F is a factor (or constant) whose value is 130 (in milligrams per kilogram per hour), and T is the time (in hours) elapsed between the previous time of interest and the time the blood alcohol was measured.

    There is considerable controversy regarding the usefulness of the Widmark equation. The equation is valid for a person only in the postabsorptive state (i.e., after the peak blood alcohol level is reached). Time to peak is most often considered to be 0.5-2.0 hours, so that the blood specimen must be drawn no earlier than 2 hours after the beginning of alcohol intake. The Widmark equation is based on kinetics of alcohol taken during fasting. Food increases alcohol elimination, so that food would cause the Widmark equation to overestimate the previous alcohol level. The factor (constant) of 130 is not necessarily applicable to any individual person, since a range of experimentally measured individual values from 100-340 has been reported.
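
    A back-extrapolation with the Widmark equation, as written above, is a single line of arithmetic. The sketch below (Python; illustrative only) uses the factor of 130 from the text as the default and shows how the reported range of individually measured factors (about 100-340) changes the estimate; it applies only in the postabsorptive state.

    ```python
    # P = A + (F x T): estimate the earlier blood alcohol level (units as in the text).
    # Valid only in the postabsorptive state; the factor F varies among individuals.

    def widmark_previous_level(measured_mg_l: float, hours_elapsed: float,
                               factor: float = 130.0) -> float:
        return measured_mg_l + factor * hours_elapsed

    # Specimen measured at 800 mg/L, drawn 2 hours after the time of interest:
    print(widmark_previous_level(800, 2))              # 1060 mg/L with F = 130
    print(widmark_previous_level(800, 2, factor=340))  # 1480 mg/L at the top of the reported range
    ```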

    Clinical and laboratory effects of alcohol. Alcohol has a considerable number of metabolic and toxic effects that may directly or indirectly involve the clinical laboratory. Liver manifestations include gamma-glutamyltransferase (GGT; formerly gamma-glutamyl transpeptidase) elevation, fatty liver, acute alcoholic hepatitis (“active cirrhosis”), and Laennec’s cirrhosis, and may lead indirectly to bleeding from esophageal varices or to cytopenia either from hypersplenism or through other mechanisms. RBC macrocytosis is frequently associated with chronic alcoholism, and nutritional anemia such as that due to folic acid deficiency may be present. Other frequently associated conditions include acute pancreatitis, hypertriglyceridemia, alcoholic gastritis, alcoholic hypoglycemia, various neurologic abnormalities, and subdural hematoma. The chronic alcoholic is more susceptible to infection. Finally, alcohol interacts with a variety of medications. It potentiates many of the CNS depressants, such as various sedatives, narcotics, hypnotics, and tranquilizers (especially chlordiazepoxide and diazepam). Alcohol is a factor in many cases of overdose, even when the patient has no history of alcohol intake or denies intake. The presence of alcohol should be suspected when toxicity symptoms from barbiturates or other medications are associated with blood levels that normally would be considered safe. Alcohol may antagonize the action of various other medications, such as coumarin and phenytoin. Alcohol intake in pregnancy has been reported to produce increased rates of stillbirth and infant growth deficiency as well as a specific “fetal alcohol syndrome.” Fetal alcohol syndrome includes a particular pattern of facial appearance, postnatal growth deficiency with normal bone age, various skeletal and organ malformations, and various neurologic abnormalities (including average IQ below normal).

    Ethanol is one of a number of substances (other alcohols, lactic acid, etc.) that elevate serum osmolality using freezing point depression methods. This creates a gap between measured osmolality and calculated osmolality. Osmolality using vapor pressure instruments is not affected by ethanol.

    Laboratory screening for alcoholism. Various tests have been used to screen for chronic alcoholism. Of these, the most commonly advocated is the GGT. This test and the AST reflect the effect of alcohol on the liver. A third possibility is mean corpuscular volume (MCV), the average size of the patient RBC. This reflects macrocytosis induced by liver disease and possibly also by folic acid deficiency in alcoholics. In heavy drinkers or alcoholics, GGT has a sensitivity of about 70% (literature range, 63%-81%), MCV detects about 60% (26%-90%), and the AST value is elevated in about 50% (27%-77%). Most (but not all) reports indicate some correlation in likelihood and degree of GGT elevation with the amount and frequency of alcohol consumption. Thus, heavy drinkers are more likely to have GGT elevations, and these elevations are (on the average) higher than those of less heavy drinkers. However, there are many exceptions. There is some disagreement as to whether so-called social drinkers have a significant incidence of elevated GGT levels. The majority of investigators seem to believe that they do not.

    Other biochemical abnormalities associated with alcoholism (but found in <40% of cases) include hypophosphatemia, hypomagnesemia, hyponatremia, hypertriglyceridemia, and hyperuricemia.

    An isoform of transferrin that contains fewer sialic acid molecules than normal transferrin, and thus is called carbohydrate-deficient transferrin, has been advocated by some researchers as a marker of alcohol abuse. A few studies claim that in alcoholism its levels become elevated more often than those of GGT and that therefore it is a much more specific indicator of alcohol abuse. One study claimed 55% sensitivity in detecting moderate alcohol intake and nearly 100% sensitivity in heavy chronic drinkers. A commercial assay kit is available. However, at present the test would most likely have to be performed in a large reference laboratory.

    Tests for Tobacco Use. In some instances, such as in smoking-cessation clinics, life insurance company examinations, and tests to determine degree of passive exposure to tobacco smoke, it is desirable to detect and quantitate tobacco exposure. The modalities that have been investigated are carboxyhemoglobin (based on effect of carbon monoxide generated by tobacco combustion), thiocyanate (a metabolite of cyanide derived from tobacco tar), and cotinine (a metabolite of nicotine). Most current tests are based on thiocyanate or cotinine. Thiocyanate is absorbed in the lungs and has a biological half-life of about 14 days. Ingestion of certain vegetables can falsely elevate serum thiocyanate levels. Cotinine is specific for nicotine, is not affected by diet, has a serum within-day variation of about 15%-20%, and has a biological half-life of about 19 hours. Thus, cotinine tests become negative after tobacco abstinence of a week or less, whereas thiocyanate requires considerably longer before becoming nondetectable. Also, thiocyanate can be assayed chemically and less expensively than cotinine, which is done by immunoassay. Nevertheless, because cotinine is specific for nicotine and is affected only by active or passive exposure to tobacco, cotinine seems to be favored by investigators. Cotinine can be assayed in serum, saliva, or urine; the levels are higher in urine.
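
    The practical difference between the two half-lives can be illustrated with simple exponential decay (Python; a first-order sketch only, using the approximate half-lives quoted above):

    ```python
    # Fraction of a marker remaining after a period of abstinence, assuming simple
    # first-order (exponential) elimination with the half-lives cited in the text.

    def fraction_remaining(hours_since_exposure: float, half_life_hours: float) -> float:
        return 0.5 ** (hours_since_exposure / half_life_hours)

    one_week = 7 * 24
    print(round(fraction_remaining(one_week, half_life_hours=19), 4))       # cotinine: ~0.002 remains
    print(round(fraction_remaining(one_week, half_life_hours=14 * 24), 2))  # thiocyanate: ~0.71 remains
    ```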

  • Biochemical Tests for Congenital Anomalies

    Besides giving information on fetal well-being, amniocentesis makes it possible to test for various congenital anomalies via biochemical analysis of amniotic fluid and tissue culture chromosome studies of fetal cells (see Chapter 34). In addition, certain substances of fetal origin may appear in maternal serum. In some cases it is possible to detect certain fetal malformations by screening tests in maternal serum.

    Maternal serum alpha-fetoprotein

    One of the most widely publicized tests for congenital anomalies is the alpha-fetoprotein (AFP) test in maternal serum for detection of open neural tube defects. Although neural tube defects are much more common in infants born to families in which a previous child had such an abnormality, about 90% occur in families with no previous history of malformation. AFP is an alpha-1 glycoprotein with a molecular weight of about 70,000. It is first produced by the fetal yolk sac and then mostly by the fetal liver. It becomes the predominant fetal serum protein by the 12th or 13th week of gestation but then declines to about 1% of peak levels by delivery. It is excreted via fetal urine into amniotic fluid and from there reaches maternal blood. After the 13th week both fetal serum and amniotic fluid AFP levels decline in parallel, the fetal blood level being about 200 times the amniotic fluid level. In contrast, maternal serum levels become detectable at about the 12th to 14th week and reach a peak between the 26th and 32nd week of gestation. Although maternal serum screening could be done between the 15th and 20th weeks, the majority of investigators have decided that the interval between the 16th and 18th weeks is optimal, since the amniotic fluid AFP level is still relatively high and the fetus is still relatively early in gestation. Normal maternal AFP levels differ for each week of gestation and ideally should be determined for each laboratory. Results are reported as multiples (e.g., 1.5×, 2.3×) of the normal population mean value for gestational age. In any patient with abnormal AFP values it is essential to confirm fetal gestational age by ultrasound, since about 50% of abnormal AFP results become normal after ultrasound findings lead to revision of the previously estimated gestational age. Some reports suggest that maternal weight is also a factor, with heavier women tending to have lower serum AFP values (one group of investigators does not agree that maternal AFP values should be corrected for maternal weight). There are also some reports that AFP values are affected by race, at least when comparing values from Europeans and African Americans.
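
    Reporting a maternal AFP result as a multiple of the gestational-age norm is a simple division, as the sketch below shows (Python; the weekly median values here are placeholders rather than real reference data, since each laboratory ideally determines its own):

    ```python
    # Hypothetical week-specific norms (ng/ml); a real laboratory would use its own values.
    WEEKLY_MEDIAN_AFP = {16: 35.0, 17: 40.0, 18: 45.0}

    def afp_multiple_of_median(patient_afp_ng_ml: float, gestational_week: int) -> float:
        """Express a maternal serum AFP result as a multiple of the week-specific norm."""
        return patient_afp_ng_ml / WEEKLY_MEDIAN_AFP[gestational_week]

    # A 17-week specimen of 100 ng/ml against a hypothetical norm of 40 ng/ml:
    print(round(afp_multiple_of_median(100.0, 17), 1))  # 2.5 (reported as 2.5x)
    ```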

    Maternal AFP levels reportedly detect about 85%-90% (literature range, 67%-97%) of open neural tube defects; about one half are anencephaly and about one half are open or closed spina bifida. The incidence of neural tube defects is about 1-2 per 1,000 live births. The test also detects a lesser (but currently unknown) percentage of certain other abnormalities, such as fetal ventral wall defects, Turner’s syndrome, pilonidal sinus, hydrocephalus, duodenal atresia, multiple hypospadias, congenital nephrosis, and cystic hygroma. In addition, some cases of recent fetal death, threatened abortion, and Rh erythroblastosis produce elevated maternal AFP levels, as well as some cases of maternal chronic liver disease and some maternal serum specimens obtained soon after amniocentesis. A theoretical but unlikely consideration is AFP-producing tumors such as hepatoma. More important, twin pregnancies cause maternal values that are elevated in terms of the reference range established on single-fetus pregnancies. A large minority of elevated maternal AFP levels represent artifact due to incorrect estimation of fetal gestational age, which, in turn, would result in comparing maternal values to the wrong reference range. There is also the possibility of laboratory error. Most authorities recommend a repeat serum AFP test 1 week later to confirm an abnormal result. If results of the second specimen are abnormal, ultrasound is usually suggested to date the age of gestation more accurately, to examine the fetus for anencephaly, and to exclude twin pregnancy. However, even ultrasonic measurements may vary from true gestational age by as much as 5-7 days. Some perform ultrasonic evaluation if the first AFP test result is abnormal; if ultrasound confirms fetal abnormality, a second AFP specimen would be unnecessary. In some medical centers, about 40%-59% of elevated maternal AFP levels can be explained on the basis of technical error, incorrect fetal gestation date, and multiple pregnancy.

    Some conditions produce abnormal decrease in maternal serum AFP values. The most important is Down’s syndrome (discussed later). Other conditions that are associated with decreased maternal serum AFP levels include overestimation of fetal age and absence of pregnancy (including missed abortion).

    Amniotic fluid alpha-fetoprotein

    Amniocentesis is another technique that can be used to detect open neural tube defects. It is generally considered the next step after elevated maternal AFP levels are detected and confirmed and the age of gestation is accurately determined. As mentioned previously, amniocentesis for this purpose is generally considered to be optimal at 16-18 weeks of gestation. Assay of amniotic fluid AFP is said to be about 95% sensitive for open neural tube defects (literature range, 80%-98%), with a false positive rate in specialized centers less than 1%. Most false positive results are due to contamination by fetal blood, so a test for fetal red blood cells or hemoglobin is recommended when the amniotic fluid AFP level is elevated. Amniotic fluid AFP normal values are age related, similar to maternal serum values.

    Screening for Down’s syndrome

    While maternal serum AFP screening was being done to detect neural tube defects, it was noticed that decreased AFP levels appeared to be associated with Down’s syndrome (trisomy 21, the most common multiple malformation congenital syndrome). Previously, it had been established that women over age 35 had a higher incidence of Down’s syndrome pregnancies. In fact, although these women represent only 5%-8% of pregnancies, they account for 20%-25% (range, 14%-30%) of congenital Down’s syndrome. Since it was discovered that mothers carrying a Down’s syndrome fetus had AFP values averaging 25% below average values in normal pregnancy, it became possible to detect about 20% of all Down’s syndrome fetuses in pregnant women less than age 35 years in the second trimester. Combined with approximately 20% of all Down’s syndrome fetuses detected by amniocentesis on all possible women over age 35, the addition of AFP screening to maternal age criteria potentially detected about 40% of all Down’s syndrome pregnancies. Later, it was found that serum unconjugated estriol (uE3) was decreased about 25% below average values seen in normal pregnancies, and hCG values were increased at least 200% above average normal levels; both were independent of maternal age. Addition of hCG and uE3 to AFP screening raised the total detection rate of all Down’s syndrome patients to about 60%. Later, there was controversy whether including uE3 was cost effective. Even more recently it was found that substituting beta-hCG for total hCG increased the total Down’s syndrome detection rate to 80%-86%. Also, it was found that screening could be done in the first trimester as well as the second trimester (although AFP was less often abnormal). Finally, it was found that if AFP, uE3, and beta-hCG were all three decreased (beta-hCG decreased rather than elevated), about 60% of fetal trisomy 18 could be detected. Trisomy 18 (Edward’s syndrome) is the second most common congenital trisomy. Decreased AFP can also be caused by hydatidiform mole, insulin-dependent diabetes, and incorrect gestational age estimation.
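
    The qualitative marker patterns described above can be summarized in a small sketch (Python). The cutoffs and function name are purely illustrative; this is a pattern summary, not a validated risk calculation, which in practice also incorporates maternal age.

    ```python
    # Marker values expressed as multiples of the gestational-age norm.
    # AFP and uE3 decreased with hCG (or beta-hCG) increased: pattern reported with
    # Down's syndrome; all three decreased: pattern reported with trisomy 18.
    # Cutoffs below are arbitrary placeholders.

    def second_trimester_pattern(afp: float, ue3: float, hcg: float) -> str:
        low, high = 0.80, 1.50
        if afp < low and ue3 < low and hcg > high:
            return "pattern reported with Down's syndrome pregnancies"
        if afp < low and ue3 < low and hcg < low:
            return "pattern reported with trisomy 18 pregnancies"
        if afp > 2.0:
            return "elevated AFP; consider open neural tube defect workup"
        return "no screening pattern flagged by this sketch"

    print(second_trimester_pattern(afp=0.7, ue3=0.7, hcg=2.2))  # Down's syndrome pattern
    ```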

    Amniotic fluid acetylcholinesterase

    Acetylcholinesterase (ACE) assay in amniotic fluid has been advocated as another way to detect open neural tube defects and to help eliminate diagnostic errors caused by false positive AFP results. Acetylcholinesterase is a major enzyme in spinal fluid. Results from a limited number of studies in the late 1970s and early 1980s suggest that the test has 98%-99% sensitivity for open neural tube defects. Acetylcholinesterase assay has the further advantage that it is not as dependent as AFP on gestational age. It is not specific for open neural tube defects; amniotic fluid elevations have been reported in some patients with exomphalos (protrusion of viscera outside the body due to a ventral wall defect) and certain other serious congenital anomalies and in some patients who eventually miscarry. Not enough data are available to properly evaluate risk of abnormal ACE results in normal pregnancies, with reports in the literature ranging from 0%-6%. There is also disagreement as to how much fetal blood contamination affects ACE assay. The test is less affected than AFP assay, but substantial contamination seems capable of producing falsely elevated results.

    Chromosome analysis (cytogenetic karyotyping) on fetal amniotic cells obtained by amniocentesis is the standard way for early prenatal diagnosis of fetal trisomies and other congenital abnormalities. However, standard karyotyping is very time-consuming, requires a certain minimum number of fetal cells that need culturing, and usually takes several days to complete. A new method called fluorescent in situ hybridization (FISH) uses nucleic acid (deoxyribonucleic acid, DNA) probes to detect certain fetal cell chromosomes such as 13, 18, 21, X, and Y, with identification accomplished by a fluorescent dye coupled to the probe molecules. Correlation with traditional cytogenetics has generally been over 95%, with results in 24 hours or less. FISH modifications have made it possible to detect fetal cells in maternal blood and subject them to the same chromosome analysis. One company has a combined-reagent procedure that can be completed in 1 hour. Disadvantages of FISH include inability to detect abnormalities in chromosomes other than the ones specifically targeted by the probes and inability to detect mosaic abnormalities or translocations, thereby missing an estimated 35% of chromosome defects that would have been identified by standard karyotyping methods.

    Preterm labor and placental infection

    It has been estimated that about 7% of deliveries involve mothers who develop preterm labor. It has also been reported that chorioamnionitis is frequently associated with this problem (about 30%; range, 16%-82%). Less than 20% of infected patients are symptomatic. Diagnosis of infection has been attempted by amniotic fluid analysis. Amniotic fluid culture is reported to be positive in about 20% of cases (range, 4%-38%). Mycoplasmas are the most frequent organisms cultured. Amniotic fluid Gram stain is positive in about 20% of patients (range, 12%-64%). Amniotic fluid white blood cell count was reported to be elevated in 57%-64% of cases. However, there was great overlap in counts between patients with and without proven infection. In three reports, the most sensitive amniotic fluid test for infection was amniotic fluid interleukin-6 (IL-6) assay (81%-100%). However, at present most hospitals would have to obtain IL-6 assay from large reference laboratories.

  • Fetal Maturity Tests

    Tests for monitoring fetal maturity via amniocentesis are also available. Bilirubin levels in erythroblastosis are discussed in Chapter 11. Amniotic creatinine assay, amniotic epithelial cell stain with Nile blue sulfate, fat droplet evaluation, osmolality, and the Clemens shake test, alone or in combination, have been tried with varying and not entirely satisfactory results. Most current tests measure one or more components of alveolar surfactant. Surfactant is a substance composed predominantly of phospholipids; it is found in lung alveoli, lowers the surface tension of the alveolar lining, stabilizes the alveoli in expiration, and helps prevent atelectasis. Surfactant deficiency causes neonatal respiratory distress syndrome (RDS), formerly called “hyaline membrane disease.” The major phospholipid components of surfactant are phosphatidylcholine (lecithin, about 80%; range, 73%-88%), phosphatidylglycerol (PG, about 3%; range, 1.8%-4.2%), and sphingomyelin (about 1.6%). The current most widely used tests are the lecithin/sphingomyelin (L/S) ratio, assay of phosphatidylglycerol (PG), the foam stability index (FSI), and TDx fluorescent polarization.

    Lecithin/Sphingomyelin (L/S) ratio. The L/S ratio has been the most widely accepted fetal maturity procedure. Lecithin (phosphatidylcholine), a phospholipid, is the major component of alveolar surfactant. There is a 60% or greater chance of RDS in uncomplicated pregnancies when the fetus is less than 29 weeks old; about 8%-23% at 34 weeks; 0%-2% at 36 weeks; and less than 1% after 37 weeks. In amniotic fluid, the phospholipid known as sphingomyelin normally exceeds lecithin before the 26th week; thereafter, lecithin concentration is slightly predominant until approximately the 34th week, when the lecithin level swiftly rises and the sphingomyelin level slowly decreases so that lecithin levels in the 35th or 36th week become more than twice sphingomyelin levels. After that happens, it was originally reported (not entirely correctly) that there was no longer any danger of neonatal RDS. The L/S ratio thus became a test for fetal lung and overall maturity. Certain precautions must be taken. Presence of blood or meconium in the amniotic fluid or contamination by maternal vaginal secretions may cause a false increase in lecithin and sphingomyelin levels, so that “mature” L/S ratios are decreased and “immature” L/S ratios are increased. The amniotic fluid specimen must be cooled immediately, centrifuged to eliminate epithelial cells and other debris, and kept frozen if not tested promptly to prevent destruction of lecithin by certain enzymes in the amniotic fluid.

    Evaluations in unselected amniocentesis patients have revealed that about 55% of neonates with immature L/S ratios using the 2.0 ratio cutoff point do not develop RDS and about 5% (literature range, 0%-17%) of neonates with a mature L/S ratio (ratio >2.0) develop RDS. Some have attempted to eliminate the falsely mature cases by changing the cutoff point to a ratio of 2.5, but this correspondingly increases the number of falsely immature results. In clinically normal pregnancies, only about 3% of neonates with a mature L/S ratio, using proper technique, develop RDS. In complicated pregnancies, especially those with maternal type I insulin-dependent diabetes, hypertension, or premature rupture of the amniotic membrane, about 15% (literature range, 3%-28%) of neonates with mature L/S ratios are reported to develop RDS. In other words, RDS can develop at higher L/S ratios in a relatively small number of infants. The wide range in the literature reflects differences in opinion among investigators as to the effect of diabetes on neonatal L/S ratios. Also, the L/S ratio can produce falsely mature results if contaminated by blood or meconium. It takes experience and careful attention to technical details to obtain consistently accurate L/S results.
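
    As a purely illustrative sketch (not any laboratory's validated procedure), the cutoff logic just described can be expressed in a few lines of Python; the 2.0 and 2.5 cutoffs and the falsely mature/falsely immature tradeoff are the figures quoted above.

        def classify_ls_ratio(lecithin, sphingomyelin, cutoff=2.0):
            """Classify an L/S ratio as 'mature' or 'immature' at a given cutoff.

            Raising the cutoff (e.g., to 2.5) reduces falsely mature results but
            increases falsely immature ones, as noted in the text.
            """
            ratio = lecithin / sphingomyelin
            return ratio, ("mature" if ratio > cutoff else "immature")

        # Hypothetical example: lecithin 7.5, sphingomyelin 3.0 (same units)
        # -> ratio 2.5, classified "mature" at the 2.0 cutoff.
        print(classify_ls_ratio(7.5, 3.0))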

    Phosphatidylglycerol (PG). A number of other tests have been developed in search of a procedure that is more accurate in predicting or excluding RDS and that is also technically easy to perform. PG is a relatively minor component (about 10%) of lung surfactant phospholipids. However, PG is almost entirely synthesized by mature lung alveolar cells and therefore is a good indicator of lung maturity. In normal pregnancies PG levels begin to increase after about 30 weeks’ gestation and continue to increase until birth. It normally becomes detectable about the 36th week. In conditions that produce severe fetal stress, such as maternal insulin-dependent diabetes, hypertension, and premature membrane rupture, PG levels may become detectable as early as 30 weeks’ gestation. Most of the limited studies to date indicate that the presence of PG in more than trace amounts strongly suggests that RDS will not develop, whether the pregnancy is normal or complicated. Overall incidence of RDS when PG is present seems to be about 2% (range 0%-10%). It is considered to be a more reliable indicator of fetal lung maturity than the L/S ratio in complicated pregnancy. It may be absent in some patients with clearly normal L/S ratios and occasionally may be present when the L/S ratio is less than 2.0. PG assay is not significantly affected by usual amounts of contamination by blood or meconium.

    PG can be assayed in several ways, including gas chromatography, thin-layer chromatography (TLC), enzymatic assay, and immunologic methods. The TLC technique is roughly similar to that of the L/S ratio. Some report the visual presence or absence of PG, with or without some comment as to how much appears to be present (trace or definite). Some report a PG/sphingomyelin (PG/S) ratio. A PG/S ratio of 2.0 or more is considered mature. A commercially available enzymatic PG method (“PG-Numeric”) separates phospholipids from the other components of amniotic fluid (using column chromatography or other means), followed by enzymatic assay of glycerol in the phospholipid fraction. After several years there is still an insufficient number of published evaluations of this technique. Immunological methods are still restricted to a slide agglutination kit called Amniostat FLM-Ultra (improved second-generation test). The current small number of evaluations indicates that Amniostat FLM-Ultra detects about 85%-90% of patients who are positive for PG on TLC. The risk of RDS is about 1%-2% if the test is reactive (positive).

    Foam stability index (FSI). The FSI is a surfactant function test based on the ability of surfactant to lower surface tension sufficiently to permit stabilized foaming when the amniotic fluid is shaken. This depends on the amount and functional capability of surfactant as challenged by certain amounts of the antifoaming agent ethanol. It is thought that the phospholipid dipalmitoyl lecithin is the most important stabilizing agent. The FSI is actually a modification of the Clemens shake test, which used a final amniotic fluid-ethanol mixture of 47.5% ethanol. The FSI consists of a series of seven tubes containing amniotic fluid with increasing percentages of ethanol in 1% increments from 44%-50%. The endpoint is the tube with the highest percentage of ethanol that maintains foam after shaking. An endpoint in the 47% tube predicts about a 4% chance of RDS and an endpoint in the 48% tube predicts less than 1% chance. Because even tiny inaccuracies or fluctuations in ethanol concentration can influence results considerably, and because absolute ethanol tends to absorb water, some problems were encountered in laboratories making their own reagents. To solve these problems, a commercial version of the FSI called Lumidex was introduced, featuring sealed tubes containing the 1% increments of ethanol to which aliquots of amniotic fluid are added through the rubber caps that seal the tubes. The FSI (or Lumidex) has been reported to be more reliable than the L/S ratio in predicting fetal lung maturity. At least two reports indicate that the FSI correctly demonstrates fetal lung maturity much more frequently than the L/S ratio in fetuses who are small for their gestational age. Drawbacks of the FSI method in general are interference (false positive) by blood, meconium, vaginal secretions, obstetrical creams, and mineral oil. A major drawback of the current Lumidex kit is a shelf life of only 3 weeks without refrigeration. Although the shelf life is 3 months with refrigeration, it is necessary to stabilize the tubes at room temperature for at least 3 hours before the test is performed.
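
    A minimal sketch of the endpoint logic follows, assuming tube results are recorded simply as foam present or absent at each ethanol percentage; the risk figures in the comments are those quoted above.

        def fsi_endpoint(foam_by_percent):
            """Return the FSI endpoint: the highest ethanol percentage (44%-50%)
            whose tube still maintains a stable foam ring after shaking."""
            stable = [pct for pct, foam in foam_by_percent.items() if foam]
            return max(stable) if stable else None

        # Hypothetical example: foam persists through the 47% tube but not 48%-50%.
        # An endpoint of 47 corresponds to roughly a 4% chance of RDS; 48 or higher
        # corresponds to less than a 1% chance (per the text).
        result = fsi_endpoint({44: True, 45: True, 46: True, 47: True,
                               48: False, 49: False, 50: False})
        print(result)  # 47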

    TDx-FLM fluorescent polarization. The TDx is a commercial instrument using fluorescent polarization to assay drug levels and other substances. It has been adapted to assay surfactant quantity indirectly by staining surfactant in amniotic fluid with a fluorescent dye and assaying surfactant (in mg/gm of albumin content) using the molecular viscosity of the fluid as an indicator of surfactant content. The assay in general produces results similar to the L/S ratio and a little better than the FSI. There is some difference in results depending on whether a single cutoff value is used, what that value is, and whether multiple cutoff values are applied depending on the situation. Test technical time is about 30 minutes. Specimens contaminated with meconium, blood, or urine (in vaginal pool material) interfere with the test.

    Lamellar body number density. Surfactant is produced by alveolar type II pneumocytes in the form of a concentrically wrapped small structure about 3 microns in size that on cross-section looks like an onion and is called a lamellar body. It is possible to count the lamellar bodies using some hematology platelet counting machines, with the result calculated in units of particle density per microliter of amniotic fluid. In the very few evaluations published to date, results were comparable to those of the L/S ratio and FSI.

    Amniocentesis laboratory problems. Occasionally, amniotic puncture may enter the maternal bladder instead of the amniotic sac. Some advocate determining glucose and protein levels, which are high in amniotic fluid and low in normal urine. To prevent confusion in diabetics with glucosuria, it has been suggested that urea and potassium levels be measured instead; these are relatively high in urine and low in amniotic fluid. Another potential danger area is the use of spectrophotometric measurement of amniotic fluid pigment as an estimate of amniotic fluid bilirubin content. Before 25 weeks’ gestation, normal pigment levels may be greater than those usually associated with abnormality.
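
    The differentiation of urine from amniotic fluid described above amounts to a simple decision rule. A hypothetical sketch is shown below; the qualitative high/low pattern is taken from the text, and any numeric thresholds would have to come from the individual laboratory.

        def likely_fluid(glucose_high, protein_high, urea_high, potassium_high):
            """Rough screen: amniotic fluid is relatively high in glucose and protein,
            urine relatively high in urea and potassium. In diabetics with glucosuria,
            urea and potassium are the more reliable discriminators."""
            if urea_high and potassium_high:
                return "probably urine"
            if glucose_high and protein_high:
                return "probably amniotic fluid"
            return "indeterminate - confirm with additional testing"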

  • Pregnancy Tests

    Most pregnancy tests are based on the fact that the placenta secretes human chorionic gonadotropin (hCG), a hormone that has a luteinizing action on ovarian follicles and probably has other functions that are not completely known. Serum hCG levels of about 25 milli-international units (mIU)/ml (IU/L) are reached about 8-10 days after conception. The hCG levels double approximately every 2 days (various investigators have reported doubling times ranging from 1-3 days) during approximately the first 6 weeks of gestation. Levels of about 500 mIU/ml are encountered about 14-18 days after conception (28-32 days after the beginning of the last menstrual period). Serum levels are generally higher than urine levels for about the first 2 weeks after conception and about the same as urine levels during the third week. Thereafter, urine levels are higher than serum levels. The serum (and urine) hCG levels peak about 55-70 days (8-10 weeks) after conception (literature range, 40-77 days). Peak serum values are about 30,000 mIU/ml (range, 20,000-57,000 mIU/ml). Serum and urine levels then decline rather rapidly during the last part of the first trimester, with serum levels eventually stabilizing at about 10,000 mIU/ml. These levels are maintained for the remainder of pregnancy, although some investigators describe a brief rise and fall in the third trimester. Urine levels generally parallel serum levels, but the actual quantity of urine hCG obtained in terms of milli-international units per milliliter is considerably dependent on technical aspects of the kit method being used (discussed later).
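
    As rough arithmetic only, using the approximate figures given above (about 25 mIU/ml at 8-10 days after conception and a doubling time of about 2 days during the first 6 weeks), an expected early-pregnancy hCG level can be projected as follows; the function and its default values are illustrative assumptions, not a clinical formula.

        def projected_hcg(days_after_conception, baseline=25.0, baseline_day=9,
                          doubling_days=2.0):
            """Project an approximate serum hCG (mIU/ml) in very early pregnancy,
            assuming exponential rise with the stated doubling time. Illustrative
            only; reported doubling times range from about 1 to 3 days."""
            doublings = (days_after_conception - baseline_day) / doubling_days
            return baseline * (2 ** doublings)

        # Example: about 16 days after conception -> roughly 25 * 2**3.5, or about
        # 280 mIU/ml, consistent in order of magnitude with the ~500 mIU/ml quoted
        # at 14-18 days after conception.
        print(round(projected_hcg(16)))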

    The hCG molecule is composed of two subunits, alpha and beta. The alpha subunit is also a part of the pituitary hormones LH, FSH, and TSH. The beta subunit, however, is different for each hormone. The hCG molecule in serum becomes partially degraded or metabolized to beta subunits and other fragments that are excreted in urine.

    Biologic tests. The first practical biologic test for pregnancy was the Ascheim-Zondek test, published in 1928. Urine was injected into immature female mice, and a positive result was indicated by corpus luteum development in the ovaries. This took 4-5 days to perform. The next major advance took place in the late 1950s when frog tests were introduced. These took about 2 hours to complete. The result was almost always positive by the 40th day after the last menses, although it could become positive earlier.

    Immunologic tests. In the 1960s it was learned that antibodies to hCG could be produced by injecting the hCG molecule into animals. This was the basis for developing immunologic pregnancy tests using antigen-antibody reactions. In the late 1960s and during the 1970s, both latex agglutination slide tests and hemagglutination tube tests became available. The slide tests took about 2 minutes to perform and had a sensitivity of 1,500-3,000 mIU/ml, depending on the manufacturer. The tube tests required 2 hours to complete and had a sensitivity of 600-1,500 mIU/ml. The antibody preparations used at that time were polyclonal antibodies developed against the intact hCG molecule, and they cross-reacted with LH and TSH. This did not permit tests to be sensitive enough to detect small amounts of hCG, because urine LH could produce false positive results.

    Beta subunit antibody tests. In the late 1970s, methods were found to develop antibodies against the beta subunit of hCG rather than against the whole molecule. Antibody specific against the beta subunit could greatly reduce or even eliminate the cross-reaction of hCG with LH. However, the degree of current beta subunit antibody specificity varies with different commercial companies. By 1980, sensitivity of the slide tests using beta hCG antibody had reached 500-1,500 mIU/ml, and sensitivity of the beta hCG tube tests was approximately 200 mIU/ml. Both the slide and the tube tests required a urine specimen. In the 1980s, standard immunoassay methods were developed for beta hCG in serum that provide a sensitivity of 3-50 mIU/ml. These methods detect pregnancy 1-2 weeks after conception. The great majority of current tests use monoclonal antibodies, either alone or with a polyclonal antibody that captures the hCG molecule and a monoclonal antibody that identifies it. Several manufacturers developed abbreviated serum pregnancy immunoassays that compared patient serum with a single standard containing a known amount of beta hCG (usually in the range of 25 mIU/ml). A result greater than the standard means that beta hCG is present in a quantity greater than the standard value, which in usual circumstances indicates pregnancy. Current serum immunoassay procedures take between 5 minutes and 2 hours to perform (depending on the manufacturer). The abbreviated method is much less expensive and is usually quicker. Several urine tests are available that detect 50 mIU/ml of hCG.

    Technical problems with human chorionic gonadotropin. Some (not all) of the kits that detect less than 25 mIU/ml of hCG may have problems with false-positive results of several different etiologies. First, of course, there may be incorrect performance of the test or patient specimen mishandling. The antibodies used in different manufacturers’ tests have different specificities. Serum hCG molecules may exist in different forms in some patients: whole (“intact”) molecule, free beta subunit, free alpha subunit, or other degraded hCG fragments. Considerable quantities of serum free beta or alpha subunits are more often seen with tumors. Different antibodies may detect different amounts of hCG material depending on whether the antibody detects only the whole molecule, the beta subunit on the whole molecule, or the free beta subunit only (including in urine a partially degraded free beta subunit known as the “core beta fragment”). Most anti-beta antibodies actually detect whole-molecule hCG (because of its structural beta subunit), the free beta subunit, and core beta fragments. Therefore, the amount of hCG (in mIU/ml) detected in urine depends on several factors: (1) whether a specific whole-molecule or a beta-hCG method is used. The specific whole-molecule method reports about the same quantity of intact hCG in serum or urine, whereas the beta-specific assay would report higher amounts of hCG in urine than in serum since it detects intact hCG plus the beta subunits and fragments that are present in greater quantities in urine than serum; (2) degree of urine concentration or dilution; (3) stage of pregnancy, since more beta fragments appear in urine after the first few weeks; (4) how the particular kit is standardized (discussed later). Some beta-hCG antibodies have a certain degree of cross-reaction with LH, although theoretically a beta-specific antibody should not do so. The serum of occasional patients contains heterophil-type antibodies (human anti-mouse antibodies, HAMA) capable of cross-reacting with monoclonal test antibodies that were produced in mouse tissue cells, and these could produce a false positive result. This most often happens with double antibody “sandwich” test methods. Some kits are affected in a similar way by renal failure.

    Another confusing aspect of pregnancy testing relates to standardization of the tests by the manufacturers (that is, adjusting the test method to produce the same result as a standard, which is a known quantity of the material being assayed). In hCG testing, the manufacturers use a standard from the World Health Organization (WHO). The confusion arises because the earliest WHO standard used for this purpose (Second International Standard; second IS) was composed of a mixture of whole-molecule hCG, free beta subunits, and other hCG fragments. When the supply of second IS was exhausted, the WHO developed the first (and then the third) International Reference Preparation (IRP), which is mostly whole-molecule hCG without free beta subunit. However, hCG kits standardized with the original second IS give results about half as high as current kits standardized against the first or third IRP. Also, certain current kits specific for whole-molecule hCG would not detect some of the hCG fragments in the original second IS. This difference in antibody behavior may at least partially explain discrepant reports in the literature of equal quantities of hCG in pregnancy serum and urine and other reports of urine values as high as 10 times serum values. After the first few weeks of pregnancy, maternal serum contains primarily intact hCG; maternal urine contains some intact hCG but much larger quantities of free beta subunits and core beta fragments.

    Finally, it has been reported in several studies that occasionally normal nonpregnant women may have low circulating levels of an hCG-like substance, usually less than 25 mIU/ml. This was reported in about 2.5% (range, 0%-14%) of patients in these studies, although most evaluations of hCG test kits have not reported false positive results. When one kit was reactive, sometimes one or more different kits would also be reactive, but usually some kits do not react with these substances. At present, in most laboratories there is no satisfactory way to know immediately whether a positive result is due to pregnancy, is due to hCG-producing tumor, or is false positive, especially when the test is a yes-no method. Although there are ways to investigate possible discrepancies, it usually takes considerable time and retesting to solve the problem or it may necessitate consultation with a reference laboratory.

    Other uses for hCG assay. Pregnancy tests are useful in certain situations other than early diagnosis of normal pregnancy. These conditions include ectopic pregnancy, spontaneous abortion (which occurs in about 15% of all pregnancies; literature range, 12%-31%), and hCG-producing neoplasms. Ectopic pregnancy and neoplasms will be discussed in detail later. When the differential diagnosis includes normal intrauterine pregnancy, ectopic pregnancy, and threatened, incomplete, or complete abortion, the pattern obtained from serum quantitative beta-hCG assays performed every other day may be helpful. During the first 4 weeks of pregnancy (beginning at conception), there is roughly a doubling of hCG every 2 days (range, 1-3 days). As noted earlier, serum beta hCG by immunoassay first detects embryonic placental hCG in titers of 2-25 IU/L between 1 and 2 weeks after conception. Ectopic pregnancy and abortions may demonstrate an increase in their hCG levels at the same rate as in normal pregnancy up to a certain point. In the case of ectopic pregnancy, that point is usually less than 4 weeks (possibly as long as 6 weeks) after conception, since the ectopic location usually limits placental growth or rupture occurs. The typical pattern of ectopic pregnancy is a leveling off (plateau) at a certain time. The usual pattern of abortion is either a decrease in beta-hCG levels as abortion takes place or a considerable slowing in the rate of increase. However, these are only rules of thumb. About 15% of normal intrauterine pregnancies display less increase (decreased rate of increase) than expected, and thus could be mistaken for beginning abortion by this criterion alone. Also, ectopic pregnancy values may sometimes decline rather than plateau if the fetus dies.
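
    The following is a hedged sketch of how serial quantitative values can be screened for the patterns just described, with the doubling time computed from two measurements taken 48 hours apart. The 1.66 and 0.9 fold-change cutoffs are arbitrary illustrative thresholds (not values from the text), and the labels simply restate the rules of thumb above; they are not diagnostic.

        import math

        def hcg_48h_pattern(hcg_day0, hcg_day2):
            """Classify the 48-hour change in serum beta-hCG (mIU/ml) by rule of thumb:
            roughly doubling suggests normal early pregnancy, a plateau is typical of
            ectopic pregnancy, and a decline is typical of abortion. Exceptions occur;
            about 15% of normal pregnancies rise more slowly than expected."""
            if hcg_day0 <= 0 or hcg_day2 <= 0:
                raise ValueError("hCG values must be positive")
            fold_change = hcg_day2 / hcg_day0
            doubling_days = 2 * math.log(2) / math.log(fold_change) if fold_change > 1 else None
            if fold_change >= 1.66:          # close to doubling within 48 hours
                pattern = "appropriate rise"
            elif fold_change > 1.0:
                pattern = "subnormal rise"
            elif fold_change >= 0.9:
                pattern = "plateau"
            else:
                pattern = "decline"
            return fold_change, doubling_days, pattern

        # Example: 400 -> 850 mIU/ml over 48 hours gives a fold change of about 2.1,
        # an estimated doubling time of about 1.9 days, and "appropriate rise".
        print(hcg_48h_pattern(400, 850))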

    Ectopic pregnancy

    Ectopic pregnancy is a common gynecologic problem, either by itself or in differential diagnosis. Symptoms include abdominal pain of various types in about 97% of patients (literature range, 91%-100%), abnormal uterine bleeding in about 75% (54%-80%), delayed menses in about 75% (68%-84%), adnexal tenderness on palpation in about 90%-95%, unilateral adnexal mass in about 50% (30%-76%), and fever (usually low-grade) in about 5% (3%-9%). Hypovolemic shock is reported as the presenting symptom in about 14%. It is obvious that these signs and symptoms can suggest a great number of conditions. In one study, 31% of patients with ectopic pregnancy in the differential diagnosis had a strongly suggestive triad of abdominal pain, uterine bleeding, and an adnexal mass. Only 14% of these patients were found to have ectopic pregnancy. Some conditions that frequently mimic ectopic pregnancy are pelvic inflammatory disease; threatened, incomplete, or complete abortion; corpus luteum rupture; dysfunctional uterine bleeding; and bleeding ovarian cyst. Among routine laboratory tests, a hemoglobin value less than 10 gm/100 ml is reported in about 40% of ectopic pregnancy cases (28%-55%) and leukocytosis in about 50%. Among other diagnostic procedures, culdocentesis for fresh blood is reported to produce about 10% false negative results (5%-18%). Pregnancy test results vary according to the sensitivity of the test. Urine or serum pregnancy tests with a sensitivity of 500-1,000 mIU/ml result in about 25% false negative results (8%-60%). Tests with a sensitivity of 50 mIU/ml yield about 5%-10% false negative results (0%-13%). Serum tests with a sensitivity of 25 IU/L or better have a false negative rate of about 1%-2% (range, 0%-3%). A positive pregnancy test result is not a diagnosis of ectopic pregnancy but signifies only that the patient has increased levels of hCG, for which there could be several possible causes. Also, some manufacturers’ kits are subject to a certain number of false positive results. Interpretation of a negative test result depends on the sensitivity of the test. If the test is a serum hCG immunoassay with a sensitivity of 25 mIU/ml (IU/L) or better, a negative test result is about 98%-99% accurate in excluding pregnancy. However, there are rare cases in which the specimen might be obtained 2-4 days before the patient's hCG reaches detectable levels or there could be a technical laboratory error. A repeat test 48 hours later helps to exclude these possibilities.

    As noted previously, failure to double hCG values in 48 hours at gestational age 4-8 weeks occurs in about 66% of ectopic pregnancies, about 85% of spontaneous abortion cases, and about 15% of normal pregnancies. Such an abnormally slow hCG increase rate would warrant closer follow-up or possibly other diagnostic tests, such as a quantitative serum hCG assay if the 48-hour increase is considerably low. A substantially low serum hCG level for gestational age suggests abnormal pregnancy. Another use for quantitative hCG assay in appropriate cases is to see if the “discriminatory zone” of Kadar has been reached. Originally, this was the range of 6,000-6,500 mIU/ml (IU/L, IRP standard) above which standard transabdominal ultrasound (TAUS) can visualize a normal pregnancy gestational sac in the uterus in about 94% of cases (although TAUS could detect an intrauterine gestational sac below 6,000 mIU/ml in some cases, failure to do so gives no useful information). With more sensitive ultrasound equipment and use of a vaginal transducer, it has been reported that the discriminatory zone upper limit can be reduced to the range of 1,000-1,500 mIU/ml (IU/L), but the exact value must be established by each institution using its particular pregnancy test and ultrasound equipment. Transvaginal ultrasound is more sensitive than TAUS in detecting an adnexal mass or free cul-de-sac fluid that would suggest ectopic pregnancy.
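
    The discriminatory-zone reasoning can be summarized in a short illustrative sketch. The 6,500 mIU/ml default below reflects the transabdominal figure quoted above; a transvaginal study would use a lower, institution-specific cutoff, and the returned strings are only restatements of the logic in the text.

        def discriminatory_zone_interpretation(hcg_miu_ml, intrauterine_sac_seen,
                                               zone_cutoff=6500):
            """If hCG exceeds the discriminatory zone and no intrauterine gestational
            sac is seen on ultrasound, ectopic pregnancy becomes a serious concern.
            Below the zone, failure to see a sac gives no useful information."""
            if hcg_miu_ml >= zone_cutoff:
                if intrauterine_sac_seen:
                    return "consistent with intrauterine pregnancy"
                return "no sac above the discriminatory zone - suspect ectopic pregnancy"
            return "below the discriminatory zone - absence of a sac is nondiagnostic"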

    Neoplasms producing human chorionic gonadotropin

    Neoplasms arising from chorionic villi, the fetal part of the placenta, are known as gestational trophoblastic neoplasms and include hydatidiform mole (the counterpart in tumor classification of benign adenoma) and choriocarcinoma (chorioepithelioma, the equivalent of carcinoma). Hydatidiform mole also has a subdivision, chorioadenoma destruens, in which the neoplasm invades the uterine wall but there is no other evidence of malignancy. The major importance of hydatidiform mole is a very high (≥10%) incidence of progression to choriocarcinoma.

    Several hormone assays have been proposed as aids in diagnosis. By far the most important is hCG, which is produced by the trophoblast cell component of fetal placental tissue. Current pregnancy tests using monoclonal antibodies to the beta subunit of hCG or to the whole molecule can detect levels of 25 mIU/ml (IU/L), sometimes less, without interference by LH, permitting detection of nearly all gestational tumors (except a very few that predominantly secrete the free beta fragment of hCG, which would necessitate an assay that would detect this hCG metabolite). Since normal placental tissue secretes hCG, the problem then is to differentiate normal pregnancy from neoplasm. Suspicion is raised by clinical signs and also by finding hCG levels that are increased more than expected for the duration of pregnancy or that persist after removal of the placenta. Twin or other multiple pregnancies can also produce hCG levels above expected values. Although serum levels of hCG greater than 50,000 mIU/ml (or urine levels > 300,000 mIU/ml) are typically associated with gestational neoplasms, especially if these levels persist, a considerable number of patients with gestational tumors have hCG values less than this level. About 25% of patients in one report had values less than 1,000 mIU/ml. In normal pregnancies, serum hCG levels become nondetectable by about 14 days (range, 3-30 days) after delivery. In one study of elective abortions, it took 23-52 days for hCG levels to become nondetectable. After removal of a hydatidiform mole, hCG levels should become nondetectable in about 2-3 months (range, 21-278 days). Once neoplasm is diagnosed and treated, hCG measurement is a guideline for success of therapy and follow-up of the patient for possible recurrence.

    Other hormones useful in possible gestational neoplasms. Fetal and placental tissue produces other hormones that may be useful. Progesterone (or its metabolite pregnanediol) and estradiol are secreted by the placenta in slowly increasing quantity throughout most of pregnancy. It has been reported that during the first 20 weeks of gestation, hydatidiform moles are associated with serum estradiol-17β values that are increased over values expected in normal pregnancy, with good separation of normal pregnancy from molar pregnancy. Serum progesterone levels were increased in about 75% of nonaborted moles up to the 20th week. Urinary pregnanediol levels, on the other hand, are frequently decreased. Finding increased serum progesterone and estradiol-17β levels during the time that peak hCG values are expected (between the 50th and 80th days after the last menstrual period), accompanied by a decreased urine pregnanediol level, would suggest a hydatidiform mole or possibly a choriocarcinoma. Serum human placental lactogen (hPL), or somatomammotropin, is another placental hormone whose level rises during the first and second trimesters and then reaches a plateau during the last 2-3 months. The association of decreased levels of hPL in the first and second trimesters with increased hCG levels suggests neoplasm. There is, however, a small degree of overlap between hPL levels in patients with mole and the normal range for pregnancy. One report suggests a possible inverse ratio between hPL values and degree of malignancy (the greater the degree of malignancy, the less serum hPL produced).

    Production of hCG has been reported to occur in nearly two thirds of testicular embryonal cell carcinomas and in about one third of testicular seminomas. Instances of hCG secretion by adenocarcinomas from other organs and, rarely, from certain other tumors have been reported.

  • Tests of Gonadal Function

    The most common conditions in which gonadal function tests are used are hypogonadism in males and menstrual disorders, fertility problems, and hirsutism or virilization in females. The hormones currently available for assistance include lutropin (luteinizing hormone; LH), follitropin (follicle-stimulating hormone; FSH), testosterone, human chorionic gonadotropin (hCG), and gonadotropin-releasing hormone (GnRH).

    Gonadal function is regulated by hypothalamic secretion of a peptide hormone known as gonadotropin-releasing hormone (GnRH; also called luteinizing hormone-releasing hormone, LHRH). Secretion normally is in discontinuous bursts or pulses, occurring about every 1.5-2.0 hours (range, 0.7-2.5 hours). GnRH half-life normally is about 2-4 minutes (range, 2-8 minutes). There is little if any secretion until the onset of puberty (beginning at age 8-13 years in females and age 9-14 years in males). Then secretion begins predominantly at night but later extends throughout the night into daytime, until by late puberty secretion takes place all 24 hours. GnRH is secreted directly into the circulatory channels between the hypothalamus and pituitary. GnRH causes the basophilic cells of the pituitary to secrete LH and FSH. In males, LH acts on Leydig cells of the testis to produce testosterone, whereas FSH stimulates Sertoli cells of the testis seminiferous tubules into spermatogenesis, assisted by testosterone. In females, LH acts on ovarian thecal cells to produce testosterone, whereas FSH stimulates ovarian follicle growth and also causes follicular cells to convert testosterone to estrogen (estradiol). There is a feedback mechanism from target organs to the hypothalamus to control GnRH secretion. In males, testosterone and estradiol inhibit LH secretion at the level of the hypothalamus, and testosterone also inhibits LH production at the level of the pituitary. A hormone called inhibin, produced by the Sertoli cells of the testis tubules, selectively inhibits FSH at both the hypothalamic and pituitary levels. There appears to be some inhibitory action of testosterone and estradiol on FSH production as well, in some circumstances. In females, there is some inhibitory feedback to the hypothalamus by estradiol.

    Luteinizing hormone and follicle-stimulating hormone

    Originally FSH and LH were measured together using bioassay methods, and so-called FSH measurements included LH. Immunoassay methods can separate the two hormones. The LH cross-reacts with hCG in some RIA systems. Although this ordinarily does not create a problem, there is difficulty in pregnancy and in patients with certain neoplasms that secrete hCG. Several of the pituitary and placental glycopeptides (TSH, FSH, hCG, LH) are composed of at least two immunologic units, alpha and beta. All of these hormones have the same immunologic response to their alpha fraction but a different response to each beta subunit. Antisera against the beta subunit of hCG eliminate interference by LH, and some investigators have reported success in producing antisera against the beta subunit of LH, which does not detect hCG. There is some cross-reaction between FSH and TSH, but as yet this has not seemed important enough clinically in TSH assay to necessitate substitution of TSH beta subunit antiserum for antisera already available. Assay of FSH in serum is thought to be more reliable than in urine due to characteristics of the antibody preparations available.

    Serum luteinizing hormone

    LH is secreted in pulses, following intermittent stimulation by GnRH, at a rate of about 2-4 pulses every 6 hours, with peak values ranging from 30% to nearly 300% above the lowest values. Therefore, single isolated LH blood levels may be difficult to interpret and could be misleading. It has been suggested that multiple samples be obtained (e.g., four specimens, each specimen collected at a 20-minute interval from another). The serum specimens can be pooled to obtain an average value. FSH and testosterone have a relatively constant blood level in females.
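
    A trivial sketch of the averaging approach just mentioned follows (four specimens drawn 20 minutes apart; whether they are physically pooled or assayed separately and averaged, the intent is the same). The numeric values in the example are hypothetical.

        def mean_lh(lh_values):
            """Average several LH results (e.g., four specimens drawn at 20-minute
            intervals) to smooth out pulsatile secretion."""
            return sum(lh_values) / len(lh_values)

        # Any single one of these hypothetical values could mislead; the mean is
        # a more representative estimate of the basal level.
        print(mean_lh([8.0, 14.5, 22.0, 11.0]))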

    Urine luteinizing hormone

    In contrast to serum LH, urine LH is more difficult to obtain, since it requires a 24-hour specimen with the usual problem of complete specimen collection. It also has the disadvantage of artifact due to urine concentration or dilution. The major advantage is averaging of 24-hour secretion, which may prevent misleading results associated with serum assays due to LH pulsatile secretion. Another role for urine LH assay is detection of ovulation during infertility workups. LH is rapidly excreted from plasma into urine, so that a serum LH surge of sufficient magnitude is mirrored in single urine samples. An LH surge precedes ovulation by about 24-36 hours (range, 22-4 hours). Daily urine specimens are obtained beginning 9-10 days after onset of menstruation (so as not to miss the LH surge preceding ovulation if ovulation should occur earlier than the middle of the patient's cycle). It has been reported that the best time to obtain the urine specimen is during midday (11 A.M.-3 P.M.) because the LH surge in serum usually takes place in the morning (5 A.M.-9 A.M.). In one study, the LH surge was detected in 56% of morning urine specimens, 94% of midday specimens, and 88% of evening specimens. In contrast, basal body temperature, another method of predicting time of ovulation, is said to be accurate in only 40%-50% of cases. Several manufacturers have marketed simple immunologic tests for urine LH with a color change endpoint that can be used by many patients to test their own specimens. Possible problems include interference by hCG with some of the LH kits, so that early pregnancy could simulate an LH surge. Since the LH surge usually does not last more than 2 days, positive test results for more than 3 consecutive days suggest some interfering factor. Similar interference may appear in some patients with elevated serum LH levels due to ovarian failure (polycystic ovary [PCO] disease, early menopause, etc.).

    Finally, an estimated 10% of patients have more than one urine LH spike, although the LH surge has the greatest magnitude. It may be necessary to obtain serum progesterone or urine pregnanediol glucuronide assays to confirm that elevation of LH values is actually the preovulation LH surge.
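
    A schematic sketch of the daily-testing logic from the two preceding paragraphs follows. Here "positive" simply means the kit's color-change endpoint was reached; the 3-consecutive-day rule and the suggestion to confirm with pregnanediol are taken from the text, and the returned strings are illustrative, not clinical advice.

        def review_daily_lh_results(daily_positive):
            """daily_positive: list of booleans, one per daily midday urine LH test,
            beginning 9-10 days after onset of menstruation."""
            longest = run = 0
            for positive in daily_positive:
                run = run + 1 if positive else 0
                longest = max(longest, run)
            if longest == 0:
                return "no surge detected yet - continue daily testing"
            if longest > 3:
                return ("positive for more than 3 consecutive days - suspect an "
                        "interfering factor (hCG, ovarian failure, etc.)")
            return ("apparent LH surge - consider confirming ovulation (e.g., urine "
                    "pregnanediol glucuronide 7-9 days after the surge)")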

    Urine pregnanediol

    The ovarian corpus luteum formed shortly after ovulation secretes increasing amounts of progesterone. Progesterone or its metabolite pregnanediol glucuronide begins to appear in detectable quantities about 2-3 days after ovulation (3-5 days after the LH surge) and persists until about the middle of the luteal phase that ends in menstruation. Conception is followed by progesterone secretion by the placenta. A negative baseline urine pregnanediol assay before the LH surge, followed by a positive pregnanediol test result on at least 1 day of early morning urine specimens obtained 7, 8, and 9 days after the LH surge, confirms that the LH surge did occur and was followed by ovulation and corpus luteum formation. Urine specimens are collected in the morning rather than at the LH collection time of midday. At least one manufacturer markets a simple kit for urine pregnanediol assay.

    Problems with interpretation of a positive urine pregnanediol glucuronide assay include the possibility that early pregnancy may be present. Also, 5%-10% of patients have slightly or mildly increased pregnanediol levels (compared to the reference range) throughout the nonovulatory phase of their menstrual cycle. A urine sample collected before the LH surge can be tested to exclude or demonstrate this phenomenon.
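
    A minimal sketch of the confirmation scheme in the two preceding paragraphs is shown below (baseline specimen before the LH surge, then early-morning specimens 7, 8, and 9 days after the surge). "Positive" means the kit's qualitative threshold was reached; the interpretive strings merely restate the text.

        def ovulation_confirmed(baseline_positive, day7_to_9_positives):
            """Interpret urine pregnanediol glucuronide results. A positive baseline
            raises the possibility of early pregnancy or of the mildly increased
            nonovulatory-phase levels seen in 5%-10% of patients."""
            if baseline_positive:
                return "baseline positive - cannot confirm ovulation without further workup"
            if any(day7_to_9_positives):
                return "LH surge followed by ovulation and corpus luteum formation confirmed"
            return "no postsurge rise - ovulation not confirmed"

        # Hypothetical example: negative baseline, positive on day 8 only.
        print(ovulation_confirmed(False, [False, True, False]))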

    Serum androgens

    The most important androgens are dehydroepiandrosterone (DHEA), a metabolite of DHEA called DHEA-sulfate (DHEA-S), androstenedione, and testosterone. DHEA is produced in the adrenal cortex, ovary, and testis (in the adrenal cortex, the precursors of DHEA are also precursors of cortisol and aldosterone, which is not the case in the ovary or testis). DHEA is the precursor of androstenedione, and androstenedione is a precursor both of testosterone and of estrogens (see Fig. 30-2). About 50%-60% of testosterone in normal females is derived from androstenedione conversion in peripheral tissues, about 30% is produced directly by the adrenal, and about 20% is produced by the ovary. DHEA blood levels in females are 3 times androstenedione blood levels and about 10 times testosterone blood levels. In normal males after onset of puberty, testosterone blood levels are twice as high as all androgens combined in females. Androstenedione blood levels in males are about 60% of those in females and about 10%-15% of testosterone blood levels.

    Serum testosterone

    About 60%-75% of circulating serum testosterone is bound to a beta globulin variously known as sex hormone-binding globulin (SHBG) or as testosterone-binding globulin (TBG, a misleading abbreviation because of its similarity to the more widely used abbreviation representing thyroxine-binding globulin). About 20%-40% of testosterone is bound to serum albumin and 1%-2% is unbound (“free”). The unbound fraction is the only biologically active portion. Serum assays for testosterone measure total testosterone (bound plus unbound) values. Special techniques to assay free testosterone are available in some reference laboratories and in larger medical centers. Circulating androstenedione and DHEA are bound to albumin only.

    Several conditions can affect total serum testosterone levels by altering the quantity of SHBG without affecting free testosterone. Factors that elevate SHBG levels include estrogens (estrogen-producing tumor, oral contraceptives, or medication), cirrhosis, hyperthyroidism, and (in males) decreased testis function or testicular feminization syndrome. Conditions that decrease SHBG levels include androgens and hypothyroidism.

    There is a relatively small diurnal variation of serum testosterone in men, with early morning levels about 20% higher than evening levels. There is little diurnal change in women. Serum levels of androstenedione in men or women are about 50% higher in the early morning than in the evening.

    In males, testosterone production is regulated by a feedback mechanism with the hypothalamus and pituitary. The hypothalamus produces gonadotropin-releasing hormone (GnRH; also called LHRH), which induces the pituitary to secrete LH (and FSH). LH, in turn, stimulates the Leydig cells of the testis to secrete testosterone.

    Serum estrogens

    There are three major estrogens: estrone (E1), estradiol (estradiol-17β; E2), and estriol (E3). All of the estrogens are ultimately derived from androgenic precursors (DHEA and androstenedione), which are synthesized by the adrenal cortex, ovaries, or testis. The adrenal is unable to convert androgens to estrogens, so estrogens are directly produced in the ovary or testis (from precursors synthesized directly in those organs) or are produced in nonendocrine peripheral tissues such as the liver, adipose tissue, or skin by conversion of androgenic precursors brought by the bloodstream from one of the organs of androgen synthesis. Peripheral tissue estrogens are derived mainly from adrenal androgens. Estradiol is the predominant ovarian estrogen. The ovary also secretes a much smaller amount of estrone. The primary pathway of estradiol synthesis is from androstenedione to estrone and then from estrone to estradiol by a reversible reaction. This takes place in the ovary and to a much lesser extent in peripheral conversion of testosterone to estradiol. Estrone is produced directly from androstenedione (mostly in peripheral tissues) and to a lesser extent in the ovary from already formed estradiol by the reversible reaction. Estriol in nonpregnant women is formed as a metabolite of estradiol or estrone. In pregnancy, estriol is synthesized by the placenta from DHEA (actually from DHEA-S) derived from the fetal adrenals. This is a different pathway from the one in nonpregnant women. Estriol is the major estrogen of the placenta. The placenta also produces some estradiol, derived from both fetal and maternal precursors. Nonendocrine peripheral tissues (liver, skin, fat) synthesize a small amount of estrone and estradiol in pregnancy, mainly from adrenal precursors.

    In females, there is a feedback mechanism for estradiol production involving the ovaries, hypothalamus, and pituitary. As already mentioned, the hypothalamus produces GnRH. The GnRH is secreted in a pulsatile manner about every 70-90 minutes and is excreted by glomerular filtration. The GnRH stimulates FSH and LH secretion by the pituitary. Some investigators believe there may be a separate (undiscovered) factor that regulates FSH secretion. The FSH acts on the ovarian follicles to stimulate follicle growth and development of receptors to the action of LH. The LH stimulates the follicles to produce estradiol.

    Estradiol values can be measured in serum, and estriol values can be measured in serum or urine by immunoassay. Total estrogen values are measured in urine by a relatively old but still useful procedure based on the Kober biochemical reaction. All of the estrogens can be assayed by gas chromatography. The DHEA and androstenedione values can also be measured by special techniques, but these are available only in large reference laboratories.

  • Prolactin Secretion Abnormalities

    Prolactin is another peptide pituitary hormone. It stimulates lactation (galactorrhea) in females, but its function in males is less certain. The major regulatory mechanism for prolactin secretion is an inhibitory effect exerted by the hypothalamus, with one known pathway being under control of dopamine. There is also a hypothalamic stimulating effect, although a specific prolactin-stimulating hormone has not yet been isolated. TRH stimulates release of prolactin from the pituitary as well as release of TSH. Dopamine antagonists such as chlorpromazine or reserpine block the hypothalamic inhibition pathway, leading to increased prolactin secretion. Serum prolactin is measured by immunoassay. Prolactin secretion in adults has a diurnal pattern much like that of GH, with highest levels during sleep.

    Some Conditions Associated With Generalized Retardation or Acceleration of Bone Maturation Compared to Chronologic Age (as Seen on Hand-Wrist X-ray Films)
    Bone age retarded
    Hypopituitarism with GH deficiency
    Constitutional growth delay
    Gonadal dysgenesis (Turner’s syndrome)
    Primary hypothyroidism (20%-30% of patients)
    Cushing’s syndrome
    Severe chronic disease (renal, inflammatory gastrointestinal [GI] disease, malnutrition, chronic anemia, cyanotic congenital heart disease)
    Poorly controlled severe type I diabetes mellitus
    Bone age accelerated
    Excess androgens (adrenogenital syndrome; tumor; iatrogenic)
    Excess estrogens (tumor; iatrogenic)
    Albright’s syndrome (polyostotic fibrous dysplasia)
    Hyperthyroidism

    Prolactin assay

    Prolactin-secreting pituitary tumors. Prolactin assay has aroused interest for two reasons. First, about 65% of symptomatic pituitary adenomas (literature range, 25%-95%), including both microadenomas (<1 cm) and larger adenomas, produce elevated serum levels of prolactin. The pituitary cell type most often associated with hyperprolactinemia is the acidophil cell adenoma, but chromophobe adenomas (which are by far the most frequent adenoma type) are also involved. In addition, about 20%-30% of women with postpubertal (secondary) amenorrhea (literature range, 15%-50%) have been found to have elevated serum prolactin levels. The incidence of pituitary tumors in such persons is 35%-45%. Many patients have been cured when pituitary adenomas were destroyed or when drug therapy directed against prolactin secretion was given. Hyperprolactinemia has also been reported to be an occasional cause of male infertility.

    Some reports indicate that extension of a pituitary adenoma outside the confines of the sella turcica can be detected by assay of cerebrospinal fluid (CSF) prolactin. Prolactin in CSF rises in proportion to blood prolactin levels but is disproportionately elevated when tumor extension from the sella takes place. A CSF/plasma prolactin ratio of 0.2 or more suggests suprasellar extension of a pituitary tumor. Simultaneously obtained CSF and venous blood specimens are required.
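
    The ratio criterion just described reduces to a one-line calculation; a sketch follows, using the 0.2 cutoff quoted above and assuming both specimens were obtained simultaneously and reported in the same units.

        def csf_plasma_prolactin_ratio(csf_prolactin, plasma_prolactin):
            """Compute the CSF/plasma prolactin ratio from simultaneously drawn CSF
            and venous blood specimens (same units, e.g., ng/ml). A ratio of 0.2 or
            more suggests suprasellar extension of a pituitary tumor."""
            ratio = csf_prolactin / plasma_prolactin
            return ratio, ratio >= 0.2

        # Hypothetical example: CSF 30 ng/ml, plasma 110 ng/ml -> ratio ~0.27, flagged.
        print(csf_plasma_prolactin_ratio(30, 110))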

    Not all pituitary tumors secrete prolactin. Autopsy studies have demonstrated nonsymptomatic pituitary adenomas in 2.7%-27% of patients. Theoretically, nonfunctional tumors should have normal serum prolactin levels. Reports indicate, however, that some tumors that do not secrete prolactin may be associated with elevated serum prolactin levels, although the values are usually not as high as levels found with prolactin-secreting tumors.

    Prolactin assay drawbacks. Elevated serum prolactin levels may be associated with conditions other than pituitary tumors, idiopathic pituitary hyperprolactinemia, or hypothalamic dysfunction. Some of these conditions are listed in the box. Especially noteworthy are the empty sella syndrome, stress, and medication. The empty sella syndrome is associated with an enlarged sella turcica on x-ray film. Serum prolactin is elevated in some of these patients, although the elevation is most often not great; and the combination of an enlarged sella plus elevated serum prolactin level could easily lead to a false diagnosis of pituitary tumor. Stress is important since many conditions place the patient under stress. In particular, the stress of venipuncture may itself induce some elevation in serum prolactin levels, so some investigators place an indwelling heparin lock venous catheter and wait as long as 2 hours with the patient resting before the sample is drawn. Estrogens and other medications may contribute to diagnostic problems. In general, very high prolactin levels are more likely to be due to pituitary adenoma than to other causes, but there is a great deal of overlap in the low- and medium-range elevations, and only a minority of pituitary adenomas display marked prolactin elevation. Statistics depend on the diagnostic cutoff level being used. The level of 100 ng/ml (100 µg/L) is most frequently quoted; the majority (45%-81%) of pituitary adenomas are above this level, but only 25%-57% of patients with prolactin levels above 100 ng/ml are reported to have a pituitary adenoma. A value of 300 ng/ml gives clear-cut separation but includes only about one third of the adenomas (literature range, 12%-38%).

    Conditions Associated With Increased Serum Prolactin (% With Elevation Varies)
    Sleep
    Stress (including exercise, trauma, illness)
    Nursing infant
    Pregnancy and estrogens
    Pituitary adenoma
    Hypothalamic space-occupying, granulomatous, or destructive diseases
    Hypothyroidism
    Chronic renal failure
    Hypoglycemia
    Certain medications
    Postpartum amenorrhea syndrome
    Postpill amenorrhea-galactorrhea syndrome
    Empty sella syndrome
    Addison’s disease (Nelson’s syndrome)
    Polycystic ovary (PCO) disease
    Ectopic prolactin secretion (nonpituitary neoplasms)
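
    To make the cutoff trade-off discussed before the box concrete, the following illustrative sketch summarizes the figures quoted there; these are reported frequencies from the literature, not a diagnostic algorithm.

        def prolactin_cutoff_comment(prolactin_ng_ml):
            """Rough interpretive comment based on the literature figures in the text."""
            if prolactin_ng_ml >= 300:
                return ("above 300 ng/ml: fairly clear-cut for pituitary adenoma, but "
                        "only about one third of adenomas reach this level")
            if prolactin_ng_ml >= 100:
                return ("above 100 ng/ml: most adenomas (45%-81%) exceed this level, but "
                        "only 25%-57% of patients above it actually have an adenoma")
            return ("low or moderate elevation: substantial overlap with nontumor causes "
                    "(stress, medication, empty sella syndrome, etc.)")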

    Prolactin stimulation and suppression tests. Several stimulation and suppression tests have been used in attempts to differentiate pituitary adenoma from other causes of hyperprolactinemia. For example, several investigators have reported that pituitary adenomas display a normal response to levodopa but a blunted response to chlorpromazine. Normally there is a considerable elevation of prolactin level (at least twofold) after TRH or chlorpromazine administration and a decrease of the prolactin level after levodopa administration. In pituitary insufficiency there typically is failure to respond to TRH, chlorpromazine, or levodopa. In hypothalamic dysfunction there typically is normal response to TRH (which directly stimulates the pituitary), little, if any, response to chlorpromazine, and blunted response to levodopa. Pituitary adenomas are supposed to give a blunted response to TRH and chlorpromazine but a normal decrease with levodopa. Unfortunately, there are enough inconsistent results and enough overlap between adenoma and nonadenoma responses that most investigators believe none of these tests is sufficiently reliable. There have also been some conflicting results in differentiating hypothalamic from pituitary disease. Diseases such as hypothyroidism and other factors that affect pituitary function or prolactin secretion may affect the results of these tests.

  • Pituitary Insufficiency

    Body organs affected by stimulatory hormones from the pituitary include the thyroid, adrenals, and gonads. Bone growth in childhood is dependent on pituitary GH. Pituitary failure does not produce a clear-cut syndrome analogous to syndromes produced by failure of pituitary-controlled organs (“end organs”) such as the thyroid. Therefore, pituitary hormone deficiency is considered only when there is deficiency-type malfunction in one or more end organs or metabolic processes such as growth that are dependent on pituitary hormones. Diagnosis is complicated by the fact that primary end organ failure is much more common than pituitary hormone deficiency. Another source of confusion is that most abnormal effects that can be produced by pituitary dysfunction can also be produced or simulated by nonpituitary etiologies. In addition, the hypothalamus controls pituitary secretion of several hormones, so that hypothalamic abnormality (e.g., defects produced by a craniopharyngioma or other hypothalamic tumor) can result in pituitary dysfunction.

    Pituitary insufficiency in adults most commonly results in deficiency of more than one pituitary hormone and is most frequently caused by postpartum hemorrhage (Sheehan’s syndrome). Pituitary tumor is also an important cause. Gonadal failure is ordinarily the first clinical deficiency to appear. It is followed some time later by hypothyroidism. The first laboratory abnormality is usually failure of the GH level to rise normally in response to stimulation.

    Diagnosis

    Diagnosis of pituitary insufficiency can be made by direct or indirect testing methods. Indirect methods demonstrate that a hypofunctioning organ that is pituitary dependent shows normal function after stimulation by injection of the appropriate pituitary hormone. Direct methods consist of blood level assays of pituitary hormones. Another direct method is injection of a substance that directly stimulates the pituitary. Now that pituitary hormone assays are available in most medical centers and sizable reference laboratories, indirect tests are much less frequently needed.

    Assay of pituitary hormones. Pituitary hormones are peptide hormones rather than steroid hormones. Currently available immunoassays are a great improvement over original bioassays and even the first-generation radioimmunoassays (RIAs). However, with the exception of thyroid-stimulating hormone (TSH), pituitary hormone assays are still ordered infrequently, so that most physicians’ offices and ordinary hospital laboratories do not find it economically worthwhile to perform the tests. The assays for TSH and adrenal cortex stimulating hormone (adrenocorticotropin; ACTH) are discussed in the chapters on thyroid and adrenal function. Pituitary luteinizing hormone (LH) and follicle-stimulating hormone (FSH) assays are discussed later. GH deficiency is probably the most frequent pituitary hormone deficiency, either in overall pituitary failure or as an isolated defect leading to growth retardation in childhood. GH assay will be discussed in detail in relation to childhood growth disorders. GH assay has been used as an overall screen for pituitary insufficiency, but not all cases of pituitary hormone secretion deficiency have associated GH secretion deficiency. Prolactin is another pituitary hormone, but prolactin secretion is one of the last pituitary functions to disappear when the pituitary is injured.

    In primary end organ failure the blood level of stimulating hormone from the pituitary is usually elevated, as the pituitary attempts to get the last possible activity from the damaged organ. Therefore, in the presence of end organ failure, if values for the pituitary hormone that stimulates the organ are in the upper half of the reference range or are elevated, this is strong evidence against deficiency of that pituitary hormone. Theoretically, inadequate pituitary hormone secretion should result in serum assay values less than the reference range for that pituitary hormone. Unfortunately, low-normal or decreased values of some pituitary hormones may overlap with values found in normal persons, at least using many of the present-day commercial assay kits. In that situation, stimulation or suppression testing may be necessary for more definitive assessment of pituitary function.

    Stimulation and suppression tests. Pituitary suppression and stimulation tests available for diagnosis of pituitary hormone secretion deficiency include the following:

    1. Metyrapone (Metopirone) test based on the adrenal cortex-pituitary feedback mechanism involving ACTH and cortisol but dependent also on hypothalamic function.
    2. Thyrotropin-releasing hormone (TRH) test involving the ability of synthetic TRH to stimulate the pituitary by direct action to release TSH and prolactin.
    3. Tests involving pituitary gonadotropins (LH and FSH) such as clomiphene stimulation (involving the hypothalamic-pituitary axis) or LH-releasing hormone (LHRH) administration (direct pituitary stimulation).
    4. Tests of hypothalamic function, usually based on establishing pituitary normality by direct stimulation of the pituitary (TRH stimulation of TSH and prolactin, LHRH stimulation of LH secretion, etc.) followed by use of a test that depends on intact hypothalamic function to stimulate pituitary secretion of the same hormone. An example is the use of TRH to stimulate prolactin secretion by the pituitary, followed by chlorpromazine administration, which blocks normal hypothalamic mechanisms that inhibit hypothalamic stimulation of pituitary prolactin secretion and which leads to increased prolactin secretion if the hypothalamus is still capable of stimulating prolactin release.

  • Adrenal Medulla Dysfunction

    The only syndrome in this category is produced by pheochromocytomas. Pheochromocytoma is a tumor of the adrenal medulla that frequently secretes epinephrine or norepinephrine. This causes hypertension, which may be continuous (about 30% of patients) or in short episodes (paroxysmal). Although rare, pheochromocytoma is one of the few curable causes of hypertension and so should be considered as a possible etiology in any patient with hypertension of either sudden or recent onset. This is especially true for young or middle-aged persons.

    Approximately 90% of pheochromocytomas in adults arise in the adrenal (more often in the right adrenal). Of the 10% that are extraadrenal, the great majority are located in the abdomen, with 1%-2% in the thorax and neck. Extraadrenal abdominal tumors are usually found in the paraaortic sympathetic nerve chain (below the level of the renal artery), but perhaps one third are located in the remnants of the organ of Zuckerkandl (which is situated in the paraaortic region between the origin of the inferior mesenteric artery and the bifurcation of the aorta). About 20% of pheochromocytomas are multiple and about 10% are bilateral. Approximately 5%-10% (range, 3%-14%) of all pheochromocytomas are malignant, but the malignancy rate for extraadrenal abdominal tumors is reported to be about 25%-30%. Although hypertension is the most common significant clinical finding, 10%-20% of pheochromocytomas do not produce hypertension (Table 30-3). Hyperglycemia is reported in about 50% of patients. In one autopsy series of 54 cases, only 25% were diagnosed during life.

    Table 30-3 Common signs and symptoms of pheochromocytoma

    About 5% of pheochromocytoma patients have neurofibromatosis, and in 5%-10% the pheochromocytoma is associated with the multiple endocrine neoplasia (MEN) syndrome type II (also known as “IIA,” with medullary carcinoma of the thyroid) or III (also known as “IIB,” with mucosal neuromas). The MEN syndrome pheochromocytomas are bilateral in 50%-95% of cases and multiple in about 70% of cases, whereas nonfamilial (sporadic) pheochromocytomas are usually unilateral and are multiple in about 20% of cases. About 5% of the familial cases are said to be malignant.

    In children, extraadrenal pheochromocytomas are more common (30% of cases), more often bilateral (25%-70% of cases), and more often multiple (about 30% of cases) than in adults. About 10%-20% of adrenal or extraadrenal childhood pheochromocytomas are reported to be malignant.

    Tests for pheochromocytoma

    Regitine test. The first tests for pheochromocytomas were pharmacologic, based on neutralization of epinephrine effects by adrenergic-blocking drugs such as Regitine. After basal blood pressure has been established, 5 mg of Regitine is given intravenously, and the blood pressure is checked every 30 seconds. The result is positive if systolic blood pressure decreases more than 35 mm Hg or diastolic pressure decreases 25 mm Hg or more and remains decreased 3-4 minutes. Laboratory tests have proved much more reliable than the pharmacologic procedures, which yield an appreciable percentage of false positives and negatives.
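
    The positivity criteria just described amount to a simple threshold check. The Python sketch below illustrates them with hypothetical blood pressure readings; the function name and the example numbers are not from the original text.

    # Minimal sketch of the Regitine (phentolamine) test criteria described
    # above; hypothetical readings in mm Hg, not a clinical decision tool.
    def regitine_test_positive(baseline_systolic, baseline_diastolic, readings):
        """readings: list of (minutes_after_dose, systolic, diastolic) tuples
        obtained after the 5-mg IV Regitine dose."""
        for minutes, systolic, diastolic in readings:
            systolic_drop = baseline_systolic - systolic
            diastolic_drop = baseline_diastolic - diastolic
            # Positive if systolic pressure falls more than 35 mm Hg or diastolic
            # falls 25 mm Hg or more, with the decrease persisting 3-4 minutes.
            if (systolic_drop > 35 or diastolic_drop >= 25) and minutes >= 3:
                return True
        return False

    # Hypothetical example: baseline 210/120, decrease still present at 3 minutes
    print(regitine_test_positive(210, 120, [(0.5, 180, 100), (3, 170, 95)]))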

    Clonidine suppression test. Clonidine is a centrally acting alpha-adrenergic agonist that inhibits sympathetic nervous system catecholamine release from postganglionic neurons. The patient should not be taking antihypertensive medication (if possible) for at least 12 hours before the test. After a baseline blood specimen is obtained, a single 0.3-mg oral dose of clonidine is administered and a second blood specimen is drawn 3 hours later. Most patients without pheochromocytoma had postdose plasma norepinephrine values more than 50% below baseline and plasma catecholamine values less than 500 pg/ml. Most patients with pheochromocytoma showed little change in plasma norepinephrine values between the predose and postdose specimens and had a plasma catecholamine postdose level greater than 500 pg/ml. Apparently, better results are obtained when baseline plasma norepinephrine values are greater than 2000 pg/ml (86%-99% sensitivity) than when the baseline value is less than 2000 pg/ml (73%-97% sensitivity). Medications such as tricyclic antidepressants, thiazide diuretics, and beta blockers may interfere with the test.
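
    The interpretation of the clonidine suppression test rests on the two thresholds just given (a fall of more than 50% from baseline and a postdose level of 500 pg/ml). The Python sketch below is illustrative only; the function name and the example values are hypothetical.

    # Minimal sketch of the clonidine suppression test interpretation described
    # above; hypothetical plasma values in pg/ml, not a clinical decision tool.
    def clonidine_suppression_result(baseline_norepinephrine,
                                     postdose_norepinephrine,
                                     postdose_catecholamines):
        """Baseline specimen drawn before the 0.3-mg oral clonidine dose;
        postdose specimen drawn 3 hours later."""
        percent_fall = (100.0 * (baseline_norepinephrine - postdose_norepinephrine)
                        / baseline_norepinephrine)
        # Most patients without pheochromocytoma: norepinephrine falls more than
        # 50% below baseline and catecholamines fall below 500 pg/ml.
        if percent_fall > 50 and postdose_catecholamines < 500:
            return "suppression: pheochromocytoma unlikely"
        # Most pheochromocytoma patients: little change in norepinephrine and a
        # postdose catecholamine level greater than 500 pg/ml.
        if percent_fall < 50 and postdose_catecholamines > 500:
            return "no suppression: consistent with pheochromocytoma"
        return "equivocal: consider repeat or alternative testing"

    print(clonidine_suppression_result(2400, 2300, 2600))  # hypothetical nonsuppressor
    print(clonidine_suppression_result(900, 300, 350))     # hypothetical suppressor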

    Catecholamine/vanillylmandelic acid/metanephrine assay. The catecholamines epinephrine and norepinephrine are excreted by the kidney, about 3%-6% free (unchanged) and the remainder as various metabolites. Of these metabolic products, the major portion is vanillylmandelic (vanilmandelic) acid (VMA), and the remainder (about 20%-40%) consists of compounds known as metanephrines. Therefore, one can measure urinary catecholamines, metanephrines, or VMA. Of these, urine metanephrine assay is considered by many to be the most sensitive and reliable single test, and fewer drugs interfere with it.

    Most investigators report that urine metanephrine assay detects about 95% (range, 77%-100%) of patients with pheochromocytoma. Sensitivity of urine catecholamines is approximately 90%-95% (range, 67%-100%) and that of urine VMA assay is also about 90%-95% (range, 50%-100%). One report indicates that metanephrine excretion is relatively uniform and that a random specimen reported in terms of creatinine excretion can be substituted for the usual 24-hour collection in screening for pheochromocytoma. Methylglucamine x-ray contrast medium is said to produce falsely normal urine metanephrine values for up to 72 hours. One report found that 10%-15% of mildly elevated urine metanephrine values were falsely positive due to medications (such as methyldopa) or other reasons.
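
    The random-specimen approach mentioned above depends on expressing metanephrine relative to creatinine in the same specimen, which cancels the urine volume term. The Python sketch below shows the arithmetic; the units and the example concentrations are hypothetical, and no diagnostic cutoff is implied.

    # Minimal sketch of reporting a random (spot) urine metanephrine result per
    # unit of creatinine, as mentioned above; units and values are hypothetical.
    def metanephrine_creatinine_ratio(metanephrine_ug_per_dl, creatinine_mg_per_dl):
        """Express metanephrine excretion relative to creatinine so that a spot
        specimen can substitute for a 24-hour collection."""
        # Concentrations from the same specimen cancel the volume term, leaving
        # micrograms of metanephrine per milligram of creatinine.
        return metanephrine_ug_per_dl / creatinine_mg_per_dl

    # Hypothetical spot specimen: 90 ug/dl metanephrine, 120 mg/dl creatinine
    ratio = metanephrine_creatinine_ratio(90.0, 120.0)
    print(f"{ratio:.2f} ug metanephrine per mg creatinine")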

    Although the metanephrine excretion test is slowly gaining preference, VMA and catecholamine assay are still widely used. All three methods have detection rates within 5%-10% of one another. A small but significant percentage of pheochromocytomas are missed by each of the three tests (fewest by metanephrine excretion), especially if the tumor secretes intermittently. The VMA test has one definite advantage in that certain screening methods are technically simpler than catecholamine or metanephrine assay and therefore are more readily available in smaller laboratories. The VMA screening methods are apt to produce more false positive results, however, so abnormal values (or normal results in patients with strong suspicion for pheochromocytoma) should be confirmed by some other procedure.

    Catecholamine production and plasma catecholamine levels may be increased after severe exercise (although mild exercise has no appreciable effect), by emotional stress, and by smoking. Uremia interferes with assay methods based on fluorescence. Other diseases or conditions that may increase plasma or urine catecholamines or urine catecholamine metabolites are hypothyroidism, diuretic therapy, heavy alcohol intake, hypoglycemia, hypoxia, severe acidosis, Cushing’s syndrome, myocardial infarction, hemolytic anemia, and occasionally lymphoma or severe renal disease. In addition, bananas, coffee, and various other foods as well as certain medications may produce falsely elevated results. These foods or medications produce appreciable numbers of false-positive results in some of the standard screening techniques for VMA. An abnormal result with the “screening” VMA techniques should be confirmed by some other VMA method. Although other VMA methods or methods for metanephrine and catecholamine assay are more reliable, they too may be affected by certain foods or medications, so it is best to check with individual laboratories for details on substances that affect their particular test method. Reports differ on whether some patients with essential hypertension have elevated serum or urine catecholamine or catecholamine metabolite results and, if so, how often this occurs and what percentage of the elevations are due to conditions known to elevate catecholamines, to medications, or to unknown causes.

    Some investigators use urine fractionated catecholamines (epinephrine and norepinephrine, sometimes also dopamine) by high-pressure liquid chromatography as a confirmatory test for pheochromocytoma. It is said that about 50%-70% of pheochromocytomas produce epinephrine, about 75%-85% produce norepinephrine, and about 95% produce one or the other.

    Plasma catecholamine assay. Several investigators report better sensitivity for pheochromocytoma with plasma catecholamine assay than with the particular urine metabolite assays they were using. Another advantage is the simplicity of a single blood specimen versus a 24-hour collection. However, upright posture and stress (even the stress of venipuncture) can greatly affect plasma catecholamine levels, so it is best to draw the specimen in the early morning before the patient rises. Even then, some advocate placement of an indwelling venous catheter or heparinized scalp vein apparatus with an interval of 30 minutes between insertion of the catheter and withdrawal of the blood specimen. One investigator reported that plasma catecholamine values decreased rapidly if the red blood cells (RBCs) were not removed within 5 minutes after the specimen was obtained. Placing the specimen on ice was helpful but only partially retarded RBC catecholamine uptake. The plasma catecholamine assay detection rate in pheochromocytoma is about 90% (literature range, 53%-100%). Failure to detect the tumor in some instances is due to intermittent tumor secretion. Urine collection has the advantage of averaging the 24-hour excretion.

    Tumor localization methods. Once pheochromocytoma is strongly suspected by results of biochemical testing, certain procedures have been useful in localizing the tumor. Intravenous pyelography with nephrotomography is reported to have an accuracy of about 70% in detecting adrenal pheochromocytoma. CT has been reported to detect 85%-95% (range, 84%-over 95%) of pheochromocytomas, including some in extraadrenal locations. The consensus now is that CT (or magnetic resonance imaging [MRI]) is the best single localization test (better than ultrasound). Radioactive metaiodobenzylguanidine (mIBG) scanning has been used to locate pheochromocytomas and other neural tumors with a sensitivity of about 85%. However, this procedure is available in only a few nuclear medicine centers. Angiographic procedures are reported to detect 60%-84% of pheochromocytomas; but angiography is somewhat controversial because it is invasive, because it yields about 10% false negative results in adrenal tumors, and because some investigators believe that the incidence of tumor bilaterality warrants exploration of both sides of the abdomen regardless of angiographic findings.

    Miscellaneous procedures. Serum chromogranin A (CgA), a substance produced in the adrenal medulla and some other endocrine and neuroendocrine tissues and tumors, has been used in the diagnosis of pheochromocytoma. CgA is not affected by posture, venipuncture, or many of the medications that interfere with catecholamine assay; in one series CgA detected 86% of pheochromocytomas.

    Deoxyribonucleic acid (DNA) analysis by flow cytometry in small numbers of patients suggests that pheochromocytomas with diploid DNA content are most often (but not always) benign, whereas those that are aneuploid are most often (but not always) malignant.

    Serum neuron-specific enolase (NSE) assay in small numbers of patients suggests that pheochromocytomas with normal NSE levels are usually benign, whereas those with elevated values are often malignant.