
8.8 Risk assessment and risk management
Oxford Textbook of Public Health

Gilbert S. Omenn and Elaine M. Faustman

Introduction

Definitions
Hazard identification: epidemiology, lifetime rodent bioassay, short-term tests, and structure–activity relationships

Epidemiology

Lifetime rodent bioassays

Short-term tests

Structure–activity relationships

Integrating hazard identification information
Risk characterization: dose–response, exposure analysis, variation in susceptibility, and relation of effects in rodents to risk in humans

Dose–response

Exposure analysis

Variation in susceptibility

Extrapolation from rodents to humans

Information resources

Integrating qualitative and quantitative aspects of risk assessment: classification schemes

Ethylene oxide: an example of chemical specific risk assessment

The United States Commission on Risk Assessment and Risk Management (Risk Commission)

Comparative analyses of risks and perceptions of risk

Economic analyses

Risk communication
Conclusions
Chapter References

Introduction
Risk assessment as an organized activity of the federal agencies in the United States began in the 1970s. Earlier, the American Conference of Governmental Industrial Hygienists had set threshold limit values for exposures of workers and the Food and Drug Administration (FDA) had set acceptable daily intakes for dietary pesticide residues and food additives. In the ‘Delaney Clause’ of 1958, Congress instructed the FDA to prohibit substances found to cause cancer in animals (or humans, of course) from being used as food additives that could reach humans through the food supply. For some time, it was pragmatic to declare safe any food sources in which standard tests found no evidence of these chemicals (Albert 1994). However, advances in analytical chemistry exposed the fact that ‘not detectable’ was not the same as ‘not present’ or ‘zero risk’. The agencies had to develop ‘tolerance levels’ and ‘acceptable risk levels’.
In the mid-1970s the United States Environmental Protection Agency (EPA) and the FDA issued guidance for estimating risks from low-level exposures to potentially carcinogenic chemicals (Albert 1994). Their guidance set action levels for regulatory attention at estimated risks of one extra cancer over a lifetime of exposure per 100 000 people (EPA, at first) or per million people (FDA and later EPA). These estimated incremental risks represent very conservative acceptable or negligible risk levels. Cancers claim the lives of 230 000 of every million people in the United States. Thus the regulatory agencies seek to prevent an increase from a countable 23 per cent of deaths due to cancers to an estimated risk of 23.0001 per cent. Furthermore, as explained later, these estimates represent worst-case or ‘upper-bound’ estimates, not actuarial counts like the 230 000 cancer deaths per million deaths in the general population. Other countries commonly use a safety factor approach generating, for example, an ‘acceptable daily intake’.
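The arithmetic behind these figures can be checked in a few lines; the numbers below are taken directly from the text (230 000 cancer deaths per million people, a regulatory increment of one per million):

```python
PER_MILLION = 1_000_000

baseline_deaths = 230_000  # actuarial cancer deaths per million people
extra_risk = 1             # regulatory 'negligible' increment: one extra cancer per million

# Express both as percentages of all deaths
baseline_pct = 100 * baseline_deaths / PER_MILLION
with_increment_pct = 100 * (baseline_deaths + extra_risk) / PER_MILLION

print(f"{baseline_pct} per cent -> {with_increment_pct} per cent")
```

The one-in-a-million action level thus moves the estimated burden from 23 per cent to 23.0001 per cent, which is why these levels are described as very conservative.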
During the period 1977 to 1980, an Interagency Regulatory Liaison Group was actively engaged in bridging scientific, statutory, and policy considerations with the activities of the EPA and FDA, the Occupational Safety and Health Administration, and the Consumer Product Safety Commission. The White House Office of Science and Technology Policy participated in the scientific discussions supporting risk assessment and risk management (Calkins et al. 1980). A framework was developed for identifying potential hazards, characterizing the risks, and managing the risks, usually by reduction of use or reduction of exposures (Table 1).

Table 1 Framework for regulatory decision-making about potential hazards and the environment: risk assessment and risk management

A National Research Council report Risk Assessment in the Federal Government: Managing the Process (National Research Council 1983), subsequently called the Red Book, helped the regulatory agencies set in gear a common framework for assessing risks from chemicals. The Red Book provided a framework (Table 2) for the hazard identification and risk characterization components of the risk assessment/risk management framework in Table 1. A strong research base is an essential aspect (Office of Technology Assessment 1992; Faustman and Omenn 1996; EPA 1996b).

Table 2 Framework for risk assessment from the Red Book (National Research Council 1983)

The 1990 Amendments to the United States Clean Air Act led to two far-reaching reports. Science and Judgment in Risk Assessment (National Research Council 1994) captured the combination of qualitative and quantitative approaches essential to effective assessment of risks. Then the Presidential/Congressional Commission on Risk Assessment and Risk Management (Risk Commission 1997) formulated a comprehensive framework that is being applied widely. The two crucial concepts were putting each environmental problem or issue into public health (and/or ecological) context and proactively engaging the relevant stakeholders from the very beginning of the six-stage process shown in Fig. 1. Particular exposures and potential health effects must be evaluated across sources and exposure pathways and in light of multiple endpoints, not just one chemical, in one environmental medium (air, water, food, products), for one health effect at a time. A similar framework has been utilized by the Health and Safety Executive (HSE) Risk Assessment Policy Unit in the United Kingdom (HSE 2000).

Fig. 1 Environmental health risk management Framework from the United States Commission on Risk Assessment and Risk Management (Omenn Commission). The framework comprises six stages: (1) formulate the problem in a broad public health context; (2) analyse the risks; (3) define the options; (4) make sound risk reduction decisions; (5) implement those actions; (6) later evaluate the effectiveness of the actions taken. Interactions with stakeholders are at the centre of the process (Omenn 1996; Risk Commission 1997; Charnley and Omenn 1997; Ohanian et al. 1997).

Definitions
Risk assessment is the systematic scientific characterization of potential adverse health effects resulting from human exposures to hazardous agents or situations. Risk is defined as the probability of an adverse outcome. The term ‘hazard’ is used by North Americans to refer to intrinsic toxic properties; internationally, this term is defined as the probability of an adverse outcome. This chapter presents risk assessment approaches for both cancer and non-cancer hazards. Analogous approaches can be applied to ecological risks (National Research Council Committee on Risk Assessment Methodology 1993; EPA 1996a).
Both qualitative assessment of the nature of effects and strength of the evidence and quantitative estimation of the risk are essential components of the risk characterization (Table 1 and Table 2). We emphasize the importance of the phrase ‘characterization of risk’, as many public health practitioners, environmentalists, and regulators tend to equate risk assessment with quantitative risk assessment, getting a number (or a number with uncertainty bounds), and ignoring crucial information about the strength of the evidence, the nature of the health effect, and the means of avoiding or reversing effects of exposure.
Risk management refers to the process by which policy actions are chosen to deal with hazards identified in the risk assessment/risk characterization process. Risk managers consider the scientific evidence and risk estimates together with statutory, engineering, economic, social, and political factors in evaluating alternative regulatory options, selecting among the options, and discussing those options with interested parties, the stakeholders.
Risk communication is the challenging process of making risk assessment and risk management information comprehensible to community groups, lawyers, politicians, judges, business people, labour, and environmentalists (Fischhoff et al. 1996). Often these people have important inputs for various stages of this process, so listening is a crucial, too often neglected aspect of risk communication. Sometimes the decision-makers and stakeholders simply want to know the ‘bottom line’: Is a substance or a situation ‘safe’ or not? Others will be interested in knowing why the risk estimates are uncertain and complicated and may be eager to challenge underlying assumptions.
Risk management decisions are reached under diverse statutes in the United States (Table 3) and analogous statutes or regulations in other countries. Some statutes specify reliance on risk alone, while others require a balancing of risks and benefits of the product or activity (Table 4). Risk assessment has provided a valuable framework for priority setting within regulatory and health agencies, in the development process within companies, and in resource allocation in environmental organizations. Similar statutes and regulatory regimes have been developed in many other countries and through such international organizations as the International Programme for Chemical Safety of the World Health Organization (WHO). There are significant current efforts toward the harmonization of testing protocols and assessment of risks and standards.

Table 3 Major toxic chemical laws in the United States and agency responsible

Table 4 Objectives of risk assessment

A major challenge for risk assessment, risk communication, and better risk management is to work across disciplines to demonstrate the biological plausibility and clinical significance of the conclusions from epidemiological, lifetime animal, short-term, and structure–activity studies of chemicals thought to have potential adverse effects on human health and the environment. Biomarkers of exposure, effect, or individual susceptibility can link the presence of a chemical in various environmental compartments to specific sites of action in target organs and to host responses (National Research Council 1989a,b, 1992a,b). Mechanistic investigations of the actions of specific chemicals can help us penetrate the black box approach of simply counting tumours in exposed animals. Greater appreciation of the mechanisms and extent of individual variation in susceptibility among humans can help us better protect subgroups of susceptible people and better relate findings in animals to risk estimates in humans. Individual behavioural risk factors and social risk factors are also important. Finally, public and media attitudes toward the local polluters, other responsible parties, and relevant government agencies may be critically important, sometimes leading to what has been labelled ‘the outrage factor’ by Sandman (1993). Thus, all of the public health sciences are needed for comprehensive risk assessment and risk management (Omenn 1996; Risk Commission 1997).
This chapter reviews the status of certain facets of the framework approach and its application to environmental health problems. Details about the contributing scientific fields can be found in Chapter 8.1, Chapter 8.2, Chapter 8.3, Chapter 8.4, Chapter 8.5, Chapter 8.6 and Chapter 8.7 and other relevant chapters on epidemiological approaches, risk communication, determinants of health and disease, and public health functions.
Hazard identification: epidemiology, lifetime rodent bioassay, short-term tests, and structure–activity relationships
Epidemiology
The most convincing evidence for human risk is a well-conducted epidemiological study in which a positive association between exposure and disease has been observed (National Research Council 1983). Epidemiological approaches are basically opportunistic. Studies begin either with known or presumed exposures, comparing exposed versus non-exposed individuals, or with known cases, comparing with persons lacking the particular diagnosis. There are important limitations. When the study is exploratory, hypotheses are often weak. Exposure estimates are often crude and retrospective, in particular for conditions with a long latency before clinical manifestations appear, such as cancers. Generally, there are multiple exposures, in particular when a full week or a full lifetime is considered. Lifestyle factors, such as smoking, physical inactivity, and diet, may be important and are difficult to sort out. There is always a trade-off between detailed information on relatively few persons and very limited information on large numbers of persons. Humans are highly outbred, and so the method must consider variation in susceptibility among people who are exposed. Finally, the expression of results (odds ratios, relative risks, and confidence intervals) may be unfamiliar to non-epidemiologists; the caveats self-effacing epidemiologists cite often discourage risk managers (Omenn 1993). Frequently, ‘conflicting’ studies with results that disagree are not evaluated with respect to size and power of the study to detect the endpoint of interest. To help address these epidemiological challenges, epidemiologists use criteria for evaluating the robustness of associations (Hill 1965).
Epidemiology is in the midst of a transformation. Advances from the human genome project, molecular biomarkers, and improved mechanistic hypotheses help epidemiology ‘get inside the black box’ of statistical associations to gain an understanding that enhances biological plausibility and clinical relevance. ‘Molecular epidemiology’ is a new phrase that refers to such studies of the molecular events in the causative pathway of human disease. Some hypotheses of causative relationships are being tested with prevention clinical trials. For example, the β-Carotene and Retinol Efficacy Trial (CARET) in the United States and the α-Tocopherol/β-Carotene (ATBC) Trial in Finland tested the hypothesis arising from observational epidemiology studies that antioxidant vitamins might be chemopreventive agents against the development of lung cancer (and cardiovascular disease) in high-risk populations, namely smokers and asbestos-exposed workers. In these trials, the stunning findings were that not only was there no benefit from the vitamin supplements, but also there were significant increases in lung cancer incidence and in cardiovascular and total mortality (ATBC 1994; Omenn et al. 1996; Omenn 1998). These findings have stimulated new laboratory work on the properties and effects of β-carotene. Systematic studies of environmental exposure reduction actions should be considered analogues of such prevention trials.
Many questions arise in the assessment of results from epidemiological studies such as the following.

1. What relative weights should be given to studies with differing results? Should positive results override negative results? Should a study be weighted in accord with its statistical power or its quality? Are there certain kinds of flaws, such as in choice of the control or comparison group, that should cause a study to be disregarded altogether?

2. What relative weights should be given to the results from different types of epidemiological studies? Should the findings of a prospective study supersede those of a case–control study, or case–control findings supersede ecological findings?

3. What statistical significance should be required for results to be considered positive? Should that criterion be different for the primary hypothesis than for correlations that arise from massaging the data afterward?

4. What is the significance of a positive finding in a study in which the route of exposure is different from that under analysis?

5. Should evidence for different types of tumour response be combined (e.g. different tumour sites or benign versus malignant tumours)? What about cancer and non-cancer endpoints?
Lifetime rodent bioassays
Bioassays have been developed as standardized, experimental protocols to identify chemicals capable of causing cancers, birth defects, neurotoxicity, or other toxicity in laboratory animals. Typically, one chemical is tested at a time in rats and mice, both sexes, with 50 animals per dose group and near lifetime exposure to 90, 50, and 10 to 25 per cent of what is determined in preliminary studies to be the maximally tolerated dose. Based on results from 379 long-term carcinogenicity studies, Haseman and Lockhart (1993) concluded that most target sites showed a strong correlation (65 per cent) between males and females, in particular for forestomach, liver, and thyroid tumours; in fact, for efficiency they suggested that bioassays could rely on a combination of male rats and female mice, thereby halving the number of animals required for a lifetime rodent bioassay and assuming that other uses would be found for the female rats and male mice! Estimation and use of the maximally tolerated dose is a vexing problem, not at all resolved by an expert panel (National Research Council Committee on Risk Assessment Methodology 1993) or by others troubled by toxicity or cell proliferation responses at maximal doses that may not be at all representative of responses at lower doses (Ames and Gold 1990).
Although these bioassays were initially designed for hazard identification, they are now used as the basis for quantitative assessments. Results are extrapolated from high dose to low dose, and then from animals to humans. Such extrapolations have historically required numerous choices, most importantly the choice of dataset, plus assumptions about the dose–response curve from observations in the 10 to 100 per cent range down to 10⁻⁶ risk estimates at the upper confidence limit or use of a benchmark or reference dose approach (see below). Lifetime bioassays can be enhanced by investigation of mechanisms and assessment of multiple endpoints in the same study (Bucher et al. 1996). Based on increasing information about critical mechanistic pathways in cancer, the National Toxicology Program now includes tests in transgenic animal models with sensitized genetic pathways known to be important for human cancers (Tennant et al. 1999). These assays (see below) will expedite detection of carcinogens, and can be linked with mechanistically oriented, short-term tests and biomarker and genetic studies in epidemiology. A second initiative is the Environmental Genome Project, aimed at identifying specific genes relevant for a broad range of environmentally induced diseases. Questions from the Red Book remain important.

1. Is a positive result from a single animal study sufficient, or should positive results from two or more animal studies be required? Should studies be weighted according to their quality and statistical power? Should negative results of similar quality be given less weight?

2. How should evidence of different metabolic pathways or very different metabolic rates between animals and humans be factored into a risk assessment?
Substantial advances in the past decade are discussed later in this chapter.
Short-term tests
Many chemicals in widespread commerce have not been tested adequately for risk assessment purposes (National Research Council 1984; INFORM 1996; EPA 1998b). The costs of $1 to $2 million and 3 to 5 years’ work per chemical tested are prohibitive in the aggregate. For example, EPA’s recent chemical hazard data availability study found that 43 per cent of the United States high production volume chemicals (those produced in excess of 1 million lb/year) have no publicly available studies for any of six basic toxicity endpoints (acute toxicity, chronic toxicity, developmental/reproductive toxicity, genotoxicity/mutagenicity, ecotoxicity, and environmental fate). Only 7 per cent of the high production volume chemicals have a full set of publicly available studies for the six basic endpoints (EPA 1998b). The Environmental Defense Fund (1997) book Toxic Ignorance has catalysed widespread agreement from international chemical companies to conduct the necessary tests to meet the Organization for Economic Co-operation and Development (OECD) requirements for a screening information data system on such high-volume chemicals. Recent public interest in the potential of chemicals to cause endocrine effects has also drawn attention to similar data needs (EPA 1998a).
These critical data gaps have sparked renewed interest in devising inexpensive, short-term tests for screening chemicals. Such tests can also yield important information about mechanisms, distinguishing genotoxic from non-genotoxic carcinogens, for example. The Salmonella reverse mutation test (Ames test) and certain cytogenetic tests, especially of bone marrow following in vivo exposure, seem useful and robust. Nevertheless, progress has been slow and frustrating. Many years have been spent trying to make the mouse lymphoma test and the sister chromatid exchange assay interpretable for risk assessment purposes.
Short-term tests for non-cancer endpoints such as developmental toxicity, reproductive toxicity, neurotoxicity, and immunotoxicity have become available (Atterwill et al. 1992; Harris et al. 1992; Shelby et al. 1993; Whittaker and Faustman 1994; Faustman and Omenn 1996; Lewandowski et al. 1999). Mechanistic information from these systems has been applied to risk assessment (Abbott et al. 1992; EPA 1994c; Leroux et al. 1996). A National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods recently peer reviewed the mouse local lymph node assay for assessing chemicals for their ability to produce allergic contact dermatitis and declared it a valid alternative to currently accepted guinea pig test methods, helping to refine and reduce animal use (NIEHS 1999). This centre is evaluating other tests as alternatives to in vivo assays.
A new class of short-term tests utilizes knockout transgenic mouse models (Nebert and Duffy 1997; Tennant et al. 1999). Mutagenic carcinogens can be identified with high sensitivity and specificity using hemizygous p53(+/–) mice in which one allele of the p53 gene has been inactivated. A TG.AC transgenic mouse carrying a v-Ha-ras gene construct develops papillomas and malignant tumours in response to mutagenic and non-mutagenic carcinogens and tumour promoters, but not in response to non-carcinogens. It is likely that these animal models will supplant at least part of the two-species, two-sex rodent bioassay in the next decade.
Structure–activity relationships
A chemical’s structure, solubility, stability, pH sensitivity, electrophilicity, chemical reactivity, and pathways of metabolism can be important information for hazard identification by inference. Historically, certain key molecular structures have provided regulators with some of the most readily available information on which to assess hazard potential. For example, the majority of the first 14 occupational carcinogens regulated by the Occupational Safety and Health Administration belonged to the aromatic amine chemical class. The EPA Office of Toxic Substances relies on structure–activity relationships (SARs) to meet deadlines for responses to premanufacturing notices for new chemicals under the Toxic Substances Control Act (see Table 3). N-Nitroso, aromatic amine, amino azo, and phenanthrene structures are alerts to prioritize chemicals for additional evaluation as potential carcinogens. Chemicals with structures related to valproic acid or retinoic acid are suspected as developmental toxicants (Faustman and Omenn 1996).
SARs can be used in assessing the relative toxicity of chemically related compounds. A prominent example was the EPA’s reassessment of health risks associated with 2,3,7,8-tetrachlorodibenzo-p-dioxin and related chlorinated and brominated dibenzo-p-dioxins, dibenzofurans, and planar biphenyls, using toxicity equivalence factors, based on induction of the Ah receptor (EPA 1994b). The estimated toxicity of environmental mixtures containing these chemicals is the sum of the product of the concentration of each multiplied by its toxicity equivalence factor value. The WHO has organized efforts to reach a consensus on toxicity equivalence factors used for PCBs, PCDDs, and PCDFs for both humans and wildlife (Van den Berg et al. 1998).
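The toxicity equivalence calculation described above is a simple sum of products. The sketch below applies it to a hypothetical mixture; the congener list, concentrations, and TEF values here are illustrative assumptions, not the WHO consensus tables:

```python
# Hypothetical mixture of dioxin-like congeners:
# (congener name, concentration in pg/g, toxicity equivalence factor)
# TEF values are illustrative assumptions for this sketch, not authoritative.
mixture = [
    ("2,3,7,8-TCDD",     5.0, 1.0),   # the reference congener, TEF = 1 by definition
    ("1,2,3,7,8-PeCDD",  8.0, 1.0),
    ("2,3,7,8-TCDF",    20.0, 0.1),
    ("PCB 126",         15.0, 0.1),
]

# Toxic equivalent (TEQ) of the mixture: sum of concentration x TEF
teq = sum(conc * tef for _, conc, tef in mixture)
print(f"TEQ = {teq} pg TEQ/g")
```

The mixture is thus treated, for risk purposes, as if it contained 16.5 pg/g of 2,3,7,8-tetrachlorodibenzo-p-dioxin itself.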
Integrating hazard identification information
A remarkable collegial effort to relate the findings of short-term tests to the presumed gold standard of the lifetime rodent cancer bioassay result was conducted between 1989 and 1994 under the aegis of the National Toxicology Program and the National Institute of Environmental Health Sciences. For a set of 44 consecutive chemicals entered into the National Toxicology Program lifetime bioassay programme, Tennant et al. (1990) predicted the results, based upon knowledge of the structural features of the chemicals, results from short-term tests, and (sometimes) previous bioassay data. In response to their Carcinogen Prediction Challenge, nine other groups of scientists made predictions for the same set of chemicals, based on their own criteria. At the conclusion, an international workshop was held at which the National Toxicology Program lifetime rodent bioassay results for 40 chemicals were revealed: 20 had clear or some evidence of carcinogenicity in one or more of the four species/sex groups of rats and mice, of which 14 were clear positives and nine were positive in more than one organ site (Ashby and Tennant 1994). Tennant et al. (1990) correctly predicted 17 of 20 carcinogens and 13 of 20 non-carcinogens; thus, they had a ratio of false positives to false negatives of 2.3, a sensitivity of 0.85 (17 of 20), and a specificity of 0.65 (13 of 20). None of the nine groups did as well; some did no better than chance, using combinations of computerized structural alerts or SARs and results from in vitro and in vivo tests (Omenn et al. 1995). The pattern of results reveals very different approaches to balancing false-negative versus false-positive outcomes, apparently with different implicit ‘cut-points’.
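The performance figures quoted above follow directly from the 20/20 split of revealed bioassay results; a short calculation, using only the counts stated in the text, reproduces them:

```python
# Tennant et al. (1990) predictions against the 40 revealed NTP bioassay
# results: 20 carcinogens and 20 non-carcinogens
true_positives = 17                       # carcinogens correctly predicted
false_negatives = 20 - true_positives     # 3 missed carcinogens
true_negatives = 13                       # non-carcinogens correctly predicted
false_positives = 20 - true_negatives     # 7 chemicals wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
fp_fn_ratio = false_positives / false_negatives

print(sensitivity, specificity, round(fp_fn_ratio, 1))
```

The implicit ‘cut-point’ of a prediction scheme shows up exactly here: shifting it trades false positives for false negatives, moving sensitivity and specificity in opposite directions.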
Social cost analytical approaches have been published, relying upon short-term tests, lifetime rodent bioassays, or a combination of testing strategies to guide risk management for potentially hazardous chemicals. In the Lave–Omenn value of information model (Lave and Omenn 1986; Lave et al. 1988; Omenn and Lave 1988), ‘cost-effective’ means that the costs of testing plus the social costs of false positives (loss of the economic value of the chemical) and of false negatives (economic value of the disease burden incurred as a result of use of the chemical) are less than the costs of misclassification by simply treating all chemicals as carcinogenic and avoiding exposures to the extent feasible. For example, we might set the cost/consequences of a false negative at $10 million, 10 times the social cost of a false positive, on the average. Then, for the correct prediction of 30 of 40 of the National Toxicology Program results (Tennant et al. 1990), the social cost of misclassification would be $7 million for the seven false positives plus $30 million for the three false negatives (Omenn et al. 1995). In the Lave–Omenn estimation (Lave and Omenn 1986), if the bioassay were used at $1 million per chemical tested and the true proportion of rodent carcinogens is assumed to be 10 per cent among tested chemicals, the testing ($40 million for 40 chemicals) would have to be 100 per cent accurate just to break even against the alternative of simply calling all chemicals carcinogenic for rodents. Such an alternative practice would require instituting general approaches to minimize exposures. If regulatory decisions could be based on interpretation of far less expensive short-term tests, the margin for cost-effective decision-making would be much greater (Omenn and Lave 1988). Thus, there is considerable incentive to come up with more reliable and more predictive short-term biological and structural approaches.
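The social-cost tally described above can be reproduced with the per-error costs given in the text ($1 million per false positive, $10 million per false negative):

```python
# Lave-Omenn style social-cost tally for the 40 NTP chemicals,
# using the illustrative per-error costs stated in the text
COST_FALSE_POSITIVE = 1_000_000    # lost economic value of a wrongly condemned chemical
COST_FALSE_NEGATIVE = 10_000_000   # disease burden from a missed carcinogen

false_positives, false_negatives = 7, 3

misclassification_cost = (false_positives * COST_FALSE_POSITIVE
                          + false_negatives * COST_FALSE_NEGATIVE)
print(f"Social cost of misclassification: ${misclassification_cost:,}")
```

Against this $37 million total, a testing programme costing $1 million per chemical ($40 million for the 40 chemicals) illustrates why cheaper, more predictive short-term tests would widen the margin for cost-effective decision-making.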
Risk characterization: dose–response, exposure analysis, variation in susceptibility, and relation of effects in rodents to risk in humans
As noted above, the characterization of risk involves more than quantitative estimation of the risk (with or without uncertainty bounds). Crucial information about the nature of health (and ecological) risks, the strength of the evidence, the feasibility of prevention or treatment of the adverse effects, and the variation of risk in the population cannot be captured in a number; these attributes require careful qualitative and descriptive characterization that can be used in health advisories.
Dose–response
Analyses of dose–response relationships must start with the determination of the critical effect. Many chemicals have more than one adverse effect on health and/or ecological endpoints. The usual practice is to choose the dataset with adverse effects at the lowest levels of exposure, even if not representative, to extrapolate for potential health impacts on humans or ecosystems. Most studies are designed for hazard identification and not risk characterization and hence provide limited quantitative information. Nevertheless, they are used as the basis for quantitative risk estimations for lack of other data. Linearized multistage models for cancer have been used to estimate ‘virtually safe doses’, generally far below the observable range of our rodent bioassays. For other endpoints a reference dose is developed that is based on low or no effect level determinations, with safety factors driving extrapolation across species to find an acceptably safe exposure level for humans. We continue to use maximally tolerated dose regimens in animals, while awaiting better understanding of the underlying mechanisms, on an organ site by organ site or a chemical by chemical basis.
The fundamental basis of the quantitative relationships between exposure to an agent and the incidence of an adverse response is the dose–response assessment. Approaches for characterizing dose–response include determination of effect levels such as LD50 (dose producing 50 per cent lethality), ED10 (dose producing an effect in 10 per cent of exposed populations) and no observed adverse effect levels, margins of safety and therapeutic indexes, and various models for extrapolation to very low doses (National Research Council 1983).
For risk assessment purposes, human exposure data for the prediction of human response are usually quite limited. The risk assessor is interested in low environmental exposures of humans, which are far below the observable range of responses from animal assays or from high occupational exposures. Thus, high- to low-dose extrapolation and animal to human risk extrapolation methods comprise major aspects of dose–response assessment.
Threshold approaches
A challenge for risk assessors is the determination of critical adverse effect levels occurring at the lowest exposures. Each endpoint evaluated can have a no effect level (NOEL) as well as a no observed adverse effect level (NOAEL), lowest observed effect levels, and lowest observed adverse effect level (LOAEL). The dose–response curves for these endpoints frequently overlap. Non-specific responses, such as changes in body weight, can confound interpretation of endpoints, especially developmental toxicity (EPA 1991). EPA endpoint specific guidance documents provide useful information (e.g. EPA 1991). Usually, the critical adverse effect is defined as the significant adverse biological effect that occurs at the lowest exposure level; the NOAEL from that study is then used in quantitative risk evaluation (Barnes and Dourson 1988).
Significance usually refers to both biological and statistical criteria (Faustman et al. 1994) and is dependent upon the number of dose levels tested, the number of animals tested at each dose, and the background incidence of the adverse response in the non-exposed control groups. The NOAEL should not be perceived as risk free; NOAELs for continuous endpoints average 5 per cent risk and NOAELs based on quantal endpoints can be associated with a risk of greater than 10 per cent (Allen et al. 1994a,b; Faustman et al. 1994).
NOAELs can be used as a basis for risk assessment calculations, such as reference doses or acceptable daily intake values (Lehman and Fitzhugh 1954; Renwick and Walker 1993; Barnes and Dourson 1988). Reference doses (RfDs) or reference concentrations (RfCs) are estimates of a daily exposure to an agent that is assumed to have no adverse health impact on the human population. Acceptable daily intake (ADI) values used by the WHO for pesticides and food additives define the daily intake of a chemical, which, during an entire lifetime, appears to be without appreciable risk on the basis of all known facts (WHO 1962). RfDs and ADI values are typically calculated from NOAEL values by dividing by uncertainty and/or modifying factors (UF, MF) (EPA 1991):
RfD (or ADI) = NOAEL / (UF × MF)
These safety factors allow for interspecies (animal to human) and intraspecies (human) variation with default values of 10 each. An additional 10-fold uncertainty factor is used to extrapolate from short-exposure duration studies to a situation more relevant for long-term effects or to account for inadequate numbers of animals or other experimental limitations. The Food Quality Protection Act (1996) now requires the use of an extra 10-fold factor to be protective for children under certain conditions.
If only a LOAEL value is available, then an additional 10-fold factor is used routinely to arrive at a value more comparable with a NOAEL (Goldman 1998; Landrigan and Goldman 1998). MFs can be used to adjust the uncertainty factors if data on mechanisms, pharmacokinetics, or relevance of the animal response to human risk justify such modification (Dourson and Stara 1983; Dourson et al. 1985; Dourson and DeRosa 1991). Allen et al. (1994a) have shown, for developmental toxicity endpoints, that application of the 10-fold factor for LOAEL-to-NOAEL conversion is too large.
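The division of a NOAEL by the stacked uncertainty and modifying factors can be expressed as a small helper function. This is a minimal sketch; the NOAEL value and the factor choices below are illustrative, not taken from any actual assessment.

```python
# Hypothetical sketch: deriving a reference dose (RfD) from a NOAEL by
# dividing by the default uncertainty factors described in the text.
# All numeric values here are invented for illustration.

def reference_dose(noael_mg_kg_day, interspecies=10, intraspecies=10,
                   subchronic_to_chronic=1, loael_to_noael=1, modifying=1):
    """RfD = NOAEL / (UF1 x UF2 x ... x MF)."""
    total_factor = (interspecies * intraspecies *
                    subchronic_to_chronic * loael_to_noael * modifying)
    return noael_mg_kg_day / total_factor

# Chronic animal NOAEL of 50 mg/kg/day with the default 10 x 10 factors:
rfd = reference_dose(50.0)                                # 0.5 mg/kg/day
# Same study but only a LOAEL available: add the extra 10-fold factor.
rfd_from_loael = reference_dose(50.0, loael_to_noael=10)  # 0.05 mg/kg/day
```

The multiplicative stacking of factors is why RfDs derived from weak studies can be 1000-fold or more below the animal effect level.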
Another way the NOAEL values have been utilized for risk assessment is to evaluate a ‘margin of exposure’ (MOE) or ‘margin of safety’, based on the ratio of the NOAEL determined in animals (expressed as mg/kg/day) to the intakes or exposure levels for humans. For example, if human exposures are calculated to be via drinking water containing 1 ppm of the chemical, then the exposure for a 50-kg woman would be
2 litres/day × 1 mg/litre ÷ 50 kg = 0.04 mg/kg/day
If the NOAEL for neurotoxicity is 100 mg/kg/day, then the MOE would be 2500 for the oral exposure route for neurotoxicity from the drinking water. Such a large value is reassuring for public health officials.
Low values of the MOE indicate that the human levels of exposure are close to levels for the NOAEL in animals. There is no factor included in this calculation for differences in human or animal susceptibility or animal to human extrapolation. Thus, MOE values of less than 100 have been used by regulatory agencies as flags for requiring further evaluation.
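The drinking-water arithmetic above can be reproduced directly:

```python
# Reproducing the margin-of-exposure example from the text: a 50-kg
# woman drinking 2 litres/day of water containing 1 ppm (1 mg/litre)
# of the chemical, against a neurotoxicity NOAEL of 100 mg/kg/day.

body_weight_kg = 50.0
intake_l_per_day = 2.0
concentration_mg_per_l = 1.0   # 1 ppm in water

exposure = intake_l_per_day * concentration_mg_per_l / body_weight_kg
# exposure = 0.04 mg/kg/day

noael = 100.0                  # mg/kg/day
moe = noael / exposure         # 2500: well above the 100 'flag' level
```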
Some important common human exposures, such as the air pollutants lead, particulates, sulphur oxides, ozone, and carbon monoxide, are so close to LOAELs that regulatory agencies override the margin of safety approach, with considerations of technical feasibility.
The NOAEL approach has been criticized on several points: the NOAEL must, by definition, be one of the experimental doses tested; once identified, the rest of the dose–response curve is often ignored; experiments that test fewer animals often result in higher NOAELs and thus larger reference doses, as well as greater uncertainty; and the NOAEL will vary based on experimental design (Faustman and Bartell 1997).
Because of these limitations, an alternative to the NOAEL approach, the benchmark dose method, was proposed by Crump (1984) and extended by Kimmel and Gaylor (1988). The full dose–response is modelled, and the lower confidence bound for a dose at a specified response level is calculated. Figure 2 shows how a benchmark dose is calculated using a 10 per cent response and a 95 per cent lower confidence bound on dose (LED10). In this case,
RfD = LED10 / (UF × MF)
Discussion continues on whether the values used for the uncertainty factors and modifying factors for benchmark doses should be the same factors as for the NOAEL or smaller values because of use of the full dose–response curve and of lower confidence bounds on the dose.

Fig. 2 Illustration of the benchmark dose (BMD) approach. LED10 is the lower confidence limit of the dose associated with a 10 per cent incidence of adverse effect. (Based on Kavlock et al. 1995.)

The benchmark dose approach has been applied to non-cancer endpoints (Clewell et al. 1997; Faustman and Bartell 1997), including specific applications for developmental and reproductive toxicity (Allen et al. 1994a,b; Auton 1994). Benchmark dose values were similar to NOAELs for a wide range of developmental toxicity endpoints. A generalized log logistic dose–response model has advantages in dealing with litter size and intralitter correlations (Allen et al. 1994b).
The benchmark dose approach has four advantages: it uses the full dose–response curve, as opposed to focusing on a single test dose as in the NOAEL approach; it includes a measure of variability (lower confidence limit on dose associated with upper confidence limit on risk); it uses responses within the experimental range versus extrapolation of responses to low doses not tested experimentally; and it facilitates comparisons of a consistent benchmark response level for RfD calculations across studies and agents (Faustman and Bartell 1997).
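A hedged numerical sketch of the benchmark dose idea: assuming a one-stage model P(d) = 1 − exp(−qd) has been fitted to quantal data, the BMD at a 10 per cent benchmark response follows by inverting the model, and the LED10 comes from using the 95 per cent upper confidence bound on the slope (an upper bound on risk corresponds to a lower bound on dose). The slope values below are invented, and a real fit would estimate them from the bioassay data.

```python
import math

# Illustrative only: assume a one-stage model P(d) = 1 - exp(-q*d) has
# been fitted, with maximum-likelihood slope q = 0.02 and 95% upper
# bound q_upper = 0.035 per mg/kg/day (both values invented).

def bmd(benchmark_response, q):
    """Dose giving the specified extra risk under P(d) = 1 - exp(-q*d)."""
    return -math.log(1.0 - benchmark_response) / q

q_mle, q_upper = 0.02, 0.035
bmd10 = bmd(0.10, q_mle)     # central estimate of the benchmark dose
led10 = bmd(0.10, q_upper)   # lower confidence bound on dose (LED10)
# led10 < bmd10; the LED10 would then replace the NOAEL in RfD calculations.
```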
Non-threshold approaches
Numerous dose–response curves can be proposed in the low-dose region of the dose–response curve if a threshold assumption is not made. Because risk assessors are frequently interested in postulating exposures that would be associated with very low risks, such as one in a million over a lifetime, they frequently need to extrapolate far below the region of the dose–response curve for which experimentally observed data are available. Thus, the choice of models for extrapolation has received considerable attention. Two general types of dose–response model exist: statistical or tolerance distribution models and mechanistic models (Krewski and Van Ryzin 1981). Table 5 lists common models that have been used in risk extrapolation.

Table 5 Models used in risk extrapolation

The distribution models are based on the assumption that each individual has a tolerance level for responding to a test agent. A specific probability distribution is generated for the cumulative dose–response function (Faustman and Omenn 1996).
The mechanistic modelling approach to dose–response relationships tries to take account of the postulated biological mechanisms of response. Radiation research has spawned a series of ‘hit models’ for cancer modelling, where a ‘hit’ is defined as a critical cellular event that must occur before a toxic effect is produced. These models assume that an infinitely large number of targets exists (e.g. in the DNA), that the organism’s toxic response occurs after only a minimum number of targets has been modified, that a critical biological target is altered if a sufficient number of hits occurs, and that the probability of a hit in the low-dose range of the dose–response curve is proportional to the dose of the toxicant (Brown 1984).
The simplest mechanistic model is the one-hit (one-stage) linear model in which only one hit or critical cellular interaction is required for a cell to be altered. For example, based on somatic mutation theory, a single mutational change could be sufficient for a cell to become neoplastic through a transformational event and dose-independent clonal expansion. The probability statement for these models is
P(d) = 1 − exp(−λd)

where λd is the expected number of hits occurring during a time period. Under this model a single molecule of a genotoxic carcinogen would have a minute but finite chance of causing a mutational event.
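The low-dose behaviour of the one-hit model can be checked numerically: near zero the response is essentially linear in dose (P ≈ λd), while at high doses the curve saturates toward 1. The rate constant below is arbitrary.

```python
import math

# One-hit model P(d) = 1 - exp(-lambda*d): approximately linear at low
# doses, saturating at high doses.  lam = 0.5 is purely illustrative.

def one_hit(dose, lam=0.5):
    return 1.0 - math.exp(-lam * dose)

low = one_hit(1e-6)    # ~5e-7, indistinguishable from the linear term 0.5*1e-6
high = one_hit(10.0)   # ~0.99, the curve saturating toward 1.0
```

This low-dose linearity is why the one-hit family predicts no threshold: any non-zero dose carries some non-zero risk.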
Armitage and Doll (1957) developed a multistage model for carcinogenesis based on the hypothesis that a series of ordered stages was required for a cell to undergo mutation, initiation, transformation, and progression to form a tumour. This relationship was generalized by Crump (1980) by maximizing the likelihood function over polynomials so that the probability statement is
P(d) = 1 − exp[−(λ0 + λ1d + λ2d^2 + … + λkd^k)]
If the true value of λ1 is replaced with λ1* (the upper confidence limit of λ1), then a linearized multistage model can be derived in which the expression is dominated by λ1*d at low doses. The slope of this confidence bound, q1*, is used for quantitative cancer assessment. To obtain an upper 95 per cent confidence interval on risk, the q1* value (expressed as risk per mg/kg/day) is multiplied by the amount of exposure (mg/kg/day). Thus the upper-bound estimate on risk R is calculated as

R = q1* × exposure (mg/kg/day)
This relationship has been used to calculate a ‘virtually safe dose’, which represents the lower 95 per cent confidence limit on a dose that gives an ‘acceptable level’ of risk (for example, an upper confidence limit for 10⁻⁶ excess risk). As both q1* and the ‘virtually safe dose’ are calculated using 95 per cent confidence intervals, the values are believed to represent conservative estimates.
The EPA has utilized the linearized multistage model to calculate ‘unit risk estimates’, such as the increased individual lifetime risks of cancer over a 70-year lifespan for a 70-kg human breathing 1 µg/m³ of contaminated air or drinking 2 litres of water contaminated at 1 ppm (1 mg/kg/day).
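The linearized multistage bookkeeping reduces to two one-line formulas: the upper-bound risk is the slope times the dose, and the virtually safe dose inverts that relation at the acceptable risk level. The q1* value below is invented for illustration.

```python
# Sketch of linearized-multistage arithmetic.  q1_star is a
# hypothetical upper-bound slope, not any agency's published value.

q1_star = 0.05   # (mg/kg/day)^-1

def upper_bound_risk(dose_mg_kg_day):
    """Upper bound on lifetime excess risk: R = q1* x dose."""
    return q1_star * dose_mg_kg_day

def virtually_safe_dose(acceptable_risk=1e-6):
    """Dose whose upper-bound risk equals the acceptable level."""
    return acceptable_risk / q1_star

risk = upper_bound_risk(0.001)    # 5e-5 upper-bound lifetime excess risk
vsd = virtually_safe_dose(1e-6)   # 2e-5 mg/kg/day
```

Because q1* is itself an upper confidence limit, both outputs are conservative by construction, as the text notes.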
The revised EPA cancer guidelines proposed several alternative approaches (EPA 1996b). For example, the lower confidence limit on a benchmark response has been suggested as a point of departure from which linear extrapolation to zero response or application of safety factors could be utilized for cancer risk assessment options depending upon the hypothesized mode of carcinogenic action. Genotoxic and non-genotoxic mechanisms might trigger different quantitative risk assessment approaches for establishing acceptable levels of exposure. Similar methods may be used for non-genotoxic carcinogens and for developmental and other non-cancer toxicants (Page et al. 1997; Gaylor et al. 1999).
Toxicological enhancements of the models
Table 5 lists three areas of research that have improved the application of models used in risk extrapolation. Physiologically based toxicokinetic modelling generates ‘internal effective doses’ at target organ sites, rather than relying on single-value external exposure estimates. Biologically based dose–response modelling connects the generalized mechanistic models discussed in the previous section to specific biological processes. Measured rates are incorporated into the mechanistic equations to replace default or computer-generated values. For example, the Moolgavkar–Venzon–Knudson model is based on a two-stage model for carcinogenesis; two mutations are required for carcinogenesis, and birth and death rates of cells are modelled through clonal expansion and tumour formation. This model has been applied to human epidemiological data on retinoblastoma and to animal data on kidney and liver tumours in the 2-acetylaminofluorene ‘mega mouse’ study, bladder cancer in saccharin-exposed rats, rat lung tumours following radiation exposure, rat liver tumours following benzo[a]pyrene exposure, and mouse liver tumours following chlordane exposure (Cohen and Ellwein 1990; Moolgavkar and Luebeck 1990; National Research Council Committee on Risk Assessment Methodology 1993). Kohn et al. (1993) and Anderson (1983) used physiologically based toxicokinetic and biologically based dose–response information to improve dioxin risk assessment. EPA relied on Ah receptor binding in its dioxin risk reassessment (EPA 1994b, 2000).
Development of biologically based dose–response models for endpoints other than cancer are limited. Several approaches are being explored in developmental toxicity, utilizing cell cycle kinetics, enzyme activity, litter effects, and cytotoxicity as critical endpoints (Faustman et al. 1989, 1999; Shuey et al. 1994; Leroux et al. 1995; Lewandowski et al. 1998; Bartell and Faustman 1998). Unfortunately, there is a lack of specific, quantitative biological information for most toxicants and for most endpoints.
Exposure analysis
Exposure assessment is a crucial element of the risk assessment process, because there is no risk in the absence of exposure. Careful assessment of sources, pathways, environmental transformations, routes of entry, time course of exposure, total exposure from all sources and activities, and translation from ambient levels to target tissue effective dose is essential for exposure assessment. A good example is the work of the Electric Power Research Institute (EPRI) on emissions from electric utility boilers (EPRI 1994; Risk Commission 1997; Cullen and Frey 1999). Multiple chemical exposures and chemical–physical–biological agent interactions (Mumtaz et al. 1993) and exposure-specific sources of uncertainty (Bailar 1991) still need to be addressed.
The key step in making an exposure assessment is determining what exposure pathways are relevant for the risk scenario under development. The subsequent steps quantitate and sum these pathway-specific exposures for calculation of the overall exposure. The EPA has published guidelines for determining such exposures (EPA 1989a,b, 1992). Such calculations can include an estimation of total exposures for a specified population, as well as calculation of exposure for highly exposed individuals. The use of a hypothetical maximally exposed individual is no longer favoured in exposure assessment, owing to its extremely conservative assumptions at each step of the estimation. High end exposure estimates and theoretical upper-bound estimates are preferred (Risk Commission 1997).
High end exposure estimates are designed to represent ‘a plausible estimate’ of exposure for individuals in the upper 90th percentile of the exposure distribution. Theoretical upper-bound estimates are designed to represent exposures at a level that exceeds the exposures experienced by all individuals in the exposure distribution and are calculated by assuming limits for all exposure variables. In contrast, a calculation for individuals exposed at levels near the middle of the exposure distribution is a central estimate. The Risk Commission (1997) recommended use of high end exposure estimates. A lifetime average daily dose calculation is illustrated in Table 6. In this example, exposure calculations yielding a central estimate and a high end exposure estimate for potential dioxin exposure from recreational fish are given (EPA 1994b). These estimates differ in the amount of fish ingested, the contamination level in the fish, and the frequency of eating meals of recreationally caught fish. Obviously such estimates would differ even more if the full range of potential toxicant contamination levels were used. Modelling and utilizing better estimates for the distribution of contaminant levels is a major focus of exposure assessment research. Subjective uncertainty distributions and Monte Carlo composite analyses of parameter uncertainty are prominent methods, as described elsewhere (National Research Council 1994; Cullen and Frey 1999; Faustman and Omenn 2001). These approaches provide a useful reality check and generate more realistic exposure estimates.

Table 6 Exposure scenarios for dioxin via ingestion of contaminated recreational fish
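The central versus high end contrast summarized in Table 6 amounts to a lifetime average daily dose (LADD) calculation. The sketch below shows the arithmetic; every parameter value is a placeholder for illustration, not the EPA (1994b) dioxin numbers.

```python
# Hedged sketch of a lifetime-average daily dose (LADD) calculation
# of the kind summarized in Table 6.  All parameter values are
# placeholders, not the actual EPA dioxin assessment inputs.

def ladd(conc_mg_per_kg_fish, meal_size_kg, meals_per_year,
         exposure_years, body_weight_kg=70.0, lifetime_years=70.0):
    """mg/kg/day averaged over a full lifetime."""
    total_intake_mg = (conc_mg_per_kg_fish * meal_size_kg *
                       meals_per_year * exposure_years)
    return total_intake_mg / (body_weight_kg * lifetime_years * 365.0)

central  = ladd(1e-6, 0.15, 12, 30)   # mid-range consumer (assumed values)
high_end = ladd(5e-6, 0.30, 48, 70)   # upper-percentile consumer
# high_end exceeds central because concentration, meal size, frequency,
# and duration are all set near their upper values.
```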

Several endpoint-specific exposure considerations need to be mentioned. In general, estimates of cancer risk use averages over a lifetime. In a few cases (for example, ethylene oxide (EtO)), short-term exposure limits are needed and short but high-level exposures must be characterized; such exposures are not averaged over the lifetime. With developmental toxicity it is assumed that a single exposure can be sufficient to produce an adverse developmental effect, and there is time-dependent specificity for many adverse developmental outcomes (EPA 1991). In fact, both total exposure (as represented by area under the curve in pharmacokinetic studies) and peak exposures can play significant roles in determining response to developmental toxicants. A recent study with EtO confirmed that exposure concentration times length of exposure, as proposed by Haber’s law, does not hold for comparing developmental toxicity potencies (Weller et al. 1999). Thus, daily doses are used rather than lifetime weighted averages.
By using Monte Carlo based approaches to deal with variability and uncertainty in exposure assessment, EPA is gradually replacing single-point estimates. Finley et al. (1994) and Cullen and Frey (1999) provide useful information and guidance on these probabilistic techniques in exposure assessment. The EPA provides an excellent on-line summary of the available statistical data on factors for human exposure assessment to support such assessments (http://www.epa.gov/ORD/WebPubs/exposure/front.pdf).
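A probabilistic exposure assessment of the kind described above can be sketched with the standard library alone: intake and concentration are drawn from distributions instead of single points, and the resulting dose distribution is summarized by its median and 90th percentile. The lognormal parameters below are invented for illustration, not taken from the EPA handbook.

```python
import math
import random

# Minimal Monte Carlo sketch of probabilistic exposure assessment.
# Distribution parameters for tap water intake and contaminant
# concentration are assumptions for illustration only.

random.seed(1)

def simulate_exposure(n=10_000, body_weight_kg=70.0):
    """Return (median, 90th percentile) of simulated dose, mg/kg/day."""
    doses = []
    for _ in range(n):
        intake_l = random.lognormvariate(math.log(1.4), 0.4)    # L/day
        conc_mg_l = random.lognormvariate(math.log(0.01), 0.8)  # mg/L
        doses.append(intake_l * conc_mg_l / body_weight_kg)
    doses.sort()
    return doses[n // 2], doses[int(0.90 * n) - 1]

median_dose, p90_dose = simulate_exposure()
# The 90th-percentile ('high end') dose sits well above the median,
# a spread that single-point estimates cannot convey.
```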
An example of the types of specific information available in the handbook (EPA 1997) is given in Table 7 about age-specific tap water intake. The handbook provides data for drinking water, food consumption, soil ingestion, inhalation rates, dermal absorption, product use, and human activity patterns. Such age-specific distributions for soil ingestion rates, inhalation rates, body weights, skin surface area, soil-on-skin adherence, tap water ingestion, fish consumption, residential occupancy, and occupational tenure, can be refined with additional data and can replace point estimates.

Table 7 Example of exposure factor handbook information: drinking water intake

The EPA is also dealing with several new issues for exposure assessment as a result of the Food Quality Protection Act of 1996, which required consideration of both aggregate and cumulative exposures. Aggregate exposures refer to the total exposures for a single substance. Good examples are the cross-media exposure analyses that are available for lead and mercury. Cumulative exposures refer to the total exposures to a group of compounds, for example total organophosphate pesticides via all food routes. EPA is developing guidelines for determining not only aggregate and cumulative exposures but also aggregate and cumulative risk estimates. To assess cumulative effects from such exposures EPA is identifying and categorizing pesticides that act by a common mode of action (EPA 1998d; ILSI 1999).
Variation in susceptibility
Both toxicology and epidemiology have been slow to recognize the marked variation in susceptibility among humans and the need to pay attention to outliers. Assay results and toxicokinetic modelling generally utilize means and standard deviations, or even standard errors of the mean, making the range seem smaller than it is. In occupational and environmental medicine, physicians are often asked, ‘Why me, Doc?’ when they inform the patient that hazards on the job might explain a clinical problem. The EPA and the Occupational Safety and Health Administration are expected, under the Clean Air Act and the Occupational Safety and Health Act, to promulgate standards that protect the most susceptible subgroups or individuals in the population. By focusing investigations on the most susceptible individuals, there might be a better chance of elucidating the underlying mechanisms (Omenn et al. 1990; Eaton et al. 1998). Several host factors influence susceptibility to environmental exposures: genetic traits (including sex and age), pre-existing diseases, behavioural traits (most importantly smoking), co-existing exposures, medications and vitamins, and protective measures (including respirators, gloves, and other barriers). Genetic studies are of two kinds.

1.
Investigations of the effects of chemicals and radiation on genes and chromosomes, which constitute ‘genetic toxicology’: tests measure evidence of mutations (e.g. the Ames test), adduct formation between chemicals and DNA or proteins, chromosomal aberrations, sister chromatid exchange, DNA repair, and oncogene activation.

2.
Ecogenetic studies, identifying inherited variation in susceptibility (predisposition and resistance) to specific exposures, ranging across pharmaceuticals (‘pharmacogenetics’), pesticides, inhaled pollutants, foods, food additives, sensory stimuli, allergic and sensitizing agents, and infectious agents.
Variation in susceptibility has been demonstrated for all of these kinds of external agents (Omenn and Motulsky 1978; National Research Council 1993; Nebert 1999). The ecogenetic variation may affect either the biotransformation systems (enzymes that activate or detoxify chemicals) or the sites of action in target tissues. Examples of ecogenetic considerations for immune response are seen in humans with beryllium sensitivity. Ethical issues about how such ecogenetic information can be utilized in protecting worker health have been the focus of recent Department of Energy concerns (Bartell et al. 2000).
Extrapolation from rodents to humans
Some of the most important scientific advances in the past decade, and some of the most promising work for the future, provide mechanism-based information for the critical question of relating rodent results to human risks. For all endpoints, it is essential to know more about the similarities and differences across species. Detailed knowledge of molecular mechanisms and of cellular and organ system responses can guide better decisions about which of the chemicals that produce cancers, neurotoxicity, birth defects, or other adverse effects in rodents really represent significant risks of the same effects in humans. Nearly all of our predictions about carcinogenicity risks for humans are based on the results of lifetime rodent bioassays. These bioassays are hardly themselves a ‘gold standard’, given their statistical and biological limitations and the observation that congruence of results between rats and mice is only 70 per cent (Lave et al. 1988; Haseman and Lockhart 1993). It is unlikely that rodent–human congruence would be higher. EPA’s revised cancer risk assessment guidelines (EPA 1996b) created a new category of animal carcinogens not likely to be predictive for human cancer risk. As summarized by McClain (1994) and the Risk Commission (1997), we can now cite several rodent carcinogenic responses that are candidates for having no similar effect in humans (Table 8).

Table 8 Rodent carcinogenic responses not likely to apply in humans

Briefly, the male rat kidney has been demonstrated to respond with a nephropathy mediated by α2u-globulin, a protein with no significant counterpart in humans or in other animals; the EPA has recognized this distinction in its regulatory actions. The thyroid and other hormone-dependent tumours in rodents reflect marked species differences in the stimulating and feedback systems; sustained excessive levels of thyroid-stimulating hormone and lack of serum thyroid-binding globulin are the key elements in the hyperplasia and tumours of the rat thyroid. The incidence of spontaneous thyroid follicular cell neoplasia is also much higher in laboratory rat strains (e.g. Fischer 344); among humans in endemic areas of iodine deficiency, where many people have goitres, thyroid cancer is nevertheless rarely found (EPA 1998c). Local necrosis and reactive hyperplasia in the bladder and in the forestomach represent responses to high local concentrations of a cytotoxic agent. On the other hand, many chemicals do cause tumours or other adverse effects in other parts of the body when administered (conveniently) by gavage, so the point here applies only to those chemicals whose effects are limited to the local point of application. Lung tumours have occurred with a variety of essentially inert particles, including titanium dioxide and carbon black, when the clearance capacity is markedly exceeded. The mouse liver cancer picture is considerably more complicated, with half a dozen different mechanisms, some of which seem to have definite counterparts in humans. High-dose mechanisms, involving induction of peroxisomes, cytotoxicity, and microsomal enzyme induction, seem much less likely to represent a significant risk in humans.
Information resources
There has been an explosion of toxicology information now available on-line. HazDat can be accessed on the world-wide web at http://www.atsdr.cdc.gov/hazdat.html. This database contains information on hazardous substance releases and contaminants, as well as over 160 public health statements from the Agency for Toxic Substances and Disease Registry chemical-specific toxicology profiles. EXTOXNET (http://ace.orst.edu/info/extoxnet/faqs/extoxnet.htm) provides information on the environmental chemistry and toxicology of pesticides, food additives, natural toxicants, and environmental contaminants. It is a product of an ad hoc consortium of university toxicologists and environmental chemists.
Other key sources of information for toxicologists are available through large databases such as RTECS, Toxline, and Medline. Scientific publications from the International Agency for Research on Cancer (IARC) are useful as is their website at http://depts.washington.edu/irarc/index.html. The EPA provides health hazard information on over 500 chemicals and includes the most current oral RfDs, inhalation RfCs, and carcinogen unit risk estimates (q1*) on the Integrated Risk Information System (IRIS); however, there are complaints about long lags in updating IRIS information. IRIS can be accessed at http://www.epa.gov/iris/. For Risk Commission documents, use http://www.riskworld.com/.
Integrating qualitative and quantitative aspects of risk assessment: classification schemes
Qualitative assessment of hazard information should include consideration of the concordance of toxicological findings across species and target organs, of consistency across duplicate experimental conditions, and of adequacy of the experiments to detect the adverse endpoints of interest.
The National Toxicology Program uses several categories in its biennial report on carcinogens. The National Toxicology Program’s evaluation guidelines allow for categories of ‘known to be human carcinogens’ as well as ‘reasonably anticipated to be human carcinogens’, where there is limited evidence of carcinogenicity in humans and/or sufficient evidence of carcinogenicity in animals. Sufficient evidence in animals can include dose-related increases in malignant or combined malignant and benign neoplasms in multiple species, at multiple tissue sites, and/or by multiple routes of exposure. Also important to the ‘sufficient’ category are unusual tumours or tumours occurring at an early age of onset or at different sites.
Similar classifications have been used for both the animal and human evidence categories by the EPA and IARC: sufficient, limited, inadequate, and no evidence (EPA 1994b); and sufficient, limited, inadequate, evidence suggesting lack of carcinogenicity, and no evidence (IARC 1994a). Table 9 presents IARC’s list of human carcinogens, of which 31 are chemicals, 18 are pharmaceuticals (mostly cancer chemotherapy agents), and 13 are manufacturing processes (IARC 1999). Although differing group numbers or letter categories are used, striking similarities exist between the EPA and IARC approaches to the overall weight of evidence in carcinogenicity classification schemes. The revised EPA risk assessment guidelines for carcinogenic substances relabel the categories as ‘known’, ‘likely’, or ‘not likely’ to be carcinogenic to humans, plus ‘cannot evaluate’ (EPA 1996b).

Table 9 Chemicals and related exposures with sufficient evidence for carcinogenicity in humans

So far we have discussed approaches for evaluating cancer endpoints. Similar weight of evidence approaches for reproductive risk assessment have been proposed. The Institute for Evaluating Health Risks defined an ‘evaluative process’ by which reproductive and developmental toxicity data can be evaluated consistently and integrated to ascertain their relevance for human health risk assessment (Moore et al. 1995).
Ethylene oxide: an example of chemical specific risk assessment
EtO is a colourless gas that is used as a chemical intermediate in the manufacture of industrial products, such as ethylene glycol, polyester fibres, and detergents. EtO is also used as a pesticide fumigant. Over 75 000 hospital workers are exposed via its use as an antimicrobial sterilant. EtO is one of the 25 chemicals of highest production volume in the United States, with over 2.5 million tons produced per year (IARC 1994b). There is evidence from animal tests and human exposure studies that EtO is a carcinogen, mutagen, reproductive toxicant, and neurotoxicant. We will focus on an assessment of its carcinogenic effects. EtO has been regulated under the Occupational Safety and Health Administration, Consumer Product Safety Commission, EPA, and FDA statutes (Table 3).
Short-term assay information
EtO has been shown, like many reactive epoxides, to be a direct-acting (genotoxic) mutagen. It has been evaluated in bacterial, plant, Drosophila germ cell, and rodent and human cell assays. It causes chromosomal aberrations and point mutations. In fact, EtO is frequently used as a positive control in assays of other potentially mutagenic agents.
Rodent bioassays
All three rodent bioassays conducted with EtO by inhalation found dose-related increases in tumours in both rats and mice, both male and female. Increased rates for mononuclear cell leukaemia, peritoneal mesothelioma, mixed brain tumours, alveolar/bronchiolar carcinomas and adenomas, lymphomas, papillary cystadenoma of the harderian gland, uterine adenocarcinomas, and mammary gland tumours were observed (Snellings et al. 1984a,b). Exposure levels included 0, 10, 33, 50 and 100 ppm for 6 to 7 h/day for 5 days per week for 2 years.
Epidemiology
Published studies of EtO-exposed workers are positive for cancer risk. Of the eight studies of chemical workers exposed to EtO, five found excesses of lymphatic and haematopoietic cancer, but only two were statistically significant. The standardized mortality ratios for chemical plant workers were approximately nine for stomach and oesophageal cancers and leukaemia (Hogstedt et al. 1986). This study, as well as others on chemical plant workers, was complicated by exposure to multiple chemicals besides EtO and by relatively small cohort sizes (700 to 3000 participants). There are four studies of sterilant workers. A study of 20 000 EtO sterilizer-exposed workers found significant increases in haematopoietic cancers in male workers (Steenland et al. 1991); these increases might have been confounded by other exposure factors such as human immunodeficiency virus infection. The three other, smaller studies of sterilant workers showed non-significant increases in lymphatic and haematopoietic cancer. Molecular biomarker studies correlating EtO exposure, levels of hydroxyethyl adducts in haemoglobin, and cancer incidence failed to establish a direct correlation between adducts and cancer, but did show a good dose–response relationship between the adducts and EtO exposures (Hagmar et al. 1991; Walker et al. 1993; Farmer and Shuker 1999; Wu et al. 1999).
Qualitative assessment
The EPA has concluded that EtO is a probable human carcinogen (EPA-B2 classification). They determined that there is sufficient animal evidence, but limited to inadequate human evidence, on which to base this qualitative assessment. Because EtO has been shown to have a genotoxic mode of action to produce its carcinogenicity, EPA has used EtO as an example in their revised cancer guidelines (EPA 1999). IARC has upgraded its overall evaluation of EtO from class 2A to class 1 (carcinogenic to humans) based on other data that support its carcinogenicity and mechanisms (IARC 1999).
Exposure assessment
Occupational exposures to EtO are primarily via inhalation. For chemical plant workers, a 7 h/day, 5 days/week, 50 weeks/year exposure scenario is appropriate. An appropriate model of exposure for EtO sterilizer workers in hospitals should include multiple short-term exposure peaks spread throughout the work day, reflecting cycles of EtO sterilizer operation. Both the time-weighted average and short-term exposure limits are believed to be particularly important for genotoxic carcinogens such as EtO, as DNA repair systems are known to be saturable at higher exposure levels.
Kinetics
EtO is rapidly absorbed through the respiratory route and is uniformly distributed throughout the body. The half-life of EtO is approximately 60 min in humans and 12 to 13 min in rats and mice. Two key inactivation pathways have been identified: glutathione conjugation, and hydrolysis to ethylene glycol with subsequent metabolism to CO2. EtO produces both DNA and protein alkylation. At high dose levels EtO is hypothesized to deplete glutathione and thereby produce non-linear dose–effect relationships (Brown et al. 1998).
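The half-lives quoted above imply roughly first-order elimination. A one-compartment sketch (an illustrative assumption, not a validated pharmacokinetic model) shows how much faster rodents clear an equivalent internal dose:

```python
import math

def fraction_remaining(t_min, half_life_min):
    """Fraction of absorbed EtO remaining after t_min, assuming
    simple first-order elimination with the given half-life."""
    k = math.log(2) / half_life_min  # elimination rate constant (per min)
    return math.exp(-k * t_min)

# Half-lives from the text: ~60 min in humans, ~12 min in rodents
human_2h = fraction_remaining(120, 60)   # 0.25 (two half-lives elapsed)
rodent_2h = fraction_remaining(120, 12)  # ~0.001 (ten half-lives elapsed)
```

Such species differences in internal dose over time are one reason cross-species extrapolation benefits from kinetic modelling rather than external exposure alone.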
Susceptible populations
Because EtO is a genotoxic carcinogen, individuals with compromised DNA repair pathways would be at greater risk than other individuals. Other susceptible populations include the unborn children of workers. EtO is already a reactive epoxide, not requiring activation; however, it does undergo inactivation via two pathways for which ecogenetic differences could exist.
Quantitative risk assessment and standard setting
The Occupational Safety and Health Administration has used the rodent mesothelioma and leukaemia data to model upper-bound estimates of risk (Snellings et al. 1984a,b; National Toxicology Program 1987). A q1* (the upper 95 per cent confidence limit on cancer potency) for EtO was set at 3.4 × 10^-1 per mg EtO per kg body weight per day, based on the EPA's evaluation of the study by Snellings et al. (1984a) of the incidence of mononuclear cell leukaemia and brain tumours in female rats. The current Occupational Safety and Health Administration time-weighted average for EtO is 1 ppm averaged over an 8-h day. The initially proposed short-term exposure limit of 5 ppm was not upheld by the courts.
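Under the linearized multistage approach from which q1* is derived, the upper-bound lifetime risk at low doses is simply potency multiplied by lifetime average daily dose. A minimal sketch (the dose value below is hypothetical, for illustration only):

```python
Q1_STAR = 3.4e-1  # upper 95% confidence bound on potency, per (mg EtO/kg/day)

def upper_bound_lifetime_risk(ladd_mg_per_kg_day):
    """Linearized upper-bound lifetime cancer risk = q1* x dose.
    Valid only at low doses, where the multistage model is linear."""
    return Q1_STAR * ladd_mg_per_kg_day

# Hypothetical lifetime average daily dose (LADD) of 1e-3 mg/kg/day
risk = upper_bound_lifetime_risk(1e-3)  # 3.4e-4 upper-bound lifetime risk
```

Because q1* is an upper confidence bound, the plausible risk could be much lower, even zero; the calculation bounds rather than estimates the true risk.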
Regulatory risk management/control of exposures
One of the key regulatory needs for improving the safe use of EtO in occupational settings is the establishment of short-term exposure limit values. Our review of the options supports the establishment of a 15-min exposure limit that must not exceed five times the 8-h time-weighted average. Improved ventilation controls for EtO sterilization units in hospitals can continue to offer an excellent solution for the safe use of this sterilization process. The Occupational Safety and Health Administration currently provides several recommended engineering options. These include the use of non-recirculating exhaust hoods built directly over the sterilizer door, a capture box built over the floor drains for the sterilizers, and extended vacuum purges of the sterilizer chamber with 'door-locked' phases that prevent premature entry.
Substitutes
Few of the alternatives to EtO sterilization are effective or appropriate. Chemical disinfecting (glutaraldehyde is the agent of choice) requires long soaking times (11 h) for comparable levels of sterilization, costs more due to personnel time in processing, and results in alternative exposures. Other less favourable alternatives do not provide adequate sterilization.
Comparing international approaches to carcinogen risk assessment
For many years there has been an information-sharing process aimed at harmonization of chemical testing regimes and clinical trials methodologies, so that data might be accepted in multiple countries that are members of the OECD. The United Nations Conference on the Environment in Rio de Janeiro, Brazil, in 1992 established harmonization of risk assessment as one of its goals, with a co-ordinating role for the International Programme on Chemical Safety. The negotiation in 1994 of the General Agreement on Tariffs and Trade and the establishment of the World Trade Organization made harmonization of various aspects of testing, risk assessment, labelling, registration, and standards potentially important elements in trade, not just in regulatory science. Much progress has been achieved for pharmaceuticals (D'Arcy and Harron 1998).
Moolenaar (1994) summarized the carcinogen risk assessment methodologies used by various countries as a basis for regulatory actions. He tabulated the risk characterization, carcinogen identification, risk extrapolation, and chemical classification schemes of the EPA, the United States Public Health Service, the WHO/IARC, the American Conference of Governmental Industrial Hygienists, Australia, the European Union, Germany, the Netherlands, Norway, and Sweden. The approach of the EPA of estimating an upper bound on human risk is unique; all other countries estimate human risk values based on the expected incidence of cancer from the exposure under review. The United Kingdom follows a case-by-case approach to risk evaluations for both genotoxic and non-genotoxic carcinogens, with no generic procedures. Denmark, the European Union, the United Kingdom, and the Netherlands all divide carcinogens into genotoxic and non-genotoxic agents and use different extrapolation procedures for each. Norway does not extrapolate data to low doses, using instead the TD50 to divide category I carcinogens into tertiles by probable potency. The United Kingdom, the European Union, and the Netherlands all treat non-genotoxic chemical carcinogens as threshold toxicants; a NOAEL and safety factors are used to set acceptable daily intake values. It may be time for the United States to consider applying the benchmark dose method to non-genotoxic carcinogens, instead of the linearized multistage model; towards this goal we are encouraged by the proposals in the EPA's revised cancer guidelines (EPA 1996b).
The OECD countries have a well-established process of comparing economies of member countries for various benchmark parameters. An effort has been initiated to stimulate similar thinking about sentinel measures for comparisons of country performance in environmental protection (Lykke 1992; OECD 1996). The United Nations Conferences in Rio de Janeiro and Kyoto, together with other international forums, have sought an international consensus on the reduction of emissions of global importance (carbon dioxide) and regional importance (sulphur dioxide).
The United States Commission on Risk Assessment and Risk Management (Risk Commission)
The 1990 Clean Air Act Amendments (Title III) established an entirely new programme to control 189 named hazardous air pollutants from point sources through promulgation and implementation of technology-based standards. These standards were based on determination of the maximum achievable control technology for each category of point sources. During the previous 20 years only seven substances had been regulated under this section of the law (vinyl chloride, asbestos, benzene, radionuclides, mercury, arsenic, and beryllium) using chemical-by-chemical, risk-based analyses, largely because of the statutory requirement to determine a no-effect level and then set the standard sufficiently lower to assure an 'ample margin of safety'. For carcinogens, the no-effect level was assumed to be zero. Congress further mandated that the EPA determine whether any unacceptable residual risks to health from hazardous air pollutants remain after the maximum achievable control technology has been implemented.
The National Academy of Sciences was called upon to review the methods used by the EPA to determine both carcinogenic and non-carcinogenic risks. The report Science and Judgment in Risk Assessment (National Research Council 1994) reflects the interplay of scientific methods, variability and uncertainty, and social, political, cultural, and economic values. That report was a major input to the Risk Commission mandated by the 1990 Amendments 'to make a full investigation of the policy implications and appropriate uses of risk assessment and risk management under various federal laws to prevent cancer and other chronic human health effects which may result from exposure to hazardous substances'. The Commission operated from 1994 to 1997 and was composed of three members appointed by the President, six by the leaders of the Congress, and one by the National Academy of Sciences, with G.S. Omenn as the chairman.
The Risk Commission made general recommendations about the uses and limitations of risk assessment, uncertainty analysis, economic analysis, peer review, and risk management decision-making, and specific recommendations for the various regulatory agencies and their major programmes (Risk Commission 1997). The Commission recognized that it is time to modify the traditional approaches to assessing and reducing risks that have relied upon a chemical-by-chemical, medium-by-medium, risk-by-risk strategy. The output had become too focused on assumption-laden mathematical estimates of small risks associated with exposure to individual chemicals, rather than the overall goal of improved health status through the reduction of significant risks. Thus, the Commission developed the Risk Management Framework shown in Fig. 1. The Framework embraces collaborative and early involvement of stakeholders; requires that a current or potential problem be put into a broader context of public health or ecological health; stimulates identification of the interdependence of multimedia problems; and focuses on cumulative risks and on addressing the benefits, costs, and social, cultural, ethical, political, and legal dimensions of the risk reduction options.
The Commission highlighted the importance of mechanisms of toxicity, risks from microbial and radiation exposures (not just from chemicals), use of realistic scenarios in exposure assessments, and attention to mixtures of chemicals and multiple interacting exposures. It endorsed extensive modelling of variability in exposures, and expressed reservations about excessive modelling of uncertainty (Goldstein 1995). It supported use of economic analyses, especially cost-effectiveness analysis, but not as the overriding determinant of risk management decisions.
For individual agencies of the United States government, the Commission presented a tiered approach to set priorities for the residual risk mandate on hazardous air pollutants, which will be implemented over the next 10 to 20 years, and recommended that these risks be considered in the context of risks associated with the same pollutants from other sources, other air pollutants (such as the ubiquitous 'criteria air pollutants': sulphur dioxide, particles, nitrogen dioxide, hydrocarbons, ozone/photochemical oxidants, carbon monoxide, and lead), and other risks to the health of children and adults. Other recommendations addressed early determination of future land use for Superfund site clean-up objectives; comprehensive watershed management approaches for the Clean Water Act; risk assessment improvements adopted in the 1996 Safe Drinking Water Act; streamlined processes for developing permissible exposure limits for air contaminants in the workplace; modification of the 'Delaney Clause' covering food additives to a standard of reasonable certainty of no harm for all population groups, which was adopted in the 1996 Food Quality Protection Act; international harmonization of risk assessment and clinical trial protocols for pharmaceuticals; restoration of the authority of the FDA to require scientific evidence to support health claims for dietary supplements; risk-based approaches to priority-setting and budget-making for clean-up of contaminated sites at federal facilities; and better control of microbial risks in foods and drinking water. In its reports, the Commission presented numerous examples of stakeholder involvement (volume I) and of risk assessment and risk management outcomes (volume II). These recommendations have had a large influence in the Congress and in federal and state agencies, as well as international forums (HSE 2000).
Comparative analyses of risks and perceptions of risk
This aspect of risk assessment, risk communication, and risk management is so logical that it may be surprising to learn that comparisons of risks are extremely controversial. Public health officials practice comparative risk assessment, at least intuitively, on a routine basis when deciding how to allocate their own time, their staff’s time, and other resources. They must make judgements about what and how to advise their local communities about potential and definite risks. They must anticipate the question: ‘Compared with what?’
In fact, most people regularly compare risks of alternative activities—in their jobs, in recreational pursuits, in interpersonal interactions, and in investments. Since 1993, members of the United States Congress have pressed for the systematic use by federal regulatory agencies of comparisons of similar and dissimilar risks. The aim is to make the benefits and costs of health, safety, and environmental protection actions more explicit, more comprehensible, and more cost-effective. However, determining how best to conduct comparative risk analyses and do so efficiently has proved difficult, due to the great variety of health and environmental benefits, the uncertainties of dollar estimates of benefits and costs, and the different distributions of benefits and costs across the population.
A new concept, ‘environmental justice’, has emerged to reflect the ethical guidance that poor, disenfranchised neighbourhoods should be protected as much as well-to-do suburban neighbourhoods (Rios et al. 1993; Risk Commission 1997; Institute of Medicine 1999). In fact, the poor may need greater protection due to their coexisting higher risk factors for poor pregnancy outcomes, impaired growth and development, smoking-related cancers, asthma, and lead toxicity, among other health problems. On the other hand, the compelling need to overcome poor rates of prenatal care and childhood immunization, poor housing, lack of education, violence, and joblessness may make hypothetical or long-term estimated risks from chemical pollutant exposures relatively less salient to these communities.
Several formal analyses to compare risks have been developed. One method estimates the contribution of a particular activity or exposure to deaths in the general population, combining reports of actual deaths and estimates of likely or worst-case effects of various risk factors. McGinnis and Foege (1993) compiled the 10 leading reported causes of death for the 2.1 million American deaths in 1990 and then listed the ‘real causes of death’, beginning with smoking (430 000 excess deaths), then poor diet and physical inactivity, then alcohol (Table 10). The message here is clear: almost one-half of deaths and a higher proportion of premature deaths are caused by preventable risk factors, mostly individual behaviours. They estimated 60 000 deaths per year in the United States from exposure to toxic substances.

Table 10 Causes of death

Another approach determines the average estimated loss of life expectancy attributable to various causes (Crouch and Wilson 1982). For example, male smokers may lose an estimated 2250 days of life expectancy; persons 20 per cent overweight, 900 days; alcohol users, 130 days; persons of low socio-economic class, 700 days; and those impaired by occupational hazards, 30–300 days. Obviously, these are very rough categories and estimates. A third approach estimates the exposure required to increase the annual death rate by one in a million. Estimates for different exposures include smoking 1.4 cigarettes daily (heart disease and cancers), drinking 0.5 litres of wine daily (cirrhosis of the liver), or eating 40 tablespoons of peanut butter contaminated with aflatoxin daily (liver cancer). Finally, the World Bank and many others are now using disability-adjusted life years (World Bank 1993).
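The 'one-in-a-million' comparisons above all rest on the same arithmetic: divide a target annual risk by an assumed per-unit risk. A sketch, with a per-cigarette risk back-calculated purely for illustration from the comparison in the text:

```python
def units_for_target_risk(risk_per_unit, target_risk=1e-6):
    """Amount of an activity that yields the target annual death risk,
    assuming risks are small and add linearly across units."""
    return target_risk / risk_per_unit

# Illustrative per-cigarette annual mortality risk, chosen to be
# consistent with the text's '1.4 cigarettes daily' comparison
risk_per_cigarette = 1e-6 / 1.4
daily_cigarettes = units_for_target_risk(risk_per_cigarette)  # 1.4
```

The linear-additivity assumption is itself part of what makes such comparisons contestable, as the following paragraph notes.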
All these approaches have serious limitations because of inadequate information on the variability of the statistics, uncertainty about the size of the population at risk due to specific exposures, complexities of exposures, multiple additional risk factors, and the complicated aetiology of deaths and disabilities.
Individuals respond very differently to information about hazardous situations, as do societies. An event that is accepted by one individual may be unacceptable to another (Fischhoff 1981) (see also Chapter 8.9). Understanding these behavioural responses is critical in developing risk management options. In a classic study, students, League of Women Voters members, active club members, and scientific experts were asked to rank 30 activities or agents in order of their annual contribution to deaths (Slovic et al. 1979; Morgan et al. 1992). The lay groups all ranked motorcycles and handguns as high risks and vaccinations, home appliances, power mowers, and football as relatively safe. Club members viewed pesticides, spray cans, and nuclear power as safer than did other lay persons. Students ranked contraceptives and food preservatives as riskier and mountain climbing as safer than did the others. Meanwhile, experts ranked electric power, surgery, swimming, and X-rays as more risky, and nuclear power and police work as less risky, than did lay persons. There are also group differences in perceptions of risk from chemicals among toxicologists according to their work in industry, academia, or government (Neil et al. 1994).
Psychological factors such as dread, perceived uncontrollability, and involuntary exposure interact with factors that represent the extent to which a hazard is familiar, observable, and ‘essential’ (Lowrance 1976; Morgan 1993). Public demand for government regulations often focuses on involuntary exposures and unfamiliar hazards, such as radioactive waste, electromagnetic fields, asbestos insulation, and genetically modified crops and foods. People’s perceptions may be related to technical and emotional grounds; Sandman (1993) classified these aspects as ‘hazard’ and ‘outrage’ respectively.
A different kind of risk comparison is conducted at the programme planning level across diverse types of risk. The EPA published a landmark review, Unfinished Business, in 1987, ranking EPA programmes in priority for more investment, based on the then current funding levels and technical progress. In 1990 EPA's Science Advisory Board followed up with Reducing Risks, which categorized the relative risks of cancers, non-cancer endpoints, and various ecological impacts (EPA 1990). Meanwhile, there have been many comparative risk analysis forums or projects in various states. Vermont and Colorado had particularly productive experiences with public involvement. Washington State generated a public process for 'Washington 2010'. A mayoral task force produced 'Seattle's Environmental Priorities', which has guided budget decisions and public understanding. A highly publicized process in California yielded a draft report just before the 1994 state elections, which remained in limbo for many months due to political posturing about environmental justice and social welfare aspects mentioned prominently in the report; its array of high, medium, and low priorities among health and ecological risks has been utilized in state planning. Local governments, faced with unfunded regulatory mandates and limited budgets, are seeking rational and cost-effective options. One approach is the preparation of specific 'community risk profiles' (Wernick 1995). Table 11 lists some of the underlying philosophies of regulation.

Table 11 Philosophies of regulation

Economic analyses
The role of economic analysis in regulatory decision-making is controversial and highly political. Public health advocates are generally suspicious that economic analysis places too much emphasis on assigning dollar values to aspects of health and the environment that are difficult, if not impossible, to quantify. Furthermore, the equity implications of policies and regulations may be neglected. For example, if a decision decreases the welfare of the poor and increases the welfare of the wealthy, but the benefit to the wealthy outweighs the loss to the poor (in dollars, not per cent of income), quantitative benefit/cost analyses might show the policy to yield an improvement in aggregate social welfare. Another problem arises from the frequent use of point estimates for benefits and costs. Like the results of risk assessments, economic analyses involve multiple assumptions, choices among data and models, and very substantial uncertainty. As emphasized by the Risk Commission (1997), economic analyses require transparency, peer review, and stakeholder participation just as much as do risk assessments.
In the United States, some statutes (Table 4) require consideration of costs and benefits (pharmaceuticals, pesticides), others explicitly exclude their consideration (Clean Air Act), and others are silent. Morgenstern (1997) concluded that economic analysis so far has played a minor part, primarily because the scientific information on which benefits analyses were based was so weak that the credibility and influence of the economic analyses were undermined.
Nevertheless, there is broad agreement that information about the incremental costs and benefits associated with particular options for a regulatory decision can serve the public interest (Arrow et al. 1996a; Risk Commission 1997). Cost-effectiveness analyses are particularly helpful, as they begin with specification of the public health or ecological regulatory goal (without conversion to monetary values) and then explore and compare the methods of achieving that goal to identify the least costly one. For example, if the health-based goal is to reduce the current ambient ozone standard to 0.10 ppm, cost-effectiveness analysis could be used to help choose among options with different technologies, different costs, and different probabilities of success. Tengs et al. (1995) used cost-effectiveness analysis to compare the costs of many different life-saving medical, public health, and environmental regulatory interventions against a common measure, the estimated years of life saved. A similar approach can be used, with fewer assumptions and extrapolations, to assess different means of achieving intermediate regulatory goals. For example, there might be several alternative strategies to reduce automobile exhaust emissions as part of a larger ozone control programme. One might rank the cost of those alternatives per unit of emissions reduced.
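The ranking described above reduces to sorting options by cost per unit of benefit. A sketch with hypothetical emission-control strategies (the names and dollar figures below are invented for illustration, not drawn from Tengs et al.):

```python
def rank_by_cost_effectiveness(options):
    """Rank options by dollars per life-year saved, cheapest first.
    options: dict mapping name -> (total_cost_dollars, life_years_saved)."""
    return sorted(options.items(), key=lambda kv: kv[1][0] / kv[1][1])

# Hypothetical strategies for reducing automobile exhaust emissions
options = {
    'inspection and maintenance': (2_000_000, 100),  # $20 000 per life-year
    'fuel reformulation': (1_000_000, 20),           # $50 000 per life-year
    'fleet retrofit': (9_000_000, 150),              # $60 000 per life-year
}
ranking = rank_by_cost_effectiveness(options)
best = ranking[0][0]  # 'inspection and maintenance'
```

Note that the denominator (life-years saved) is specified without monetization, which is precisely what distinguishes cost-effectiveness analysis from full benefit/cost analysis.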
Risk communication (see also Chapter 8.9)
Public health agencies and officials regularly engage in communication with the public and with public officials and private sector parties about health risks. It is a primary mission of public health to investigate the causes of health problems and the ways to reduce the incidence and consequences of such problems. Actions must include environmental controls, such as protection of air, water, and food from chemical and microbiological contamination and protection against radioactive exposures in medical care, industry, and the general environment. For inactive (abandoned) hazardous waste sites, the federal Superfund Law requires a public health advisory statement for each of the more than 1300 National Priorities List Sites in the United States. At federal facilities sites, an organized approach has been developed by the United States Department of Energy Environmental Restoration and Waste Management Program, involving local citizens, Indian Nations, and environmental organizations, to deal with the overlapping array of federal and state statutes, regulations, and programmes (Omenn 1994; Boiko et al. 1996; van Belle et al. 1996).
Actions also must include health education to promote healthy behaviours and reduce unhealthy behaviours—smoking, violence, alcohol and other drug abuse, sexually transmitted diseases, physical inactivity, and social isolation. Clinical preventive services of immunizations, counselling, and screening are important medical contributions to individual patients, and public health services are important to the health status of communities. Finally, communication about risks and risk reduction must mobilize public policy in the form of incentives and disincentives for health promotion and against pollution and unhealthy behaviours. In the United States, reports from all Surgeons General since 1979 have sustained a campaign called Healthy People, which embraces health protection, health promotion, and clinical preventive services (Oberle et al. 1994). Local communities increasingly take part in that risk communication/health promotion process (Oberle et al. 1994). Their knowledge of past exposure pathways and of apparent health and ecological effects can redirect the technical assessments. Their views on future uses of contaminated sites can be crucial to deciding how stringent the clean-up must be. All of these programmes must be integrated to achieve a reinforcing strategy for disease and injury prevention and for health promotion (National Research Council 1996; Risk Commission 1997).
Conclusions
The scientific community has come a long way in the past 25 years in expanding the science base, in better defining our questions about assumptions and models, and in helping regulators to make risk-based decisions for the protection of human health and the environment. The way ahead should be an acceleration of knowledge from the public health sciences including toxicogenomics (Omenn 2000). We expect important inputs from our constituencies—legislators, regulators, manufacturers, environmentalists, media, and affected communities—throughout the global commons.
Chapter References
Abbott, B.D., Harris, M.W., and Birnbaum, L.S.(1992). Comparisons of the effects of TCDD and hydrocortisone on growth factor expression provide insight into their interaction in the embryonic mouse palate. Teratology, 45, 35–53.
Albert, R.E.(1994). Carcinogen risk assessment in the US Environmental Protection Agency. Critical Reviews in Toxicology, 24, 75–85.
Allen, B.C., Kavlock, R.J., Kimmel, C.A., and Faustman, E.M.(1994a). Dose response assessments for developmental toxicity II. Comparison of generic benchmark dose estimates with NOAELs. Fundamental and Applied Toxicology, 23, 487–95.
Allen, B.C., Kavlock, R.J., Kimmel, C.A., and Faustman, E.M.(1994b). Dose–response assessment for developmental toxicity, III. Statistical models. Fundamental and Applied Toxicology, 23, 496–509.
Ames, B.N. and Gold, L.S.(1990). Too many rodent carcinogens: mitogenesis increases mutagenesis. Science, 249, 970–1.
Anderson, E.L.(1983). Quantitative approaches in use to assess cancer risk. Risk Analysis, 3, 277–95.
Armitage, P. and Doll, R.(1957). A two-stage theory of carcinogenesis in relation to the age distribution of human cancer. British Journal of Cancer, 11, 161–9.
Arrow K.J., et al.(1996a). Is there a role for benefit-cost analysis in environmental, health, and safety regulation? Science, 272, 221–2.
Arrow K.J., et al.(1996b). Benefit-cost analysis in environmental, health, and safety regulation. A statement of principles.
Ashby, J. and Tennant, R.W.(1994). Prediction of rodent carcinogenicity of 44 chemicals: results. Mutagenesis, 9, 7–15.
ATBC Cancer Prevention Study Group(1994). The effect of vitamin E and beta-carotene on the incidence of lung cancer and other cancers in male smokers. New England Journal of Medicine, 330, 1029–35.
Atterwill, C.K., Johnston, H., and Thomas, S.M.(1992). Models for the in vitro assessment of neurotoxicity in the nervous system in relation to xenobiotic and neurotrophic factor-mediated events. Neurotoxicology, 13, 39–54.
Auton, T.R.(1994). Calculation of benchmark doses from teratology data. Regulatory Toxicology and Pharmacology, 19, 152–67.
Bailar, J.C., III(1991). Scientific inferences and environmental health problems. Chance: New Directions for Statistics and Computing, 4, 27–38.
Barnes, D.G. and Dourson, M.J.(1988). Reference dose (RfD): description and use in health risk assessment. Regulatory Toxicology and Pharmacology, 8, 471–86.
Barnes, D.G., et al.(1995). Benchmark dose workshop. Regulatory Toxicology and Pharmacology, 21, 296–306.
Bartell, S.M. and Faustman, E.M.(1998). Comments on ‘An approach for modeling noncancer dose responses with an emphasis on uncertainty’ and ‘A probabilistic framework for the reference dose (probabilistic RfD)’. Risk Analysis, 18, 663–4.
Bartell, S.M., et al.(2000). Risk estimation and value-of-information analysis for three proposed genetic screening programs for chronic beryllium disease prevention. Risk Analysis, 20, 87–99.
Boiko, P.E., et al.(1996). Who holds the stakes? A case study of stakeholder identification at two nuclear weapons production sites. Risk Analysis, 16, 237–49.
Brown, C.C.(1984). High-to low-dose extrapolation in animals. In Assessment and management of chemical risks (ed. J.V. Rodricks and R.G. Tardiff), pp. 57–79. American Chemical Society, Washington, DC.
Brown, C.D., Asgharian, B., Turner, M.J., and Fennel, T.R.(1998). Ethylene oxide dosimetry in the mouse. Toxicology and Applied Pharmacology, 148, 215–21.
Bucher, J.R., Potter, C.J., Goodman, J.L., Faustman, E.M., and Lucier, G.W.(1996). National Toxicology Program studies: principles of dose selection and applications to mechanistic based risk assessment. Fundamental and Applied Toxicology, 31, 1–8.
Calkins, D.R., Dixon, R.L., Gerber, C.R., Zarin, D., and Omenn, G.S.(1980). Identification, characterization, and control of potential human carcinogens: a framework for federal decision-making. Journal of the National Cancer Institute, 61, 169–75.
Charnley, G. and Omenn, G.S.(1997). A summary of the findings and recommendations of the Commission on Risk Assessment and Risk Management (and accompanying papers prepared for the Commission). Human and Ecological Risk Assessment, 3, 701–11.
Clewell, H.J., III, Gentry, P.R., and Gearhart, J.M.(1997). Investigation of the potential impact of benchmark dose and pharmacokinetic modeling in noncancer risk assessment. Journal of Toxicology and Environmental Health, 52, 475–515.
Cohen, S.M. and Ellwein, L.B.(1990). Proliferative and genotoxic cellular effects in 2-acetylaminofluorene bladder and liver carcinogenesis: biological modeling of the ED01 study. Toxicology and Applied Pharmacology, 104, 79–93.
Crouch, E.A.C. and Wilson, R.(1982). Risk/benefit analysis. Ballinger, Cambridge, MA.
Crump, K.S.(1980). An improved procedure for low-dose carcinogenic risk assessment from animal data. Journal of Environmental Pathology and Toxicology, 5, 675–84.
Crump, K.S.(1984). A new method for determining allowable daily intakes. Fundamental and Applied Toxicology, 4, 854–71.
Cullen, A.C. and Frey, H.C.(1999). Probabilistic techniques in exposure assessment: a handbook for dealing with variability and uncertainty in models and inputs. Plenum Press, New York.
D’Arcy, P.F. and Harron, D.W.G.(eds). (1998). Fourth International Conference on Harmonization. Greystone Books, Northern Ireland.
Dourson, M.L. and DeRosa, C.T.(1991). The use of uncertainty factors in establishing safe levels of exposure. In Statistics in toxicology (ed. D. Krewski and C. Franklin), pp. 613–27. Gordon and Breach, New York.
Dourson, M.L. and Stara, J.F.(1983). Regulatory history and experimental support of uncertainty (safety factors). Regulatory Toxicology and Pharmacology, 3, 224–38.
Dourson, M.L., Hertzberg, R.C., Hartung, R., and Blackburn, K.(1985). Novel methods for the estimation of acceptable daily intake. Toxicology and Industrial Health, 1, 23–41.
Eaton, D.L., Farin, F., Omiecinski, C.J., and Omenn, G.S.(1998). Genetic susceptibility. In Environmental and Occupational Medicine (3rd edn). (ed. W.N. Rom), pp. 209–21. Lippincott-Raven, Philadelphia.
Environmental Defense Fund(1997). Toxic ignorance. Environmental Defense Fund, New York.
EPA(1989a). Risk assessment guidance for Superfund. Human health evaluation manual, Part A. EPA Office of Policy Analysis, Washington, DC.
EPA(1989b). Exposure factors handbook, final report. EPA Office of Health and Environmental Assessment, Washington, DC.
EPA(1990). Reducing risk: setting priorities and strategies for environmental protection. EPA Science Advisory Board, Washington, DC.
EPA(1991). Guidelines for developmental toxicity risk assessment. Federal Register, 56, 63798–826.
EPA(1992). Guidelines for exposure assessment. Federal Register, 57, 22888–938.
EPA(1994a). Guidelines for reproductive toxicity risk assessment. EPA Office of Research and Development, Washington, DC.
EPA(1994b). Health assessment document for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and related compounds. Vols I–III. EPA Office of Research and Development, Washington, DC.
EPA(1994c). Guidelines for carcinogen risk assessment (draft revisions). Office of Health and Environmental Assessment, Exposure Assessment Group, Washington, DC.
EPA(1994d). Estimating exposure to dioxin-like compounds. Office of Health and Environmental Assessment, Exposure Assessment Group, Washington, DC.
EPA(1996a). Proposed guidelines for ecological risk assessment. Federal Register, 61, 47552.
EPA(1996b). Risk assessment guidelines. EPA Office of Research and Development, Washington, DC.
EPA(1997). Aggregate exposure. Review document for the Scientific Advisory Panel. SAP Public Docket, Washington, DC.
EPA(1998a). Endocrine disruptor screening program; proposed statement of policy. Federal Register, 63, 71542–71568.
EPA(1998b). Chemical hazard availability study. EPA Office of Pollution Prevention and Toxics, Washington, DC.
EPA(1998c). Assessment of thyroid follicular cell tumors. EPA Office of Research and Development, Washington, DC.
EPA(1998d). Guidance for identifying pesticides that have a common mechanism of toxicity: notice of availability and solicitation of public comments. Federal Register, 63, 42031–2.
EPA(2000). Health assessment document for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and related compounds. Office of Research and Development, Washington, DC. (http://www.epa.gov/ncea/dioxin.html)
EPRI(1994). Electric utility trace substances synthesis report. Electric Power Research Institute, Palo Alto, CA.
Ershow, A.G. and Cantor, K.P.(1989). Total water and tapwater intake in the United States: population-based estimates of quantities and sources, pp. 328–34. Life Sciences Research Office, Federation of American Societies for Experimental Biology, Bethesda, MD.
Farmer, P.B. and Shuker, D.E.G.(1999). What is the significance of increases in background levels of carcinogen-derived protein and DNA adducts? Some considerations for incremental risk assessment. Mutation Research, 424, 275–86.
Faustman, E.M. and Bartell, S.M.(1997). Review of noncancer risk assessment: applications of benchmark dose methods. Human and Ecological Risk Assessment, 3, 893–920.
Faustman, E.M. and Omenn, G.S.(1996). Risk assessment. In Casarett and Doull’s toxicology (5th edn) (ed. C.D. Klaassen), pp. 75–88. McGraw-Hill, New York.
Faustman, E.M. and Omenn, G.S.(2001). Risk assessment. In Casarett and Doull’s toxicology. The basic science of poisons (6th edn) (ed. C. Klaassen). McGraw-Hill, New York.
Faustman, E.M., Allen, B.C., Kavlock, R.J., and Kimmel, C.A.(1994). Dose–response assessment of developmental toxicity: I. Characterization of data base and determination of NOAELs. Fundamental and Applied Toxicology, 79, 229–41.
Faustman, E.M., Wellington, D.G., Smith, W.P., and Kimmel, C.S.(1989). Characterization of a developmental toxicity dose response model. Environmental Health Perspectives, 79, 229–41.
Faustman, E.M., Lewandowski, T.A., Ponce, R.A., and Bartell, S.M.(1999). Biologically based dose–response models for developmental toxicants: lessons from methylmercury. Inhalation Toxicology, 11, 101–14.
Finley, B., Proctor, D., Scott, P., Harrington, N., Paustenbach, D., and Price, P.(1994). Recommended distributions for exposure factors frequently used in health risk assessment. Risk Analysis, 14, 533–53.
Fischhoff, B.(1981). Cost-benefit analysis: an uncertain guide to public policy. Annals of the New York Academy of Science, 363, 173–88.
Fischhoff, B., Bostrom, A., and Quandrei, M.J.(1996). Risk perception and communication. In Oxford textbook of public health (3rd edn) (ed. R. Detels, W. Holland, J. McEwen, and G.S. Omenn), pp. 987–1002. Oxford University Press.
Food Quality Protection Act(FQPA) (1996). EPA, Office of Pesticide Programs.
Gaylor, D.W., Kodell, R.L., Chen, J.J., and Krewski, D.(1999). A unified approach to risk assessment for cancer and noncancer endpoints based on benchmark doses and uncertainty/safety factors. Regulatory Toxicology and Pharmacology, 29, 151–7.
Goldman, L.R.(1998). Linking research and policy to ensure children’s environmental health. Environmental Health Perspectives, 106, 857–62.
Goldstein, B.(1995). Risk management will not be improved by mandating numerical uncertainty analysis for risk assessment. University of Cincinnati Law Review, 63, 1599–610.
Hagmar, L., et al.(1991). An epidemiological study of cancer risk among workers exposed to ethylene oxide using hemoglobin adducts to validate environmental exposure assessments. International Archives of Occupational and Environmental Health, 63, 271–7.
Harris, M.W., et al.(1992). Assessment of a short-term reproductive and developmental toxicity screen. Fundamental and Applied Toxicology, 19, 186–96.
Haseman, J.K. and Lockhart, A.M.(1993). Correlations between chemically related site-specific carcinogenic effects in long-term studies in rats and mice. Environmental Health Perspectives, 101, 50–4.
Health and Safety Executive(HSE) (2000). Reducing risks, protecting people. HSE Books, Sudbury, Suffolk. (Discussion document, 1999.)
Hill, A.B.(1965). The environment and disease: association or causation? Proceedings of the Royal Society of Medicine, 58, 295–300.
Hogstedt, C., Aringer, L., and Gustavsson, A.(1986). Epidemiologic support for ethylene oxide as a cancer-causing agent. Journal of the American Medical Association, 255, 1575–8.
IARC(1994a). IARC monographs on the evaluation of carcinogenic risks to humans. World Health Organization, Lyon.
IARC(1994b). Meeting of the IARC working group on some industrial chemicals. Scandinavian Journal of Work in Environmental Health, 20, 227–9.
IARC(1999). IARC monographs on the evaluation of carcinogenic risks to humans. World Health Organization, Lyon, France, Vols 1–73 (20 January 1999, update summary). http://193.51.164.11/monoeval/crthall.html
ILSI(1999). A framework for cumulative risk assessment. International Life Sciences Institute, Washington, DC.
INFORM(1996). Risks on record: an overview of the Toxic Substances Control Act’s substantial risk reporting system with bulletins on selected chemicals. INFORM, New York.
Institute of Medicine(1999). Toward environmental justice. National Academy Press, Washington, DC.
Kavlock, R.J., Allen, B.C., Faustman, E.M., and Kimmel, C.A.(1995). Dose response assessments for developmental toxicity. IV: Benchmark doses for fetal weight changes. Fundamental and Applied Toxicology, 26, 211–22.
Kimmel, C.A. and Gaylor, D.W.(1988). Issues in qualitative and quantitative risk analysis for developmental toxicology. Risk Analysis, 8, 15–20.
Kohn, M.C., et al.(1993). A mechanistic model of effects of dioxin on gene expression in the rat liver. Toxicology and Applied Pharmacology, 120, 138–54.
Krewski, D. and Van Ryzin, J.(1981). Dose response models for quantal response toxicity data. In Statistics and related topics (ed. M. Csorgo, D.A. Dawson, J.N.K. Rao, and A.K. Seleh), pp. 201–29. North-Holland, Amsterdam.
Landrigan, P.J. and Goldman, L.R.(1998). Report of a panel on the relationship between public exposure to pesticides and cancer. Cancer, 83, 1057–60.
Lave, L.B. and Omenn, G.S.(1986). Cost-effectiveness of short-term tests for carcinogenicity. Nature, 324, 29–34.
Lave, L.B., Ennever, F., Rosenkranz, H.S., and Omenn, G.S.(1988). Information value of the rodent bioassay. Nature, 336, 631–3.
Lehman, A.J. and Fitzhugh, O.G.(1954). 100-fold margin of safety. Association of Food and Drug Officials of the United States Quarterly Bulletin, 18, 33–5.
Leroux, B.G., Leisenring, W.M., Moolgavkar, S.H., and Faustman, E.M.(1995). A biologically based dose–response model for development. Risk Analysis, 16, 449–58.
Lewandowski, T.A., Bartell, S.M., Pierce, C.H., Ponce, R.A., and Faustman, E.M.(1998). Toxicokinetic and toxicodynamic modeling of the effects of methylmercury on the fetal rat. The Toxicologist, 42, 139.
Lewandowski, T.A., Ponce, R.A., Whittaker, S.G., and Faustman, E.M.(1999). In vitro models for evaluating developmental toxicity. In In vitro toxicology (ed. S.C. Gad). Raven Press, New York.
Lowrance, W.W.(1976). Of acceptable risk, pp. 180. William Kaufmann, Los Altos, CA.
Lykke, E.(1992). Achieving environmental goals: the concept and practice of environmental performance review. Pinter, London.
McClain, R.M.(1994). Mechanistic considerations in the regulation and classification of chemical carcinogens. In Nutritional toxicology (ed. F.N. Kotsonis, M. Mackey, and J. Hijele), pp. 278–304. Raven Press, New York.
McGinnis, J.M. and Foege, W.H.(1993). Actual causes of death in the United States. Journal of the American Medical Association, 270, 2207–12.
Moolenaar, R.J.(1994). Carcinogen risk assessment: international comparison. Regulatory Toxicology and Pharmacology, 20, 302–36.
Moolgavkar, S.H. and Luebeck, G.(1990). Two-event model for carcinogenesis: biological, mathematical, and statistical considerations. Risk Analysis, 10, 323–41.
Moore, J.A., et al.(1995). An evaluative process for assessing human reproductive and developmental toxicity of agents. Reproductive Toxicology, 9, 61–95.
Morgan, M.G.(1993). Risk analysis and management. Scientific American, 269, 32–5, 38–41.
Morgan, M.G., Fischhoff, B., Bostrom, A., Lave, L., and Atman, C.J.(1992). Communicating risk to the public. Environmental Science and Technology, 26, 2048–56.
Morgenstern, R.D.(ed). (1997). Economic analysis at EPA: assessing regulatory impact. Johns Hopkins University Press, Baltimore, MD.
Mumtaz, M.M., Sipes, I.G., Clewell, H.J., and Yang, R.S.(1993). Risk assessment of chemical mixtures: biologic and toxicologic issues. Fundamental and Applied Toxicology, 21, 258–69.
National Research Council(1983). Risk Assessment in the Federal Government: Managing the Process. National Academy Press, Washington, DC.
National Research Council(1984). Toxicity testing: strategies to determine needs and priorities. National Academy Press, Washington, DC.
National Research Council(1989a). Biological markers in pulmonary toxicology. National Academy Press, Washington, DC.
National Research Council(1989b). Biological markers in reproductive toxicology. National Academy Press, Washington, DC.
National Research Council(1992a). Biological markers in immunotoxicology. National Academy Press, Washington, DC.
National Research Council(1992b). Environmental neurotoxicology. National Academy Press, Washington, DC.
National Research Council(1993). Pesticides in the diets of infants and children. National Academy Press, Washington, DC.
National Research Council(1996). Understanding risk. National Academy Press, Washington, DC.
National Research Council Committee on Risk Assessment Methodology(CRAM) (1993). Issues in risk assessment: use of the maximum tolerated dose in animal bioassays for carcinogenicity. National Academy Press, Washington, DC.
National Research Council Committee on Risk Assessment of Hazardous Air Pollutants(1994). Science and judgment in risk assessment. National Academy Press, Washington, DC.
National Toxicology Program(1987). Toxicology and carcinogenesis studies of ethylene oxide in B6C3F1 mice. US Department of Health and Human Services, Public Health Service, National Institutes of Health, Research Triangle Park, NC.
Nebert, D.W.(1999). Pharmacogenetics and pharmacogenomics. Why is this relevant to the clinical geneticist? Clinical Genetics, 56, 247–58.
Nebert, D.W. and Duffy, J.J.(1997). How knockout mouse lines will be used to study the role of drug-metabolizing enzymes and their receptors during reproduction, development, and environmental toxicity, cancer and oxidative stress. Biochemical Pharmacology, 53, 249–54.
Neil, N., Malmfors, T., and Slovic, P.(1994). Intuitive toxicology: expert and lay judgments of chemical risks. Toxicologic Pathology, 22, 198–201.
NIEHS(1999). The murine local lymph node assay: a test method for assessing the allergic contact dermatitis potential of chemicals/compounds, pp. 14 006–7. Report 99-4494. ICCVAM, Washington, DC.
Oberle, M.W., Baker, E.L., and Magenheim, M.J.(1994). Healthy People 2000 and community health planning. Annual Review of Public Health, 15, 259–75.
Office of Technology Assessment(1992). Centralized risk assessment research. National Academy Press, Washington, DC.
Ohanian, E.V., et al.(1997). Risk characterization: a bridge in informed decision-making. Fundamental and Applied Toxicology, 39, 81–8.
Omenn, G.S.(1993). Commentary: the role of environmental epidemiology in public policy. Annals of Epidemiology, 3, 319–22.
Omenn, G.S.(1994). Can systematic, integrated risk assessment with full stakeholder participation enhance clean-up at DOE’s sites? The 1994 Herbert H.Parker Lecture. In 33rd Hanford Symposium on Health and the Environment. In-situ remediation: scientific basis for current and future technologies, Part I (ed. G.W. Gee and R. Wing), pp. xv–xxx. Battelle Press, Columbus, OH.
Omenn, G.S.(1996). Putting environmental risks in a public health context. Public Health Reports, 111, 514–16.
Omenn, G.S.(1998). Chemoprevention of lung cancer: the rise and demise of beta-carotene. Annual Review of Public Health, 19, 73–99.
Omenn, G.S.(2000). The genomic era: a crucial role for the public health sciences. Environmental Health Perspectives, 108, 160–1.
Omenn, G.S. and Lave, L.B.(1988). Scientific and cost-effectiveness criteria in selecting batteries of short-term tests. Mutation Research, 205, 41–9.
Omenn, G.S. and Motulsky, A.G.(1978). Ecogenetics: genetic variation in susceptibility to environmental agents. In Genetic issues in public health and medicine (ed. B.H. Cohen, A.M. Lilienfeld, and P.C. Huang), pp. 83–111. C.C. Thomas, Springfield, IL.
Omenn, G.S., Omiecinski, C.J., and Eaton, D.E.(1990). Ecogenetics of chemical carcinogens. In Biotechnology and human genetic predisposition to disease (ed. C. Cantor, C. Caskey, L. Hood, D. Kamely, and G. Omenn), pp. 81–93. Wiley–Liss, New York.
Omenn, G.S., Stuebbe, S., and Lave, L.(1995). Predictions of rodent carcinogenicity testing results: interpretation in light of the Lave–Omenn value-of-information model. Molecular Carcinogenesis, 14, 37–45.
Omenn, G.S., et al.(1996). Effects of a combination of beta-carotene and vitamin A on lung cancer and cardiovascular disease. New England Journal of Medicine, 334, 1150–5.
Organization for Economic Co-operation and Development(OECD) (1996). Environmental performance reviews. OECD, Paris.
Page, N.P., et al.(1997). Implementation of EPA revised cancer assessment guidelines: incorporation of mechanistic and pharmacokinetic data. Fundamental and Applied Toxicology, 37, 16–36.
Renwick, A.G. and Walker, R.(1993). An analysis of the risk of exceeding the acceptable or tolerable daily intake. Regulatory Toxicology and Pharmacology, 18, 463–80.
Rios, R., Poje, G.V., and Detels, R.(1993). Susceptibility to environmental pollutants among minorities. Toxicology and Industrial Health, 9, 797–820.
Risk Commission, Presidential/Congressional Commission on Risk Assessment and Risk Management(1997). A framework for environmental health risk management (Vol. 1). Risk assessment and risk management in regulatory decision-making (Vol. 2). US Government Printing Office, Washington, DC. http://www.riskworld.com
Sandman, P.M.(1993). Responding to community outrage: strategies for effective risk communication. American Industrial Hygiene Association, Fairfax, VA.
Shelby, M.D., Bishop, J.B., Mason, J.M., and Tindall, K.R.(1993). Fertility, reproduction and genetic disease: Studies on the mutagenic effects of environmental agents on mammalian germ cells. Environmental Health Perspectives, 100, 283–91.
Shuey, D.L., et al.(1994). Biologically based dose–response modeling in developmental toxicology: biochemical and cellular sequelae of 5-fluorouracil exposure in the developing rat. Toxicology and Applied Pharmacology, 126, 129–44.
Slovic, P., Fischhoff, B., and Lichtenstein, S.(1979). Rating the risks. Environment, 21, 1–20, 36–9.
Snellings, W.M., Weil, C.S., and Maronpot, R.R.(1984a). A two-year inhalation study of the carcinogenic potential of ethylene oxide in Fischer 344 rats. Toxicology and Applied Pharmacology, 75, 105–17.
Snellings, W.M., Weil, C.S., and Maronpot, R.R.(1984b). A subchronic inhalation study on the toxicologic potential of ethylene oxide in B6C3F1 mice. Toxicology and Applied Pharmacology, 76, 510–18.
Steenland, K., et al.(1991). Mortality among workers exposed to ethylene oxide. New England Journal of Medicine, 324, 1402–7.
Tengs, T.O., et al.(1995). Five-hundred life-saving interventions and their cost-effectiveness. Risk Analysis, 15, 369–90.
Tennant, R.W., Spalding, J.W., Stasiewicz, S., and Ashby, J.(1990). Prediction of the outcome of rodent carcinogenicity bioassays currently being conducted on 44 chemicals by the National Toxicology Program. Mutagenesis, 5, 3–14.
Tennant, R.W., Stasiewicz, S., Mennear, J., French, J.E., and Spalding, J.W.(1999). Genetically altered mouse models for identifying carcinogens. In The use of short-and medium-term tests for carcinogens and data on genetic effects in carcinogenic hazard evaluation (ed. D.B. McGregor, J.M. Rice, and S. Venitt), Vol. 146. IARC Scientific Publications, Lyon.
van Belle, G., Omenn, G.S., Faustman, E.M., Powers, C.W., Moore, J.A., and Goldstein, B.D.(1996). Dealing with a lethal legacy. Washington Public Health, 14, 16–21.
Van den Berg, M., et al.(1998). Toxic equivalency factors (TEFs) for PCBs, PCDDs, PCDFs for humans and wildlife. Environmental Health Perspectives, 106, 775–92.
Walker, V.E., Fennell, T.R., Upton, P.B., MacNeela, J.P., and Swenberg, J.A.(1993). Molecular dosimetry of DNA and hemoglobin adducts in mice and rats exposed to ethylene oxide. Environmental Health Perspectives, 99, 11–17.
Weller, E., et al.(1999). Dose-rate effects of ethylene oxide exposure on developmental toxicity. Toxicological Sciences, 50, 259–70.
Wernick, I.K.(ed.) (1995). Community risk profiles: a tool to improve environment and community health. Rockefeller University, New York.
Whittaker, S.G. and Faustman, E.M.(1994). In vitro assays for developmental toxicity. In In vitro toxicology (ed. S.C. Gad), pp. 97–122. Raven Press, New York.
WHO(1962). Principles governing consumer safety in relation to pesticide residues. WHO, Geneva.
World Bank(1993). World development report: investing in health. World Bank, Washington, DC.
Wu, K.Y., Ranasinghe, A., Upton, P.B., Walker, V.E., and Swenberg, J.A.(1999). Molecular dosimetry of endogenous and ethylene oxide-induced N7-(2-hydroxyethyl) guanine formation in tissues of rodents. Carcinogenesis, 9, 1787–92.