
Toxicity Testing Overview

Last Updated: June 15, 2009

Toxicology is defined as "the study of the adverse effects of chemical, physical or biological agents on living organisms and the ecosystem" and is based on the 16th-century principle that any substance can be toxic if consumed in sufficient quantity. Today, most developed countries have enacted laws and regulations to control the marketing of drugs, vaccines, food additives, pesticides, industrial chemicals, and other substances of potential toxicological concern. Such regulations often prescribe a specific regime of toxicity testing to generate information that will enable government regulators to determine whether the benefits of a particular substance outweigh its risks to human health and/or the environment. This process of regulatory risk assessment can be broken down into three main phases:

Hazard identification: Determination of a substance's intrinsic toxicity (e.g., eye irritation, birth defects, or cancer) through the use of toxicity tests. Test results are then analyzed to determine what, if any, adverse effects occur at different exposure levels (known as a "dose-response" assessment) and, where possible, to identify the highest exposure level at which no adverse effects are observed (known as the "no observed adverse effect level" or "NOAEL").

Exposure assessment: Determination of the extent of human and/or environmental exposure to a substance, including the identification of specific populations exposed, their composition and size, and the types, magnitudes, frequencies, and durations of exposure.

Risk characterization: A composite analysis of the hazard and exposure assessment results to arrive at a "real world" estimate of health and/or ecological risk.
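To make the dose-response and NOAEL concepts concrete, the following minimal Python sketch derives a NOAEL from a set of hypothetical dose-group results. The data, dose units, and the simple "highest dose with no effect below the LOAEL" rule are illustrative assumptions only, not part of any regulatory guideline.

```python
# Minimal sketch (hypothetical data): deriving a NOAEL from dose-response results.
# Each entry pairs a dose (mg/kg bw/day) with whether a statistically significant
# adverse effect was observed relative to the control group.
dose_response = [
    (0.0, False),    # control
    (10.0, False),
    (50.0, False),
    (250.0, True),   # lowest dose with an adverse effect (LOAEL)
    (1000.0, True),
]

def find_noael(results):
    """Return the highest tested dose below the lowest dose showing an adverse effect."""
    effect_doses = [dose for dose, effect in results if effect]
    if not effect_doses:
        return max(dose for dose, _ in results)  # no effect at any tested dose
    loael = min(effect_doses)
    clean = [dose for dose, effect in results if dose < loael and not effect]
    return max(clean) if clean else None  # None: effects even at the lowest dose

print("NOAEL (mg/kg bw/day):", find_noael(dose_response))  # -> 50.0
```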

AltTox.org focuses primarily on toxicity tests used in the hazard identification step of risk assessment. However, exposure information can impact hazard identification strategies, and this will be discussed in sections of AltTox dealing with integrated testing strategies and criteria for waiving testing requirements.

Toxicity Tests

A test method is a definitive procedure that produces a test result. A toxicity test, by extension, is designed to generate data concerning the adverse effects of a substance on human or animal health, or the environment. Many toxicity tests examine specific types of adverse effects, known as "endpoints," such as eye irritation or cancer. Other tests are more general in nature, ranging from single-exposure ("acute") studies to multiple-exposure ("repeat dose") studies, in which animals are administered daily doses of a test substance to calculate NOAELs and determine whether one or more organs or systems are adversely affected following exposures of one-month ("subacute"), three-month ("subchronic"), and/or two-year ("chronic") duration. Tests aimed at identifying hazards to humans are generally referred to as "safety" or "health effects" studies, whereas wildlife and environmental tests are known as "ecotoxicity" studies. Toxicity endpoints considered within the scope of AltTox include the following:

Acute Systemic Toxicity: Adverse effects occurring within a short time after administration of a single (usually extremely high) dose of a substance via one or more of the following exposure routes: oral ("gavage"), inhalation, skin ("dermal"); or injection into the bloodstream ("intravenous"), the abdomen ("intra-peritoneal"), or the muscles ("intra-muscular")

Carcinogenicity: Chemically induced cancer, whether through genotoxic or nongenotoxic (e.g., growth-promoting) mechanisms

Dermal Penetration: The extent and rate at which a chemical is able to enter the body via the skin (also known as "skin absorption" or "percutaneous absorption")

Ecotoxicity: Chemically induced adverse effects on organisms in the wild, including mammals, birds, fish, amphibians, crustaceans, and other aquatic invertebrates; common study designs include acute systemic, dietary, and reproductive (also known as "lifecycle") toxicity

Eye Irritation/Corrosion: Chemically induced eye damage that is reversible (irritation) or irreversible (corrosion)

Genotoxicity: Chemically induced genetic mutations and/or other alterations of the structure, information content, or segregation of genetic material (e.g., DNA strand breaks or a gain/loss in chromosome number in cells)

Immunotoxicity: Chemically induced adverse effects on the immune system (e.g., thymus, white blood cell number, and viability)

Neurotoxicity: Chemically induced adverse effects on the brain, spinal cord, and/or peripheral nervous system (e.g., deficits in learning or sensory ability)

Pharmacokinetics & Metabolism: The study of the absorption, distribution, metabolism, and elimination ("ADME") of chemicals in the body (also known as "toxicokinetics")

Repeated Dose/Organ Toxicity: General toxicological effects occurring as a result of repeated daily exposure to a substance (via oral, inhalation, and/or dermal routes) for a portion of the expected life span (i.e., subacute or subchronic exposure) or for the majority of the life span (i.e., chronic exposure)

Reproductive & Developmental Toxicity: Chemically induced adverse effects on sexual function, fertility, and/or normal offspring development (e.g., spontaneous abortion, premature delivery, and birth defects), generally determined through the breeding of one or more generations of offspring

Skin Irritation/Corrosion: Chemically induced skin damage that is reversible (irritation) or irreversible (corrosion)

Skin Sensitization: The induction of allergic contact dermatitis following exposure to a chemical agent

History of Animal Use in Toxicity Testing

Following the birth of the synthetic chemical industry in the late 1800s, the field of toxicology grew in response to the need to understand how tens of thousands of new substances might affect the health of workers and consumers involved in their production and use. The use of living animals to study the potential adverse effects of new drugs, food additives, pesticides, and other substances began in earnest during the 1920s, when British pharmacologist J.W. Trevan proposed the "lethal dose fifty percent" or "LD50" test to determine the single dose of a chemical that would kill half the animals exposed to it. The idea of a comparative toxicity index offered instant appeal to government regulators, so much so that variants of the LD50 (i.e., acute systemic toxicity studies) remain the most prevalent animal tests even to this day. Two decades after the introduction of the LD50 test, US Food and Drug Administration scientist John Draize developed standardized tests for eye and skin irritation using albino rabbits, which are now known simply as "Draize tests." A few years later, the US National Cancer Institute developed a standardized test for the identification of chemical carcinogens through the daily dosing of rats and mice for up to two years. Then in the early 1960s, as thousands of babies worldwide were born with debilitating birth defects caused by the drug thalidomide, a number of new and more complex animal breeding studies were developed (i.e., reproductive and developmental toxicity studies), in which large numbers of animals are dosed with a test agent before they mate, throughout their pregnancy, and after giving birth, to evaluate effects on reproductive performance and/or developing offspring. As chemical and pharmaceutical markets became more global during the 1980s, animal tests became entrenched in the "internationally harmonized test guidelines" of multinational bodies such as the Organisation for Economic Co-operation and Development (OECD) and the International Conference on Harmonization (ICH). Today, more than 50 such animal-based test guidelines exist, representing the default methods for testing everything from drugs and food additives to industrial chemicals and pesticides.

Why Deviate from the Status Quo?

Testing methods have not kept pace with scientific progress: Between the time that most commonly used toxicity tests were conceived and today, there has been a revolution in biology and biotechnology. Advances in cell culture and robotics have given birth to rapid "high throughput" in vitro test systems, while tissue engineering is providing ever more relevant in vitro tissues. Emerging technologies such as bioinformatics, genomics, proteomics, metabonomics, systems biology, and in silico (computer-based) systems offer still more potential alternatives to animal use. In June 2007, the US National Academy of Sciences called for a major paradigm shift in toxicology that would "rely less heavily on animal studies and instead focus on in vitro methods that evaluate chemicals' effects on biological processes using cells, cell lines, or cellular components, preferably of human origin. The new approach would generate more-relevant data to evaluate risks people face, expand the number of chemicals that could be scrutinized, and reduce the time, money, and animals involved in testing."

Questionable reliability and relevance of current testing methods: Animal testing is predicated on the assumption that adverse effects observed in one animal species could also occur in others. However, it is also recognized that different species may respond differently to the same substance (Ekwall, et al., 1998; Hurtt, et al., 2003; Gold & Slone, 1993). Whether interspecies differences are products of genetic, biochemical, or metabolic factors, or a combination of these, it is virtually impossible to know whether the results of testing on rodents, rabbits, or dogs will provide an accurate prediction of toxic effects in humans (i.e., questionable relevance) (Robinson, et al., 2001; Schardein, 2000; Cohen, 2002 & 2004; Haseman, et al., 1998). There is also much debate concerning the relevance of extrapolating from high doses administered to animals to realistic human or environmental exposure levels (Muller, 1948; ACSH, 1997). In addition, despite efforts to standardize procedures, the results of some animal tests can be highly variable and difficult to reproduce (i.e., poor reliability) (Weil & Scala, 1971; Bremer, et al., 2007; Gottmann, et al., 2001).

Animal welfare considerations: Some conventional toxicity test methods consume hundreds or thousands of animals per substance examined (Doe, et al., 2006; Cooper, et al., 2006). In addition, some countries' statistics on animal use indicate that toxicity testing accounts for up to 70% of the most painful procedures to which animals are subjected across all experimental purposes (e.g., the continued use of death or moribundity (near death) as the experimental endpoint in acute systemic toxicity studies).

Time and cost considerations: Some conventional tests take months or years to conduct and analyze (e.g., 4-5 years in the case of carcinogenicity studies), at a cost of hundreds of thousands, and sometimes millions, of dollars per substance examined (e.g., US $2-4 million per two-species carcinogenicity study) (USEPA, 2004).

Legal obligations: As public opposition to animal testing has grown, some parts of the world have broadly prohibited testing on animals where alternative methods are "reasonably and practicably available" (e.g., EU Directive 86/609/EEC and US state laws in California [pdf], New Jersey [pdf], and New York [pdf]). Animal testing bans may also be sector-specific, as in the case of the 7th Amendment to the EU Cosmetics Directive, which since 2004 has banned the marketing of any formulated cosmetic products that have been animal tested and, as of March 2009, additionally prohibits (i) animal testing of cosmetic ingredients within the EU and (ii) with a few exceptions, the marketing of cosmetic products whose ingredients have been tested on animals on or after that date.

Alternative Methods

The term "alternative" in the context of toxicity testing is used to describe any change from present procedures that will result in the replacement of animals, a reduction in the numbers used, or a refinement of techniques to alleviate or minimize potential pain, distress, and/or suffering. This 3Rs concept of alternatives is rooted in the 1959 publication The Principles of Humane Experimental Technique. During the subsequent half-century, tens of millions of dollars have been invested by corporations, governments, and other stakeholders with the goal of advancing the 3Rs in research and testing. Examples of potential replacement methods include the following:

In vitro cell and tissue cultures, such as: freshly harvested "primary" cells, tissues, or organs (e.g., liver slices for metabolism studies; corneas from the eyes of slaughtered cattle or chickens for eye irritation studies); cell lines (e.g., the mouse 3T3 cell line for evaluating the potential for sunlight-induced phototoxicity); stem cells (e.g., stem cells for embryotoxicity testing); and complex reconstructed tissue models (e.g., the EpiDerm™ human skin corrosion test). These and other cell- and tissue-based methods have already achieved international acceptance as full or partial replacements for their animal-based counterparts.

In silico systems, including computer structure-activity relationship (SAR) or quantitative SAR [(Q)SAR] models, which predict the biological/toxicological properties of a substance based on its chemical structure and knowledge of similar structures (e.g., MultiCASE and TOPKAT), and expert system programs that predict toxicological or metabolic activity (e.g., DEREK and METEOR).
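As a rough illustration of the (Q)SAR idea, the sketch below trains a toy classifier on computed molecular descriptors. It uses the open-source RDKit and scikit-learn libraries rather than any of the commercial tools named above, and the SMILES strings and toxicity labels are placeholder assumptions, not real data.

```python
# Illustrative sketch only: a toy (Q)SAR workflow using RDKit descriptors and a
# scikit-learn classifier. This is NOT how MultiCASE, TOPKAT, or DEREK work; the
# SMILES strings and toxicity labels below are placeholders, not real data.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def descriptors(smiles):
    """Compute a small set of physicochemical descriptors for one structure."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]

# Hypothetical training set: (SMILES, toxic?) pairs.
training = [("CCO", 0), ("c1ccccc1", 1), ("CC(=O)O", 0), ("c1ccc2ccccc2c1", 1)]
X = [descriptors(s) for s, _ in training]
y = [label for _, label in training]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Predict the class of an untested structure from its computed descriptors.
print(model.predict([descriptors("CCCCO")]))
```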

Other strategies for reducing toxicity testing requirements include:


Human epidemiology and volunteer studies (most often used to confirm the adverse effects of products, e.g., human patch tests for skin irritation and sensitization)

Integrated testing strategies

Waiving of a requirement to conduct new testing because: 1) existing toxicological information on a substance is recognized as sufficient for risk assessment purposes (e.g., the 30 member countries of the OECD have agreed to recognize one another's testing results); 2) human exposure levels are below what is considered a significant risk to human health, and therefore toxicological testing is not needed [Threshold of Toxicological Concern (TTC)]; 3) information on a structurally similar substance can be used to fill a knowledge gap (a process known as "read-across" or "bridging"); or 4) testing would be difficult, impossible, or meaningless given the nature of the substance in question (e.g., conducting aquatic toxicity studies on a substance that does not dissolve in water)
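As an illustration of how a TTC-based waiver decision works in principle, the following sketch compares an estimated exposure against a threshold for the substance's structural class. The threshold values shown are the commonly cited Cramer-class figures and are included only for illustration; an actual waiver decision rests on the applicable regulatory guidance, not on a function like this.

```python
# Minimal sketch of the Threshold of Toxicological Concern (TTC) decision logic
# described above. Threshold values (micrograms/person/day) are the commonly
# cited Cramer-class figures, shown here for illustration only.
TTC_THRESHOLDS_UG_PER_DAY = {
    "Cramer I": 1800.0,   # low structural concern
    "Cramer II": 540.0,
    "Cramer III": 90.0,   # high structural concern
}

def testing_may_be_waived(estimated_exposure_ug_per_day, cramer_class):
    """Return True if the estimated exposure falls below the TTC for the class."""
    return estimated_exposure_ug_per_day < TTC_THRESHOLDS_UG_PER_DAY[cramer_class]

print(testing_may_be_waived(25.0, "Cramer III"))   # True: below 90 ug/day
print(testing_may_be_waived(250.0, "Cramer III"))  # False: testing still needed
```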

Other AltTox pages with information on alternative methods include:


Each of the 12 Toxicity Endpoints & Tests sections

Existing Alternatives

Table of Validated/Accepted Alternative Methods

Scientific Validation & Regulatory Acceptance

In general, government regulators will accept alternative toxicity testing methods only after they have been scientifically "validated," meaning they have been determined to be reliable (reproducible) and relevant for their intended purpose. Criteria and processes for test method validation have been developed and implemented in Europe (under the auspices of the European Centre for the Validation of Alternative Methods, or ECVAM), the US (through the Interagency Coordinating Committee on the Validation of Alternative Methods, or ICCVAM), Japan (through the Japanese Centre for the Validation of Alternative Methods, or JaCVAM), and internationally through the OECD. Key steps include the following:

Research & development, which is generally undertaken and/or funded by regulated industry or government

Prevalidation, an approximately two-year process that aims to establish the mechanistic basis of a test; standardize and optimize the test protocol; evaluate within-laboratory variability using a training set of coded chemicals; and define a "prediction model" or "data interpretation procedure," which articulates the process by which test results are used to predict toxicological endpoints in vivo

Validation, an approximately one-year process that aims to evaluate a test's transferability to a second laboratory, together with its between-laboratory variability and reproducibility (involving up to four outside laboratories)

Peer review: if a test performs well during the preceding steps, a peer review is undertaken to independently evaluate the results of the validation study. This process requires approximately one year, depending on whether an existing peer review body (e.g., the ECVAM Scientific Advisory Committee, or ESAC) is used or a new ad hoc expert panel is convened

Regulatory acceptance, for which processes differ region by region. In Europe, ESAC endorsement usually leads to EU-wide acceptance under applicable regulations, given the longstanding legal requirement under Directive 86/609/EEC that non-animal alternatives be used preferentially. In the US, ICCVAM formulates recommendations on the basis of peer review findings and in consultation with the public, and regulatory agencies are required by law to respond to ICCVAM's recommendations within six months. Acceptance can take two years or more at the national/regional level, and longer in the case of international consensus-driven bodies such as the OECD, ICH, and VICH
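To illustrate what a "prediction model" or "data interpretation procedure" looks like in practice, here is a minimal sketch that converts an in vitro measurement into a toxicity class. The endpoint name and cut-off values are hypothetical and chosen only to show the structure of such a rule; real prediction models are fixed during (pre)validation for each specific assay.

```python
# Minimal sketch of a "prediction model" / data interpretation procedure: a fixed
# rule that converts an in vitro measurement into a predicted in vivo class.
# The endpoint and cut-off values below are hypothetical, for illustration only.
def classify_skin_corrosivity(mean_viability_percent):
    """Map mean tissue viability from a reconstructed-skin assay to a class."""
    if mean_viability_percent < 15.0:      # hypothetical cut-off
        return "corrosive"
    elif mean_viability_percent < 50.0:    # hypothetical cut-off
        return "irritant"
    return "non-irritant / non-corrosive"

for viability in (8.0, 35.0, 80.0):
    print(viability, "->", classify_skin_corrosivity(viability))
```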

Although the process above was initially designed with only alternative (non-animal) methods in mind, it has since been recognized that proper validation should be a requisite for all new and revised test methods.

Neurotoxicity
Last Updated: July 20, 2009

"Neurotoxicology is the study of the adverse effects of chemical, biological, and certain physical agents on the nervous system and/or behavior during development and in maturity" (Harry, et al., 1998). Many common substances are neurotoxic, including lead, mercury, some pesticides, and ethanol. Neurotoxicity testing is used to identify potential neurotoxic substances. Neurotoxicity is a major toxicity endpoint that must be evaluated for many regulatory applications. Sometimes neurotoxicity testing is considered as a component of target organ toxicity; the central nervous system (CNS) being one of the major target organ systems. In utero exposure to chemicals and
drugs can also exert an adverse effect on the development of the nervous system, which is called developmental neurotoxicity (DNT). Like other target organ toxicities, neurotoxicity can result from different types of exposure to a substance; the major routes of exposure are oral, dermal, and inhalation. Neurotoxicity may be observed after a single (acute) dose or after repeated (chronic) dosing.

The Animal Test(s)

Neurotoxicity testing for regulatory purposes is based on in vivo animal test methods. Four Organisation for Economic Co-operation and Development (OECD) Test Guidelines (TGs) describe in vivo neurotoxicity studies. Delayed Neurotoxicity of Organophosphorus Substances Following Acute Exposure, TG 418, involves a single oral dose to hens, which are observed for 21 days; primary observations include the hens' behavior, weight, and gross and microscopic pathology. Delayed Neurotoxicity of Organophosphorus Substances: 28-day Repeated Dose Study, TG 419, involves daily oral dosing of hens with an organophosphorus pesticide for 28 days, followed by biochemical and histopathological assessments. Neurotoxicity Study in Rodents, TG 424, involves daily oral dosing of rats for acute, subchronic, or chronic assessments (28 days, 90 days, or one year or longer); primary observations include behavioral assessments and evaluation of nervous system histopathology.

The OECD adopted a new Test Guideline in 2007 for DNT testing. The Developmental Neurotoxicity Study, TG 426, evaluates in utero and early postnatal effects by daily dosing of at least 60 pregnant rats from implantation through lactation. Offspring are evaluated for neurologic and behavioral abnormalities, and brain weights and neuropathology are assessed at different times through adulthood. An OECD expert group conducted a retrospective performance assessment of DNT testing in support of OECD TG 426 and concluded that TG 426 "represents the best available science for assessing the potential for DNT in human health risk assessment, and data generated with this protocol are relevant and reliable for the assessment of these end points" (Makris, et al., 2009).

The type of exposure (single or repeated dose) and the outcome (lethal or nonlethal; immediate or delayed effects) will result in different classifications for substances under the Globally Harmonized System (GHS). GHS classifications are determined "on the basis of the weight of all evidence available," including human exposures and animal studies. Neurotoxic effects sufficient for classification include significant functional changes in the central or peripheral nervous system, signs of CNS depression, effects on the senses (sight, hearing, smell), and damage to the brain observed at necropsy or microscopically. Human data are generally not available, but when they are, they take precedence over animal test results. The GHS may permit the use of (Quantitative) Structure Activity Relationships [(Q)SAR] and expert judgment to fill data gaps for structural analogs.

An expert working group of the International Life Sciences Institute (ILSI) Risk Science Institute published a series of four reports in 2008 "to assess the lessons learned from the implementation of standardized tests for developmental neurotoxicity in experimental animals" (Fitzpatrick, et al., 2008). These reports covered the following topics: the need for positive control studies (Crofton, et al., 2008); understanding variability in study data (Raffaele, et al., 2008); statistical issues and appropriate techniques (Holson, et al., 2008); and interpretation of DNT effects (Tyl, et al., 2008).

Problems cited with the current regulatory testing approach for neurotoxicity and DNT include high cost, long duration, low throughput, and tests that are "not always sensitive enough to predict human neurotoxicity" (Bal-Price, et al., 2008a). Experts also note that the animal tests in the current test guidelines "do not always generate the mechanistic data required for a scientifically based human risk assessment" (ECVAM, 2002).

Non-animal Alternative Methods

A 1998 review of in vitro methods developed for neurotoxicity testing explains the desirability of using a battery of in vitro tests that would capture the complexity of the nervous system and the processes involved in neurotoxicity (Harry, et al., 1998). Because these cellular and mechanistic processes had not been fully identified, the authors noted the difficulty of designing such an in vitro test battery. They described a more appropriate use of in vitro models as elucidating toxicity mechanisms and identifying the target cells of neurotoxicants. They also pointed out that cellular models usually cannot distinguish between pharmacological actions and toxicity responses, and that this level of discrimination is required for risk assessment.

A European Centre for the Validation of Alternative Methods (ECVAM) Working Group reviewed some of the many individual assays and test batteries that have been developed for neurotoxicity testing. The best approach was described as the development of mechanistically relevant alternative methods "that encompass the most important neurotoxic endpoints," to be used in test batteries as part of a tiered testing strategy (ECVAM, 2002). The first testing tier would distinguish neurotoxic from cytotoxic chemicals, and the second tier would consist of mechanism-specific tests. It was proposed that a minimum battery might consist of methods for assessing blood-brain barrier function, basal cytotoxicity, and energy metabolism. A number of other test battery approaches were summarized in the same article (ECVAM, 2002).

A review article by Harry and Tiffany-Castiglioni (2005) covered in vitro systems for neurotoxicity testing ranging from "single cell types to systems that preserve some aspects of tissue structure and function." These authors reviewed the current state of in vitro methods and their potential limitations as a reference for future studies. Potential limitations associated with existing in vitro methods for neurotoxicity testing were identified as: "relevant drug concentrations, factors that limit or alter drug accessibility to the nervous system, and the need for assays to reflect biologically meaningful end points" (Harry & Tiffany-Castiglioni, 2005). Recent symposia and workshops have also reviewed the state of alternative testing approaches for developmental neurotoxicity (Bal-Price, et al., 2008a; 2008b; CAAT, 2009a; 2009b; Coecke, et al., 2007; Lein, et al., 2007).

In vitro models for neurotoxicology studies and testing include the following types (Breier, et al., 2009; Coecke, et al., 2007; ECVAM, 2002; Harry & Tiffany-Castiglioni, 2005; Prieto, et al., 2005):

Primary cells: neurons and glia (microglia, oligodendrocytes, astrocytes) from different brain regions

Cell lines: neuroblastoma, astrocytoma, glioma, pheochromocytoma

Brain slices: hippocampus

Reaggregating neuronal and glial cell cultures

Organotypic (3D) cultures, usually co-cultures (more than one type of cell)

Neural stem (progenitor) cells: primary cell cultures and cell lines

The primary cell cultures and brain slices require the use of animals for obtaining the cells and tissues. Continuous cell lines, originally derived from human or animal tissues, typically can be propagated, frozen, and thawed, and therefore maintained for research and testing purposes for many years. As with any cell-based test system, determining the type of cell(s) to be used is critical. For example, a commonly used neuroblastoma cell line (Neuro-2a) was less sensitive to some neurotoxicants than cultured primary neurons (LePage, et al., 2005); the authors suggest caution in interpreting neurotoxicity data obtained from tests using this transformed cell line. Tamm, et al. (2006) reported that neural stem cells "are more sensitive than differentiated neurones or glia" to methylmercury exposure. Both neural progenitor cell lines and primary cultures have been utilized in published reports. Costa (2008) reported that primary cells are not always more sensitive than cell lines, and that differences may be due to different culture conditions, such as the presence of serum. Any cell system, whether a primary culture, a transformed cell line, or neuroprogenitor cells, needs to be characterized and preferably compared to primary cells and/or the in vivo response for the specific toxicity endpoint being assessed.

In addition to defining the appropriate cellular models, predictive in vitro toxicity assays require that neurotoxicity-specific assay endpoints be determined and used in appropriate models, and in a combination (test battery/test strategy) that provides an overall response predictive of the in vivo human outcome. Commonly used cellular endpoints that are not neural-specific include cytotoxicity, proliferation, migration, differentiation, and apoptosis. Endpoints that have been described as relevant to cell-based neurotoxicity assays include electrical activity, neurotransmitter release, and neurite outgrowth (Bal-Price, et al., 2008a; Radio & Mundy, 2008). The roles of the blood-brain barrier (BBB), metabolism, and toxicokinetics also need to be included in assessing the potential neurotoxicity of a substance. Recent workshops have discussed how the results from in vitro assays might be coupled with (Q)SAR studies, computational modeling, and other techniques to form an integrated testing strategy for the prediction of neurotoxicity (Bal-Price, et al., 2008a; 2008b).
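As a rough sketch of the tiered, battery-based logic described above (a first tier separating selective neurotoxicity from general cytotoxicity, and a second tier applying mechanism-specific endpoints), the following Python example combines hypothetical assay results. All assay names, cut-offs, and the simple majority rule are illustrative assumptions, not a validated testing strategy.

```python
# Illustrative sketch (hypothetical assay names and cut-offs) of a tiered,
# battery-based strategy: tier 1 separates general cytotoxicity from selective
# neurotoxicity, tier 2 combines mechanism-specific endpoints.
def tier1_flags_neurotoxicity(neuronal_ic50_uM, general_cytotox_ic50_uM):
    """Flag a substance if neuronal cells respond at clearly lower concentrations."""
    return neuronal_ic50_uM < 0.1 * general_cytotox_ic50_uM  # hypothetical ratio

def tier2_battery(results):
    """Combine mechanism-specific endpoints (all hypothetical) by simple majority."""
    endpoints = ["electrical_activity_change", "neurite_outgrowth_inhibition",
                 "neurotransmitter_release_change"]
    positives = sum(1 for e in endpoints if results.get(e, False))
    return "potential neurotoxicant" if positives >= 2 else "no alert in this battery"

if tier1_flags_neurotoxicity(neuronal_ic50_uM=2.0, general_cytotox_ic50_uM=200.0):
    print(tier2_battery({"electrical_activity_change": True,
                         "neurite_outgrowth_inhibition": True}))
```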

Validation and Acceptance of Non-animal Alternative Methods

ICCVAM, ECVAM, and the OECD have not reviewed or validated any non-animal methods or alternative testing strategies for assessing neurotoxicity. Regulatory authorities have not accepted any non-animal methods or alternative testing strategies for neurotoxicity testing.
