
MALARIA EPIDEMICS: DETECTION AND CONTROL, FORECASTING AND PREVENTION
J.A. Nájera, R.L. Kouznetsov and C. Delacollette

10. Risk Detection and Forecasting

Ideally, epidemics should be forecast and prevented. If this is not possible, they should be detected early enough in their evolution to limit, as far as possible, their impact on mortality and incapacity. The concept of risk detection must be expressed in terms of the real time available for implementing an appropriate response after the recognition of the imminent risk. It is particularly important in the case of an incipient epidemic to be able to estimate the potential slope of the forthcoming epidemic wave, as well as the area of potential spread. It is therefore necessary to establish a monitoring system capable of detecting the earliest indicators of the triggering of the chain of determinant events, as well as an emergency-preparedness system to respond to it. The effectiveness of preventive action will ultimately depend on the degree of preparedness of the health services to mobilize the necessary resources in the available time.

For an epidemic to occur, the conjunction of several factors is necessary, one or more of which are absent in the interepidemic, or 'normal', years. The speed with which an epidemic is triggered depends on whether the missing factors come together quickly or develop in stages. Knowledge of the local epidemiology, ecology and the biology of the vectors involved is essential if accurate estimates are to be made. Interest in epidemic prevention disappeared after the early 1950s, as it was assumed that the risk of malaria epidemics would no longer exist following the establishment of malaria eradication campaigns. The search for actual forecasting systems used, or even proposed, must therefore rely heavily on historical records and a few recent experiences.

10.1. Monitoring of morbidity and mortality

This is obviously the most direct method of detecting the actual occurrence of an epidemic although, even with rapid methods of communication, it will seldom provide sufficient time for effective preventive action.
Early detection may allow some containment action and, in geographically spreading epidemics such as invasions by new vectors, may give time for preventive measures in neighbouring areas.

Morbidity monitoring has been the classic method of epidemiological surveillance, in which one of the standard methods of detecting deviations from normality has been the plotting of endemoepidemic indices. Normality is defined on the basis of past experience over a period of at least 5-10 years, during which there should not have been any changes in the system of data collection or case definition. A graph showing the first and third quartiles around the median (or the mean plus and minus one or two standard deviations) of all monthly or weekly data for the whole period will define a normal or 'endemic' channel, against which any departure from normality is easily seen as the data for the current year are plotted. The use of the median and quartiles does not require any selection of past data. Similarly, if all past data are used for the calculation of the mean, one standard deviation should be used to define the normal channel; when abnormal data are eliminated, however, it is preferable to use two standard deviations around the mean. This method was used in northern Thailand by selecting the years of 'acceptable' or normal transmission during the previous eight years and defining the normal channel as the mean plus and minus two standard deviations (Cullen et al., 1984).

The monitoring of malaria morbidity has nevertheless been hampered by the slow, complicated procedures used in most of the established antimalarial services, which generally insist on processing only data on microscopically confirmed cases, even if most laboratories operate with large backlogs of slides (Prasad et al., 1992).
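The endemic-channel construction described above can be sketched in a few lines. This is a minimal illustration, not an operational surveillance tool: the function names are invented here, and the input is assumed to be one list of weekly (or monthly) case counts per reference year.

```python
import statistics

def endemic_channel(history, method="quartiles"):
    """Compute an endemic ('normal') channel from past surveillance data.

    history: list of yearly series, one list of weekly (or monthly)
             case counts per reference year (at least 5-10 years with
             no change in data collection or case definition).
    Returns (lower, centre, upper), one value per week.
    """
    weeks = list(zip(*history))  # regroup counts by calendar week
    lower, centre, upper = [], [], []
    for w in weeks:
        if method == "quartiles":
            # Median with first/third quartiles: requires no screening
            # of past data for abnormal years.
            q1, med, q3 = statistics.quantiles(w, n=4, method="inclusive")
            lower.append(q1); centre.append(med); upper.append(q3)
        else:
            # Mean +/- 2 SD, appropriate once abnormal (epidemic)
            # years have been removed from the reference period.
            m, sd = statistics.fmean(w), statistics.stdev(w)
            lower.append(m - 2 * sd); centre.append(m); upper.append(m + 2 * sd)
    return lower, centre, upper

def departs_from_normality(cases, upper_bound):
    """True when the current observation leaves the endemic channel."""
    return cases > upper_bound
```

Plotting the current year's data against these bounds reproduces the graphical method: crossing the upper bound is the signal for epidemiological investigation.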
The sensitivity of the system could be improved by requiring peripheral services to report, via emergency channels, abnormal increases in fever cases or the risk of running out of antimalarial drugs. If there is some form of emergency preparedness, it may be possible to strengthen the case-management capability, proceed to a rapid confirmation and, if necessary, start emergency transmission control. Unfortunately, during most epidemics, once the increase in morbidity has been recognized, there may be too little time to mobilize the required resources for effective vector control before the transmission season reaches its peak. Retrospective studies of mortality statistics, generally more complete and reliable than those of morbidity, have been found very useful in delimiting epidemic-prone areas, determining past periodicity and initiating the search for possible determinants.

In his classic study of the great epidemic of 1908 in the Punjab, which affected a population of some 30 million in an area of 500 000 square miles, Christophers (1949) showed that similar epidemics had occurred in the same area at intervals of about eight years, and devised a method of mapping the spread of these epidemics by calculating an epidemic figure. This was obtained by dividing the deaths recorded during the month of greatest epidemic prevalence by the normal monthly mortality. These figures, calculated by registrar unit (thana), were then mapped and lines of equal mortality were drawn, which showed the extent of each epidemic. The Malaria Commission of the League of Nations defined an index of epidemic potential as the coefficient of variation of mortality established over as long a period as possible. Similar indicators of epidemic potential can be based on morbidity data for those areas where the sources of such data show a certain consistency over the years.

10.2. The spleen rate as an indicator of herd immunity

The spleen rate, and particularly the average enlarged spleen, has long been recognized as a good indicator of the immunity of the population, since it has been shown that epidemics did not occur in areas where the spleen rate was consistently high, while a declining spleen rate was an indication of increasing epidemic risk. The Kampala conference of 1950 (WHO, 1951) based its definition of endemicity on the spleen rate. It included as part of the definition of hyper- and holoendemic malaria the condition that the spleen rates in children aged 2-9 years should at all times be greater than 75% for holoendemic and between 50% and 75% for hyperendemic malaria. Repeated surveys were therefore necessary to determine whether these conditions were satisfied.
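The Kampala endemicity bands can be expressed as a small lookup. A sketch only: the hyper- and holoendemic thresholds (50-75% and >75%) are those quoted above; the hypo- and mesoendemic cut-offs (10% and 50%) are the standard Kampala values, added here for completeness and flagged as such.

```python
def endemicity_class(spleen_rate_pct):
    """Classify endemicity from the spleen rate (percent) in children
    aged 2-9 years, after the Kampala 1950 scheme.

    The 50-75% and >75% bands are quoted in the text; the 10% and 50%
    cut-offs are the standard Kampala values (assumption, for
    completeness).  A single survey only suggests a class: the full
    definition requires the condition to hold 'at all times', hence
    repeated surveys.
    """
    if spleen_rate_pct > 75:
        return "holoendemic"
    if spleen_rate_pct > 50:
        return "hyperendemic"
    if spleen_rate_pct > 10:
        return "mesoendemic"
    return "hypoendemic"
```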
The spleen rate combined with the value of the average enlarged spleen (AES) should make it possible to distinguish clearly between an endemic situation and a current or recent epidemic, even after a single survey: areas of high endemicity have high values for both indices (spleen rates above 50% and AES above 2), while a high spleen rate with a low AES is an indication of a recent epidemic. Even if spleen enlargement has lost much of its value as an epidemiological indicator in areas where antimalarial drugs are very widely used, there are still many malarious areas not so well served, where a spleen survey in school children may provide epidemiological information most quickly. It could in particular serve to identify scattered endemic villages in hypoendemic areas which maintain the parasite reservoir during the interepidemic periods and from which explosive epidemics may spread.

10.3. Monitoring entomological variables

The monitoring of entomological indicators such as increased vector density or longevity should theoretically provide some time to introduce measures to reduce transmission, e.g. house spraying, reimpregnation of bednets or, if affordable, space spraying. Nevertheless, the difficulties and cost of obtaining representative, relevant entomological information often make this impossible. Moreover, even in the best circumstances, detection can coincide with active transmission, leaving very little time for effective action to be taken against it. The entomological inoculation rate has been proposed as a comprehensive indicator of epidemic risk on which to base forecasting (Onori & Grab, 1980). It is defined as the mean daily number of bites inflicted on an individual by mosquitos infected with sporozoites, and therefore requires the determination of the man-biting and sporozoite rates; the latter, in particular, may be practically impossible to determine in many epidemic-prone areas.
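The entomological inoculation rate is simply the product of the man-biting rate and the sporozoite rate. A minimal sketch of its derivation from routine catch and dissection data (function names are illustrative, not from any entomological service):

```python
def man_biting_rate(mosquitos_caught, catchers, nights):
    """Man-biting rate: bites per person per night, estimated from
    all-night human landing catches."""
    return mosquitos_caught / (catchers * nights)

def sporozoite_rate(positive_glands, dissected):
    """Fraction of dissected female vectors found with
    sporozoite-positive salivary glands."""
    return positive_glands / dissected

def inoculation_rate(mbr, sr):
    """Entomological inoculation rate: mean daily number of
    infective bites per person (Onori & Grab, 1980)."""
    return mbr * sr
```

The arithmetic also exposes the sampling problem described in the text: at a sporozoite rate of 0.5%, on average 200 dissections are needed to see a single positive gland, which is why the indicator is often impracticable in epidemic-prone areas.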
Epidemics may occur with very low sporozoite rates, often undetectable by the common practices of entomological services, particularly in areas where the vectors are only partially anthropophilic, such as most areas of the Americas and the Indian subcontinent.

10.4. Monitoring meteorological variables

Long before the discovery of the mosquito transmission of malaria, experience showed that some abnormal meteorological events, such as extensive floods, could be followed by epidemics of intermittent fevers. Associations of this type vary considerably from area to area, depending on the abnormal event which triggers the epidemic process. Although the association is not fully deterministic, the detection of the meteorological abnormality is usually relatively easy and will give a good indication of the increased epidemic risk. Meteorological monitoring should aim at the detection of:

- early, prolonged rains in arid areas such as north-west India and Pakistan, where the main vector, A. culicifacies, normally breeds in pools on stream margins or in drying streams, small irrigation channels and areas receiving canal seepage, thereby maintaining endemic malaria in the adjacent villages. In years of prolonged abundant rains, however, it breeds profusely in all manner of temporary rain pools, producing high densities everywhere. In addition, the prolonged rainy season maintains favourable humidity conditions, thus ensuring the expectation of infective life of the vector necessary to produce an epidemic. The detection of early rains should initiate the preparatory phase of control, and further confirmation of the rain pattern will still allow more than a month of real time to mobilize vector control;

- periods of unusual drought in areas such as the medium-altitude valleys of the 'intermediate zone' (located between the dry and wet zones) of south-western Sri Lanka, where the vector will not find many favourable breeding places in normal wet years in the well-cultivated land, but will breed profusely in the numerous pools formed in the river beds when the flow decreases markedly following the failure of the south-western monsoon. Such failure in Sri Lanka would give about two months for preparation and preventive action;

- periods of flooding or increased water-logging caused by an excessive rise in the water level of desert rivers. In such areas, e.g. the Nile, Indus and Senegal river valleys, increased vector breeding occurs in the pools resulting from the withdrawal of the flood waters, so that between the time of the flood and that of maximum vector breeding, another period of two to four weeks is added on to the two periods described above. In addition, in most desert rivers, the high levels originate in remote upstream areas, and it may thus be possible to sound the alarm several weeks earlier;

- periods of temperature and humidity favourable to vector survival, as would be required for epidemics of oasis malaria. These provide a short time for action to be taken, as they correspond to the periods when longevity is increased and transmission is therefore occurring. Depending on the extent of preparedness, they may allow some effective transmission control or at least the emergency supply of drugs.
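All four triggers above amount to comparing a current reading with the local historical distribution for the same season. A hedged sketch of such an anomaly alarm for monthly rainfall (the two-standard-deviation threshold and the function name are illustrative assumptions, not operational values):

```python
import statistics

def rainfall_alarm(history_mm, current_mm, z_threshold=2.0):
    """Flag an abnormal monthly rainfall total against past years.

    history_mm: totals for the same calendar month in past 'normal'
    years.  Both tails matter: abnormally prolonged rains (Punjab)
    and monsoon failure (Sri Lanka) can each signal epidemic risk,
    depending on the local vector ecology.
    """
    mean = statistics.fmean(history_mm)
    sd = statistics.stdev(history_mm)
    z = (current_mm - mean) / sd
    if z > z_threshold:
        return "wet"
    if z < -z_threshold:
        return "dry"
    return "normal"
```

Which tail triggers the preparatory phase of control, and how much real time it buys, is determined by the local epidemiology, as in the examples above.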

Epidemics due to abnormal meteorological conditions are rare events which occur with a certain periodicity. Epidemic forecasting should therefore be a progressive process, in which the time elapsed since the previous episode of increased risk should trigger the first stage of emergency preparedness. The first alarm signal from the monitoring system should then lead to the strengthening of diagnosis and treatment facilities, the building up of drug stocks, and the logistic measures for the mobilization of vector control when the imminent risk is finally confirmed.

As many meteorological abnormalities appear to be linked to the El Niño-Southern Oscillation (Bouma and van der Kaay, 1996), it is important to determine the correlations between the local epidemic risk and the El Niño years or their opposite, the La Niña years. The global climatic disturbances associated with these phenomena may result in abnormal spells of wet or dry weather, which may in turn result in changes in malaria transmission potential. For example, the well-documented malaria history of the Punjab shows that, in that area, the El Niño years are particularly warm and dry and malaria epidemics occur most frequently in the year following an El Niño phenomenon. In contrast, the malaria epidemics of Central-South Sri Lanka seem to be associated with La Niña years, when the south-west monsoon fails. The analysis of these correlations could define the predictive value of these global meteorological events for a given area, providing an early preparatory warning and alerting the antimalaria service to intensify the monitoring and rapid reporting of local indicators of epidemic risk.

10.5. Monitoring socioeconomic variables

The association of irrigation and the opening up of land for agriculture with malaria epidemics has been recognized since ancient times.
In recent years, major epidemics have also occurred among gold and gem miners in South America and south-east Asia, while focal epidemics have accompanied the agricultural settlements which have followed the construction of new roads. These have spread among native populations, which have often been decimated by P. falciparum malaria. It may sometimes be possible to identify an economic activity which may rapidly become attractive to large numbers of people and which could be monitored as a means of detecting new high-risk areas. For example: in Brazilian Amazonia (graph 16), the invasion of the forest by gold miners has resulted not only in the creation of foci of high apparent endemicity among them, but also in catastrophic epidemics among the native tribes, e.g. the Yanomami (Veeken, 1993). Both have followed the variation in the attractiveness to miners of certain areas. Thus, the States of Pará and Rondônia were by far the most malarious from the 1970s onwards until the early 1990s, when there was a massive rush of miners to northern Mato Grosso, which then experienced epidemics as serious as those in the other two states. These epidemics could have been

foreseen if the health services had been more aware of new economic activities likely to attract large numbers of people;

Graph 16: Brazil, malaria cases reported

In Venezuela, in 1969, diamonds were discovered some 150 km south of Caicara (Cereno District, Bolívar State) in an area of virtually uncharted virgin forest with an annual rainfall of 2 200 mm. This news brought to the area large numbers of people from Brazil, Colombia, Guyana and elsewhere. By the end of 1969, the number had reached 1 000 and by March 1970, 5 000, although the population remained around that figure for the next few years. In 1970, a malaria epidemic started, mainly of P. vivax followed by P. falciparum, peaking in 1972 with more than 3 000 cases (graph 17). The Malaria Service brought in supplies of chloroquine at the beginning of 1970, but an all-out campaign which included mass drug administration and DDT spraying and fogging was not initiated until April 1972. From June 1972, intramuscular injections of the experimental drug cycloguanil pamoate (Camolar) were given to everyone entering the area for a period of time (Bruce-Chwatt et al., 1974). The epidemic risk ended with the end of the diamond rush;

Graph 17: Malaria-positive cases in the Cereno district (1970-1974)

On the Pacific coast of Central America, malaria control had been hampered since the 1950s by the high vulnerability of cotton workers, owing to the primitive and crowded conditions of their camps and their high mobility because of the temporary nature of their employment. In contrast, malaria control was highly effective in the well-established banana plantations of the Atlantic coast. The collapse of cotton cultivation in the early 1980s resulted in a mass exodus of migrant workers, who then provided very cheap labour, allowing the expansion of the banana plantations on the Atlantic coasts of Costa Rica and Honduras. This led to serious focal malaria outbreaks on the

Atlantic coast and the success of malaria control on the Pacific coast. These developments are illustrated by the evolution of national malaria statistics in Costa Rica, which had achieved a high level of control before the expansion of banana plantations (graph 18), and in El Salvador, which has no Atlantic coast (graph 19) and where malaria disappeared as cotton cultivation declined. In addition to the impact of man-made modifications of the environment and migration in search of work as determinant factors of epidemics, economic depression and famine are important determinants of the severity of their impact on morbidity and mortality (Packard, 1986; Zurbrigg, 1994).

Graph 18: Malaria incidence in Costa Rica compared with banana production (1980-1992)

Graph 19: Malaria incidence in El Salvador compared with cotton production (1980-1990)

10.6. Comprehensive monitoring of epidemic risk

Fully developed forecasting services will make it possible to refine progressively the understanding not only of the main determining factors, but also of some complementary ones which may be responsible for the more rapid development or greater severity of epidemics. The history of epidemic forecasting, particularly in the Punjab between 1920 and 1940 (now north-west India and north-east Pakistan), provides an illustration of the development of a comprehensive system and of its adaptation to changing situations (Christophers, 1949). In his earlier studies in the Punjab, Christophers had found a high correlation between fever and rainfall (correlation coefficient 0.67, probable error 0.168), but an even higher correlation when an indicator of hunger (the 'human factor', indicated by the price of food grains) was introduced.

When this was done, the correlation coefficient between fever and the product of prices and rainfall was 0.80 (probable error 0.099). Further studies by Perry, and by Gill in 1914, showed that the spleen rate fell considerably after an epidemic, and that epidemics could be attributed largely to the low endemicity prevailing during the interepidemic periods, leading to an absence of immunity in children, as well as to a general decline in immunity. These epidemics not only showed a certain periodicity, but also always affected parts of the same general areas, and were therefore designated 'regional epidemics'. Following Gill's recommendations, forecasting continued in the Punjab up to the early 1950s, based on the study of four factors:

1. A rainfall factor, based on the measurements of rain in July and August in 192 recording stations, which gave an indication of the transmission potential. Rainfall alone was used because humidity, which was recorded in only 10 stations in the Punjab, was very closely correlated with rainfall in the critical months of July and August.

2. A spleen-index factor, based on spleen rates in schoolchildren in 286 representative communities, routinely taken during the previous two or three, and eventually five, years, which gave an indication of immune status and therefore of the areas more likely to be affected by an epidemic.

3. An economic or human factor, given by the average price of food grains during the preceding two years; although not direct causes of an epidemic, famine and stress strongly influence its severity and intensity.

4. An epidemic potential factor for each locality (registration centre), giving a coefficient of variability, calculated by multiplying the standard deviation of the October fever mortality for the years 1868-1921 (excluding 1918) by 100 and dividing it by the number of observations (53 years); this coefficient varied from 31 in Kangra to 106 in Sialkot.

The first factor, indicating the imminent risk of increased transmission, was actually a determinant factor, while the other three factors indicated the expected impact and spread of the epidemic. Districts were characterized by the intensity of recent epidemics. To gauge the actual intensity of a given epidemic, an epidemic figure was defined as the quotient of the mean monthly fever mortality in October-December over that for April-June of the same year. This figure did not exceed 1 in interepidemic years, but could be as high as 10 in epidemic years. Gill also used a diffusion index, namely the number of registration centres in a district with an epidemic figure greater than 2.5, and an intensity index, namely the percentage of registration centres with an epidemic figure greater than 5.0.

Forecasting, although it gave only about one month between the detection of the determinant factor in August and the expected start of the epidemic in September, was extremely useful from the point of view of emergency relief, provided that there was an appropriate organization to deliver it. Such emergency action included: a) the provision of adequate supplies of quinine by special relief units to the villages at risk; and b) the stocking of supplementary essential foods, particularly milk for infants, and the organization of other public services, since epidemics were likely to disrupt the whole way of life of the communities affected and the high mortality was often not only a direct but also a secondary effect of the disease.

Stimulated by the severe epidemic in Ceylon of 1934-1935, the Malaria Commission decided in 1937 to draw the attention of Health Administrations to the 'urgent necessity for carrying out research on the subject of great malaria pandemics, not only during the epidemic but also prior to its outbreak', and circulated Covell's report, Methods of forecasting and mitigating malaria epidemics in the Punjab (1938).
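Gill's epidemic figure and the two district-level indices lend themselves to direct computation. A sketch, with illustrative names and data (the mortality series are invented for the example):

```python
def epidemic_figure(oct_dec_mortality, apr_jun_mortality):
    """Gill's epidemic figure: mean monthly fever mortality in
    October-December over that in April-June of the same year.
    About 1 in interepidemic years; up to 10 in epidemic years."""
    return (sum(oct_dec_mortality) / len(oct_dec_mortality)) / \
           (sum(apr_jun_mortality) / len(apr_jun_mortality))

def diffusion_index(figures):
    """Number of registration centres in a district whose epidemic
    figure exceeds 2.5."""
    return sum(1 for f in figures if f > 2.5)

def intensity_index(figures):
    """Percentage of registration centres whose epidemic figure
    exceeds 5.0."""
    return 100.0 * sum(1 for f in figures if f > 5.0) / len(figures)
```

Together the two indices separate a widespread but mild regional epidemic (high diffusion, low intensity) from a localized but severe one.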
After reviewing the studies by Christophers and Gill, as well as a report by Parrot and Catanei (League of Nations, 1939) on the determining factors of epidemics in Algeria, the Commission gave its support to the work in the Punjab. Recognizing that the main determinant of an epidemic was the disruption of the equilibrium between infection and immunity, it recommended that forecasting should be based on the monitoring of hygrometry and pluviometry, the evolution of the splenic index, the economic factor and the epidemic potential (the coefficient of variation of mortality established over as long a period as possible) (League of Nations, 1939).

The system of epidemic forecasting adopted in the Punjab continued to provide useful predictions until the launching of the large-scale vector-control operations of the 1950s (Yacob & Swaroop, 1944; Swaroop, 1949). Nevertheless, the human or economic factor was slowly neglected in the Punjab, in part, as Zurbrigg (1994) notes, because fluctuations in the selected indicator (grain prices) decreased, so that it lost its value as an indicator of hunger, but perhaps also because of a tendency on the part of malariologists to explain the epidemiology of the disease solely in terms of cases, parasites and vectors. It should be noted that the report of Parrot and Catanei

discussed only the influence of premunition, and the detailed study by Covell and Baily (1932) of the regional epidemic in northern Sind in 1929 does not even mention the human factor.
