Public Health and Field Epidemiology





Site Selection/Establishing Camp


Responders to a disaster are rarely afforded the luxury of selecting a site for creating a displaced persons camp. Despite the realities of establishing field operations in disaster-affected areas, responders should strive to organize camps that minimize the transmission of disease and provide a safe haven from conflict. Ideal camps:




  • Do not compete for resources with the surrounding community



  • Are located an appropriate distance from known insect/animal vectors and other environmental/industrial health risks



  • Have access to surface or ground water supplies of adequate quantity and quality



  • Have adequate drainage to prevent flooding/standing water



  • Are easily accessible by ground transportation



  • Are located in areas where geopolitics afford freedom of movement of persons



Overcrowding substantially complicates the ability to maintain an orderly, hygienic environment. A minimum usable surface area of 45 m² per person is recommended when calculating the total footprint required for a camp providing all necessities (shelter, cooking areas, educational facilities, toileting areas, recreation areas, roads and paths, etc.) within the confines of the camp itself. Each individual should be provided a minimum of 3.5 m² of covered space for shelter. Ensuring space between individual structures, as well as between collections of structures, greatly improves opportunities for maintaining a sanitary camp and reduces the risk of fire spreading.
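These planning figures reduce to simple arithmetic. The following minimal Python sketch applies them; the population size is a hypothetical illustration, not a value from this chapter.

```python
# Camp footprint arithmetic using the planning figures above:
# 45 m² of total usable area and 3.5 m² of covered shelter per person.

def camp_footprint_m2(population: int, per_person_m2: float = 45.0) -> float:
    """Total usable camp area required, in square meters."""
    return population * per_person_m2

def covered_shelter_m2(population: int, per_person_m2: float = 3.5) -> float:
    """Minimum covered shelter area required, in square meters."""
    return population * per_person_m2

pop = 10_000  # hypothetical camp population
print(f"Total footprint: {camp_footprint_m2(pop) / 10_000:.0f} ha")  # 45 ha
print(f"Covered shelter: {covered_shelter_m2(pop):,.0f} m²")         # 35,000 m²
```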


Water


Ample quantities of safe water are necessary for preservation of health, hygiene, and morale. Unsafe water can transmit a variety of bacterial, viral, protozoan, and parasitic diseases, exacerbating already difficult standards of living. Similarly, insufficient quantities of water for hygiene activities can be an equally important contributor to many health problems observed in the aftermath of a disaster. Although unique climatic, cultural, and infrastructure considerations affect the quantities of water required by a disaster-affected population, Sphere standards identify 15 liters per person per day as a key indicator for levels generally considered acceptable for drinking, cooking, and personal hygiene needs. Certain vulnerable subpopulations generate water demands substantially greater than this population-level figure: field hospitals operating in emergency environments have been documented to require between 150 and 200 liters per patient per day, and child feeding centers typically require 30 to 40 liters per child per day.
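A minimal sketch of aggregate daily demand built from these per-capita figures; the facility counts fed in are hypothetical illustrations.

```python
# Daily water demand from the figures above: 15 L/person/day baseline,
# 150-200 L/patient/day for field hospitals, and 30-40 L/child/day for
# child feeding centers.

def daily_water_demand_l(population: int,
                         hospital_patients: int = 0,
                         feeding_center_children: int = 0) -> tuple:
    """Return (low, high) estimates of total liters required per day."""
    low = population * 15 + hospital_patients * 150 + feeding_center_children * 30
    high = population * 15 + hospital_patients * 200 + feeding_center_children * 40
    return low, high

low, high = daily_water_demand_l(10_000, hospital_patients=50,
                                 feeding_center_children=200)
print(f"Estimated demand: {low:,}-{high:,} L/day")  # 163,500-168,000 L/day
```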


Untreated water sources require evaluation for potability by personnel with water sanitation experience. Historically, attempts to provide two distinct sources of water (one for potable use and another for nonpotable use) fail because of complicated logistics and end-user misunderstanding. Responders should instead focus efforts on generating a single source of potable water to meet all survival needs (drinking, cooking, and personal hygiene). Whenever possible, responders should ensure the standard of potability is the same for both displaced persons and the neighboring local population to prevent potential conflict.


Contamination of a water supply with fecal coliforms, predominantly Escherichia coli, serves as a sentinel indicator of other potentially harmful pathogens. As such, Sphere guidelines recommend treating water if any fecal coliforms are present. Minimum recommended water quality standards are outlined in Table 10.1.



Table 10.1

Sphere Potable Water Standards

Standard             Value                                    Applicability
Chlorine residual    0.5 mg/L                                 Piped supplies; all water supplies at times of risk of diarrheal epidemics
Turbidity            5 nephelometric turbidity units (NTU)    Piped supplies; all water supplies at times of risk of diarrheal epidemics
Fecal coliforms      None per 100 mL                          At point of delivery


A variety of options exist for substantially improving the safety and potability of water sources in austere environments. Chlorination is overwhelmingly the preferred method for water disinfection because it leaves a measurable residual that continues to kill organisms after initial treatment and can also serve as a sentinel for potential contamination when residual chlorine levels fall below acceptable thresholds. Manufacturer information can provide specific dosing recommendations depending upon the product used and the volume of water to be treated. Concentration of dissolved solids, pH, and temperature can all affect the volume of chlorine required to safely kill pathogens. Chlorine requires a minimum of 30 minutes of contact time to disinfect water for drinking. Table 10.2 briefly summarizes strengths and limitations of various methods for treating water in the field; a minimal point-of-use check is sketched after the table.



Table 10.2

Options for Treating Water in the Field

Polyethylene terephthalate (PET) bottle solar sterilization
  Strengths: inexpensive; capable of destroying a large percentage of pathogens
  Limitations: limited production volume; time intensive; posttreatment contamination possible; requires an acceptable fresh water source

Iodine
  Strengths: inexpensive; simple technology; can be performed at household level
  Limitations: requires substantial logistics for replenishment of consumables; requires an acceptable fresh water source

Boiling
  Strengths: inexpensive; simple technology; can be performed at household level
  Limitations: limited production volume; time intensive; requires a reliable fuel source; posttreatment contamination possible; requires an acceptable fresh water source

Slow sand filtration
  Strengths: inexpensive; simple technology
  Limitations: posttreatment contamination possible; requires an acceptable fresh water source; may require a finished water distribution system

Chlorination
  Strengths: gold standard of water treatment; residual chlorine continues to kill organisms after initial treatment; can be performed at household level
  Limitations: resource intensive; may require a finished water distribution system; requires substantial logistics for replenishment of consumables

Reverse osmosis
  Strengths: can generate potable water from seawater, briny, or brackish water
  Limitations: expensive; requires substantial logistics for replenishment of consumables; requires a reliable source of power; posttreatment contamination possible if not chlorinated

Commercial bottled water
  Strengths: does not require a finished water distribution system
  Limitations: expensive; requires substantial logistics
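A minimal sketch of a point-of-delivery check against the Sphere indicators in Table 10.1 plus the 30-minute contact time noted above; the example readings are hypothetical.

```python
# Screen a water sample against the indicators discussed above:
# >=0.5 mg/L chlorine residual, <=5 NTU turbidity, zero fecal coliforms
# per 100 mL, and >=30 minutes of chlorine contact time.

def water_sample_failures(chlorine_mg_l: float, turbidity_ntu: float,
                          fecal_coliforms_per_100ml: int,
                          contact_time_min: float) -> list:
    """Return the list of failed indicators (empty means acceptable)."""
    failures = []
    if chlorine_mg_l < 0.5:
        failures.append("chlorine residual below 0.5 mg/L")
    if turbidity_ntu > 5:
        failures.append("turbidity above 5 NTU")
    if fecal_coliforms_per_100ml > 0:
        failures.append("fecal coliforms detected")
    if contact_time_min < 30:
        failures.append("contact time under 30 minutes")
    return failures

print(water_sample_failures(0.4, 3.0, 0, 45))
# ['chlorine residual below 0.5 mg/L']
```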



Careful thought must also be given to how water will be distributed to populations. Ideal water points are geographically convenient; sited to minimize the potential exploitation of women and children, who are often culturally delegated to perform household water collection; designed to avoid unnecessarily long wait queues; and built to minimize standing surface water, which can serve as vector breeding habitat. Guidelines recommend striving for no more than 250 people per tap, 500 people per hand pump, and 400 people per single-user open well; the sketch below turns these ratios into counts.
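A minimal sketch converting these guideline maxima into required counts; the population is a hypothetical illustration, and a real layout would mix source types.

```python
# Water points required under the guideline maxima above: 250 people per
# tap, 500 per hand pump, 400 per single-user open well.

import math

def water_points_needed(population: int) -> dict:
    """Counts required if a single source type served the whole population."""
    return {
        "taps": math.ceil(population / 250),
        "hand_pumps": math.ceil(population / 500),
        "open_wells": math.ceil(population / 400),
    }

print(water_points_needed(10_000))
# {'taps': 40, 'hand_pumps': 20, 'open_wells': 25}
```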


Even after proper treatment and delivery of field water, there exists substantial health risk because of the increased likelihood of postdelivery contamination in crowded settings. Sphere guidelines recommend at least two clean water-collecting or storage containers (10–20 L each) per household to minimize postdelivery contamination risk. Additionally, continued surveillance of water at point of use can help identify potential sources or practices jeopardizing treated water.


Food


Food-borne illness imposes a substantial burden of morbidity and mortality in hygiene-challenged settings. Common factors contributing to food-borne illness include food from unsafe sources, poor personal hygiene, inadequate cooking, and inadequate storage. Specific interventions and best practices addressing each of these factors can substantially reduce the likelihood of food-borne illness.


Food From Unsafe Sources


Assuring a safe food supply begins with procuring food from safe sources. When responding to disasters, a safe food source extends well beyond procuring items with reasonable assurances of pathogen absence from a supplier. The ability to safely store food items before preparation, the availability of fuel to properly cook temperature-sensitive items, safe storage of prepared food not immediately consumed, and the overall availability and practice of hand hygiene all substantially affect a food item’s safety.


Poor Personal Hygiene


Inadequate hygiene, particularly hand hygiene, is frequently identified as a factor contributing to outbreaks of food-borne illness. The importance of reinforcing hand hygiene before meal preparation and consumption is critical in field settings.


In settings with centrally prepared messing facilities, food handlers should receive specific training on hand hygiene technique and actions or events throughout the food preparation and serving process that would require repeat hand washing. Food handlers should be free of communicable diseases and should be required to report symptoms of potential illness (vomiting, diarrhea, nausea, jaundice, sore throat with fever, skin lesions). A low threshold for exclusion from food preparation duties should be applied to food handlers to minimize the possibility of food-borne illness resulting from a communicable disease.


Inadequate Cooking


Proper cooking temperatures destroy pathogenic organisms and minimize the potential for food-borne illness. Raw animal foods must be cooked to a temperature and for a time period that ensures destruction of common food-borne pathogens. Availability of reliable fuel sources will substantially affect the ability to safely eliminate pathogens in raw foods that require cooking.


Inadequate Storage


Proper food storage, both before and after preparation, is critical to ensuring a safe food supply. Environmental conditions and the logistic realities of fieldwork impose substantial limitations on the ability to store food safely. Each disaster response situation imposes unique challenges; responders charged with organizing the procurement and provisioning of food items must account for logistic realities when considering what commodities will be made available to displaced persons. Similarly, considerations must be made to prevent contamination of food items after they have been prepared. If not immediately consumed after preparation, a variety of food items require specific holding temperatures to prevent the growth of pathogens. Restricting rations of such items may be warranted to discourage preparation of temperature-sensitive items in excess of what will be immediately consumed.


Toileting


Methods employed for the disposal of human waste are influenced by material availability, soil conditions, the source and location of drinking water, and environmental regulations. Positioning of toileting facilities should minimize the transmission of disease by vermin, vectors, or direct human contact. In general, field toileting facilities should be located at least 30 m from groundwater sources or food preparation areas. All attempts should be made to create handwashing stations at the exit of any toileting facility; handwashing with soap and water is ideal, but hand sanitizer is an acceptable alternative. In the absence of soap, handwashing with water alone or with water and a natural abrasive such as sand should still be encouraged.


Two factors are critical when implementing any toileting solution following a disaster. First, toileting practices presented to a population must be culturally acceptable. The fundamental purpose of established toileting facilities, the prevention of indiscriminate defecation, hinges upon identifying a solution the population served is willing to use. Second, special considerations must be made to ensure toileting facilities are designed and located in ways that do not place vulnerable populations (unaccompanied children, women, and the elderly) at risk. Placing toileting facilities far from other camp activities for the sake of privacy inconveniences inhabitants and unnecessarily isolates exploitable persons, exposing them to harm during a particularly vulnerable activity.


Defecation fields are the crudest of toileting options available and appropriate only in the initial phase of a disaster response; however, their ability to prevent indiscriminate defecation is critical to establishing basic hygiene measures in a new camp. Defecation fields are simply open fields designated and marked for use as open toileting. Typically, defecation fields are used from the back forward, with sections successively closed after use to allow for appropriate decomposition of waste. Hung sheets are sometimes employed as a modest effort to afford some privacy and to indicate to the user sections of the defecation field that remain open for use.


Pit latrines are another simple toileting option; however, their practicality is influenced by soil conditions and water table depth. Hard or rocky soil can make digging pit latrines extremely difficult. A shallow water table can result in contamination of natural water sources and restricts the maximum depth to which a latrine can be dug; the bottom of a pit latrine should be no closer than 1.5 m to the water table. Pit latrines should ideally receive periodic treatment with appropriate insecticides to prevent vector breeding. Additionally, latrines fitted with seating should have the means to place a lid over unused seats to further minimize vector breeding. Pit latrines should be closed and filled in with earth once the contents reach 1 foot (about 0.3 m) from ground level.


Burn barrels are sometimes used by military forces when establishing field activities. Burn-barrel latrines use 55-gallon drums cut in half to collect and burn human waste. Before use or reuse, a barrel is primed with approximately 3 inches of diesel fuel to ensure more efficient burning of solid waste, while also serving as an insect repellant. The contents of burn barrels are then burned when more than half full. A mixture of four parts diesel to one part gasoline should completely cover the barrel contents. Occasional stirring of the ignited contents is necessary but minimized with use of priming fuel. Successful use of burn barrels requires minimizing the urine content, to burn waste more efficiently and reduce fuel requirements.


As camps mature, transitioning to family- or community-level toileting options reduces the potential for exploitation of vulnerable persons and increases accountability for maintaining sanitary toileting facilities. As such, responders should strive for family toilets, with one toilet per 20 people. In situations where public toileting is the only option, approximately three women’s facilities are recommended for every one men’s facility. These ratios translate into counts as sketched below.
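A minimal planning sketch of these ratios; the numbers fed in are hypothetical illustrations.

```python
# Toilet planning from the guidelines above: one family toilet per 20
# people, and roughly three women's facilities per men's facility when
# only public toileting is possible.

import math

def family_toilets_needed(population: int) -> int:
    return math.ceil(population / 20)

def public_toilet_split(total_facilities: int) -> tuple:
    """Return (women's, men's) counts using the ~3:1 guideline."""
    mens = max(1, round(total_facilities / 4))
    return total_facilities - mens, mens

print(family_toilets_needed(10_000))  # 500
print(public_toilet_split(40))        # (30, 10)
```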


Waste Disposal


Field operations generate large amounts of waste products. Waste products may contain infectious agents such as cholera or typhoid; may attract dogs, birds, other wildlife, and vermin capable of spreading zoonotic disease; and can serve as breeding grounds for many vectors of human disease. As conditions permit, waste should be collected and disposed of in accordance with local, federal, or host nation law. Personnel responsible for the welfare of populations residing in field conditions must be familiar with basic principles of waste management and assure waste management is addressed in the planning stages of any field activity.


Solid Waste


Solid waste generated in field environments is disposed of via burial, incineration, or commercial hauling. Burial is suitable only for short-term activities involving small numbers of people. Although commercial hauling for off-site disposal is greatly preferable, logistic or geopolitical realities often necessitate incineration of waste on site. Open burn pits are convenient when first establishing camps; however, attempts should be made to replace open burn pits with improved incineration techniques as soon as practical because of the potential health risks associated with unimproved combustion of solid waste.


Liquid Waste


Proper management of liquid waste (gray water) is critical in camp settings, as improperly managed water waste generates mud or standing water, which greatly complicates daily life for camp inhabitants. Standing water also serves as a breeding ground for vectors and an attractant for vermin and other undesirable wildlife.


Waste water generated from field activities must be disposed of in accordance with applicable local, federal, and host nation laws. Use of municipal sanitary sewers is preferred but often unavailable in austere settings. When necessary, soakage pits and evaporation beds may be used to dispose of waste water without attracting vermin or creating breeding pools for vectors. Soakage pits are simply pits filled with a coarse medium such as crushed gravel into which gray-water liquid waste is disposed. Soakage pits may become clogged or saturated over time, requiring the construction of additional soakage pits. Evaporation beds are contained fields of earth groomed into rows of depressions and ridges, similar to agricultural growing beds. The field is flooded to the top of the ridges, and waste water is allowed to penetrate the soil and evaporate. A series of evaporation beds is used on a rotating basis to allow approximately 3 days of evaporation before the next use. To control mosquito populations, evaporation bed ridges must not be built so high that water requires more than 3 days to evaporate completely from the beds.


Vector Control


The existing burden of vector-borne disease already experienced by communities in low-resource settings is further exacerbated by disaster. Climate change further increases the likelihood that environments previously inhospitable to arthropod vectors will become hospitable, spreading vector-borne disease to novel areas. Although select diseases have effective vaccine or chemoprophylaxis measures, such measures are typically unaffordable or unavailable when performing fieldwork. Minimizing exposure to vectors remains the cornerstone of all vector-borne illness prevention.


Minimizing vector exposure begins with avoidance. Locations chosen for habitation in the field should consider the surrounding vector habitat. Whenever possible, sites for human habitation should be open, dry, and positioned away from local settlements, animal pens, or rodent burrows. Habitat denial measures should be taken to minimize vector burden. Camps should be cleared of unnecessary vegetation, and waste should be disposed of in an orderly manner. All efforts should be made to minimize standing water and ensure appropriate physical measures are in place to minimize potential anthropogenic vector breeding habitats (e.g., using lids or screens on vessels containing or capable of collecting water).


Both the lifecycle and preferred feeding time of many arthropod vectors follow well-established seasonal, rainfall, and diurnal patterns. Minimizing outdoor exposure during peak biting periods can substantially decrease the likelihood of bites. Physical barriers such as appropriate long-sleeved clothing and use of insect screens or bed nets (if available) can substantially decrease exposure. Chemical repellants further augment bite avoidance; however, their availability is sometimes limited. Permethrin is typically the agent of choice for long-term treatment of clothing, bed nets, and some types of tent fabric.


Animals/Livestock


In addition to serving as sources of food and animal labor for agricultural activities, livestock in many developing countries also represent the net worth of individuals and families. In such environments, owners are extremely reluctant to be separated from animals that represent their financial security. When possible, consideration for providing a nearby secured area to maintain and graze livestock is important when establishing a camp.


Animals serve as important reservoirs or intermediate hosts for a variety of zoonotic diseases and enteric pathogens. As such, animal populations should be located outside of camp areas exclusively for human use and located an appropriate distance from housing, food storage and preparation, and human toileting areas to minimize potential for disease transmission. Animal waste contains a variety of pathogens, and appropriate care should be exercised to ensure animal waste does not contaminate sources of drinking water.


Surveillance


In clinical medicine, clinicians use patient history and physical examination to establish a diagnosis and formulate a plan of treatment. Field epidemiologists establish a community “diagnosis” for public health action by examining indicators to describe community health problems and guide a plan of action. This process is known as public health surveillance (also called epidemiologic surveillance), defined as the ongoing, systematic collection, analysis, interpretation, and dissemination of health data to help guide public health decision-making and action. Public health surveillance serves several critical functions for protecting and promoting the health of populations such as estimating the magnitude of a health problem; determining the geographic distribution of illness; portraying the natural history of disease(s); identifying cases for follow-up or contact tracing; detecting epidemics/defining the problem; generating hypotheses/stimulating research; evaluating control measures; monitoring changes in infectious agents; detecting changes in health practices; and facilitating planning.


Public health surveillance can draw from a variety of data sources, including health surveys, registries of vital events such as births and deaths, medical and laboratory information systems, environmental monitoring systems, research studies, and other resources (Fig. 10.1).




Fig. 10.1


Public health surveillance: data inputs. (a) Vital registration (e.g., births and deaths), cancer registries, and exposure registries. (b) Medical and laboratory records, pharmacy records. (c) Weather, climate, and pollution. (d) Criminal justice information, online databases and search engines, census.

(Modified from Thacker, 2012.)


The Surveillance Cycle and Realities of the Field


Public health surveillance is an ongoing process in which the relevant data are systematically collected and analyzed, and then disseminated to those involved in disease control and public health decision-making, as part of a surveillance “cycle” (Fig. 10.2).




Fig. 10.2


The surveillance cycle: surveillance information flows from the public and healthcare providers (clinicians, laboratories, hospitals) to health departments. Feedback flows from health departments back to the public and healthcare providers.

(From Centers for Disease Control and Prevention (U.S.) Office of Workforce and Career Development. Principles of Epidemiology in Public Health Practice: An Introduction to Applied Epidemiology and Biostatistics. 2012. Available at: http://www.cdc.gov/ophss/csels/dsepd/SS1978/SS1978.pdf.)


Those deploying to field environments, especially resource-limited settings, may find many elements of this ideal surveillance cycle to be missing. Data collection may be inconsistent, substandard, or nonexistent; geographic, technological, and/or financial constraints may limit the collection, interpretation, and/or sharing of relevant health information; and acute stresses to the system (such as natural disasters or a propagating epidemic) may impair surveillance capacity. Even in the absence of an acute strain on the public health infrastructure, surveillance in resource-limited environments may differ from that in industrialized countries in at least three important ways: (1) more must be done with less; (2) strengthening surveillance is more complicated; and (3) sustainability is challenging. Notwithstanding, the field epidemiologist can work with available resources following the same basic steps: identifying and collecting public health data, and communicating this information and its analysis to stakeholders to guide action and further inform data collection efforts. For example, sick call logs can serve as a rapid, readily available data collection resource.


Community Health Assessment


To assess the health of a population or a community, relevant data sources need to be identified for analysis by person, place, and time (descriptive epidemiology) to address questions such as: What are the actual and potential health problems in the community? Where are they occurring? Which populations are at increased risk? Which problems have declined over time? Which ones are increasing or have the potential to increase? How do these patterns relate to the level and distribution of public health services available? Various types of information may be relevant (Table 10.3), and ultimately, the decision about what data to collect depends on a host of factors, including what objective(s) are sought, what health information is already available, what resources can be committed to surveillance, and, just as importantly, terms of reference with local, regional, and/or national health authorities.



Table 10.3

Types of Information Relevant to Community Health Assessment









  • General information




    • History, physical and climatic features of area, community organization, economic development, occupations, and organization of local government



    • Geographic distribution of villages and towns, major roads, important features such as rivers and mountains




  • Population




    • Area’s population size, age and sex structure, geographic distribution, migration patterns, and growth rate




  • Health status, morbidity and mortality patterns




    • Demographic indices for birth and fertility rates and for maternal, infant, child, and overall mortality rates



    • Common causes of morbidity and mortality



    • Underlying health problems such as food availability, housing, water supply, and excreta disposal





  • Health services




    • Number and distribution of governmental and nongovernmental facilities, personnel, and programs



    • Adequacy of management support, logistics, and supplies




  • Area health programs




    • Pregnancy: antenatal, delivery, and postnatal care



    • Nutrition: growth monitoring and malnutrition



    • Environmental health: water supplies, excreta disposal, and hygiene



    • Communicable disease control: cases diagnosed and control activities



From Vaughan P, Morrow RH. Manual of Epidemiology for District Health Management . Geneva: World Health Organization; 1989.


Disease Reporting and Outbreak Detection


Detecting disease outbreaks (the occurrence of more disease cases than expected) is perhaps the best-recognized objective of public health surveillance; notwithstanding, many outbreaks may go undetected, even in environments with robust surveillance. Opportunities to uncover an outbreak include: (1) reviewing routinely collected surveillance data (if any); (2) astute observation of single events or clusters by clinicians, infection control practitioners, or laboratory personnel; and (3) reviewing reports by one or more patients or members of the public. To facilitate early detection of outbreaks, mandatory reporting has been instituted for certain diseases or events. The 2005 International Health Regulations (IHR) framework, the only binding international agreement on disease control, requires countries to report certain diseases and public health emergencies of international concern (PHEICs) to the World Health Organization (WHO). Under the IHR, WHO declares a PHEIC if at least two of the following four criteria are met: (1) the event has a serious public health impact; (2) the event is unusual or unexpected; (3) there is risk of international spread; and/or (4) there is risk of international trade or travel restrictions. Some diseases always require reporting under the IHR, no matter when or where they occur, whereas others become notifiable when they represent an unusual risk or situation (Table 10.4). From 2005 through 2016, WHO declared four PHEICs: H1N1 influenza (2009), polio (2014), Ebola (2014), and Zika virus (2016).



Table 10.4

Reportable Public Health Events to the World Health Organization under the International Health Regulations

Always notifiable:

  • Smallpox
  • Poliomyelitis attributed to wild-type poliovirus
  • Human influenza caused by a new subtype
  • Severe acute respiratory syndrome (SARS)

Other potentially notifiable events:

  • May include cholera, pneumonic plague, yellow fever, viral hemorrhagic fever, West Nile fever, and any other diseases that meet International Health Regulations (IHR) criteria
  • Other biological, radiological, or chemical events meeting IHR criteria


From U.S. Centers for Disease Control and Prevention (12 May 2017) Frequently Asked Questions about the International Health Regulations (IHR). Available at https://www.cdc.gov/globalhealth/healthprotection/ghs/ihr/ihr-faq.html. Accessed 14 Jan 2021.
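The "at least two of four" rule above lends itself to a simple screen. A minimal sketch; the event flags are hypothetical, and the actual IHR Annex 2 decision instrument is considerably more nuanced.

```python
# Screen an event against the four IHR criteria listed above; at least
# two must be met for the PHEIC threshold described in the text.

def meets_pheic_threshold(serious_impact: bool, unusual_or_unexpected: bool,
                          international_spread_risk: bool,
                          trade_or_travel_restriction_risk: bool) -> bool:
    criteria = [serious_impact, unusual_or_unexpected,
                international_spread_risk, trade_or_travel_restriction_risk]
    return sum(criteria) >= 2

print(meets_pheic_threshold(True, True, False, False))   # True
print(meets_pheic_threshold(True, False, False, False))  # False
```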


In the field, one may encounter additional disease reporting requirements, such as those mandated by local, regional, or national public health authorities, and/or the military.


Outbreak Investigation


Pursuing the Investigation


An outbreak of disease is a number of cases in excess of an expected baseline. After detecting the presence of an outbreak, a decision must be made whether to prioritize control measures or pursue further investigation. Generally, if the source or mode of transmission is known, control measures that target the source or interrupt transmission can be implemented. If the source or mode of transmission is not known, one cannot know what control measures to implement, so investigation takes priority. Additionally, the severity of the illness, the potential for spread, political considerations, public relations, available resources, and other factors all influence the decision to launch a field investigation.


Investigation Components


Once the decision to conduct an outbreak investigation has been made, a systematic approach is recommended (Box 10.1); depending on the outbreak, the investigation steps are adjusted. It is important to note that even though the following components are presented in a stepwise fashion, it is often necessary to perform several of them simultaneously.



Box 10.1

Components of an Outbreak Investigation




  1. Establish the existence of an outbreak
  2. Prepare for fieldwork
  3. Verify the diagnosis
  4. Construct a working case definition
  5. Find cases systematically and record information
  6. Perform descriptive epidemiology
  7. Develop hypotheses
  8. Evaluate hypotheses epidemiologically; as necessary, reconsider, refine, and reevaluate hypotheses
  9. Compare and reconcile with laboratory and/or environmental studies
  10. Implement control and prevention measures
  11. Initiate or maintain surveillance
  12. Communicate findings




By definition, an outbreak is a number of cases of disease in excess of anticipated baseline values. In determining whether the number of reported cases constitutes more than expected, one must attempt to contextualize the cases of reported disease. In outbreaks, the cases are usually presumed to have a common cause or to be related to one another in some way. Some disease clusters are true outbreaks with a common cause, some are sporadic and unrelated cases of the same disease, and some are unrelated cases of similar but distinct diseases. Incorrect diagnosis, faulty reporting, lack of baseline data, or sporadic surveillance can occasionally produce the initial appearance of an outbreak where none actually exists, a phenomenon known as a pseudo-outbreak.
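One common way to operationalize "more cases than expected" is to compare current counts against a historical mean plus two standard deviations, as sketched below; the weekly counts are hypothetical, and production systems use more sophisticated aberration-detection methods.

```python
# Flag a weekly case count that exceeds the historical baseline mean
# plus two standard deviations.

from statistics import mean, stdev

def exceeds_baseline(history: list, current: int) -> bool:
    threshold = mean(history) + 2 * stdev(history)
    return current > threshold

baseline_weeks = [4, 6, 5, 7, 5, 6, 4, 8]    # hypothetical weekly counts
print(exceeds_baseline(baseline_weeks, 15))  # True: investigate further
print(exceeds_baseline(baseline_weeks, 7))   # False: within expectation
```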


Preparing for a field investigation demands not only scientific and investigative preparation but also administrative and operational coordination. Understanding what is known about similar outbreaks and how to go about achieving the research objectives is important; however, understanding roles, communication strategy, logistics, and securing a plan for action are also critical for performing a successful investigation.


Verifying the diagnosis is often done in tandem with establishing the existence of an outbreak. Investigators often review clinical findings and laboratory results; talk with one or more patients with the disease to elucidate clinical features, risk factors for illness, and the patient’s understanding of possible cause(s); and summarize the clinical features using frequency distributions to help characterize the spectrum of illness, establish the credibility of the diagnosis (i.e., whether the clinical features are consistent with the purported diagnosis), and inform the refinement of case definitions.


To construct a working case definition, one must identify a standard set of criteria for deciding whether an individual should be classified as having the health condition of interest. Four typical elements of a case definition include: (1) clinical information about the disease (what signs and symptoms have been observed?); (2) characteristics of the persons affected (do any commonalities exist among those who have been ill?); (3) information about location or place (where are the affected persons located?); and (4) specification of the time during which illness onset occurred (when did the illness begin to occur, and how long did symptoms last?). During an outbreak, case definitions often evolve, typically becoming more specific as additional information becomes available. Investigators sometimes create different categories of a case definition, such as confirmed, probable, and possible (or suspect), to accommodate uncertainty. Usually, cases are confirmed if indicated by laboratory testing, probable if they have typical clinical features of the disease without laboratory confirmation, and possible if only a few of the typical clinical features are present.
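A minimal sketch of this confirmed/probable/possible logic; the feature-count threshold is a hypothetical illustration, since real case definitions are disease-specific.

```python
# Classify a reported case using laboratory confirmation and the number
# of typical clinical features present.

def classify_case(lab_confirmed: bool, typical_features_present: int,
                  typical_features_total: int = 4) -> str:
    if lab_confirmed:
        return "confirmed"
    if typical_features_present == typical_features_total:
        return "probable"  # full clinical picture, no laboratory confirmation
    if typical_features_present > 0:
        return "possible"  # only a few typical features present
    return "not a case"

print(classify_case(True, 2))   # confirmed
print(classify_case(False, 4))  # probable
print(classify_case(False, 2))  # possible
```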


Because only a small number of possible cases may be reported, it is important to find cases systematically, to ensure the complete burden and spectrum of disease is captured in the investigation. Usually, the first effort to identify cases is directed at healthcare practitioners and facilities where a diagnosis is likely to be made. Investigators may conduct what is sometimes called sentinel or enhanced passive surveillance by contacting these sources by mail and asking for reports of similar cases. Alternatively, they may conduct active surveillance by telephoning or visiting the facilities to collect information on any additional cases. In the field, and especially in low-resource settings, determining when and where cases occurred will frequently require the ingenuity and assistance of local political, religious, cultural, and government authorities. Essential to field surveillance efforts is striving to record information effectively, such as by abstracting selected critical items onto a form called a line listing. Data collection is detailed elsewhere in this chapter.
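A line listing is simply a one-row-per-case table. A minimal sketch follows; the column names and the sample case are hypothetical illustrations.

```python
# Write a small line listing to CSV, one row per case.

import csv

FIELDS = ["case_id", "onset_date", "age", "sex", "village",
          "symptoms", "lab_result", "classification"]

cases = [
    {"case_id": 1, "onset_date": "2021-08-01", "age": 23, "sex": "F",
     "village": "A", "symptoms": "diarrhea; vomiting",
     "lab_result": "pending", "classification": "possible"},
]

with open("line_listing.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(cases)
```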


After identifying and gathering basic information on the persons with the disease, investigators describe some of the key characteristics of those persons. This process, in which the outbreak is characterized by time, place, and person, is called descriptive epidemiology. It may be repeated or refined several times during the course of an investigation as additional cases are identified or as new information becomes available. Visually depicting the time course of an epidemic is often helpful; an epidemic curve provides a simple visual display of the outbreak’s magnitude and time trend. Additionally, epidemic curves (also called epi curves) can help classify outbreaks according to their manner of spread through a population. In a common-source outbreak (Fig. 10.3), a group of persons is exposed to an infectious agent or a toxin from the same source. Common-source outbreaks may further be classified according to whether persons have been exposed for a relatively short time within one incubation period (point-source outbreak); whether case-patients have been exposed for a long duration (continuous common-source outbreak); or whether sporadic exposures over time have occurred (intermittent common-source outbreak). Other types of outbreak patterns discernible from epi curves include the propagated outbreak, in which person-to-person transmission results in increasing numbers of cases in each subsequent incubation period. In addition, mixed types of outbreaks can occur, containing features of both common-source and propagated outbreaks.




Fig. 10.3


Typical epi curves for different types of spread.
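An epi curve is essentially a histogram of cases by date of symptom onset. A minimal sketch using matplotlib; the onset dates are hypothetical.

```python
# Plot a simple epidemic curve: case counts by date of symptom onset.

from collections import Counter
import matplotlib.pyplot as plt

onsets = ["08-01", "08-01", "08-02", "08-02", "08-02",
          "08-03", "08-03", "08-04"]  # hypothetical onset dates (MM-DD)
counts = Counter(onsets)
days = sorted(counts)

plt.bar(days, [counts[d] for d in days], edgecolor="black")
plt.xlabel("Date of symptom onset")
plt.ylabel("Number of cases")
plt.title("Hypothetical epi curve")
plt.savefig("epi_curve.png")
```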


Experienced investigators develop hypotheses and reconsider, refine, and reevaluate hypotheses throughout the outbreak investigation process. This process is informed by the data collection and analysis, and also serves as a discrete opportunity to compare and reconcile findings with laboratory and environmental studies when such resources are available.


The decision to implement control and prevention measures should be undertaken as early in the investigation process as possible. As mentioned earlier, once the source or mode of transmission is known, efforts to interrupt transmission can be implemented—sometimes even before an investigation is launched. For example, a child with measles in a community with other susceptible children may prompt a vaccination campaign before an investigation of how that child became infected.


Initiating or maintaining surveillance is crucial to investigators who wish to monitor the situation and help determine whether control measures are working. Surveillance can identify whether the outbreak has spread, affording implementation of additional prevention and control efforts if required. Surveillance also serves as the foundation for establishing the baseline rate of disease—a fundamental piece of information for determining the existence of future outbreaks.


To communicate findings effectively, one must develop a risk communication plan tailored to relevant stakeholders. Additionally, briefings or reports should be provided to local authorities and other public health professionals to convey critical findings and lessons learned. Principles of good risk communication are discussed later in this chapter.


General Epidemiologic Principles


“Epidemiology” is the study of the five “W”s (what, who, when, where, and why) as they relate to a population and disease:




  • What: the disease (i.e., etiology)



  • Who: demographics (e.g., sex, age)



  • When: timing of exposure and disease onset



  • Where: environment of exposure



  • Why: risk factors and exposures



An epidemiologist studies the relationships between these five “W”s. The goal of most epidemiologic investigations is to better define one or more of the five “W”s in order to most effectively allocate resources. The public health decisions based on epidemiologic investigations can have life-or-death implications; therefore, the data must be as accurate as possible. “Bias” is a systematic error that can skew the results of an epidemiologic investigation and can occur at any of its stages. Common sources of error include selection bias, recall bias, confounding, and random error. It is nearly impossible to eliminate all sources of bias from an epidemiologic investigation. Instead, an epidemiologist aims to reduce bias as much as possible. This process starts with selecting the study design most appropriate to answer the investigator’s key questions. The two most common epidemiologic study designs are the cohort study and the case-control study.


Cohort Study


A cohort study is useful for identifying potential health outcomes that result from known exposures. In a cohort study, a group of individuals who share a set of common characteristics (i.e., a cohort) is followed for a given period of time to determine whether or not its members acquire one or more health outcomes of interest. Follow-up can range from days to years. For example, a cohort might be defined as all refugees living in Turkey in September 2015; all children born to human immunodeficiency virus (HIV)-infected mothers during 2016 in Lusaka, Zambia; or all individuals displaced by Typhoon Koppu in the Philippines in 2015.


Cohort studies can be prospective or retrospective. Prospective cohort studies are typically resource intensive because they require first assembling a cohort, and then following the cohort population for months or years after a discrete exposure to identify potential outcomes. Retrospective cohorts use preexisting data and look back in time to determine who was exposed. A retrospective cohort study typically requires fewer resources, although it can be more prone to bias and exposure misclassification.


Because it involves following individuals over time, a cohort study usually requires more resources than a case-control study. Thus, cohort studies are usually more feasible for studying health outcomes that are common and/or have a relatively short incubation period (i.e., the period of time from exposure to symptom manifestation).
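The measure of association conventionally reported from a cohort study is the risk ratio (relative risk), which this section does not derive; a minimal sketch with hypothetical counts:

```python
# Risk ratio from cohort data: risk of disease among the exposed divided
# by risk among the unexposed.

def risk_ratio(exposed_cases: int, exposed_total: int,
               unexposed_cases: int, unexposed_total: int) -> float:
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical: 30/100 exposed vs. 10/100 unexposed developed disease.
print(f"RR = {risk_ratio(30, 100, 10, 100):.1f}")  # RR = 3.0
```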


Case-Control Study


In a case-control study, individuals who have acquired a particular health outcome (i.e., cases) are compared with individuals who have not acquired the health outcome (i.e., controls) to identify risk factors. A case-control study is usually employed when there is a single disease of interest and when multiple potential risk factors are being examined.


A case-control study usually takes fewer resources than a cohort study, and it can be used for studying rare diseases or diseases with a relatively long incubation period. However, it is more prone to bias than a cohort study. The largest source of bias in a case-control study comes from the selection of controls.


The ideal control group is selected in such a way that if the control had acquired the disease of interest, they would have been in the group of cases being studied. This means that the control group arises from the same population as the cases. For example, if you are studying risk factors for trachoma (i.e., a bacterial infection of the eyes that can cause scarring and blindness) in Nepal, and if your cases are individuals of any age who present to a particular Nepalese hospital with trachoma-induced corneal scarring, then you would want to identify controls who fall within that hospital’s catchment area and who would have presented to that hospital if they had developed trachoma-induced corneal scarring. This means accounting not only for the geographic region/catchment area but also for other factors such as socioeconomic status, insurance status, and access to transportation.
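The corresponding measure of association for a case-control study is the odds ratio, again not derived in this section; a minimal sketch with hypothetical counts:

```python
# Odds ratio from case-control data: the cross-product ratio of the
# 2x2 exposure table.

def odds_ratio(cases_exposed: int, cases_unexposed: int,
               controls_exposed: int, controls_unexposed: int) -> float:
    return (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Hypothetical: 40 of 100 cases exposed vs. 20 of 100 controls exposed.
print(f"OR = {odds_ratio(40, 60, 20, 80):.1f}")  # OR = 2.7
```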


Sampling


Sampling is the scientific process of selecting participants for an epidemiologic investigation. To find participants, the investigator first identifies a target population for study. Within this population the investigator identifies a sampling frame; common sampling frames include a census, hospital records, and a telephone directory. Ideal sampling frames capture as much of the target population as possible while minimizing the capture of individuals outside the target population. Once a sampling frame has been determined, the investigators identify an eligible population: individuals from the sampling frame who meet certain eligibility criteria, such as specific clinical or demographic characteristics. After applying inclusion and exclusion criteria, investigators arrive at the sampled participants: individuals from the eligible population who are selected, can be contacted or reached, and agree to participate. Fig. 10.4 outlines this sequential winnowing of a larger population into a representative sample.
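This funnel from sampling frame to sampled participants can be made concrete in a few lines; the frame, inclusion criterion, and sample size below are hypothetical illustrations.

```python
# Sampling frame -> eligible population -> simple random sample.

import random

random.seed(0)  # reproducible example
frame = [{"id": i, "age": random.randint(0, 90)}
         for i in range(5_000)]                  # e.g., census records

eligible = [p for p in frame if p["age"] >= 18]  # inclusion criterion: adults
sample = random.sample(eligible, k=200)          # sampled participants

print(len(frame), len(eligible), len(sample))    # frame, eligible, sample sizes
```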

