2 Research 101

At its core, global health research is based on common principles of scientific research that each discipline follows to a greater or lesser extent, but different disciplinary traditions and emphases can amplify what is unique over what is shared. This book presents what is shared while highlighting unique aspects along the way.

2.1 Scientific Research

What counts as “scientific research”? King, Keohane, and Verba (1994) offer a useful definition in their book, Designing Social Inquiry. They point to several main characteristics:

  1. The goal is inference.
  2. The procedures are public.
  3. The conclusions are uncertain.

2.1.1 ALL ABOUT INFERENCE

By stating that the goal of scientific research is inference, we mean that science goes beyond the collection of facts. Inference refers to the process of making conclusions about some unobserved or unmeasured phenomenon based on direct observations of the world. What is known is used to infer something about things that are not known. This process can be deductive or inductive.

In deductive reasoning, we start from general theories and derive hypotheses. Then we collect data and draw conclusions based on the data. Inductive reasoning flows in the other direction, from specific observations to the generation of hypotheses and theories. Remember it this way: testing a specific hypothesis requires deductive reasoning; using observations to make more general statements requires inductive reasoning. To say that quantitative research is deductive and qualitative research is inductive is not quite right, but it’s often true.5

For instance, Singla, Kumbakumba, and Aboud (2015) reported the results of a cluster randomized trial of a parenting intervention in rural Uganda. This study is an example of deductive reasoning because the authors started with a hypothesis, collected quantitative data, and inferred something about the impact of the intervention:

  • The authors hypothesized that the intervention would improve child development.
  • The primary outcomes, cognitive and receptive language development of the children of participating caregivers, were measured quantitatively using the Bayley Scales of Infant Development.
  • They found effects on cognitive and receptive language but not height-for-age, and they inferred that the difference observed between the treatment and control groups was due to the parenting intervention.6

The Singla et al. trial can be compared with a qualitative study by Sahoo et al. (2015). Sahoo and colleagues used a grounded theory approach to conduct and analyze interviews with 56 women in Odisha, India, about their sources of stress and sanitation practices.7 This study used inductive reasoning because the authors started with the data—their observations—and then looked for themes and patterns. They came to some conclusions about the nature of sanitation-related stress.8 One result of this work was a conceptual framework for thinking about sanitation-related psychosocial stress.9

The point to take away about inference is that, regardless of the approach to reasoning, the goal of scientific research is to use what we observe to make conclusions about what we do not or cannot observe directly. This is sometimes referred to as empiricism, and our systematic observations are empirical evidence. Empiricism is at the heart of scientific research.

2.1.2 RESEARCH AS A PUBLIC ACT

Scientific research uses public methods that can be examined and replicated. Replication is a core principle of scientific research. No one study rules the day. If the results of a study are robust, another research group should be able to follow the methods used and replicate the findings. When such findings are replicated, we all have more confidence in the results.

Replications are relatively rare, however. First, resources for replicating studies are scant, especially for big field experiments. Second, journal space is limited (especially if there is still a print version), and peer review requires a good deal of time and other resources. Journals want to use their space and resources to publish novel ideas. (Ironically, novel aspects are sometimes small effects that cannot be replicated.) Without the promise of a publication, researchers have little incentive to spend time and money trying to replicate published findings. Publications are a key criterion for tenure and promotion in academia, as well as grant acquisition, so many researchers don’t waste their efforts on studies that have no chance of being published.

What happens when researchers attempt to replicate study findings? The short answer is bitterness. Replicators grab more headlines when they “debunk” findings, and the original authors almost invariably call into question the quality of the replication. This process often leads to hard feelings on both sides. Just see #wormwars to learn what happened when a famous de-worming study was re-examined. Or Google social psychology and priming. Yikes!

A related issue is reproducibility: the ability to regenerate a study’s findings given the original data set and, sometimes, the original analysis code. Surprisingly, reproducibility is far from assured. The Quarterly Journal of Political Science found that slightly more than half of the published empirical papers it subjected to review had results that could not be reproduced using the authors’ own code.

On the positive side, authors increasingly share their data and analysis code. Although such sharing has been standard practice in economics for some time, the idea is pretty revolutionary in medicine and public health. We’ll explore why sharing methods and analysis code is so important and easier than ever to do.
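
What does shareable analysis code look like in practice? Here is a minimal sketch in Python. The file and column names are hypothetical, but the point is that every processing decision lives in a script that anyone with the raw data can inspect and rerun to reproduce a reported estimate exactly.

```python
# reproduce.py: a minimal sketch of a shareable analysis script.
# All file and column names below are hypothetical stand-ins.
import pandas as pd

RAW_DATA = "net_use_survey.csv"  # the raw data file shared alongside the paper

def main():
    df = pd.read_csv(RAW_DATA)
    # Every processing step is code, not an undocumented manual edit
    df = df[df["age_months"] < 60]           # restrict to children under 5
    estimate = df["slept_under_itn"].mean()  # share sleeping under an ITN
    print(f"ITN use among children under 5: {estimate:.1%} (n={len(df)})")

if __name__ == "__main__":
    main()
```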

2.1.3 LIVING WITH UNCERTAINTY

Every method has limitations, every measurement has error, and every model is wrong to some extent. In short, research is an imperfect process. Sometimes researchers make outright mistakes. These mistakes may or may not be detected and corrected in the peer review process, or during post-publication review if authors share their data and analysis code. Other findings are free of obvious mistakes but fail the replication test, and over time some results run counter to a growing body of literature that points in the other direction. In these ways, science is said to be self-correcting. This ideal can fall short in the face of challenges like publication bias, but the point here is that uncertainty is intrinsic to the scientific process.

A study of the estimation of maternal mortality provides a good example of the principle of uncertainty (see Figure 2.1). Hogan et al. (2010) published maternal mortality estimates for 181 countries. Some countries, like the United States, have vast amounts of data in vital registries that attempt to track all births and deaths. Countries with vital registries struggle with changing definitions over time, but the uncertainty intervals around their estimates are typically small because a lot of good data have been collected over a long period. In many low-income countries, the situation is very different. For Afghanistan, Figure 2.1 shows only four data points! No wonder the uncertainty interval is so wide.

The takeaway message is that there is uncertainty in everything. No single estimate can be considered “The Truth.” Instead, we must focus on the origin of estimates and recognize the limitations of what we know or what is being reported.

Figure 2.1: Estimates of maternal mortality 1990-2010. LEFT: United States. RIGHT: Afghanistan. Squint and you will see that the confidence interval for the US estimate is less than 10 out of 100,000, compared to more than 3,000 out of 100,000 in Afghanistan. Source: Hogan et al. (2010), http://bit.ly/1JBCelO

So how many women die during pregnancy or within 42 days of delivery? The same research group that published Hogan et al., the Institute for Health Metrics and Evaluation (IHME), estimated that there were 292,982 maternal deaths globally in 2013, with a 95% uncertainty interval ranging from 261,017 to 327,792; that’s a range of 66,775 (Kassebaum et al. 2014). Although this range might seem wide, remember that these are global estimates for a population of more than 7 billion people!10
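
One way to judge how much uncertainty that interval represents is to express its width relative to the point estimate. A quick check of the arithmetic in Python:

```python
# The arithmetic behind the 2013 estimate from Kassebaum et al. (2014)
point = 292_982                   # estimated maternal deaths, 2013
lower, upper = 261_017, 327_792   # 95% uncertainty interval

print(upper - lower)                     # 66775, the range quoted above
print(f"{(upper - lower) / point:.0%}")  # the interval spans ~23% of the estimate
```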

2.2 Stages in the Research Process

Just as every story has a beginning, middle, and end, every scientific article has an introduction, methods, results, and discussion (known as the IMRaD format). Follow these steps and you will have all of the pieces you need to write each section.

2.2.1 FIND A RESEARCH PROBLEM

Every study begins with a research problem. A research problem represents a gap in our knowledge. In academic research, this is another way of saying a gap in “the literature.”

Usually when people speak of “the literature,” they mean scholarly, peer-reviewed journal articles. There is also a body of work called “grey literature” that is more encompassing and harder to search systematically because it is typically disseminated through channels other than peer-reviewed journals. Examples include technical reports and white papers published on the web.

Research problems are typically broad. For instance, stakeholders might want to know how to increase the use of mosquito nets for children under 5 years of age or whether all children should receive deworming medication prophylactically.

These problems have something in common: they are solvable. In his introductory text on behavioral research methods, Leary (2012) identifies solvability as a key criterion for scientific research. Solvable does not mean easy; it just means that we can use systematic, public methods to gather and analyze data on the problem. In other words, it is possible to devise a method for studying how to get more parents to ensure that their kids sleep under a mosquito net every night, but we don’t yet have a scientific method for determining whether there is a mosquito afterlife where these pests get to buzz around for all of eternity.

2.2.2 ARTICULATE A RESEARCH QUESTION

To study a broad research problem, we must narrow our focus to a more specific research question. de Vaus (2001) says there are essentially two types of research questions:

  • Descriptive—What is going on?
  • Explanatory—Why is it going on?

For example, to study the uptake or use of bed nets, we might ask a descriptive research question like, “How many children sleep under bed nets?” But this question is too general. Children of what age? Living where? We also need to define what we mean by sleeping under a bed net. In this line of research, it is common to ask about the previous night, as in the night before the survey.11 A better way to phrase the question might be, “What percentage of children under 5 years of age in Kenya slept under an insecticide-treated net the previous night?” An explanatory research question on the same topic might be, “What are the predictors of insecticide-treated net use among children under 5 years of age in Kenya?”
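
Answering the descriptive question is, at its core, estimating a proportion. Here is a hedged sketch in Python; the toy data frame and column names are invented stand-ins for a household survey.

```python
# A sketch of estimating the descriptive quantity in the question above.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "age_months":      [12, 48, 30, 58, 24, 36],  # invented records
    "slept_under_itn": [1, 0, 1, 1, 0, 1],        # 1 = under an ITN last night
})

under5 = df[df["age_months"] < 60]
n, p = len(under5), under5["slept_under_itn"].mean()

# Wald 95% confidence interval for a proportion (reasonable for large n)
se = np.sqrt(p * (1 - p) / n)
z = stats.norm.ppf(0.975)
print(f"{p:.0%} slept under an ITN (95% CI {p - z*se:.0%} to {p + z*se:.0%})")
```

A real analysis of national survey data would also need to account for survey weights and the complex sampling design, topics we return to later.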

Remember this: Good research questions are FINER: feasible, interesting, novel, ethical, and relevant (Hulley, Newman, and Cummings 2007).


Feasible: Some research questions will take too long to answer, cost too much, require too many participants, require skills or equipment that you do not have, or be too complex to implement.
Interesting: Research requires funding and effort. If you do not ask a sufficiently interesting question, you will not get funding. If you manage to get funding but lose interest in the question, you might not finish. Global health research tends to have longer timelines than many other domains, so it’s important to work on things you will find interesting over the long term.
Novel: Replication is an important part of science, but the majority of funding goes to research that asks new and interesting questions.
Ethical: It would be very interesting to create a prison simulation to determine whether characteristics of the people or the situation cause abusive behavior, but this would not be ethical because it could lead to the harmful treatment of research subjects. Right?
Relevant: In addition to being interesting, a research question should be relevant: the answer should move the field forward in some way. Making this determination requires a thorough review of the literature and conversations with senior colleagues.


And here’s another: good clinical questions follow PICO: patient, population, or problem; intervention, prognostic factor, or exposure; comparison; and outcome.


P Patient, Population, or Problem
I Intervention, Prognostic Factor, or Exposure
C Comparison
O Outcome


We could use PICO to develop a research question about the efficacy of mosquito bed nets in preventing malaria. The problem is malaria infections. The population is children under 5 years of age. Because intervention studies tend to be smaller in reach than nationally representative surveys, we might add “living around the Lake Victoria basin in Kenya.” The intervention is the application of an insecticide-treated net. (Prognostic factor refers to covariates that could influence the prognosis of the patient; an exposure is something that we think might increase the risk of an outcome.) The comparison group might be children living in families who are provided an untreated bed net. One outcome measure could be the rate of parasitaemia after the intervention.

Importantly, we combine all of these elements into a single research question:

Among children under 5 years of age living around the Lake Victoria basin in Kenya, are insecticide-treated mosquito nets more effective than untreated nets at preventing parasitaemia?

2.2.3 IDENTIFY RELEVANT THEORY

Leary (2012) defines a theory as “a set of propositions that attempts to explain the relationships among a set of concepts.” In quantitative research, you could replace “propositions” with “hypotheses” and “concepts” with “variables.”

Some studies set out to develop a theory (inductive), while others test a theory (deductive). However, much of applied global health research is atheoretical. Many impact evaluations fit the label of “black box evaluations,” meaning that they do not focus on why programs do or do not have an impact. The evaluation is not guided by theory, and the hypotheses are as simple as “the program will have an impact on the outcome.” White (2009) outlines a strategy for changing this and moving to theory-based impact evaluations. Chapter 6 discusses developing a theory of change and logic models.

A good resource for understanding the (potential) role of theory in global health is the journal Social Science & Medicine. For instance, Green et al. (2015) frame their cluster randomized trial of an economic assistance program for women in terms of the literature on engaging men in efforts to reduce intimate-partner violence (IPV).

The rationale for addressing IPV through men’s discussion groups is based on the belief that socially constructed gender norms about inequality are a root cause of violence (Barker et al., 2010). Girls and boys learn gender roles and normative behavior, such as gender-based violence, by watching others and observing rewards and punishments; this is the basis of social learning theory (Bandura, 1973), one of several theoretical etiologies of IPV (for a review see Dixon and Graham-Kevan, 2011). Understanding and addressing the connection between violence and masculinity is also critical, gender theorists argue (Jewkes et al., 2014). ‘Gender-transformative’ programs are therefore designed to change gender norms and to promote gender equality among men and boys, most often by raising awareness and targeting attitudes throughout the social ecology.

Search for the words “theory” or “conceptual” in the Introduction or Discussion sections of articles to see how authors frame their work in theoretical and conceptual terms.

2.2.4 DEVELOP HYPOTHESES

The logical approach in quantitative research is often deductive. After a theory is formulated, the investigator develops research hypotheses that can be tested. A hypothesis is an a priori prediction about what will occur or about how constructs are related. When the data support the hypothesis, the underlying theory gains credibility, and a well-designed study carries more weight as other researchers consider the evidence in support of the theory.

For a hypothesis to be scientific, it should be falsifiable, or testable. Considering our silly example from earlier, the following statement is not a research hypothesis because it cannot be tested: “if a mosquito is killed, it goes to mosquito heaven.” Although this assertion may be true, we cannot test this hypothesis. Scientific validation requires the possibility of falsification, so hypotheses must be engineered to potentially fail.

Not all studies test hypotheses, however. Qualitative research is generally inductive and “hypothesis generating.” Developing hypotheses in the course of planning a study, however, can be daunting. Not all hypotheses are obvious at first.

Can you prove a theory?

Some people advocate against the free distribution of insecticide-treated nets (ITNs) because they believe that there is a “sunk cost” effect of spending money for a bed net, that is, people will use the net more to justify their purchase (Arkes and Blumer 1985). In this case, the theory is one of sunk costs directing behavior. Cohen and Dupas (2010) designed a study to test the falsifiable hypothesis that people who paid a non-zero price for an ITN would use the ITN more than those who received the ITN for free. Notably, they did not find support for this hypothesis.

So the theory of sunk costs is rejected, right? Not necessarily. Leary (2012) offers some helpful advice for thinking about proof and disproof: proof is logically impossible, and disproof, though logically possible, is uncommon in practice.

Proof is Not Possible

First, state the theory and hypothesis as an if-then statement. For example, “If the theory of sunk cost effects is true, then people who pay for an ITN will be more likely to use it than people who get an ITN for free.” If the theory is true, then the hypothesis will be true.

Next consider what happens when you flip this statement. If the hypothesis is true, does it mean that the theory is true?

For example, a child has a fever, and my theory is that the fever is a symptom of malaria. My hypothesis, therefore, is that the child must have been bitten by a mosquito.

I pull back the child’s sleeve and find a few mosquito bites. My hypothesis was supported by the data! So if my hypothesis is correct, my theory is proven, and the child must have malaria, right?

Well, no. Although the child was bitten by a mosquito, the child may have been in the eastern United States, where we don’t worry about malaria. So in this case, the hypothesis was true, but it didn’t prove the theory.

Disproof is Possible, but Uncommon

What if the hypothesis was not supported because the child was not bitten by mosquitoes? Could my “theory” be true—could the child’s fever still be malaria?

No. If the hypothesis is derived from the theory and the hypothesis is not supported, the logical inference is that the theory is wrong. In practice, however, we shy away from concluding that a theory is wrong on the basis of a single study; we consider the complexity of the broader picture.
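
The asymmetry between proof and disproof is just propositional logic. This small enumeration (Python, purely illustrative) shows why observing the hypothesis cannot prove the theory, while observing its failure can, at least in principle, disprove it:

```python
from itertools import product

# All (theory, hypothesis) truth assignments consistent with "if theory, then
# hypothesis": the implication fails only when theory is True and hypothesis False.
consistent = [(t, h) for t, h in product([True, False], repeat=2) if (not t) or h]

# Hypothesis observed true: the theory can still be either True or False.
print(sorted({t for t, h in consistent if h}))      # [False, True] -> no proof

# Hypothesis observed false: the theory must be False.
print(sorted({t for t, h in consistent if not h}))  # [False] -> disproof
```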

A study like that of Cohen and Dupas (2010) could fail to reject the null hypothesis that net use does not differ between free and paying clients, thus not supporting the hypothesis of different use rates, yet there are many practical reasons why no difference might be detected. For example, maybe bed net use was measured with a questionnaire affected by a response bias, such as the idea that “good parents use bed nets,” that skewed the answers. Or the measurement technique itself may have been flawed and hid a real difference in net use. The possibilities are endless, which is partly why journals are hesitant to publish null results.
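
To make the logic concrete, here is a sketch of the kind of two-proportion comparison involved, with invented counts (these are not Cohen and Dupas’s data). A large p-value fails to reject “no difference,” which is not the same as proving the rates are equal.

```python
# Two-proportion z-test on invented counts, for illustration only
from statsmodels.stats.proportion import proportions_ztest

using_net = [181, 175]  # households observed using the net: free vs. paid group
n_checked = [250, 240]  # households visited in each group

z_stat, p_value = proportions_ztest(using_net, n_checked)
print(f"z = {z_stat:.2f}, p = {p_value:.2f}")  # large p: fail to reject the null
```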

So what do we learn when a study does or does not support a theory?

In short, no one study is enough to lead people to discard a theory. But several null results might be. Conversely, no study ever proves a theory, but an accumulation of study results showing support for the theory-derived hypothesis builds confidence in the theory. This is particularly true when the studies are conducted by different researchers across different populations and results are triangulated using multiple methods.

How do researchers find out whether several experiments failed to support a certain theory if journals are reluctant to publish null results? And if negative evidence is missing, is positive evidence overrepresented in the literature? Yes. This is the problem of publication bias, or the file drawer problem, and there is not an easy answer. Efforts like AllTrials to register and report the results of all trials, regardless of outcome, represent a step in the right direction.

2.2.5 SELECT A RESEARCH DESIGN

As Glennerster and Takavarasha (2013) explain in their practical guide to running randomized evaluations, different research questions require different research designs. The most common designs can be lumped into the following four categories:

  • Descriptive (quantitative or qualitative)
  • Correlational/observational
  • Quasi-experimental
  • Experimental

Deciding on the best design to approach a particular research question can be overwhelming. Developing a flow chart can help.

Figure 2.2: Research design choose your own adventure. PDF download, https://drive.google.com/open?id=0Bxn_jkXZ1lxuWkhFcTUzdWVkZ0E

These designs are introduced in Chapter 5, and Parts V and VI are dedicated to explaining them in detail.

2.2.6 IDENTIFY KEY VARIABLES

A variable is a characteristic or aspect that can take on different values (Diez, Barr, and Çetinkaya-Rundel 2015). A variable can be numeric or categorical. Numeric variables can be further classified as continuous or discrete; you can add, subtract, and take the mean of both kinds. The distinction is that a discrete numeric variable can take only certain values, typically whole-number counts, whereas a continuous variable can take any value within a range. For instance, the number of mosquitoes captured in a light trap is discrete. Conversely, the blood meal volume observed in trapped mosquitoes is continuous because volume does not need to be a whole number.

A continuous variable can also be classified as ‘interval’ or ‘ratio’. The key difference is that ratio variables have a meaningful zero, so it’s OK to compute a ratio. For instance, if you trap 10 mosquitoes and I trap 20, I collected more mosquitoes by a ratio of 2 to 1. Interval variables like temperature don’t have this meaningful zero: 20 degrees Celsius is not “twice as hot” as 10 degrees. I’ve never come across another example of an interval variable besides temperature, so suggest an edit if you think of one.

The type of mosquito trapped is an example of a categorical variable (e.g., Anopheles, Aedes, Culex). Specifically, type is a nominal, or unordered, categorical variable. If you ask someone to rate how often they’ve been bitten by any mosquito in the past week, say on a 4-point scale from never to often, their response is an example of the other type of categorical variable: ordinal. An ordinal variable is what it sounds like: some categories are greater than others. A variable with two levels (yes and no) is often called binary.

Some disciplines, such as economics, also refer to categorical variables as “qualitative” variables, not to be confused with qualitative methods.
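
To see how these types show up in practice, here is a sketch of how they map onto pandas data types in Python (the data are invented for illustration):

```python
# Mapping the variable types above onto pandas dtypes (toy data)
import pandas as pd

traps = pd.DataFrame({
    "n_trapped": [10, 20, 15],         # discrete numeric: a count
    "blood_meal_ul": [2.7, 3.1, 1.9],  # continuous numeric, ratio scale
    "genus": pd.Categorical(["Anopheles", "Aedes", "Culex"]),  # nominal
})
# An ordinal variable: the categories carry an explicit order
traps["bite_freq"] = pd.Categorical(
    ["never", "often", "sometimes"],
    categories=["never", "rarely", "sometimes", "often"],
    ordered=True,
)
print(traps.dtypes)
print(traps["bite_freq"].min())  # "never": ordering makes min/max meaningful
```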

Variables are also classified as dependent and independent variables, depending on how they are used. In studying the impact of ITN use on parasitaemia, parasitaemia is the dependent variable and ITN use is the independent variable. Or maybe the predictors of ITN use are being studied. In this case, ITN use is the dependent variable and other factors, like education level, cultural background, or income level, are independent variables. Here are some other ways these terms are described in the literature.


Dependent Variable (DV)    Independent Variable (IV)
Response                   Explanatory
Outcome, Endpoint          Predictor, Risk Factor
Y                          X
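
As a concrete illustration of the DV/IV framing, here is a hedged sketch in Python with simulated data (all variable names are hypothetical): ITN use, the dependent variable, is modeled as a function of independent variables like education and income.

```python
# A minimal sketch of the DV ~ IV framing (simulated data, hypothetical names)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "education": rng.integers(0, 13, 500),    # years of schooling (IV)
    "log_income": rng.normal(2.0, 0.5, 500),  # log household income (IV)
})
# Simulate the outcome so the example runs on its own
xb = -2 + 0.15 * df["education"] + 0.8 * df["log_income"]
df["itn_use"] = rng.binomial(1, 1 / (1 + np.exp(-xb)))

fit = smf.logit("itn_use ~ education + log_income", data=df).fit(disp=0)
print(fit.params)  # the left side of the formula is the DV; the right, the IVs
```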

Qualitative researchers do not typically talk about the measurement of dependent and independent variables; rather, they talk about concepts and constructs. In qualitative research, the goal is not to test a hypothesis that some independent variable predicts some dependent variable. Instead, the goal is to explore a phenomenon and describe it in as much detail—thick description—as possible. Some researchers quantify qualitative data (e.g., as frequencies), however, so it’s not necessarily a number-free zone.

2.2.7 SELECT APPROPRIATE RESEARCH METHODS

If research designs are strategies for answering research questions with the best possible evidence, then research methods are the tactics for obtaining the evidence (Chapter 8). Often methods are divided into three broad categories:

  • Quantitative
  • Qualitative
  • Mixed

Investigators use quantitative methods to collect and analyze numerical data. These data may be binary or dichotomous (e.g., whether a patient is hospitalized or not), categorical (e.g., the wealth quintile to which a person belongs), or continuous (e.g., hematocrit level). A good example of a quantitative method is a survey in which people are asked to answer questions with fixed response options or to provide numeric values, such as their monthly income. Lab tests resulting in disease classifications (yes/no) or in a measurement, such as the number of white blood cells in a blood sample, are also examples of quantitative methods.

Qualitative methods focus on nonnumeric data. Participant observations, interviews, and focus-group discussions are common qualitative methods used in global health research. Qualitative methods are well suited for obtaining thick descriptions and for exploring new areas of research.

Qualitative methods are sometimes considered less rigorous because they are flexible and do not lead to the same types of hypothesis testing and results as quantitative methods. This is a misconception: rigor is a characteristic of how methods are applied, not of the methods themselves.

The choice of methods should be based on the research question. Impact evaluations often use quantitative methods, but there is no 1-to-1 match between research designs and research methods. Many studies incorporate both quantitative and qualitative methods as mixed methods. Sometimes the goal of mixing methods is to triangulate results with respect to the same research question. At other times, qualitative work is used to develop the tools and measures needed in a subsequent trial. When qualitative work follows a quantitative phase, the goal is often to explain or explore results in more depth, which was not possible with the quantitative data alone.

Increasingly, randomized controlled trials (RCTs) complement their quantitative methods with qualitative inquiry (O’Cathain et al. 2013). Alaii et al. (2003) provide a good example. These investigators incorporated qualitative interviews on nonadherence into a larger randomized trial of the efficacy of ITNs in reducing child morbidity and mortality in Kenya (Phillips-Howard et al. 2003). They wanted to better understand why people, particularly children under the age of 5, were not using their ITNs correctly. Alaii et al. found that more than a quarter of individuals were nonadherent, often due to excessive heat.

2.2.8 SPECIFY AN ANALYSIS PLAN

Although we are not specifically studying data analysis in this book, a well-considered analysis plan is an important component of both qualitative and quantitative research proposals. On a practical level, the plan must be carefully evaluated to ensure that the study design will produce the data needed to conduct the desired analysis.

More generally, however, prespecified and registered analysis plans promote transparency and confidence in study results. Researchers make tens or hundreds of small decisions during data processing and analysis that can alter the results. Most of these decisions are legitimate and defensible, but fraudulent skewing of research results is not unheard of. The biggest problem occurs when these decisions are made after seeing the data.

For instance, a researcher might run a test and find that a relationship is not statistically significant. The researcher then makes a small change in how a variable is defined and runs the test again. This time the relationship is significant, which greatly increases the likelihood that the study is published. It’s easy to see where the temptations arise.
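
A small simulation shows why decisions made after seeing the data are so dangerous. Here the “researcher” tries three definitions of the exposure and keeps whichever gives the smallest p-value (illustrative Python, not modeled on any particular study):

```python
# The exposure and outcome below are pure noise, yet re-defining the exposure
# until a test "works" inflates false positives well beyond the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hits, trials = 0, 2000
for _ in range(trials):
    x = rng.normal(size=200)  # a continuous exposure, unrelated to the outcome
    y = rng.normal(size=200)  # the outcome
    # Try three cutoffs for dichotomizing x; keep the best-looking p-value
    pvals = [stats.ttest_ind(y[x > c], y[x <= c]).pvalue for c in (-0.5, 0.0, 0.5)]
    if min(pvals) < 0.05:
        hits += 1
print(f"False positive rate: {hits / trials:.1%}")  # well above 5%
```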

These thorny issues are revisited in a later chapter. For now, we highlight the benefits of pre-registration, or publishing your analysis plan in advance of obtaining the data. Drug trials, for example, are regulated by the FDA and must be registered on clinicaltrials.gov. Other areas may not have a regulatory registration requirement, but some high-impact journals require registration as a condition of publishing the results.

2.2.9 OBTAIN ETHICAL APPROVAL

Research involving human subjects must be reviewed and approved by an institutional review board (IRB) prior to commencing. According to the U.S. Department of Health & Human Services, 45 CFR 46, “research” is defined as

a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.

As shown in Figure 2.3, several categories of human subjects research are exempt from review (https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/#46.101).

Figure 2.3: Is the human subjects research eligible for exemption? Source: http://bit.ly/2brlbKR.

Unless a study is exempt from review or meets the requirements for expedited review, the study proposal is expected to be reviewed by the full committee at a regularly scheduled IRB meeting.

For international research, sufficient time is needed for the study proposal to be reviewed by an in-country IRB in addition to the IRB of the investigator’s home institution. “Sufficient” could mean 6 weeks or 6 months, so proper planning is imperative. Global health research is collaborative by design and necessity, so the local expertise of partners is often sought with regard to IRB procedures.

2.2.10 RECRUIT A SAMPLE AND COLLECT DATA

Unless a study is a secondary analysis of existing data (e.g., a medical record review), a sampling strategy and procedures for data collection must be identified. These topics are covered in Chapters 9 and 8, respectively.

2.2.11 ANALYZE THE DATA AND WRITE UP THE RESULTS

Complex global health research typically involves specialists in data analysis, most commonly biostatisticians. These experts provide guidance early in the process of study design to ensure that the raw materials needed for the planned analysis are available.

Writing manuscripts in global health research is a collaborative process. In some disciplines, like economics, published studies tend to have very few authors. Some medical studies conducted at multiple sites can have dozens of coauthors. The International Committee of Medical Journal Editors suggests that authors be defined by four criteria:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Any contributors who do not meet these criteria should be acknowledged.

Author order is handled differently in different disciplines. In psychology, for instance, the author listed first is supposed to be the “lead” author who contributed the most. The last author could be the person who contributed the least, or it could be the senior-most member of the “lab” that produced the work. In economics, author order tends to be alphabetical, but not always. For senior researchers, author order may be an afterthought, but junior scholars need to establish a record of first author publications to signal their emergence as independent scientists.

2.2.12 MAKE YOUR RESEARCH HAVE AN IMPACT

It’s not uncommon in global health research for an intervention study to span 5 to 10 years from proposal to final publication in the scientific literature. And there is an important distinction between publication and impact. While specific metrics track the impact that journals, articles, and authors have on a field, impact also implies real-world change. Does the research lead to new policies or programs? Does it change the way people think? Is the work utilized by fellow researchers?

It is tempting to think that high-quality research with significant, salient results will spread through the discipline and be built upon in good faith. In reality, however, there is a gap between research and practice/policy. Many worthy ideas, even those backed by p-values less than 0.05, have been shelved prematurely. In response to this stagnation, some have advocated for a research utilization framework that promotes the advance planning needed to engage the potential end users of research before the study even collects any data.

Figure 2.4: Research utilization framework. Source: http://bit.ly/2j1dLfL

References

King, G., R. O. Keohane, and S. Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton University Press. http://amzn.to/1N7Hi0c.

Singla, D. R., E. E. Kumbakumba, and F. E. Aboud. 2015. “Effects of a Parenting Intervention to Address Maternal Psychological Wellbeing and Child Development and Growth in Rural Uganda: A Community-Based, Cluster-Randomized Trial.” The Lancet Global Health 3 (8):e458–e469. http://www.thelancet.com/journals/langlo/article/PIIS2214-109X(15)00099-6/abstract.

Sahoo, K. C., K. R. Hulland, B. A. Caruso, R. Swain, M. C. Freeman, P. Panigrahi, and R. Dreibelbis. 2015. “Sanitation-Related Psychosocial Stress: A Grounded Theory Study of Women Across the Life-Course in Odisha, India.” Social Science & Medicine 139:80–89. http://www.sciencedirect.com/science/article/pii/S0277953615300010.

Hogan, Margaret C, Kyle J Foreman, Mohsen Naghavi, Stephanie Y Ahn, Mengru Wang, Susanna M Makela, Alan D Lopez, Rafael Lozano, and Christopher JL Murray. 2010. “Maternal Mortality for 181 Countries, 1980–2008: A Systematic Analysis of Progress Towards Millennium Development Goal 5.” The Lancet 375 (9726):1609–23. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(10)60518-1/abstract.

Kassebaum, Nicholas J, Amelia Bertozzi-Villa, Megan S Coggeshall, Katya A Shackelford, Caitlyn Steiner, Kyle R Heuton, Diego Gonzalez-Medina, et al. 2014. “Global, Regional, and National Levels and Causes of Maternal Mortality During 1990–2013: A Systematic Analysis for the Global Burden of Disease Study 2013.” The Lancet 384 (9947):980–1004. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)60696-6/abstract.

Leary, M. 2012. Introduction to Behavioral Research Methods. 6th ed. Pearson. http://amzn.to/1In1BDE.

de Vaus, D. 2001. Research Design in Social Research. Sage. http://amzn.to/1MY21GT.

Hulley, S, TB Newman, and SR Cummings. 2007. “Getting Started: The Anatomy and Physiology of Clinical Research.” In Designing Clinical Research, edited by S Hulley, SR Cummings, WS Browner, DG Grady, and TB Newman, 3rd ed., 3–15. Lippincott Williams & Wilkins.

White, H. 2009. “Theory-Based Impact Evaluation: Principles and Practice.” Working Paper 3. 3ie. http://www.3ieimpact.org/media/filer_public/2012/05/07/Working_Paper_3.pdf.

Green, Eric P, Christopher Blattman, Julian Jamison, and Jeannie Annan. 2015. “Women’s Entrepreneurship and Intimate Partner Violence: A Cluster Randomized Trial of Microenterprise Assistance and Partner Participation in Post-Conflict Uganda.” Social Science & Medicine 133:177–88.

Arkes, Hal R, and Catherine Blumer. 1985. “The Psychology of Sunk Cost.” Organizational Behavior and Human Decision Processes 35 (1):124–40. http://www.sciencedirect.com/science/article/pii/0749597885900494.

Cohen, Jessica, and Pascaline Dupas. 2010. “Free Distribution or Cost-Sharing? Evidence from a Randomized Malaria Prevention Experiment.” Quarterly Journal of Economics 125 (1):1–45. http://www.povertyactionlab.org/publication/free-distribution-or-cost-sharing-evidence-malaria-prevention-experiment-kenya.

Glennerster, R., and K. Takavarasha. 2013. Running Randomized Evaluations: A Practical Guide. Princeton University Press. http://amzn.to/1eQqpvr.

Diez, D.M., C.D. Barr, and M. Çetinkaya-Rundel. 2015. OpenIntro Statistics. 3rd ed. OpenIntro.

O’Cathain, Alicia, KJ Thomas, SJ Drabble, Anne Rudolph, and Jenny Hewison. 2013. “What Can Qualitative Research Do for Randomised Controlled Trials? A Systematic Mapping Review.” BMJ Open 3 (6):e002889. http://bmjopen.bmj.com/content/3/6/e002889.full.

Alaii, Jane A, William A Hawley, Margarette S Kolczak, Feiko O Ter Kuile, John E Gimnig, John M Vulule, Amos Odhacha, Aggrey J Oloo, Bernard L Nahlen, and Penelope A Phillips-Howard. 2003. “Factors Affecting Use of Permethrin-Treated Bed Nets During a Randomized Controlled Trial in Western Kenya.” The American Journal of Tropical Medicine and Hygiene 68 (suppl 4):137–41. http://www.ajtmh.org/content/68/4_suppl/137.long.

Phillips-Howard, Penelope A, Bernard L Nahlen, Margarette S Kolczak, Allen W Hightower, Feiko O Ter Kuile, Jane A Alaii, John E Gimnig, et al. 2003. “Efficacy of Permethrin-Treated Bed Nets in the Prevention of Mortality in Young Children in an Area of High Perennial Malaria Transmission in Western Kenya.” The American Journal of Tropical Medicine and Hygiene 68 (4 suppl):23–29. http://www.ajtmh.org/content/68/4_suppl/23.short.


  5. However, not all studies fit into one box. Often in global health research, studies use mixed-methods approaches and both types of reasoning.

  6. This study used causal inference, a major focus of this book. Does X impact Y? Does this program cause a specific outcome? As we discuss later, this study also provides an example of statistical inference: the authors recruited one sample of adult-child dyads, collected data, and used inferential statistics to generalize from the sample to the population.

  7. Grounded theory reappears in a later chapter in specific examples of approaches to qualitative inquiry. This approach involves iterative data collection and analysis. Most importantly, data have primary importance in grounded theory; theories and broader implications emerge only through the iterative process of data collection and analysis.

  8. The study by Sahoo et al. (2015) also provides an example of descriptive inference. In contrast with causal inference, descriptive inference does not seek to establish that X caused Y. It goes beyond basic description, or the collection of facts, to show how the individual experiences and opinions of these women tell us something more universal about the nature of sanitation-related stress and its possible connections to factors like a woman’s life stage and her behavior.

  9. Sahoo et al. observed that “sanitation” encompassed much more than defecation and urination, including washing, bathing, and menstrual management. These sanitation activities brought numerous challenges, classifiable as environmental, social, or sexual, that could be understood in the context of a woman’s life stage, living environment, and access to sanitation facilities.

  10. The World Health Organization (WHO) and partners published their own estimates for 2013. They estimated that there were 289,000 maternal deaths in 2013, which is pretty close to the IHME estimate of almost 293,000. As Kassebaum et al. (2014) explain (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)62421-1/abstract), however, the consistency of these estimates masks substantial disagreements, including estimates that diverged by at least 20% in 120 countries in 2013. Different perspectives on progress toward Millennium Development Goal 5 must also be considered.

  11. In the chapter on measurement, we consider the challenges to getting valid information, such as recall difficulties.