
What is causal research design?

Last updated: 14 May 2023

Causal research design investigates cause-and-effect relationships between two or more variables. Examining these relationships gives researchers valuable insights into the mechanisms that drive the phenomena they are investigating.

Organizations primarily use causal research design to identify, determine, and explore the impact of changes within an organization and the market. You can use a causal research design to evaluate the effects of certain changes on existing procedures, norms, and more.

This article explores causal research design, including its elements, advantages, and disadvantages.


Components of causal research

You can demonstrate the existence of cause-and-effect relationships between two factors or variables using specific causal information, allowing you to produce more meaningful results and research implications.

These are the key inputs for causal research:

The timeline of events

The cause must occur before the effect. You should review the timeline of two or more separate events to distinguish the independent variable (cause) from the dependent variable (effect) before developing a hypothesis.

If the cause occurs before the effect, you can link the two and develop a hypothesis.

For instance, an organization may notice a sales increase. Determining the cause would help them reproduce these results. 

Upon review, the business realizes that the sales boost occurred right after an advertising campaign. The business can leverage this time-based data to determine whether the advertising campaign is the independent variable that caused a change in sales. 
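To make this timeline check concrete, here is a minimal sketch in Python. It assumes simulated daily revenue and an invented campaign launch date; none of the figures or column names come from a real dataset.

```python
import numpy as np
import pandas as pd

# Illustrative only: simulate 60 days of revenue with a bump after a hypothetical campaign launch.
rng = np.random.default_rng(0)
dates = pd.date_range("2024-01-01", periods=60, freq="D")
campaign_start = pd.Timestamp("2024-02-01")
revenue = rng.normal(1_000, 50, size=60) + np.where(dates >= campaign_start, 120, 0)
sales = pd.DataFrame({"date": dates, "revenue": revenue})

# Temporal precedence check: compare average revenue before vs. after the launch date.
before = sales.loc[sales["date"] < campaign_start, "revenue"].mean()
after = sales.loc[sales["date"] >= campaign_start, "revenue"].mean()
print(f"Mean daily revenue before launch: {before:.1f}")
print(f"Mean daily revenue after launch:  {after:.1f}")
```

A before-and-after comparison like this only shows that the effect followed the suspected cause in time; it does not rule out other explanations for the change.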

Evaluation of confounding variables

In most cases, you need to pinpoint the variables involved in a cause-and-effect relationship when using a causal research design. Doing so leads to more accurate conclusions.

The covariation between cause and effect must be genuine, and no third factor should account for the relationship between them.

Observing changes

The link between variations in the two variables must be clear: a measurable change in the effect should occur only when there is a corresponding change in the cause.

You can test whether the independent variable changes the dependent variable to evaluate the validity of a cause-and-effect relationship. A consistent pattern of change between the two variables must occur to back up your hypothesis of a genuine causal effect.
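As a minimal sketch of this covariation check, the snippet below correlates invented weekly ad-spend figures (the suspected cause) with weekly sales (the effect). A strong correlation supports, but does not by itself prove, a causal link.

```python
import numpy as np

# Illustrative only: hypothetical weekly ad spend (suspected cause) and weekly sales (effect).
rng = np.random.default_rng(1)
ad_spend = rng.uniform(1_000, 5_000, size=26)
sales = 50_000 + 8 * ad_spend + rng.normal(0, 4_000, size=26)

# Covariation check: does the effect vary consistently with the cause?
r = np.corrcoef(ad_spend, sales)[0, 1]
print(f"Correlation between ad spend and sales: r = {r:.2f}")
```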

Why is causal research useful?

Causal research allows market researchers to predict hypothetical occurrences and outcomes while enhancing existing strategies. Organizations can use this concept to develop beneficial plans. 

Causal research is also useful as market researchers can immediately deduce the effect of the variables on each other under real-world conditions. 

Once researchers complete their first experiment, they can use their findings. Applying them to alternative scenarios or repeating the experiment to confirm its validity can produce further insights. 

Businesses widely use causal research to identify and comprehend the effect of strategic changes on their profits. 

How does causal research compare and differ from other research types?

Other research types that identify relationships between variables include exploratory and descriptive research.

Here’s how they compare and differ from causal research designs:

Exploratory research

An exploratory research design evaluates situations where a problem or opportunity's boundaries are unclear. You can use this research type to test various hypotheses and assumptions to establish facts and understand a situation more clearly.

You can also use exploratory research design to navigate a topic and discover the relevant variables. This research type allows flexibility and adaptability as the experiment progresses, particularly since no area is off-limits.

It’s worth noting that exploratory research is unstructured and typically involves collecting qualitative data. This provides the freedom to tweak and amend the research approach according to your ongoing thoughts and assessments.

Unfortunately, this exposes the findings to the risk of bias and may limit the extent to which a researcher can explore a topic. 

This table compares the key characteristics of causal and exploratory research:

| | Causal research | Exploratory research |
|---|---|---|
| Main research statement | Research hypotheses | Research question |
| Amount of uncertainty characterizing decision situation | Clearly defined | Highly ambiguous |
| Research approach | Highly structured | Unstructured |
| When you conduct it | Later stages of decision-making | Early stages of decision-making |

Descriptive research

This research design involves capturing and describing the traits of a population, situation, or phenomenon. Descriptive research focuses more on the "what" of the research subject and less on the "why."

Since descriptive research typically happens in a real-world setting, variables can cross-contaminate others. This increases the challenge of isolating cause-and-effect relationships. 

You may require further research if you need to establish causal links.

This table compares the key characteristics of causal and descriptive research.  

| | Causal research | Descriptive research |
|---|---|---|
| Main research statement | Research hypotheses | Research question |
| Amount of uncertainty characterizing decision situation | Clearly defined | Partially defined |
| Research approach | Highly structured | Structured |
| When you conduct it | Later stages of decision-making | Later stages of decision-making |

Causal research examines a research question’s variables and how they interact. It’s easier to pinpoint cause and effect since the experiment often happens in a controlled setting. 

Researchers can conduct causal research at any stage, but they typically use it once they know more about the topic.

Causal research also tends to be more structured, and you can combine it with exploratory and descriptive research to help you attain your research goals.

How can you use causal research effectively?

Here are common ways that market researchers leverage causal research effectively:

Market and advertising research

Do you want to know if your new marketing campaign is affecting your organization positively? You can use causal research to determine the variables causing negative or positive impacts on your campaign. 

Improving customer experiences and loyalty levels

Consumers generally enjoy purchasing from brands aligned with their values. They’re more likely to purchase from such brands and positively represent them to others. 

You can use causal research to identify the variables contributing to increased or reduced customer acquisition and retention rates. 

Could the cause of increased customer retention rates be streamlined checkout? 

Perhaps you introduced a new solution geared towards directly solving their immediate problem. 

Whatever the reason, causal research can help you identify the cause-and-effect relationship. You can use this to enhance your customer experiences and loyalty levels.

Improving problematic employee turnover rates

Is your organization experiencing skyrocketing attrition rates? 

You can leverage the features and benefits of causal research to narrow down the possible explanations or variables with significant effects on employees quitting. 

This way, you can prioritize interventions, focusing on the highest priority causal influences, and begin to tackle high employee turnover rates. 

Advantages of causal research

The main benefits of causal research include the following:

Effectively test new ideas

Because causal research can pinpoint the precise outcome produced by combinations of different variables, researchers can test new ideas in the same manner to develop viable proofs of concept.

Achieve more objective results

Market researchers typically use random sampling techniques to choose experiment participants or subjects in causal research. This reduces the possibility of external, sample-related, or demographic influences, generating more objective results.

Improved business processes

Causal research helps businesses understand which variables positively impact target variables, such as customer loyalty or sales revenues. This helps them improve their processes, ROI, and customer and employee experiences.

Guarantee reliable and accurate results

Upon identifying the correct variables, researchers can replicate cause and effect effortlessly. This creates reliable data and results to draw insights from. 

Internal organization improvements

Businesses that conduct causal research can make informed decisions about improving their internal operations and enhancing employee experiences. 

Disadvantages of causal research

Like any other research method, causal research has its drawbacks, which include:

Extra research to ensure validity

Researchers can't simply rely on the outcomes of causal research since it isn't always accurate. There may be a need to conduct other research types alongside it to ensure accurate output.

Coincidence

Coincidence tends to be the most significant error in causal research. Researchers often misinterpret a coincidental link between a cause and effect as a direct causal link. 

Administration challenges

Causal research can be challenging to administer since it's often impossible to control for the impact of every extraneous variable.

Giving away your competitive advantage

If you intend to publish your research, it exposes your information to the competition. 

Competitors may use your research outcomes to identify your plans and strategies to enter the market before you. 

Causal research examples

Causal research is used across multiple fields, where it serves different purposes, such as the following:

Customer loyalty research

Organizations and employees can use causal research to determine the best customer attraction and retention approaches. 

They monitor interactions between customers and employees to identify cause-and-effect patterns. That could be a product demonstration technique resulting in higher or lower sales from the same customers. 

Example: Business X introduces a new individual marketing strategy for a small customer group and notices a measurable increase in monthly subscriptions. 

Upon getting identical results from different groups, the business concludes that the individual marketing strategy resulted in the intended causal relationship.

Advertising research

Businesses can also use causal research to implement and assess advertising campaigns. 

Example: Business X notices a 7% increase in sales revenue a few months after introducing a new advertisement in a certain region. The business can run the same ad in other randomly selected regions to compare sales data over the same period.

This will help the company determine whether the ad caused the sales increase. If sales increase in these randomly selected regions, the business could conclude that advertising campaigns and sales share a cause-and-effect relationship. 
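One simple way to analyze such a region-level comparison is a two-sample t-test on sales figures from ad regions versus comparison regions. The sketch below uses invented numbers purely for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative only: weekly sales indices for regions running the new ad vs. comparison regions.
rng = np.random.default_rng(2)
ad_regions = rng.normal(107, 10, size=20)
control_regions = rng.normal(100, 10, size=20)

t_stat, p_value = stats.ttest_ind(ad_regions, control_regions)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value is consistent with the ad driving higher sales,
# provided the regions are otherwise comparable.
```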

Educational research

Academics, teachers, and learners can use causal research to explore the impact of politics on learners and pinpoint learner behavior trends. 

Example: College X notices that the dropout rate among second-year IT students is 8% higher than in any other year of the program.

The college administration can interview a random group of IT students to identify factors leading to this situation, including personal factors and influences. 

With the help of in-depth statistical analysis, the institution's researchers can uncover the main factors causing dropout. They can create immediate solutions to address the problem.

Is a causal variable dependent or independent?

When two variables have a cause-and-effect relationship, the cause is often called the independent variable. As such, the effect variable is dependent, i.e., it depends on the independent causal variable. An independent variable is only causal under experimental conditions. 

What are the three criteria for causality?

The three conditions for causality are:

Temporality/temporal precedence: The cause must precede the effect.

Rationality: One event predicts the other with an explanation, and the effect must vary in proportion to changes in the cause.

Control for extraneous variables: The observed covariation must not result from other (extraneous) variables.

Is causal research experimental?

Causal research is mostly explanatory. Causal studies focus on analyzing a situation to explore and explain the patterns of relationships between variables. 

Further, experiments are the primary data collection methods in studies with causal research design. However, as a research design, causal research isn't entirely experimental.

What is the difference between experimental and causal research design?

One of the main differences between causal and experimental research is that in causal research, the research subjects are already in groups since the event has already happened. 

On the other hand, researchers randomly choose subjects in experimental research before manipulating the variables.



Causal Research: Definition, Design, Tips, Examples

Appinio Research · 21.02.2024 · 34min read


Ever wondered why certain events lead to specific outcomes? Understanding causality—the relationship between cause and effect—is crucial for unraveling the mysteries of the world around us. In this guide on causal research, we delve into the methods, techniques, and principles behind identifying and establishing cause-and-effect relationships between variables. Whether you're a seasoned researcher or new to the field, this guide will equip you with the knowledge and tools to conduct rigorous causal research and draw meaningful conclusions that can inform decision-making and drive positive change.

What is Causal Research?

Causal research is a methodological approach used in scientific inquiry to investigate cause-and-effect relationships between variables. Unlike correlational or descriptive research, which merely examine associations or describe phenomena, causal research aims to determine whether changes in one variable cause changes in another variable.

Importance of Causal Research

Understanding the importance of causal research is crucial for appreciating its role in advancing knowledge and informing decision-making across various fields. Here are key reasons why causal research is significant:

  • Establishing Causality:  Causal research enables researchers to determine whether changes in one variable directly cause changes in another variable. This helps identify effective interventions, predict outcomes, and inform evidence-based practices.
  • Guiding Policy and Practice:  By identifying causal relationships, causal research provides empirical evidence to support policy decisions, program interventions, and business strategies. Decision-makers can use causal findings to allocate resources effectively and address societal challenges.
  • Informing Predictive Modeling :  Causal research contributes to the development of predictive models by elucidating causal mechanisms underlying observed phenomena. Predictive models based on causal relationships can accurately forecast future outcomes and trends.
  • Advancing Scientific Knowledge:  Causal research contributes to the cumulative body of scientific knowledge by testing hypotheses, refining theories, and uncovering underlying mechanisms of phenomena. It fosters a deeper understanding of complex systems and phenomena.
  • Mitigating Confounding Factors:  Understanding causal relationships allows researchers to control for confounding variables and reduce bias in their studies. By isolating the effects of specific variables, researchers can draw more valid and reliable conclusions.

Causal Research Distinction from Other Research

Understanding the distinctions between causal research and other types of research methodologies is essential for researchers to choose the most appropriate approach for their study objectives. Let's explore the differences and similarities between causal research and descriptive, exploratory, and correlational research methodologies.

Descriptive vs. Causal Research

Descriptive research  focuses on describing characteristics, behaviors, or phenomena without manipulating variables or establishing causal relationships. It provides a snapshot of the current state of affairs but does not attempt to explain why certain phenomena occur.

Causal research , on the other hand, seeks to identify cause-and-effect relationships between variables by systematically manipulating independent variables and observing their effects on dependent variables. Unlike descriptive research, causal research aims to determine whether changes in one variable directly cause changes in another variable.

Similarities:

  • Both descriptive and causal research involve empirical observation and data collection.
  • Both types of research contribute to the scientific understanding of phenomena, albeit through different approaches.

Differences:

  • Descriptive research focuses on describing phenomena, while causal research aims to explain why phenomena occur by identifying causal relationships.
  • Descriptive research typically uses observational methods, while causal research often involves experimental designs or causal inference techniques to establish causality.

Exploratory vs. Causal Research

Exploratory research  aims to explore new topics, generate hypotheses, or gain initial insights into phenomena. It is often conducted when little is known about a subject and seeks to generate ideas for further investigation.

Causal research , on the other hand, is concerned with testing hypotheses and establishing cause-and-effect relationships between variables. It builds on existing knowledge and seeks to confirm or refute causal hypotheses through systematic investigation.

Similarities:

  • Both exploratory and causal research contribute to the generation of knowledge and theory development.
  • Both types of research involve systematic inquiry and data analysis to answer research questions.

Differences:

  • Exploratory research focuses on generating hypotheses and exploring new areas of inquiry, while causal research aims to test hypotheses and establish causal relationships.
  • Exploratory research is more flexible and open-ended, while causal research follows a more structured and hypothesis-driven approach.

Correlational vs. Causal Research

Correlational research  examines the relationship between variables without implying causation. It identifies patterns of association or co-occurrence between variables but does not establish the direction or causality of the relationship.

Causal research , on the other hand, seeks to establish cause-and-effect relationships between variables by systematically manipulating independent variables and observing their effects on dependent variables. It goes beyond mere association to determine whether changes in one variable directly cause changes in another variable.

Similarities:

  • Both correlational and causal research involve analyzing relationships between variables.
  • Both types of research contribute to understanding the nature of associations between variables.

Differences:

  • Correlational research focuses on identifying patterns of association, while causal research aims to establish causal relationships.
  • Correlational research does not manipulate variables, while causal research involves systematically manipulating independent variables to observe their effects on dependent variables.

How to Formulate Causal Research Hypotheses?

Crafting research questions and hypotheses is the foundational step in any research endeavor. Defining your variables clearly and articulating the causal relationship you aim to investigate is essential. Let's explore this process further.

1. Identify Variables

Identifying variables involves recognizing the key factors you will manipulate or measure in your study. These variables can be classified into independent, dependent, and confounding variables.

  • Independent Variable (IV):  This is the variable you manipulate or control in your study. It is the presumed cause that you want to test.
  • Dependent Variable (DV):  The dependent variable is the outcome or response you measure. It is affected by changes in the independent variable.
  • Confounding Variables:  These are extraneous factors that may influence the relationship between the independent and dependent variables, leading to spurious correlations or erroneous causal inferences. Identifying and controlling for confounding variables is crucial for establishing valid causal relationships.

2. Establish Causality

Establishing causality requires meeting specific criteria outlined by scientific methodology. While correlation between variables may suggest a relationship, it does not imply causation. To establish causality, researchers must demonstrate the following:

  • Temporal Precedence:  The cause must precede the effect in time. In other words, changes in the independent variable must occur before changes in the dependent variable.
  • Covariation of Cause and Effect:  Changes in the independent variable should be accompanied by corresponding changes in the dependent variable. This demonstrates a consistent pattern of association between the two variables.
  • Elimination of Alternative Explanations:  Researchers must rule out other possible explanations for the observed relationship between variables. This involves controlling for confounding variables and conducting rigorous experimental designs to isolate the effects of the independent variable.

3. Write Clear and Testable Hypotheses

Hypotheses serve as tentative explanations for the relationship between variables and provide a framework for empirical testing. A well-formulated hypothesis should be:

  • Specific:  Clearly state the expected relationship between the independent and dependent variables.
  • Testable:  The hypothesis should be capable of being empirically tested through observation or experimentation.
  • Falsifiable:  There should be a possibility of proving the hypothesis false through empirical evidence.

For example, a hypothesis in a study examining the effect of exercise on weight loss could be: "Increasing levels of physical activity (IV) will lead to greater weight loss (DV) among participants (compared to those with lower levels of physical activity)."

By formulating clear hypotheses and operationalizing variables, researchers can systematically investigate causal relationships and contribute to the advancement of scientific knowledge.

Causal Research Design

Designing your research study involves making critical decisions about how you will collect and analyze data to investigate causal relationships.

Experimental vs. Observational Designs

One of the first decisions you'll make when designing a study is whether to employ an experimental or observational design. Each approach has its strengths and limitations, and the choice depends on factors such as the research question, feasibility, and ethical considerations.

  • Experimental Design: In experimental designs, researchers manipulate the independent variable and observe its effects on the dependent variable while controlling for confounding variables. Random assignment to experimental conditions allows for causal inferences to be drawn. Example: A study testing the effectiveness of a new teaching method on student performance by randomly assigning students to either the experimental group (receiving the new teaching method) or the control group (receiving the traditional method).
  • Observational Design: Observational designs involve observing and measuring variables without intervention. Researchers may still examine relationships between variables but cannot establish causality as definitively as in experimental designs. Example: A study observing the association between socioeconomic status and health outcomes by collecting data on income, education level, and health indicators from a sample of participants.

Control and Randomization

Control and randomization are crucial aspects of experimental design that help ensure the validity of causal inferences.

  • Control: Controlling for extraneous variables involves holding constant factors that could influence the dependent variable, except for the independent variable under investigation. This helps isolate the effects of the independent variable. Example: In a medication trial, controlling for factors such as age, gender, and pre-existing health conditions ensures that any observed differences in outcomes can be attributed to the medication rather than other variables.
  • Randomization: Random assignment of participants to experimental conditions helps distribute potential confounders evenly across groups, reducing the likelihood of systematic biases and allowing for causal conclusions. Example: Randomly assigning patients to treatment and control groups in a clinical trial ensures that both groups are comparable in terms of baseline characteristics, minimizing the influence of extraneous variables on treatment outcomes.
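As a minimal sketch of random assignment, the snippet below shuffles 200 hypothetical participants into equally sized treatment and control groups. The participant IDs and group labels are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Illustrative only: randomly assign 200 hypothetical participants to two conditions.
rng = np.random.default_rng(42)
participants = pd.DataFrame({"participant_id": range(1, 201)})
participants["group"] = rng.permutation(np.repeat(["treatment", "control"], 100))

print(participants["group"].value_counts())
```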

Internal and External Validity

Two key concepts in research design are internal validity and external validity, which relate to the credibility and generalizability of study findings, respectively.

  • Internal Validity: Internal validity refers to the extent to which the observed effects can be attributed to the manipulation of the independent variable rather than confounding factors. Experimental designs typically have higher internal validity due to their control over extraneous variables. Example: A study examining the impact of a training program on employee productivity would have high internal validity if it could confidently attribute changes in productivity to the training intervention.
  • External Validity: External validity concerns the extent to which study findings can be generalized to other populations, settings, or contexts. While experimental designs prioritize internal validity, they may sacrifice external validity by using highly controlled conditions that do not reflect real-world scenarios. Example: Findings from a laboratory study on memory retention may have limited external validity if the experimental tasks and conditions differ significantly from real-life learning environments.

Types of Experimental Designs

Several types of experimental designs are commonly used in causal research, each with its own strengths and applications.

  • Randomized Control Trials (RCTs): RCTs are considered the gold standard for assessing causality in research. Participants are randomly assigned to experimental and control groups, allowing researchers to make causal inferences. Example: A pharmaceutical company testing a new drug's efficacy would use an RCT to compare outcomes between participants receiving the drug and those receiving a placebo.
  • Quasi-Experimental Designs: Quasi-experimental designs lack random assignment but still attempt to establish causality by controlling for confounding variables through design or statistical analysis . Example: A study evaluating the effectiveness of a smoking cessation program might compare outcomes between participants who voluntarily enroll in the program and a matched control group of non-enrollees.

By carefully selecting an appropriate research design and addressing considerations such as control, randomization, and validity, researchers can conduct studies that yield credible evidence of causal relationships and contribute valuable insights to their field of inquiry.

Causal Research Data Collection

Collecting data is a critical step in any research study, and the quality of the data directly impacts the validity and reliability of your findings.

Choosing Measurement Instruments

Selecting appropriate measurement instruments is essential for accurately capturing the variables of interest in your study. The choice of measurement instrument depends on factors such as the nature of the variables, the target population, and the research objectives.

  • Surveys:  Surveys are commonly used to collect self-reported data on attitudes, opinions, behaviors, and demographics. They can be administered through various methods, including paper-and-pencil surveys, online surveys, and telephone interviews.
  • Observations:  Observational methods involve systematically recording behaviors, events, or phenomena as they occur in natural settings. Observations can be structured (following a predetermined checklist) or unstructured (allowing for flexible data collection).
  • Psychological Tests:  Psychological tests are standardized instruments designed to measure specific psychological constructs, such as intelligence, personality traits, or emotional functioning. These tests often have established reliability and validity.
  • Physiological Measures:  Physiological measures, such as heart rate, blood pressure, or brain activity, provide objective data on bodily processes. They are commonly used in health-related research but require specialized equipment and expertise.
  • Existing Databases:  Researchers may also utilize existing datasets, such as government surveys, public health records, or organizational databases, to answer research questions. Secondary data analysis can be cost-effective and time-saving but may be limited by the availability and quality of data.

Ensuring accurate data collection is the cornerstone of any successful research endeavor. With the right tools in place, you can unlock invaluable insights to drive your causal research forward. From surveys to tests, each instrument offers a unique lens through which to explore your variables of interest.

At Appinio , we understand the importance of robust data collection methods in informing impactful decisions. Let us empower your research journey with our intuitive platform, where you can effortlessly gather real-time consumer insights to fuel your next breakthrough.   Ready to take your research to the next level? Book a demo today and see how Appinio can revolutionize your approach to data collection!


Sampling Techniques

Sampling involves selecting a subset of individuals or units from a larger population to participate in the study. The goal of sampling is to obtain a representative sample that accurately reflects the characteristics of the population of interest.

  • Probability Sampling:  Probability sampling methods involve randomly selecting participants from the population, ensuring that each member of the population has an equal chance of being included in the sample. Common probability sampling techniques include simple random sampling, stratified sampling, and cluster sampling.
  • Non-Probability Sampling:  Non-probability sampling methods do not involve random selection and may introduce biases into the sample. Examples of non-probability sampling techniques include convenience sampling, purposive sampling, and snowball sampling.

The choice of sampling technique depends on factors such as the research objectives, population characteristics, resources available, and practical constraints. Researchers should strive to minimize sampling bias and maximize the representativeness of the sample to enhance the generalizability of their findings.
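To make the distinction between probability and non-probability approaches concrete, the sketch below draws both a simple random sample and a stratified sample (by region) from a hypothetical customer frame. The frame, column names, and sample sizes are invented for illustration.

```python
import pandas as pd

# Illustrative only: a hypothetical sampling frame of 1,000 customers across four regions.
frame = pd.DataFrame({
    "customer_id": range(1, 1001),
    "region": ["north", "south", "east", "west"] * 250,
})

# Simple random sampling: every customer has the same chance of selection.
srs = frame.sample(n=100, random_state=7)

# Stratified sampling: draw 25 customers from each region.
stratified = frame.groupby("region").sample(n=25, random_state=7)

print(len(srs), stratified["region"].value_counts().to_dict())
```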

Ethical Considerations

Ethical considerations are paramount in research and involve ensuring the rights, dignity, and well-being of research participants. Researchers must adhere to ethical principles and guidelines established by professional associations and institutional review boards (IRBs).

  • Informed Consent:  Participants should be fully informed about the nature and purpose of the study, potential risks and benefits, their rights as participants, and any confidentiality measures in place. Informed consent should be obtained voluntarily and without coercion.
  • Privacy and Confidentiality:  Researchers should take steps to protect the privacy and confidentiality of participants' personal information. This may involve anonymizing data, securing data storage, and limiting access to identifiable information.
  • Minimizing Harm:  Researchers should mitigate any potential physical, psychological, or social harm to participants. This may involve conducting risk assessments, providing appropriate support services, and debriefing participants after the study.
  • Respect for Participants:  Researchers should respect participants' autonomy, diversity, and cultural values. They should seek to foster a trusting and respectful relationship with participants throughout the research process.
  • Publication and Dissemination:  Researchers have a responsibility to accurately report their findings and acknowledge contributions from participants and collaborators. They should adhere to principles of academic integrity and transparency in disseminating research results.

By addressing ethical considerations in research design and conduct, researchers can uphold the integrity of their work, maintain trust with participants and the broader community, and contribute to the responsible advancement of knowledge in their field.

Causal Research Data Analysis

Once data is collected, it must be analyzed to draw meaningful conclusions and assess causal relationships.

Causal Inference Methods

Causal inference methods are statistical techniques used to identify and quantify causal relationships between variables in observational data. While experimental designs provide the most robust evidence for causality, observational studies often require more sophisticated methods to account for confounding factors.

  • Difference-in-Differences (DiD):  DiD compares changes in outcomes before and after an intervention between a treatment group and a control group, controlling for pre-existing trends. It estimates the average treatment effect by differencing the changes in outcomes between the two groups over time (see the sketch after this list).
  • Instrumental Variables (IV):  IV analysis relies on instrumental variables—variables that affect the treatment variable but not the outcome—to estimate causal effects in the presence of endogeneity. IVs should be correlated with the treatment but uncorrelated with the error term in the outcome equation.
  • Regression Discontinuity (RD):  RD designs exploit naturally occurring thresholds or cutoff points to estimate causal effects near the threshold. Participants just above and below the threshold are compared, assuming that they are similar except for their proximity to the threshold.
  • Propensity Score Matching (PSM):  PSM matches individuals or units based on their propensity scores—the likelihood of receiving the treatment—creating comparable groups with similar observed characteristics. Matching reduces selection bias and allows for causal inference in observational studies.
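As a rough sketch of the difference-in-differences idea, the snippet below simulates a two-group, two-period dataset and reads the causal effect off the interaction term of an OLS regression fitted with statsmodels. The data, true effect size, and variable names are all assumptions for illustration, not a recommended analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative only: simulate a two-group, two-period difference-in-differences setup.
rng = np.random.default_rng(3)
n = 2_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = treatment group
    "post": rng.integers(0, 2, n),      # 1 = period after the intervention
})
true_effect = 2.0
df["outcome"] = (
    5.0
    + 1.5 * df["treated"]                        # stable group difference
    + 1.0 * df["post"]                           # common time trend
    + true_effect * df["treated"] * df["post"]   # treatment effect in the post period
    + rng.normal(0, 1, n)
)

# The coefficient on the interaction term is the difference-in-differences estimate.
model = smf.ols("outcome ~ treated + post + treated:post", data=df).fit()
print(f"Estimated effect: {model.params['treated:post']:.2f} (true effect = {true_effect})")
```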

Assessing Causality Strength

Assessing the strength of causality involves determining the magnitude and direction of causal effects between variables. While statistical significance indicates whether an observed relationship is unlikely to occur by chance, it does not necessarily imply a strong or meaningful effect.

  • Effect Size:  Effect size measures the magnitude of the relationship between variables, providing information about the practical significance of the results. Standard effect size measures include Cohen's d for mean differences and odds ratios for categorical outcomes (illustrated in the sketch after this list).
  • Confidence Intervals:  Confidence intervals provide a range of values within which the actual effect size is likely to lie with a certain degree of certainty. Narrow confidence intervals indicate greater precision in estimating the true effect size.
  • Practical Significance:  Practical significance considers whether the observed effect is meaningful or relevant in real-world terms. Researchers should interpret results in the context of their field and the implications for stakeholders.
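The sketch below illustrates these ideas by computing Cohen's d and a 95% confidence interval for the mean difference between two simulated groups. It assumes SciPy 1.10 or later for the confidence-interval call, and all numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative only: simulated outcomes for a treatment and a control group.
rng = np.random.default_rng(4)
treatment = rng.normal(10.5, 2.0, size=80)
control = rng.normal(10.0, 2.0, size=80)

# Cohen's d: mean difference scaled by the pooled standard deviation.
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# 95% confidence interval for the mean difference (Welch's t-test; requires SciPy >= 1.10).
result = stats.ttest_ind(treatment, control, equal_var=False)
ci = result.confidence_interval(confidence_level=0.95)
print(f"Cohen's d = {cohens_d:.2f}, 95% CI for mean difference: ({ci.low:.2f}, {ci.high:.2f})")
```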

Handling Confounding Variables

Confounding variables are extraneous factors that may distort the observed relationship between the independent and dependent variables, leading to spurious or biased conclusions. Addressing confounding variables is essential for establishing valid causal inferences.

  • Statistical Control:  Statistical control involves including confounding variables as covariates in regression models to partial out their effects on the outcome variable. Controlling for confounders reduces bias and strengthens the validity of causal inferences (see the sketch after this list).
  • Matching:  Matching participants or units based on observed characteristics helps create comparable groups with similar distributions of confounding variables. Matching reduces selection bias and mimics the randomization process in experimental designs.
  • Sensitivity Analysis:  Sensitivity analysis assesses the robustness of study findings to changes in model specifications or assumptions. By varying analytical choices and examining their impact on results, researchers can identify potential sources of bias and evaluate the stability of causal estimates.
  • Subgroup Analysis:  Subgroup analysis explores whether the relationship between variables differs across subgroups defined by specific characteristics. Identifying effect modifiers helps understand the conditions under which causal effects may vary.
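A minimal sketch of statistical control, assuming a single simulated confounder, is shown below: the naive regression overstates the treatment effect, while adding the confounder as a covariate moves the estimate back toward the true value.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative only: prior_engagement confounds both treatment uptake and the outcome.
rng = np.random.default_rng(5)
n = 5_000
prior_engagement = rng.normal(0, 1, n)
treatment = (prior_engagement + rng.normal(0, 1, n) > 0).astype(int)
outcome = 1.0 * treatment + 2.0 * prior_engagement + rng.normal(0, 1, n)
df = pd.DataFrame({
    "treatment": treatment,
    "prior_engagement": prior_engagement,
    "outcome": outcome,
})

naive = smf.ols("outcome ~ treatment", data=df).fit()
adjusted = smf.ols("outcome ~ treatment + prior_engagement", data=df).fit()
print(f"Naive estimate:    {naive.params['treatment']:.2f}")     # biased upward by the confounder
print(f"Adjusted estimate: {adjusted.params['treatment']:.2f}")  # closer to the true effect of 1.0
```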

By employing rigorous causal inference methods, assessing the strength of causality, and addressing confounding variables, researchers can confidently draw valid conclusions about causal relationships in their studies, advancing scientific knowledge and informing evidence-based decision-making.

Causal Research Examples

Examples play a crucial role in understanding the application of causal research methods and their impact across various domains. Let's explore some detailed examples to illustrate how causal research is conducted and its real-world implications:

Example 1: Software as a Service (SaaS) User Retention Analysis

Suppose a SaaS company wants to understand the factors influencing user retention and engagement with their platform. The company conducts a longitudinal observational study, collecting data on user interactions, feature usage, and demographic information over several months.

  • Design:  The company employs an observational cohort study design, tracking cohorts of users over time to observe changes in retention and engagement metrics. They use analytics tools to collect data on user behavior , such as logins, feature usage, session duration, and customer support interactions.
  • Data Collection:  Data is collected from the company's platform logs, customer relationship management (CRM) system, and user surveys. Key metrics include user churn rates, active user counts, feature adoption rates, and Net Promoter Scores ( NPS ).
  • Analysis:  Using statistical techniques like survival analysis and regression modeling, the company identifies factors associated with user retention, such as feature usage patterns, onboarding experiences, customer support interactions, and subscription plan types.
  • Findings: The analysis reveals that users who engage with specific features early in their lifecycle have higher retention rates, while those who encounter usability issues or lack personalized onboarding experiences are more likely to churn. The company uses these insights to optimize product features, improve onboarding processes, and enhance customer support strategies to increase user retention and satisfaction.
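A very simplified version of the retention analysis described above could start with Kaplan-Meier survival curves comparing users who adopted a key feature early against those who did not. The sketch below assumes the lifelines package and entirely invented data; a real analysis would draw on the company's platform logs and a richer model.

```python
import numpy as np
from lifelines import KaplanMeierFitter  # assumes the lifelines package is installed

# Illustrative only: months until churn for two invented user cohorts, observed over 12 months.
rng = np.random.default_rng(6)
months_early = rng.exponential(scale=14, size=300)   # users who adopted a key feature early
months_late = rng.exponential(scale=8, size=300)     # users who did not
churned_early = months_early < 12                    # churn observed within the window
churned_late = months_late < 12

kmf = KaplanMeierFitter()
kmf.fit(np.minimum(months_early, 12), event_observed=churned_early, label="early adopters")
print("Median months to churn (early adopters):", kmf.median_survival_time_)

kmf.fit(np.minimum(months_late, 12), event_observed=churned_late, label="other users")
print("Median months to churn (other users):   ", kmf.median_survival_time_)
```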

Example 2: Business Impact of Digital Marketing Campaign

Consider a technology startup launching a digital marketing campaign to promote its new product offering. The company conducts an experimental study to evaluate the effectiveness of different marketing channels in driving website traffic, lead generation, and sales conversions.

  • Design:  The company implements an A/B testing design, randomly assigning website visitors to different marketing treatment conditions, such as Google Ads, social media ads, email campaigns, or content marketing efforts. They track user interactions and conversion events using web analytics tools and marketing automation platforms.
  • Data Collection:  Data is collected on website traffic, click-through rates, conversion rates, lead generation, and sales revenue. The company also gathers demographic information and user feedback through surveys and customer interviews to understand the impact of marketing messages and campaign creatives .
  • Analysis:  Utilizing statistical methods like hypothesis testing and multivariate analysis, the company compares key performance metrics across different marketing channels to assess their effectiveness in driving user engagement and conversion outcomes. They calculate return on investment (ROI) metrics to evaluate the cost-effectiveness of each marketing channel.
  • Findings:  The analysis reveals that social media ads outperform other marketing channels in generating website traffic and lead conversions, while email campaigns are more effective in nurturing leads and driving sales conversions. Armed with these insights, the company allocates marketing budgets strategically, focusing on channels that yield the highest ROI and adjusting messaging and targeting strategies to optimize campaign performance.
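For an A/B comparison like the one sketched above, a common first-pass analysis is a two-proportion z-test on conversion counts per channel. The snippet below uses invented counts and the proportions_ztest helper from statsmodels purely as an illustration.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Illustrative only: conversions and visitors for two marketing channels in an A/B split.
conversions = np.array([180, 150])   # converted visitors per channel
visitors = np.array([4_000, 4_000])  # total visitors per channel

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"Conversion rates: {conversions / visitors}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```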

These examples demonstrate the diverse applications of causal research methods in addressing important questions, informing policy decisions, and improving outcomes in various fields. By carefully designing studies, collecting relevant data, employing appropriate analysis techniques, and interpreting findings rigorously, researchers can generate valuable insights into causal relationships and contribute to positive social change.

How to Interpret Causal Research Results?

Interpreting and reporting research findings is a crucial step in the scientific process, ensuring that results are accurately communicated and understood by stakeholders.

Interpreting Statistical Significance

Statistical significance indicates whether the observed results are unlikely to occur by chance alone, but it does not necessarily imply practical or substantive importance. Interpreting statistical significance involves understanding the meaning of p-values and confidence intervals and considering their implications for the research findings.

  • P-values:  A p-value represents the probability of obtaining the observed results (or more extreme results) if the null hypothesis is true. A p-value below a predetermined threshold (typically 0.05) suggests that the observed results are statistically significant, indicating that the null hypothesis can be rejected in favor of the alternative hypothesis.
  • Confidence Intervals:  Confidence intervals provide a range of values within which the true population parameter is likely to lie with a certain degree of confidence (e.g., 95%). If the confidence interval does not include the null value, it suggests that the observed effect is statistically significant at the specified confidence level.

Interpreting statistical significance requires considering factors such as sample size, effect size, and the practical relevance of the results rather than relying solely on p-values to draw conclusions.

Discussing Practical Significance

While statistical significance indicates whether an effect exists, practical significance evaluates the magnitude and meaningfulness of the effect in real-world terms. Discussing practical significance involves considering the relevance of the results to stakeholders and assessing their impact on decision-making and practice.

  • Effect Size:  Effect size measures the magnitude of the observed effect, providing information about its practical importance. Researchers should interpret effect sizes in the context of their field and the scale of measurement (e.g., small, medium, or large effect sizes).
  • Contextual Relevance:  Consider the implications of the results for stakeholders, policymakers, and practitioners. Are the observed effects meaningful in the context of existing knowledge, theory, or practical applications? How do the findings contribute to addressing real-world problems or informing decision-making?

Discussing practical significance helps contextualize research findings and guide their interpretation and application in practice, beyond statistical significance alone.

Addressing Limitations and Assumptions

No study is without limitations, and researchers should transparently acknowledge and address potential biases, constraints, and uncertainties in their research design and findings.

  • Methodological Limitations:  Identify any limitations in study design, data collection, or analysis that may affect the validity or generalizability of the results. For example, sampling biases , measurement errors, or confounding variables.
  • Assumptions:  Discuss any assumptions made in the research process and their implications for the interpretation of results. Assumptions may relate to statistical models, causal inference methods, or theoretical frameworks underlying the study.
  • Alternative Explanations:  Consider alternative explanations for the observed results and discuss their potential impact on the validity of causal inferences. How robust are the findings to different interpretations or competing hypotheses?

Addressing limitations and assumptions demonstrates transparency and rigor in the research process, allowing readers to critically evaluate the validity and reliability of the findings.

Communicating Findings Clearly

Effectively communicating research findings is essential for disseminating knowledge, informing decision-making, and fostering collaboration and dialogue within the scientific community.

  • Clarity and Accessibility:  Present findings in a clear, concise, and accessible manner, using plain language and avoiding jargon or technical terminology. Organize information logically and use visual aids (e.g., tables, charts, graphs) to enhance understanding.
  • Contextualization:  Provide context for the results by summarizing key findings, highlighting their significance, and relating them to existing literature or theoretical frameworks. Discuss the implications of the findings for theory, practice, and future research directions.
  • Transparency:  Be transparent about the research process, including data collection procedures, analytical methods, and any limitations or uncertainties associated with the findings. Clearly state any conflicts of interest or funding sources that may influence interpretation.

By communicating findings clearly and transparently, researchers can facilitate knowledge exchange, foster trust and credibility, and contribute to evidence-based decision-making.

Causal Research Tips

When conducting causal research, it's essential to approach your study with careful planning, attention to detail, and methodological rigor. Here are some tips to help you navigate the complexities of causal research effectively:

  • Define Clear Research Questions:  Start by clearly defining your research questions and hypotheses. Articulate the causal relationship you aim to investigate and identify the variables involved.
  • Consider Alternative Explanations:  Be mindful of potential confounding variables and alternative explanations for the observed relationships. Take steps to control for confounders and address alternative hypotheses in your analysis.
  • Prioritize Internal Validity:  While external validity is important for generalizability, prioritize internal validity in your study design to ensure that observed effects can be attributed to the manipulation of the independent variable.
  • Use Randomization When Possible:  If feasible, employ randomization in experimental designs to distribute potential confounders evenly across experimental conditions and enhance the validity of causal inferences.
  • Be Transparent About Methods:  Provide detailed descriptions of your research methods, including data collection procedures, analytical techniques, and any assumptions or limitations associated with your study.
  • Utilize Multiple Methods:  Consider using a combination of experimental and observational methods to triangulate findings and strengthen the validity of causal inferences.
  • Be Mindful of Sample Size:  Ensure that your sample size is adequate to detect meaningful effects and minimize the risk of Type I and Type II errors. Conduct power analyses to determine the sample size needed to achieve sufficient statistical power (see the sketch after this list).
  • Validate Measurement Instruments:  Validate your measurement instruments to ensure that they are reliable and valid for assessing the variables of interest in your study. Pilot test your instruments if necessary.
  • Seek Feedback from Peers:  Collaborate with colleagues or seek feedback from peer reviewers to solicit constructive criticism and improve the quality of your research design and analysis.
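For the sample-size tip above, a quick power calculation can be run before any data are collected. The sketch below, assuming a two-sample t-test and the statsmodels power module, estimates the group size needed to detect a medium effect (d = 0.5) with 80% power at a 5% significance level.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative only: required sample size per group for a two-sample t-test.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Required sample size per group: {n_per_group:.0f}")
```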

Conclusion for Causal Research

Mastering causal research empowers researchers to unlock the secrets of cause and effect, shedding light on the intricate relationships between variables in diverse fields. By employing rigorous methods such as experimental designs, causal inference techniques, and careful data analysis, you can uncover causal mechanisms, predict outcomes, and inform evidence-based practices. Through the lens of causal research, complex phenomena become more understandable, and interventions become more effective in addressing societal challenges and driving progress. In a world where understanding the reasons behind events is paramount, causal research serves as a beacon of clarity and insight. Armed with the knowledge and techniques outlined in this guide, you can navigate the complexities of causality with confidence, advancing scientific knowledge, guiding policy decisions, and ultimately making meaningful contributions to our understanding of the world.

How to Conduct Causal Research in Minutes?

Introducing Appinio , your gateway to lightning-fast causal research. As a real-time market research platform, we're revolutionizing how companies gain consumer insights to drive data-driven decisions. With Appinio, conducting your own market research is not only easy but also thrilling. Experience the excitement of market research with Appinio, where fast, intuitive, and impactful insights are just a click away.

Here's why you'll love Appinio:

  • Instant Insights:  Say goodbye to waiting days for research results. With our platform, you'll go from questions to insights in minutes, empowering you to make decisions at the speed of business.
  • User-Friendly Interface:  No need for a research degree here! Our intuitive platform is designed for anyone to use, making complex research tasks simple and accessible.
  • Global Reach:  Reach your target audience wherever they are. With access to over 90 countries and the ability to define precise target groups from 1200+ characteristics, you'll gather comprehensive data to inform your decisions.


Causal research designs and analysis in education

Published: 26 July 2024 · Volume 25, pages 555–556 (2024)


Peter M. Steiner & Yongnam Kim


Causal inference in education research is crucial for assessing the effectiveness of educational policies, programs, and interventions. Establishing causal relations, that is, identifying interventions that actually work in practice, helps policymakers, educators, and researchers implement strategies that are genuinely beneficial, ensuring resources are allocated efficiently to enhance educational experiences and outcomes. Without rigorous causal inference based on experimental or quasi-experimental designs, efforts to improve education might rely on practices that appear effective but fail to produce actual benefits when scaled or applied in different contexts.

The main challenge with causal inference is that data alone are insufficient to reliably assess whether a causal relation between two variables of interest—the treatment conditions and the outcome—exists. In other words, data are uninformative about the causal relation of observed variables (Cunningham, 2021 ; Pearl & Mackenzie, 2018 ). Causal inference needs more than data. It also needs reliable background knowledge about the data-generating process, that is, subject-matter theory about how study participants got assigned or selected into treatment and control conditions and about the causal determinants of the outcome measure(s) under consideration. Such knowledge allows education researchers to assess whether the assumptions needed for causal inference are likely met and, thus, whether the estimated statistical associations or parameters warrant a causal interpretation. Given the crucial importance of causal assumptions, all contributions to this special issue, entitled “Causal Research Designs and Analysis in Education,” highlight the assumptions about the data-generating process so that effect estimates can be causally interpreted (and potentially generalized). It will become clear that researchers with an interest in evaluating education policies or interventions should rely on randomized or quasi-experimental research designs to quantitatively assess a policy’s or intervention’s impact on outcomes of interest. Though randomized experiments are still considered the gold standard with regard to internal validity, they are often not feasible for ethical or administrative reasons. Since the rationale of randomized experiments is well-known, this special issue focuses on the strongest quasi-experimental designs for causal inference, some of them not yet well-known to education researchers. Moreover, the special issue also tries to provide an overview of the main frameworks for formalizing causal inference.

The first three articles introduce different causal frameworks that provide formal languages to discuss questions related to causation and causal inference. Anglin et al. ( 2024 ) cover Campbell’s validity typology and its associated validity threats, with which most evaluation and education researchers are familiar. Then, Keller and Branson ( 2023 ) discuss the Rubin Causal Model with its potential outcomes notation, and Feng ( 2024 ) provides an introduction to causal graphs (directed acyclic graphs) and their underlying (nonparametric) structural causal models as put forward by Pearl ( 2009 ) or Spirtes et al. ( 2000 ). These three frameworks allow researchers to clearly explicate the causal quantity of interest and to discuss the assumptions needed for the identification of causal effects from experimental or observational data.

Then, the following five articles are devoted to the strongest quasi-experimental designs for education research. The first of these articles by Cham et al. (2024) provides a general overview of the four quasi-experimental designs covered in this special issue. The first two quasi-experimental designs address situations where the assignment (or selection) mechanism is essentially known and observed so that specific ways of covariate control are able to deconfound the causal relation of interest. These designs include regression discontinuity designs, which are introduced by Suk (2024), and propensity score matching strategies, which are addressed by Chan (2023), who also discusses the use of propensity score methods for causal generalization. The other two quasi-experimental methods, difference-in-differences and instrumental variable estimation, are able to deal with unobserved confounding, that is, causal effects are identifiable and estimable even if the observed covariates are not able to remove the entire confounding bias. The basics of difference-in-differences estimation are discussed by Corral and Yang (2024), while Porter (2024) provides a comprehensible discussion of the assumptions needed for a successful application of instrumental variables.
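
As a rough illustration of the difference-in-differences logic that Corral and Yang (2024) discuss, the sketch below computes a DiD estimate from pre- and post-period group means. The districts, scores, and the assumed parallel-trends condition are purely hypothetical and are not taken from their article.

```python
# Hypothetical mean test scores, before and after a curriculum reform,
# for a district that adopted the reform (A) and one that did not (B).
means = {
    ("district_A", "pre"): 61.0,
    ("district_A", "post"): 68.5,
    ("district_B", "pre"): 60.0,
    ("district_B", "post"): 63.0,
}

# Difference-in-differences: (change in the treated group) minus
# (change in the comparison group). Under the parallel-trends assumption,
# subtracting the comparison change removes time trends shared by both groups.
treated_change = means[("district_A", "post")] - means[("district_A", "pre")]
comparison_change = means[("district_B", "post")] - means[("district_B", "pre")]
did_estimate = treated_change - comparison_change

print(f"treated change    = {treated_change:+.1f}")
print(f"comparison change = {comparison_change:+.1f}")
print(f"DiD estimate      = {did_estimate:+.1f}")  # +4.5 points in this toy example
```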

Finally, the last three articles cover some selected but important topics in causal inference. Li et al. (2024) discuss aspects of cluster randomized trials in education research, Qin (2024) provides an introduction to causal mediation analysis, and Shear and Briggs (2024) address measurement issues in causal inference.

All the articles in this special issue underscore the crucial importance of causal thinking in education research. By thoughtfully selecting and applying appropriate research designs and analytical methods, education researchers can broaden the scope of their investigations, leading to more significant and impactful evaluations and discoveries.

References

Anglin, K., Liu, Q., & Wong, V. C. (2024). A primer on the validity typology and threats to validity in education research. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09955-4

Cham, H., Lee, H., & Migunov, I. (2024). Quasi-experimental designs for causal inference: An overview. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09981-2

Chan, W. (2023). Propensity score methods for causal inference and generalization. Asia Pacific Education Review. https://doi.org/10.1007/s12564-023-09906-5

Corral, D., & Yang, M. (2024). An introduction to the difference-in-differences design in education policy research. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09959-0

Cunningham, S. (2021). Causal inference: The mixtape. Yale University Press.

Feng, Y. (2024). Introduction to causal graphs for education researchers. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09980-3

Keller, B., & Branson, Z. (2023). Defining, identifying, and estimating effects with the Rubin causal model: A review for education research. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09957-2

Li, W., Xie, Y., Pham, D., Dong, N., Spybrook, J., & Kelcey, B. (2024). Design and analysis of cluster randomized trials. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09984-z

Pearl, J. (2009). Causality: Models, reasoning, and inference (2nd ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511803161

Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. Basic Books.

Porter, S. R. (2024). Understanding the counterfactual approach to instrumental variables: A practical guide. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09982-1

Qin, X. (2024). An introduction to causal mediation analysis. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09962-5

Shear, B. R., & Briggs, D. C. (2024). Measurement issues in causal inference. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09942-9

Spirtes, P., Glymour, C., & Scheines, R. (2000). Causation, prediction, and search (2nd ed.). Springer.

Suk, Y. (2024). Regression discontinuity designs in education: A practitioner’s guide. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09956-3


About this article

Steiner, P. M., & Kim, Y. (2024). Causal research designs and analysis in education. Asia Pacific Education Review, 25, 555–556. https://doi.org/10.1007/s12564-024-09988-9

Causal Research: What it is, Tips & Examples

Causal research examines if there's a cause-and-effect relationship between two separate events. Learn everything you need to know about it.

Causal research is classified as conclusive research since it attempts to build a cause-and-effect link between two variables. This research is mainly used to determine the cause of particular behavior. We can use this research to determine what changes occur in a dependent variable due to a change in an independent variable.

It can assist you in evaluating marketing activities, improving internal procedures, and developing more effective business plans. Understanding how one circumstance affects another may help you determine the most effective methods for satisfying your business needs.


This post will explain causal research, define its essential components, describe its benefits and limitations, and provide some important tips.

Content Index

  • What is causal research?
  • Components of causal research: temporal sequence, non-spurious association, concomitant variation
  • The advantages and disadvantages of causal research
  • Causal research examples
  • Causal research tips

What is causal research?

Causal research is also known as explanatory research. It’s a type of research that examines if there’s a cause-and-effect relationship between two separate events. This would occur when there is a change in one of the independent variables, which is causing changes in the dependent variable.

You can use causal research to evaluate the effects of particular changes on existing norms, procedures, and so on. This type of research examines a condition or a research problem to explain the patterns of interactions between variables.


Components of causal research

Only specific causal information can demonstrate the existence of cause-and-effect linkages. The three key components of causal research are as follows:


Temporal sequence

The cause must occur before the effect. Only if the cause precedes the effect can the two be linked. For example, if a profit increase occurred before an advertisement aired, the increase cannot be attributed to higher advertising spending.

Non-spurious association

Covariation between two variables should only be read as causal if no third variable is related to both the cause and the effect. For example, a notebook manufacturer discovers a correlation between notebook sales and the autumn season: more people buy notebooks then because students stock up for the upcoming semester.

During the summer, the company launched an advertising campaign for notebooks. To test its assumption, it can review the campaign data to see whether the increase in notebook sales was driven by the students’ natural buying rhythm or by the advertisement.

Concomitant variation

Concomitant variation means that a quantitative change in the effect happens solely as a result of a quantitative change in the cause; the two variables must vary together systematically. You can examine the validity of a cause-and-effect connection by checking whether a change in the independent variable produces a change in the dependent variable.

For example, if a company makes no attempt to enhance sales, such as hiring skilled employees or training its existing staff, then an increase in sales cannot be credited to those measures; other factors must have contributed.
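
In practice, a first check of concomitant variation is whether the supposed effect moves systematically with the supposed cause. The following Python sketch (it needs Python 3.10+ for statistics.correlation) computes the correlation between hypothetical monthly advertising spend and sales; the figures are invented, and a high correlation alone is consistent with, but does not prove, causation.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical monthly advertising spend (in $1,000s) and units sold.
ad_spend = [10, 12, 9, 15, 18, 20, 14, 22, 25, 19, 28, 30]
sales = [200, 220, 190, 260, 300, 310, 250, 340, 380, 305, 410, 430]

# Pearson correlation: how systematically the two series move together.
r = statistics.correlation(ad_spend, sales)
print(f"correlation between ad spend and sales: {r:.2f}")

if abs(r) > 0.7:
    print("Strong covariation: consistent with (but not proof of) a causal link.")
else:
    print("Weak covariation: a large causal effect is unlikely.")
```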

Causal Research Advantages and Disadvantages

Causal or explanatory research has various advantages for both academics and businesses. As with any other research method, it has a few disadvantages that researchers should be aware of. Let’s look at some of the advantages and disadvantages of this research design.

Advantages:

  • Helps in the identification of the causes of system processes, allowing the researcher to take the required steps to resolve issues or improve outcomes.
  • It provides replication if it is required.
  • Causal research assists in determining the effects of changing procedures and methods.
  • Subjects are chosen in a methodical manner. As a result, it is beneficial for improving internal validity.
  • The ability to analyze the effects of changes on existing events, processes, phenomena, and so on.
  • Finds the sources of variable correlations, bridging the gap in correlational research.

Disadvantages:

  • It is not always possible to monitor the effects of all external factors, so causal research is challenging to carry out.
  • It is time-consuming and might be costly to execute.
  • The effect of the large range of factors and variables present in a particular setting makes it difficult to draw conclusions.
  • The most serious error in this research is coincidence: a coincidental link between a cause and an effect can be misinterpreted as causation.
  • To corroborate the findings of explanatory research, you must undertake additional types of research; you can’t draw conclusions from the findings of a causal study alone.
  • It is sometimes simple for a researcher to see that two variables are related, but difficult to determine which variable is the cause and which is the effect.

Since different industries and fields can carry out causal comparative research, it can serve many different purposes. Let’s discuss three examples of causal research:

Advertising Research

Companies can use causal research to enact and study advertising campaigns. For example, six months after a business debuts a new ad in a region, it sees a 5% increase in sales revenue.

To assess whether the ad caused the lift, they run the same ad in randomly selected regions and compare sales data across regions over another six months. When sales pick up in these regions as well, they can conclude that the ad and the sales increase have a genuine cause-and-effect relationship.
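
A minimal sketch of how that regional comparison might look in Python is below. The region names, sales figures, and the simple percentage-growth comparison are assumptions made for illustration, not a prescribed analysis.

```python
# Hypothetical six-month sales revenue (in $1,000s) before and after the ad ran,
# for regions randomly chosen to receive the ad and for regions that did not.
ad_regions = {"north": (410, 445), "east": (380, 402), "west": (295, 318)}
no_ad_regions = {"south": (400, 406), "central": (350, 352)}

def average_growth(regions: dict) -> float:
    """Average percentage change in sales from the 'before' to the 'after' window."""
    changes = [(after - before) / before for before, after in regions.values()]
    return 100 * sum(changes) / len(changes)

ad_growth = average_growth(ad_regions)
baseline_growth = average_growth(no_ad_regions)

print(f"growth in ad regions:    {ad_growth:.1f}%")
print(f"growth in other regions: {baseline_growth:.1f}%")
print(f"estimated lift from the ad: {ad_growth - baseline_growth:.1f} percentage points")
```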


Customer Loyalty Research

Businesses can use causal research to determine the best customer retention strategies. They monitor interactions between associates and customers to identify patterns of cause and effect, such as a product demonstration technique leading to increased or decreased sales from the same customers.

For example, a company implements a new one-to-one marketing strategy for a small group of customers and sees a measurable increase in monthly subscriptions. After receiving identical results from several groups, they conclude that the one-to-one marketing strategy has the causal effect they intended.

Educational Research

Learning specialists, academics, and teachers use causal research to learn more about how politics affects students and identify possible student behavior trends. For example, a university administration notices that more science students drop out of their program in their third year, which is 7% higher than in any other year.

They interview a random group of science students and discover many factors that could lead to these circumstances, including non-university components. Through in-depth statistical analysis, researchers uncover the top three factors, and management creates a committee to address them in the future.

Causal research is frequently the last type of research done during the research process and is considered definitive. As a result, it is critical to plan the research with specific parameters and goals in mind. Here are some tips for conducting causal research successfully:

1. Understand the parameters of your research

Identify any design strategies that change the way you understand your data. Determine how you acquired data and whether your conclusions are more applicable in practice in some cases than others.

2. Pick a random sampling strategy

Choosing a technique that works best for you when you have participants or subjects is critical. You can use a database to generate a random list, draw random selections from sorted categories, or conduct a survey. A minimal sketch of such a sampling step follows below.
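
As an illustration of this step, the hypothetical Python sketch below draws a simple random sample from a customer list and then splits it at random into treatment and control groups; the population, sample size, and identifiers are invented for the example.

```python
import random

random.seed(7)  # fixed seed so the draw can be reproduced

# Hypothetical participant pool, e.g. customer IDs exported from a database.
population = [f"customer_{i:04d}" for i in range(1, 2001)]

# Simple random sample of 200 participants, drawn without replacement.
sample = random.sample(population, k=200)

# Random assignment of the sampled participants to treatment and control groups.
random.shuffle(sample)
treatment_group = sample[:100]
control_group = sample[100:]

print(len(treatment_group), "in treatment,", len(control_group), "in control")
print("first few treated participants:", treatment_group[:3])
```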

3. Determine all possible relations

Examine the different relationships between your independent and dependent variables to build more sophisticated insights and conclusions.

To summarize, causal or explanatory research helps organizations understand how their current activities and behaviors will impact them in the future. This is incredibly useful in a wide range of business scenarios, as it can help you anticipate the outcome of various marketing activities, campaigns, and collaterals. Using the findings of this research, you will be able to design more successful business strategies that take advantage of every business opportunity.

At QuestionPro, we offer all kinds of necessary tools for researchers to carry out their projects. It can help you get the most out of your data by guiding you through the process.


Causal research: definition, examples and how to use it.

Causal research enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. Discover how this research can reduce employee turnover and increase customer success for your business.

What is causal research?

Causal research, also known as explanatory research or causal-comparative research, identifies the extent and nature of cause-and-effect relationships between two or more variables.

It’s often used by companies to determine the impact of changes in products, features, or service processes on critical company metrics. Some examples:

  • How does rebranding of a product influence intent to purchase?
  • How would expansion to a new market segment affect projected sales?
  • What would be the impact of a price increase or decrease on customer loyalty?

To maintain the accuracy of causal research, ‘confounding variables’ or influences — e.g. those that could distort the results — are controlled. This is done either by keeping them constant in the creation of data, or by using statistical methods. These variables are identified before the start of the research experiment.
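
One common way to control an identified confounder statistically is stratification: estimate the effect separately within each level of the confounding variable and then average the results. The hypothetical Python sketch below does this for invented customer records in which region could distort the effect of a price change; all field names and numbers are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical customer records: (region, saw_price_increase, stayed_loyal).
records = [
    ("urban", True, 1), ("urban", True, 0), ("urban", False, 1), ("urban", False, 1),
    ("rural", True, 0), ("rural", True, 0), ("rural", False, 1), ("rural", False, 0),
]

# Group loyalty outcomes by (region, treatment) so each comparison is made
# within one region, holding the potential confounder constant.
groups = defaultdict(list)
for region, treated, loyal in records:
    groups[(region, treated)].append(loyal)

def mean(values):
    return sum(values) / len(values)

# Effect of the price increase within each region, then averaged across regions.
per_region_effects = []
for region in sorted({r for r, _ in groups}):
    effect = mean(groups[(region, True)]) - mean(groups[(region, False)])
    per_region_effects.append(effect)
    print(f"{region}: within-region effect on loyalty = {effect:+.2f}")

print(f"stratified (region-adjusted) effect: {mean(per_region_effects):+.2f}")
```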

As well as the above, research teams will outline several other variables and principles in causal research:

  • Independent variables

The variables that may cause direct changes in another variable. For example, in a study of the effect of class attendance on a student’s grade point average, the independent variable is class attendance.

  • Control variables

These are the components that remain unchanged during the experiment so researchers can better understand what conditions create a cause-and-effect relationship.  

  • Causation

This describes the cause-and-effect relationship. When researchers find causation (or the cause), they’ve conducted all the processes necessary to prove it exists.

  • Correlation

Any relationship between two variables in the experiment. It’s important to note that correlation doesn’t automatically mean causation. Researchers will typically establish correlation before proving cause-and-effect.

  • Experimental design

Researchers use experimental design to define the parameters of the experiment — e.g. categorizing participants into different groups.

  • Dependent variables

These are measurable variables that may change or are influenced by the independent variable. For example, in an experiment about whether or not terrain influences running speed, your dependent variable is running speed (the terrain is the independent variable).

Why is causal research useful?

It’s useful because it enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. This allows businesses to create plans that benefit the company. It’s also a great research method because researchers can immediately see how variables affect each other and under what circumstances.

Also, once the first experiment has been completed, researchers can use the learnings from the analysis to repeat the experiment or apply the findings to other scenarios. Because of this, it’s widely used to help understand the impact of changes in internal or commercial strategy to the business bottom line.

Some examples include:

  • Understanding how overall training levels are improved by introducing new courses
  • Examining which variations in wording make potential customers more interested in buying a product
  • Testing a market’s response to a brand-new line of products and/or services

So, how does causal research compare and differ from other research types?

Well, there are a few research types that are used to find answers to some of the examples above:

1. Exploratory research

As its name suggests, exploratory research involves assessing a situation (or situations) where the problem isn’t clear. Through this approach, researchers can test different avenues and ideas to establish facts and gain a better understanding.

Researchers can also use it to first navigate a topic and identify which variables are important. Because no area is off-limits, the research is flexible and adapts to the investigations as it progresses.

Finally, this approach is unstructured and often involves gathering qualitative data, giving the researcher freedom to progress the research according to their thoughts and assessment. However, this may make results susceptible to researcher bias and may limit the extent to which a topic is explored.

2. Descriptive research

Descriptive research is all about describing the characteristics of the population, phenomenon or scenario studied. It focuses more on the “what” of the research subject than the “why”.

For example, a clothing brand wants to understand the fashion purchasing trends amongst buyers in California — so they conduct a demographic survey of the region, gather population data and then run descriptive research. The study will help them to uncover purchasing patterns amongst fashion buyers in California, but not necessarily why those patterns exist.

As the research happens in a natural setting, variables can cross-contaminate other variables, making it harder to isolate cause and effect relationships. Therefore, further research will be required if more causal information is needed.


How is causal research different from the other two methods above?

Well, causal research looks at what variables are involved in a problem and ‘why’ they act a certain way. As the experiment takes place in a controlled setting (thanks to controlled variables) it’s easier to identify cause-and-effect amongst variables.

Furthermore, researchers can carry out causal research at any stage in the process, though it’s usually carried out in the later stages once more is known about a particular topic or situation.

Finally, compared to the other two methods, causal research is more structured, and researchers can combine it with exploratory and descriptive research to assist with research goals.

Summary of three research types

(Table comparing exploratory, descriptive, and causal research.)

What are the advantages of causal research?

  • Improve experiences

By understanding which variables have positive impacts on target variables (like sales revenue or customer loyalty), businesses can improve their processes, return on investment, and the experiences they offer customers and employees.

  • Help companies improve internally

By conducting causal research, management can make informed decisions about improving their employee experience and internal operations. For example, understanding which variables led to an increase in staff turnover.

  • Repeat experiments to enhance reliability and accuracy of results

When variables are identified, researchers can replicate cause-and-effect with ease, providing them with reliable data and results to draw insights from.

  • Test out new theories or ideas

If causal research is able to pinpoint the exact outcome of mixing together different variables, research teams have the ability to test out ideas in the same way to create viable proof of concepts.

  • Fix issues quickly

Once an undesirable effect’s cause is identified, researchers and management can take action to reduce the impact of it or remove it entirely, resulting in better outcomes.

What are the disadvantages of causal research?

  • Provides information to competitors

If you plan to publish your research, it provides information about your plans to your competitors. For example, they might use your research outcomes to identify what you are up to and enter the market before you.

  • Difficult to administer

Causal research is often difficult to administer because it’s not possible to control the effects of extraneous variables.

  • Time and money constraints

Budgetary and time constraints can make this type of research expensive to conduct and repeat. Also, if an initial attempt doesn’t provide a cause and effect relationship, the ROI is wasted and could impact the appetite for future repeat experiments.

  • Requires additional research to ensure validity

You can’t rely solely on the outcomes of causal research, as they may not be conclusive on their own. It’s best to conduct other types of research alongside it to confirm its output.

  • Trouble establishing cause and effect

Researchers might identify that two variables are connected, but struggle to determine which is the cause and which variable is the effect.

  • Risk of contamination

There’s always the risk that people outside your market or area of study could affect the results of your research. For example, if you’re conducting a retail store study, shoppers outside your ‘test parameters’ may shop at your store and skew the results.

How can you use causal research effectively?

To better highlight how you can use causal research across functions or markets, here are a few examples:

Market and advertising research

A company might want to know if their new advertising campaign or marketing campaign is having a positive impact. So, their research team can carry out a causal research project to see which variables cause a positive or negative effect on the campaign.

For example, a cold-weather apparel company in a winter ski-resort town may see an increase in sales generated after a targeted campaign to skiers. To see if one caused the other, the research team could set up a duplicate experiment to see if the same campaign would generate sales from non-skiers. If the results reduce or change, then it’s likely that the campaign had a direct effect on skiers to encourage them to purchase products.

Improving customer experiences and loyalty levels

Customers enjoy shopping with brands that align with their own values, and they’re more likely to buy and present the brand positively to other potential shoppers as a result. So, it’s in your best interest to deliver great experiences and retain your customers.

For example, the Harvard Business Review found that increasing customer retention rates by 5% increased profits by 25% to 95%. But let’s say you want to increase your own retention rate: how can you identify which variables contribute to it? Using causal research, you can test hypotheses about which processes, strategies or changes influence customer retention. For example, is it the streamlined checkout? What about the personalized product suggestions? Or maybe it was a new solution that solved their problem? Causal research will help you find out.

Improving problematic employee turnover rates

If your company has a high attrition rate, causal research can help you narrow down the variables or reasons which have the greatest impact on people leaving. This allows you to prioritize your efforts on tackling the issues in the right order, for the best positive outcomes.

For example, through causal research, you might find that employee dissatisfaction due to a lack of communication and transparency from upper management leads to poor morale, which in turn influences employee retention.

To rectify the problem, you could implement a routine feedback loop or session that enables your people to talk to your company’s C-level executives so that they feel heard and understood.

How to conduct causal research

The first steps to getting started are:

1. Define the purpose of your research

What questions do you have? What do you expect to come out of your research? Think about which variables you need to test out the theory.

2. Pick a random sample if participants are needed

Using a technology solution to support your sampling, like a database, can help you define who you want your target audience to be, and how random or representative they should be.

3. Set up the controlled experiment

Once you’ve defined which variables you’d like to measure to see if they interact, think about how best to set up the experiment. This could be in-person or in-house via interviews, or it could be done remotely using online surveys.

4. Carry out the experiment

Make sure to keep all irrelevant variables the same, and only change the causal variable (the one that causes the effect) to gather the correct data. Depending on your method, you could be collecting qualitative or quantitative data, so make sure you note your findings across each regularly.

5. Analyze your findings

Either manually or using technology, analyze your data to see if any trends, patterns or correlations emerge. By looking at the data, you’ll be able to see what changes you might need to do next time, or if there are questions that require further research.

6. Verify your findings

Your first attempt gives you the baseline figures to compare the new results to. You can then run another experiment to verify your findings.

7. Do follow-up or supplemental research

You can supplement your original findings by carrying out research that goes deeper into causes or explores the topic in more detail. One of the best ways to do this is to use a survey. See ‘Use surveys to help your experiment’.

Identifying causal relationships between variables

To verify if a causal relationship exists, you have to satisfy the following criteria (a minimal check of the temporal-sequence criterion is sketched after the list):

  • Nonspurious association

A clear correlation exists between the cause and the effect. In other words, no ‘third’ variable that relates to both the cause and the effect should exist.

  • Temporal sequence

The cause occurs before the effect. For example, increased ad spend on product marketing would contribute to higher product sales.

  • Concomitant variation

The variation between the two variables is systematic. For example, if a company doesn’t change its IT policies and technology stack, then changes in employee productivity cannot be attributed to IT policies or technology.
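
As a minimal illustration of the temporal sequence criterion, the hypothetical Python sketch below simply checks that the supposed cause (a campaign launch) precedes the observed effect (a detected jump in sales) in an event log; the dates and field names are invented for the example.

```python
from datetime import date

# Hypothetical event log: when the campaign launched and when the sales jump appeared.
events = {
    "campaign_launch": date(2024, 3, 1),
    "sales_jump_detected": date(2024, 4, 15),
}

cause_time = events["campaign_launch"]
effect_time = events["sales_jump_detected"]

# Temporal sequence: the cause must precede the effect. If it does not,
# the campaign cannot explain the jump, whatever else the data show.
if cause_time < effect_time:
    print("Temporal sequence satisfied: the campaign precedes the sales jump.")
else:
    print("Temporal sequence violated: the sales jump predates the campaign.")
```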

How can surveys help your causal research experiments?

There are some surveys that are perfect for assisting researchers with understanding cause and effect. These include:

  • Employee Satisfaction Survey – An introductory employee satisfaction survey that provides you with an overview of your current employee experience.
  • Manager Feedback Survey – An introductory manager feedback survey geared toward improving your skills as a leader with valuable feedback from your team.
  • Net Promoter Score (NPS) Survey – Measure customer loyalty and understand how your customers feel about your product or service using one of the world’s best-recognized metrics.
  • Employee Engagement Survey – An entry-level employee engagement survey that provides you with an overview of your current employee experience.
  • Customer Satisfaction Survey – Evaluate how satisfied your customers are with your company, including the products and services you provide and how they are treated when they buy from you.
  • Employee Exit Interview Survey – Understand why your employees are leaving and how they’ll speak about your company once they’re gone.
  • Product Research Survey – Evaluate your consumers’ reaction to a new product or product feature across every stage of the product development journey.
  • Brand Awareness Survey – Track the level of brand awareness in your target market, including current and potential future customers.
  • Online Purchase Feedback Survey – Find out how well your online shopping experience performs against customer needs and expectations.

That covers the fundamentals of causal research and should give you a foundation for ongoing studies to assess opportunities, problems, and risks across your market, product, customer, and employee segments.

If you want to transform your research, empower your teams and get insights on tap to get ahead of the competition, maybe it’s time to leverage Qualtrics CoreXM.

Qualtrics CoreXM provides a single platform for data collection and analysis across every part of your business — from customer feedback to product concept testing. What’s more, you can integrate it with your existing tools and services thanks to a flexible API.

Qualtrics CoreXM offers you as much or as little power and complexity as you need, so whether you’re running simple surveys or more advanced forms of research, it can deliver every time.

Get started on your market research journey with CoreXM


Essential Ingredients of a Good Research Proposal for Undergraduate and Postgraduate Students in the Social Sciences

Contents

  • Introduction
  • Research topic
  • Research background
  • Research aim and objectives
  • Research methodology
  • Research significance/importance
  • Research program
  • Research proposal components that are supposed to feature in the final thesis or dissertation

Examples of research topics and remarks on their phrasing:

1. To examine the performance of REITs
   Remarks: Badly phrased research topic; it is phrased like a research aim or objective and is too broad. It could be turned into a good and well-phrased research topic as in 2 below.

2. An investigation into the performance of UK REITs from 2007 to 2014 / An examination of UK REITs’ performance from 2007 to 2014 / UK REITs’ performance from 2007 to 2014
   Remarks: Well and appropriately phrased (in several variants) and specific; the scope is defined regarding geographical location and time period.

3. Impacts of new retail developments on existing inner city shopping centers and high street shops: A case study of Liverpool One in Liverpool, the United Kingdom (also phrased as “Examining the impacts of ...” or “An investigation into the impacts of ...”)
   Remarks: Well and appropriately phrased (in several variants) and specific; the scope is defined regarding geographical location and time period.

4. A comparative study of construction procurement methods in Italy and Germany
   Remarks: Well phrased and specific regarding the countries of comparison.

5. Assess the re-development of Liverpool Central Docks
   Remarks: Badly phrased research topic. It could be turned into a good and well-phrased research topic as in 2 and 3 above.

6. Registration of real estate ownership and access to formal capital for small- and medium-scale enterprises: A comparative study of Zambia and the United Kingdom
   Remarks: Well phrased and specific regarding the countries of comparison.

A research gap can arise in several situations:

  • A situation where no research has been conducted in relation to the topic under consideration
  • A situation where research has been conducted into the topic under consideration, but only to a certain extent or from a particular perspective
  • A situation where the topic has been researched from the perspective of a particular discipline
  • A situation where the topic has been researched in relation to a particular geographical location
  • A situation that involves comparing issues
  • A situation where the topic has two or more dimensions but research has only been conducted in relation to one or some of them

Example 1

Research topic: Registration of RE ownership and accessibility to formal capital for SMEs: A comparative study of Botswana and the Netherlands

Research aim: The aim of the study is to investigate the impact of RE ownership registration on SMEs’ accessibility to formal capital on a comparative basis between Botswana and the Netherlands.

Research objectives: The achievement of the above research aim will require the pursuit of the following objectives:

  • To examine the nature of capital constraints among SMEs;
  • To assess the impact of RE ownership registration on SMEs’ access to capital;
  • To evaluate the factors responsible for rejecting SMEs’ capital demand by banks and other financial institutions, and the importance of RE ownership registration relative to other factors; and
  • To investigate the differences (if any) which exist between the two countries regarding the effects of RE ownership registration on SMEs’ access to capital.

Example 2

Research topic: Impacts of NRDs on existing inner city shopping centers and other city center retail areas: A case study of L1 in Liverpool, the United Kingdom

Research aim: The aim of the research is to examine the impacts of NRDs on existing inner city shopping centers and other city center retail areas using L1 in Liverpool as a case study.

Research objectives: The above aim will be achieved by pursuing the following objectives:

  • To examine vacancy rates in Liverpool’s existing inner city shopping centers and other city center retail areas since the opening of L1 in 2008;
  • To assess the level of sales experienced by retailers in Liverpool’s existing inner city shopping centers and other city center retail areas since L1 was opened;
  • To investigate the changes in occupation of retail space in Liverpool’s existing inner city shopping centers and other city center retail areas since the opening of L1; and
  • To explore the management strategies adopted by existing inner city shopping center managers and individual shop managers with regard to coping with competition, retaining current business, and attracting new business.


Quantitative versus qualitative research methodology:

1. Quantitative: It is an inquiry into a social or human problem based normally on testing a theory composed of variables, measured with numbers, and analyzed using statistical procedures to determine whether the predictive generalizations of the theory hold true.
   Qualitative: It is an inquiry process of comprehending a social or human problem based on building a complex holistic picture formed with words, reporting detailed views of informants, and conducted in a natural setting.

2. Quantitative: It views truthfulness or reality to exist in the world, which can be objectively measured.
   Qualitative: It views truthfulness or reality to exist in the world, which can be subjectively measured.

3. Quantitative: In terms of the relationship between the investigator and what is being investigated, the researcher should remain distant and independent of what is being researched to ensure an objective assessment of the situation.
   Qualitative: The inquirer normally goes to the site of the target participants to conduct the research. This enables the researcher to develop a level of detail about the individual or place and to be highly involved in the actual experiences of the participants.

4. Quantitative: It is not value-laden, as the researchers’ values are kept out of the study.
   Qualitative: It is value-laden, as the personal self becomes inseparable from the researcher self.

5. Quantitative: The entire process uses the deductive form of reasoning or logic, wherein theories and hypotheses are tested in cause-and-effect order. Concepts, variables, and hypotheses are chosen before the study begins and remain fixed throughout the study. The intent of the study is to develop generalizations that contribute to the theory and that enable one to better predict, explain, and comprehend a phenomenon.
   Qualitative: The reasoning adopted in qualitative research is largely inductive. Various aspects or categories emerge from those under investigation rather than being identified a priori by the researcher. This emergence provides information leading to patterns or theories that help explain a phenomenon. Theory or hypotheses are therefore not established a priori. The research objectives may change and be refined as the inquirer learns what questions to ask and to whom. The methodology is therefore emergent rather than tightly pre-figured.

6. Quantitative: Regarding research methods (particularly primary data collection procedures), questionnaires are used and the questions asked are largely closed-ended, with optional responses provided.
   Qualitative: Interviews are used for primary data collection and the questions asked are mainly open-ended, with no optional responses provided.

7. Quantitative: There is descriptive and inferential numeric analysis of data using statistical packages.
   Qualitative: Collection of text data, description and analysis of text or pictures/images, and representation of information in figures and tables all inform qualitative research. Data are coded and analyzed using qualitative software packages.

Strategies of Inquiry

The methodology discussion also covers: experiments; ethnography; grounded theory; phenomenology; narrative research; action research; sequential, concurrent, and transformative procedures; reflection on strategies of inquiry; types of data and research methods; sampling issues; design of the research instrument for primary data collection and pilot studies; ethical issues; data analysis and validation of research findings; and what to include in the research methodology section.


BONUS Unit 2: Types of Questions

51 RQs – Definition, Fact, Association, Causation

The next two chapters are about research questions and hypotheses. Specifically, they talk about different types of research questions and hypotheses. Research questions can help define things and establish facts. Both RQs and HYs can involve association and causation.

Should you believe it?

The criteria for establishing causality are good things to pay attention to in this unit. This is where you’re likely to see a lot of violations between any given primary research study and the popular press source applying that research to make a point.

Don’t fall asleep before you get to the discussion of directionality in hypotheses. It’s one of those things that seems so simple, but requires some of that Healthy External Skepticism .

I will admit that this is potentially not terribly exciting stuff. So in the lecture video, I reference a game. You can find it here.

[and now for your regularly scheduled student textbook authors]


What is a research question?


Research questions are composed of variables, and each variable has a unique relationship to the RQ. These relationships could include an association, causation, definition, or a fact. Each is defined below.

Definition - A research question that seeks to define something.

Example: How do students use cell phones?

Fact - A research question that seeks to know if something has happened or if something is going to change.

Example: Do people use cellphones as a form of communication?

Association* - means the variables are related, but one does not cause the other.

Example: Are bad grades and cell phone use in class related?

*you’ll usually hear this described as “correlation”

Causation - means there is a direct relationship of cause. It follows an A causes B model.

There are three rules an RQ must follow in order to test causation (a small simulation illustrating the second rule appears after the list):

  • Temporal: The cause must come before the effect.
  • It can’t be spurious [1]: There must be no outside factor that explains the association, i.e., something that makes A only appear to cause B.
  • Co-vary: A and B must be associated with each other (co-variables).
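
To see why the “can’t be spurious” rule matters, here is a small illustrative Python simulation (not part of the original textbook chapter): an outside factor Z drives both A and B, so A and B end up strongly correlated even though neither causes the other. It requires Python 3.10+ for statistics.correlation, and all numbers are arbitrary.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Z is a common cause (say, hours of free time). A (cell phone use) and
# B (video game hours) each depend on Z, but not on each other.
z = [random.gauss(5, 2) for _ in range(5_000)]
a = [zi + random.gauss(0, 1) for zi in z]
b = [zi + random.gauss(0, 1) for zi in z]

print(f"correlation(A, B) = {statistics.correlation(a, b):.2f}")
# A and B come out strongly correlated, yet changing A would do nothing to B:
# the association is spurious, produced entirely by the common cause Z.
```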

Textbook Contribution: Julia, Rob, Grace, Katelyn Group 4 (2019)

Aside: I (DocMC) got a ring light somewhere between Units 9 and 10 and reviewing all the videos …well…I am not convinced that the ring light is a positive addition. Is it just me or does it make my skin look somewhat cartoonified? Or perhaps my health has taken an extreme turn for the worse…and these videos actually feature a cadaver. Very “Weekend at Bernies.”

Games from this video are in a separate chapter so that they don’t make this one an endless scroll.



  • Spurious: Shows an apparent relationship between two things without proving it. Two variables could be independent from each other but may SEEM like they are distinctly related. Use of this word is most fun if you say it like a pirate. Spuuuuurios!


Communication Research in Real Life Copyright © 2023 by Kate Magsamen-Conrad. All Rights Reserved.


  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Ethics
  • Business Strategy
  • Business History
  • Business and Technology
  • Business and Government
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Social Issues in Business and Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic History
  • Economic Systems
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Management of Land and Natural Resources (Social Science)
  • Natural Disasters (Environment)
  • Pollution and Threats to the Environment (Social Science)
  • Social Impact of Environmental Issues (Social Science)
  • Sustainability
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Politics and Law
  • Politics of Development
  • Public Policy
  • Public Administration
  • Qualitative Political Methodology
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Disability Studies
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Causal Reasoning


25 Causal Argument

Author affiliations: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Institute of Philosophy and Political Science, TU Dortmund University, Dortmund, Germany; Department of Philosophy & Cognitive Science, Lund University, Lund, Sweden

  • Published: 10 May 2017

This chapter outlines the range of argument forms involving causation that can be found in everyday discourse. It also surveys empirical work concerned with the generation and evaluation of such arguments. This survey makes clear that there is presently no unified body of research concerned with causal argument. It highlights the benefits of a unified treatment both for those interested in causal cognition and those interested in argumentation, and identifies the key challenges that must be met for a full understanding of causal argumentation.


Active funding opportunity

NSF 24-510: Collaborative Research in Computational Neuroscience (CRCNS) Program Solicitation

  • Posted: December 11, 2023
  • Replaces: NSF 20-609

Program Solicitation NSF 24-510



Directorate for Computer and Information Science and Engineering

Directorate for Biological Sciences

Directorate for Social, Behavioral and Economic Sciences

Directorate for Mathematical and Physical Sciences

Directorate for Engineering

Office of International Science and Engineering



National Institutes of Health

National Institute of Neurological Disorders and Stroke

National Institute of Mental Health

National Institute on Drug Abuse

National Eye Institute

National Institute on Deafness and Other Communication Disorders

National Institute of Biomedical Imaging and Bioengineering

National Institute on Alcohol Abuse and Alcoholism

National Center for Complementary and Integrative Health

National Institute on Aging

Eunice Kennedy Shriver National Institute of Child Health and Human Development



U.S. Dept. of Energy

    Office of Science, Office of Advanced Scientific Computing Research



Federal Ministry of Education and Research, Germany



French National Research Agency



United States-Israel Binational Science Foundation



National Institute of Information and Communications Technology, Japan



State Research Agency, Spain

Full Proposal Deadline(s) (due by 5 p.m. submitter's local time):

Deadline for FY 2024 competition
Deadline for FY 2025 competition
Deadline for FY 2026 competition

Important Information and Revision Notes

This solicitation extends the Collaborative Research in Computational Neuroscience program for three years, with the following modifications:

  • The introduction and program description, including the description of scientific topics and approaches, have been updated in Sections I and II of this solicitation;
  • Cloud computing and high-throughput computing resources may be requested for new proposals and for active awards;
  • Agency contacts, agency-specific instructions, and allowable multilateral proposals have been updated in Section VIII of this solicitation;
  • Information on Results from Prior Support is required for each PI or co-PI who has received funding from any of the CRCNS participating funding organizations within the past five years; note that sharing of data, software, and/or other resources under prior support must be addressed;
  • Letters of collaboration must conform to the format described in Section V.A. of this solicitation;
  • Proposers are advised to review agency-specific budget limitations described in Sections III and VIII of this solicitation. Proposals with budget requests exceeding applicable limits may be returned without review; and
  • A proposal preparation checklist, supplementing the checklist provided in the PAPPG, is provided at the end of Section V.A. of this solicitation.

Any proposal submitted in response to this solicitation should be submitted in accordance with the NSF Proposal & Award Policies & Procedures Guide (PAPPG) that is in effect for the relevant due date to which the proposal is being submitted. The NSF PAPPG is regularly revised and it is the responsibility of the proposer to ensure that the proposal meets the requirements specified in this solicitation and the applicable version of the PAPPG. Submitting a proposal prior to a specified deadline does not negate this requirement.

Summary of Program Requirements

General Information

Program Title:

Collaborative Research in Computational Neuroscience (CRCNS) Innovative Approaches to Science and Engineering Research on Brain Function
Computational neuroscience provides a theoretical foundation and a rich set of technical approaches for understanding the nervous system at all levels, building on the theory, methods, and findings of computer science, neuroscience, and numerous other disciplines to accelerate the understanding of nervous system structure and function, mechanisms underlying nervous system disorders, and computational strategies used by the nervous system.

Through the CRCNS program, the participating funding organizations support collaborative activities that span a broad spectrum of computational neuroscience research, as appropriate to the missions and strategic objectives of each agency. Two classes of proposals will be considered in response to this solicitation: Research Proposals describing collaborative research projects, and Data Sharing Proposals to support sharing of data and other resources. Domestic and international projects will be considered, including proposals seeking parallel international funding. As detailed in the solicitation, opportunities for parallel funding are available for bilateral US-German, US-French, US-Israeli, US-Japanese, and US-Spanish projects, and multilateral projects involving the United States and two or more CRCNS partner countries (see Section VIII of the solicitation for country-specific limitations).

Collaborating PIs from outside of the United States are referred to Section VIII of this solicitation for further instructions from the appropriate partner funding agency. Questions concerning a particular project's focus, direction, and relevance to a participating funding organization should be addressed to the appropriate person in the list of agency contacts in Section VIII of the solicitation. NSF will coordinate and manage the review of proposals jointly with participating domestic and foreign funding organizations, through a joint panel review process used by all participating funders. Additional information is provided in Section VI of the solicitation.

Cognizant Program Officer(s):

Please note that the following information is current at the time of publishing. See program website for any updates to the points of contact.

Kenneth Whang, CRCNS Program Coordinator - NSF; Program Director, Division of Information and Intelligent Systems, National Science Foundation, telephone: (703) 292-5149, fax: (703) 292-9073, email: [email protected]

Heather Carroll, CRCNS Administrative Coordinator - NSF; Program Assistant, Division of Information and Intelligent Systems, National Science Foundation, telephone: (703) 292-8475, fax: (703) 292-9073, email: [email protected]

  • 47.041 --- Engineering
  • 47.049 --- Mathematical and Physical Sciences
  • 47.070 --- Computer and Information Science and Engineering
  • 47.074 --- Biological Sciences
  • 47.075 --- Social Behavioral and Economic Sciences
  • 47.079 --- Office of International Science and Engineering
  • 81.049 --- Office of Science Financial Assistance Program
  • 93.173 --- National Institute on Deafness and Other Communication Disorders
  • 93.213 --- National Center for Complementary and Integrative Health
  • 93.242 --- National Institute of Mental Health
  • 93.273 --- National Institute on Alcohol Abuse and Alcoholism
  • 93.279 --- National Institute on Drug Abuse
  • 93.286 --- National Institute of Biomedical Imaging and Bioengineering
  • 93.853 --- National Institute of Neurological Disorders and Stroke
  • 93.865 --- Eunice Kennedy Shriver National Institute of Child Health and Human Development
  • 93.866 --- National Institute on Aging
  • 93.867 --- National Eye Institute

Award Information

Anticipated Type of Award: Standard Grant or Continuing Grant

Estimated Number of Awards: 20 to 30

Anticipated Funding Amount: $5,000,000 to $30,000,000 per year, subject to availability of funds

Eligibility Information

Who May Submit Proposals:

Proposals may only be submitted by the following:

  • Institutions of Higher Education (IHEs): Two- and four-year IHEs (including community colleges) accredited in, and having a campus located in the US, acting on behalf of their faculty members. Special Instructions for International Branch Campuses of US IHEs: If the proposal includes funding to be provided to an international branch campus of a US institution of higher education (including through use of subawards and consultant arrangements), the proposer must explain the benefit(s) to the project of performance at the international branch campus, and justify why the project activities cannot be performed at the US campus.
  • Non-profit, non-academic organizations: Independent museums, observatories, research laboratories, professional societies and similar organizations located in the U.S. that are directly associated with educational or research activities.
  • For-profit organizations: U.S.-based commercial organizations, including small businesses, with strong capabilities in scientific or engineering research or education and a passion for innovation.
  • U.S. Department of Energy National Laboratories, which are eligible to submit proposals in response to this solicitation.

Who May Serve as PI:

There are no restrictions or limits.

Limit on Number of Proposals per Organization:

Limit on Number of Proposals per PI or co-PI: 2

In response to this solicitation, an investigator may participate as PI or co-PI in no more than two proposals per review cycle. In the event that a PI or co-PI does appear in either of these roles on more than two proposals, all proposals that include that person as a PI or co-PI will be returned without review. This limit applies to all PIs and co-PIs, based inside or outside of the United States.

Proposal Preparation and Submission Instructions

A. Proposal Preparation Instructions

  • Letters of Intent: Not required
  • Preliminary Proposal Submission: Not required

Full Proposals:

  • Full Proposals submitted via Research.gov: NSF Proposal and Award Policies and Procedures Guide (PAPPG) guidelines apply. The complete text of the PAPPG is available electronically on the NSF website at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg .
  • Full Proposals submitted via Grants.gov: NSF Grants.gov Application Guide: A Guide for the Preparation and Submission of NSF Applications via Grants.gov guidelines apply (Note: The NSF Grants.gov Application Guide is available on the Grants.gov website and on the NSF website at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=grantsgovguide ).

B. Budgetary Information

Cost Sharing Requirements:

Inclusion of voluntary committed cost sharing is prohibited.

Indirect Cost (F&A) Limitations:

Not Applicable

Other Budgetary Limitations:

Other budgetary limitations apply. Please see the full text of this solicitation for further information.

C. Due Dates

Proposal Review Information Criteria

Merit Review Criteria:

National Science Board approved criteria. Additional merit review criteria apply. Please see the full text of this solicitation for further information.

Award Administration Information

Award Conditions:

Additional award conditions apply. Please see the full text of this solicitation for further information.

Reporting Requirements:

Standard NSF reporting requirements apply.

I. Introduction

Computational neuroscience provides a theoretical foundation and a rich set of technical approaches for understanding the principles and dynamics of the nervous system at all levels. Building on the theory, methods, and findings of computer science, neuroscience, biology, the mathematical and physical sciences, the social and behavioral sciences, engineering, and other fields, computational neuroscience embraces a wide range of innovative approaches to accelerate the understanding of nervous system structure and function, mechanisms underlying nervous system disorders, and computational strategies used by the nervous system.

Furthering these advances, collaboration plays a pivotal role. Collaborative research enables investigators with complementary experience and training, and deep understanding of multiple scholarly fields, to tackle otherwise intractable scientific and technical challenges. Close collaborations enable dynamic development and refinement of models, theories, and analytical or experimental methods, and in-depth interdisciplinary exchange and training. Sharing of data, models, software, and other resources facilitates collaboration at a larger scale, enabling integration and re-use of data and code, development and training of models, rigorous evaluation of models and methods, and alignment of efforts across research communities. International collaborations offer unique opportunities to further expand research perspectives and partnerships, and to develop a community of globally engaged scientists and engineers.

Through the Collaborative Research in Computational Neuroscience (CRCNS) program, the U.S. National Science Foundation (NSF), National Institutes of Health (NIH), and Department of Energy (DOE); the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF); the French National Research Agency (Agence Nationale de la Recherche, ANR); the United States-Israel Binational Science Foundation (BSF); Japan’s National Institute of Information and Communications Technology (NICT); and the Spanish State Research Agency (Agencia Estatal de Investigación, AEI) support collaborative activities spanning a broad spectrum of computational neuroscience research, as appropriate to the missions and strategic objectives of each agency. The participating funding organizations have released parallel documents with further agency-specific information, referenced in Section VIII of this solicitation.

II. Program Description

Following from the above motivations, two classes of proposals will be considered in response to this solicitation: Research Proposals describing collaborative research projects, and Data Sharing Proposals to support sharing of data and other resources. Domestic and international projects will be considered, including proposals seeking parallel international funding as detailed in Sections V.A. and VIII of this solicitation.

Research Proposals should describe innovative, collaborative projects focused on challenging interdisciplinary problems in computational neuroscience. The scope of computational neuroscience is defined inclusively, encompassing structure, function, organization, and computation across all levels of the nervous system, and including theory, modeling, and analysis, disease and normal function, and implications for biological as well as engineered systems.

Collaborative efforts are required. No particular combination of disciplinary backgrounds or scientific approaches is prescribed. Proposers should determine and convincingly demonstrate the complementary expertise and close collaborations needed to make significant interdisciplinary advances.

Examples of potential approaches and topics are given at the end of this section. Proposals selected for funding by this program must be responsive to the mission of a participating funding organization. Questions concerning a particular project's focus, direction, and relevance to a participating funding organization should be addressed to the appropriate person in the list of agency contacts in Section VIII of this solicitation.

Data Sharing Proposals should focus on the preparation and deployment of data, software, code bases, stimuli, models, or other resources in a manner that will enable wide-ranging research advances in computational neuroscience. Data sharing projects are expected to respond to the needs of an identified broad community of researchers, representing any of the scientific areas that would be appropriate for Research Proposals under this solicitation. The major innovation of a data sharing project could relate to the breadth, depth, or importance of the resources being shared. Technical innovations and novel approaches to community development and continuous improvement are encouraged as needed to maximize the effectiveness and impact of shared resources.

Support for data sharing under this solicitation focuses primarily on data and other resources, not more general infrastructure development, or research to acquire the data. Proposers of data sharing projects are strongly encouraged to build on existing facilities and services where possible. A significant data sharing effort may also be proposed as a major component of a Research Proposal. All CRCNS investigators are encouraged to coordinate with other data sharing projects and related activities, including national and international efforts to develop sustainable, extensible neuroscience resources.

Innovative educational and training opportunities are strongly encouraged in all CRCNS proposals to develop research capacity in computational neuroscience, broaden participation in research and education, and increase the impact of computational neuroscience research. Activities at all levels of educational and career development are welcome under this solicitation. International research experiences for students and early-career researchers, described in terms of explicit plans and goals, are strongly encouraged in all projects involving international collaborations.

A broad range of approaches and topics is welcome under this solicitation. The following list illustrates some areas of research that are appropriate; it is not intended to be exhaustive or exclusive (a brief, purely illustrative modeling sketch follows the example topics below):

  • Explanatory, predictive, and informative models and simulations of normal and abnormal structures and functions of the nervous system and related disorders;
  • Mathematical, statistical, and other quantitative analyses of research related to genetic, epigenetic, molecular, sub-cellular, cellular, network, systems, behavioral, and/or cognitive neuroscience;
  • Theoretical and computational approaches to delineate and understand the structures and functions of neural circuits and networks;
  • High-Performance Computing (HPC) enabled modeling and simulation approaches for extreme-scale research and understanding;
  • Theoretical and computational approaches that relate nervous system processes to learning algorithms and architectures, probabilistic representations, estimation, prediction, information theory, and inference;
  • Data-driven and informatics-based approaches that exploit large-scale, high-throughput, heterogeneous, and/or complex data;
  • Theory and algorithms for designing experiments and integrating and analyzing data related to imaging, electrophysiological, optogenetic, multi-omic, and other methods;
  • Artificial intelligence and machine learning (AI/ML) approaches that provide new insights into neural data, neural systems, and behavior, and neuroscience that can inform AI/ML;
  • Methods combining AI/ML, statistics, dynamical systems, and/or control theory;
  • Modeling approaches that efficiently assimilate new information, apply existing knowledge to new data, or optimize new data acquisition or closed-loop system performance;
  • Computational strategies for human neuroscience that reduce model bias towards underrepresented groups and improve data coverage, access, equity, and fairness;
  • Computational models examining the mechanisms whereby social determinants of health interact with biological factors, including genetics, to influence risk or resilience for diseases in the nervous system;
  • Methods for measuring and analyzing connectivity, dynamics, information, and causation in neural systems;
  • Integration and modeling of data across levels of analysis, from molecular to circuit level mechanisms implicated with behavior;
  • Explanatory models of spatiotemporal brain dynamics across multiple scales;
  • Approaches that integrate neural and cognitive models;
  • Data-intensive approaches to modeling and analysis, and integrated theory- and data-driven models at different levels of abstraction;
  • Theoretical and computational methods that can be applied to: common pathways, circuits, and mechanisms underlying multiple diseases in the nervous system; integrating brain measures across levels of analysis; and translational research; and
  • Computational approaches in translational research aimed at addressing one or more phases (e.g., target identification) of drug discovery for nervous system disorders, including mechanistic neurobiological models of drug target engagement.

Examples of topics amenable to these approaches include but are not limited to the following:

  • Neurodevelopment, neurodegeneration, neuroinflammation and repair;
  • Pattern recognition and perception, learning, representation, and encoding;
  • Motor control mechanisms and sensorimotor integration;
  • Memory and attention;
  • Cognitive and decision-making functions and dysfunction (including, e.g., impulse control and disinhibition, and addiction, broadly construed);
  • Neural origins of risk and time preference;
  • Judgment, choice formation, and social-behavioral phenomena such as trust, competitiveness, and cooperation, including the role of emotion;
  • Language and communication;
  • Intellectual and developmental disabilities;
  • Neural interface decoding and analysis, control, and modeling of processes affecting neural interfaces and neuroprostheses;
  • Application of knowledge of brain computation to devices;
  • Normal and abnormal sensory processing (vision, audition, olfaction, taste, balance, proprioception, and somatic sensation);
  • Neural mechanisms of adaptation to environmental constraints or disease;
  • Neurological, neuromuscular, and neurovascular disorders;
  • Mental health, mental illness, and related disorders;
  • Alcohol and substance use disorders, including their interaction with eating disorders and other psychiatric and neurological disorders, and the effects of these conditions on cognitive processes;
  • Emergent and state-space properties of dynamic neural networks and ensembles; and
  • Modulation of central and/or peripheral neural processes by complementary and integrative health approaches.
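
As a purely illustrative aside, and not part of the solicitation text, the short Python sketch below shows the flavor of the first approach listed above: a minimal leaky integrate-and-fire neuron model, standing in for the far richer models and simulations of nervous system structure and function that Research Proposals typically develop. All parameter values are arbitrary placeholders.

    # Purely illustrative: a minimal leaky integrate-and-fire (LIF) neuron driven by a
    # constant input current. All parameter values are arbitrary placeholders.
    import numpy as np

    def simulate_lif(i_ext=2.0e-9, duration=0.5, dt=1e-4, tau_m=0.02,
                     r_m=1e7, v_rest=-0.070, v_thresh=-0.054, v_reset=-0.080):
        """Simulate one LIF neuron; return (time, membrane voltage, spike times)."""
        steps = int(duration / dt)
        t = np.arange(steps) * dt
        v = np.full(steps, v_rest)
        spikes = []
        for k in range(1, steps):
            # Forward-Euler step of dV/dt = (-(V - V_rest) + R_m * I_ext) / tau_m
            v[k] = v[k - 1] + dt * (-(v[k - 1] - v_rest) + r_m * i_ext) / tau_m
            if v[k] >= v_thresh:      # threshold crossing: record a spike and reset
                spikes.append(t[k])
                v[k] = v_reset
        return t, v, spikes

    if __name__ == "__main__":
        t, v, spikes = simulate_lif()
        print(f"{len(spikes)} spikes in {t[-1]:.2f} s of simulated time")

A real project would of course replace such a toy with biophysically detailed, network-level, or data-driven models fitted to experimental data, along the lines of the approaches listed above.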

Cloud Computing and High-Throughput Computing Resources (for new proposals and for active funded awards): Many CRCNS projects face data- and computationally-intensive challenges that may benefit from accessing cloud computing or high-throughput computing resources, which provide robust, agile, reliable, and scalable infrastructure. The Cloud Access Program (NSF) and the STRIDES Initiative (NIH) have established partnerships with commercial cloud service providers to provide awardees with cost-effective, flexible access to cloud-based resources. The Partnership to Advance Throughput Computing ( PATh ) facilitates access to distributed high throughput computing technologies and services. The US PIs of new proposals or active funded awards may request these resources according to the instructions at the end of Section V.A. of this solicitation.

Community Input: The participating funders encourage interdisciplinary, community-driven efforts to map out new frontiers at the interface of neuroscience and other disciplines that could reshape brain research and its applications. Inquiries regarding potential workshops, synthesis papers, or similar activities may be sent to [email protected] and/or the agency contacts listed in Section VIII of this solicitation.

III. Award Information

It is anticipated that a minimum of $5 million will be available each year for this competition, with potentially $20 to $30 million annually, depending on the quality of proposals and availability of funds.

Award sizes for CRCNS projects (including all CRCNS-funded project components, inside and outside of the United States) have typically ranged from approximately $100,000 to $250,000 (USD) per year in direct costs, with durations of three to five years. Proposers contemplating projects with higher budget requirements (e.g., multilateral projects) are advised to consult in advance with the CRCNS Program Coordinator-NSF.

Additional agency-specific limitations, including maximum award budgets and durations , are described in Section VIII of this solicitation. All proposers are advised to review the additional limitations that apply to projects funded by the U.S. Department of Energy and National Institutes of Health and, if applicable, funding limitations specified by other national funding organizations from which parallel funding is requested. Proposals with budget requests exceeding applicable limits may be returned without review.

Estimated program budget, number of awards, and average award size and duration are subject to the availability of funds.

Upon conclusion of the review process, meritorious research proposals may be recommended for funding by one or more of the participating funding organizations, at the option of the funders, not the proposer. Subsequent grant administration procedures will be in accordance with the individual policies of the awarding agency.

Further information about agency processes and agency-specific award information is provided in Section VI.B. and Section VIII of this solicitation.

IV. Eligibility Information

Additional Eligibility Info:

Proposal Limit: Proposals submitted in response to this solicitation may not duplicate or be substantially similar to other proposals concurrently under consideration by other programs or study sections of the participating agencies. Duplicate or substantially similar proposals will be returned without review.

V. Proposal Preparation and Submission Instructions

Full Proposal Preparation Instructions: Proposers may opt to submit proposals in response to this Program Solicitation via Research.gov or Grants.gov.

  • Full Proposals submitted via Research.gov: Proposals submitted in response to this program solicitation should be prepared and submitted in accordance with the general guidelines contained in the NSF Proposal and Award Policies and Procedures Guide (PAPPG). The complete text of the PAPPG is available electronically on the NSF website at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg . Paper copies of the PAPPG may be obtained from the NSF Publications Clearinghouse, telephone (703) 292-8134 or by e-mail from [email protected] . The Prepare New Proposal setup will prompt you for the program solicitation number.
  • Full proposals submitted via Grants.gov: Proposals submitted in response to this program solicitation via Grants.gov should be prepared and submitted in accordance with the NSF Grants.gov Application Guide: A Guide for the Preparation and Submission of NSF Applications via Grants.gov . The complete text of the NSF Grants.gov Application Guide is available on the Grants.gov website and on the NSF website at: ( https://www.nsf.gov/publications/pub_summ.jsp?ods_key=grantsgovguide ). To obtain copies of the Application Guide and Application Forms Package, click on the Apply tab on the Grants.gov site, then click on the Apply Step 1: Download a Grant Application Package and Application Instructions link and enter the funding opportunity number, (the program solicitation number without the NSF prefix) and press the Download Package button. Paper copies of the Grants.gov Application Guide also may be obtained from the NSF Publications Clearinghouse, telephone (703) 292-8134 or by e-mail from [email protected] .

In determining which method to utilize in the electronic preparation and submission of the proposal, please note the following:

Collaborative Proposals. All collaborative proposals submitted as separate submissions from multiple organizations must be submitted via Research.gov. PAPPG Chapter II.E.3 provides additional information on collaborative proposals.

See PAPPG Chapter II.D.2 for guidance on the required sections of a full research proposal submitted to NSF. Please note that the proposal preparation instructions provided in this program solicitation may deviate from the PAPPG instructions.

The following information supplements the PAPPG and the NSF Grants.gov Application Guide.

Proposals submitted in response to this solicitation should be prepared according to the general guidelines contained in the PAPPG, as modified by the following additional specific instructions for Research Proposals or Data Sharing Proposals. Additional instructions for International Proposals Seeking Parallel Funding apply only to proposals for projects involving collaborations among organizations in the United States and organizations in other countries, to be funded in parallel by participating agencies of the corresponding countries. Proposals involving other types of international collaboration may also be submitted, for consideration under standard U.S. funding mechanisms. Proposers are advised to discuss such projects with the appropriate agency contact(s) before submitting. The instructions for specific classes of proposals are cumulative, as indicated below:

  • US Research Proposal: general PAPPG instructions plus the additional instructions for Research Proposals.
  • US Data Sharing Proposal: general PAPPG instructions plus the additional instructions for Data Sharing Proposals.
  • US-Germany, US-France, US-Israel, US-Japan, US-Spain, or Multilateral Research Proposal: general PAPPG instructions, the additional instructions for Research Proposals, the instructions for International Proposals Seeking Parallel Funding, and the applicable agency-specific instructions in Section VIII.
  • US-Germany, US-France, US-Israel, US-Japan, US-Spain, or Multilateral Data Sharing Proposal: general PAPPG instructions, the additional instructions for Data Sharing Proposals, the instructions for International Proposals Seeking Parallel Funding, and the applicable agency-specific instructions in Section VIII.

Research Proposals

The following additional instructions apply to all Research Proposals submitted in response to this solicitation. If the proposal seeks parallel funding for an international collaboration, please also refer to the instructions below for International Proposals Seeking Parallel Funding.

  • Title: Titles for research proposals should begin with the phrase, “CRCNS Research Proposal:” Additional title prefixes (e.g., “Collaborative Research:” or “RUI:”) may be included as applicable. Although all CRCNS Research Proposals must describe scientific collaborations involving two or more investigators, they do not need to be collaborative proposals in the administrative sense, i.e., separately submitted proposals from multiple organizations seeking US funding on a unified project. (See PAPPG Chapter II.E.3. for more information.)
  • Project Summary: The project summary should convey both the interdisciplinary problem(s) of interest and the methods to be employed. For projects with medical relevance, the statement on broader impacts should include a summary of the project's potential contributions to understanding, preventing, and managing disease, and enhancing public health. Proposers requesting cloud computing and/or high-throughput computing resources as part of the proposal should include the keyword “CloudAccess” and/or “HTCAccess” (one word without space) at the end of the Overview section (before the section on Intellectual Merit).
  • Project Description: In addition to the guidance specified in the PAPPG, including the requirement for a separate section labeled “Broader Impacts,” Project Descriptions for research projects must include a Coordination Plan. Up to two additional pages are permitted in the Project Description for this purpose only, allowing a maximum of 17 pages total. The Coordination Plan must include: 1) the specific roles of the collaborating PIs, co-PIs, other Senior Personnel and paid consultants at all organizations involved; 2) how the project will be managed across organizations and disciplines; 3) identification of the specific coordination mechanisms that will enable cross-organization and/or cross-discipline scientific integration (e.g., workshops, graduate student exchange, project meetings at conferences, communication tools, software repositories, etc.), and 4) specific references to the budget line items that support these coordination mechanisms. Information on Results from Prior Support is required for each PI or co-PI identified on the project who has received funding from any of the CRCNS participating funding organizations with an end date in the past five years, or any current funding, including no-cost extensions. Sharing of data, software, and/or other resources under prior support must be addressed. In cases where the PI or any co-PI has received more than one award (excluding amendments to existing awards), they need only report on the one award that is most closely related to the proposal. For recipients of BSF funding, the prior reporting requirement applies only to funding through NSF-BSF joint programs, not the BSF regular research programs. If the Project Description, excluding the Coordination Plan, exceeds 15 pages, or is missing required information on Results from Prior Support, the proposal will be returned without review.
  • Supplementary Documents: Supplementary documents are limited to the specific types of documentation listed in the PAPPG or other applicable solicitation guidance (e.g., Facilitating Research at Primarily Undergraduate Institutions ), with the following exceptions:
Human Subjects Protection. Proposals involving human subjects should include a supplementary document, no more than two pages in length, summarizing potential risks to human subjects; plans for recruitment and informed consent; inclusion of women, minorities, and children; and planned procedures to protect against or minimize potential risks. Only one Human Subjects Protection document, covering all collaborative components of the project within the two-page limit, may be submitted per project.

Vertebrate Animals. Proposals involving vertebrate animals should include a supplementary document, no more than two pages in length, that addresses the following points: detailed description of the proposed use of the animals, including species, strains, ages, sex, and number to be used; justification for the use of animals, choice of species, and numbers to be used; description of procedures for minimizing discomfort, distress, pain, and injury; and method of euthanasia and the reasons for its selection. Only one Vertebrate Animals document, covering all collaborative components of the project within the two-page limit, may be submitted per project.

Data Management Plan. All proposals must include a supplementary document on data management as specified in the PAPPG and CISE Guidance for Data Management Plans (https://www.nsf.gov/cise/cise_dmp.jsp). As needed, the Data Management Plan should also address possible differences between U.S. and applicable non-U.S. data protection requirements. Only one Data Management Plan, covering all collaborative components of the project within the two-page limit, may be submitted per project. Costs associated with Data Management Plans (e.g., for making data accessible and reusable for other researchers) may be included in the proposal budget.

Letters of Collaboration. Letters to document collaborative arrangements are limited to stating the intent to collaborate and may not contain endorsements or evaluation of the proposed project. These letters must conform to the following format: “If the proposal submitted by Dr. [insert the full name of the Principal Investigator] entitled [insert the proposal title] is selected for funding, it is my intent to [describe tasks and/or resource commitments in 20 words or fewer] as detailed in the Project Description or the Facilities, Equipment or Other Resources section of the proposal.”

Cloud Computing and High-Throughput Computing Resources. Proposals requesting cloud computing and/or high-throughput computing (HTC) resources should include a supplementary document, no more than two pages in length, that includes the following information:

  • (1) the title of the proposal and name(s) of the US PI(s) and institution(s) requesting the resources;
  • (2) a technical description and justification for the requested cloud computing and/or HTC resources;
  • (3) for cloud computing requests, specific cloud computing providers that will be used;
  • (4) for HTC requests, information regarding (a) the expected number of self-contained tasks per ensemble – note that each task can be packaged into one or more batch job; (b) the resource requirements for each task type in the ensemble – for example, requirements for cores, memory, wall-time, and scratch space; (c) the expected number of ensembles; (d) the expected input and output data requirements for each task type; and (e) the expected number and size of shared input files within an ensemble – expected number of times each file is read per ensemble; and
  • (5) the anticipated annual and total costs for accessing the desired cloud computing and/or HTC resources. This cost should not be included in the NSF proposal budget.

PIs may refer to the CloudBank or STRIDES websites for information on estimating the budget for cloud computing resources. PIs may refer to the PATh website or contact [email protected] with questions about HTC resources, using HTC, or estimating credit needs.

The NSF proposal budget should not include the anticipated cost of accessing these resources; that should be reflected in item (5) of the supplementary document. Proposers should include the keyword “CloudAccess” and/or “HTCAccess” (one word without space) at the end of the Overview section (before the section on Intellectual Merit) of the Project Summary.

Proposals containing special information or supplementary documentation that has not been explicitly allowed above, such as article reprints or preprints, or appendices, will be returned without review.

Data Sharing Proposals

The following additional instructions apply to all Data Sharing Proposals submitted in response to this solicitation. If the proposal seeks parallel funding for an international collaboration, please also refer to the further instructions below for International Proposals Seeking Parallel Funding.

  • Title: Titles for data sharing proposals should begin with the phrase, “CRCNS Data Sharing Proposal:” Additional title prefixes (e.g., “Collaborative Research:” or “RUI:”) may be included as applicable. (See PAPPG Chapter II.E.3. for more information about collaborative proposals.)
  • Project Summary: As with Research Proposals, the statement on broader impacts should address medical relevance if appropriate, and the keyword “CloudAccess” and/or “HTCAccess” should be used to indicate a request for cloud computing and/or high-throughput computing resources as part of the proposal.
  • Project Description: In addition to the guidance specified in the PAPPG, including the requirement for a separate section labeled “Broader Impacts,” Project Descriptions for Data Sharing Proposals should address the following points:
  • Description and significance of the data, software, code bases, stimuli, models, or other resources, including their quality, scientific importance, structure, format, and scale;
  • Relationship to similar data or other resources, relevant standards, coordination with relevant related activities and infrastructure, and potential for integration with other resources;
  • Anticipated range of uses for research and education in computational neuroscience or other fields;
  • Plan for preparation and deployment, including technical plans, metadata and documentation, and plans for outreach and community input; and
  • Anticipated implementation timetable and strategy for evaluation and management over the course of the award period.

For proposals involving multiple collaborators, organizations, or collaborating contributors, a Coordination Plan, as described above for Research Proposals, is allowed but not required. As with Research Proposals, up to two additional pages are permitted in the Project Description for the Coordination Plan, for a maximum of 17 pages total.

Information on Results from Prior Support – including sharing of data, software, and/or other resources under prior support – must be included for each PI or co-PI identified on the project who has received funding from any of the CRCNS participating funding organizations with an end date in the past five years, or any current funding, including no-cost extensions. For recipients of BSF funding, the prior reporting requirement applies only to funding through NSF-BSF joint programs, not the BSF regular research programs. If the Project Description, excluding the Coordination Plan, exceeds 15 pages, or is missing required information on Results from Prior Support, the proposal will be returned without review.
  • Supplementary Documents: Data management issues are integral to data sharing projects and should be addressed within the project description; the required Data Management Plan supplementary document may refer the reader to the project description. Proposals should include a supplementary document on Human Subjects Protection, as described above for Research Proposals, if sharing of the data or other resources raises potential human subjects issues (e.g., confidentiality). Letters of Collaboration must conform to the format described above for Research Proposals. Other supplementary documents, as described above for Research Proposals, may be included as applicable.
Proposals containing special information or supplementary documentation that has not been explicitly allowed in the PAPPG or this solicitation, such as article reprints or preprints, or appendices, will be returned without review.

International Proposals Seeking Parallel Funding

The following special instructions apply to proposals for projects involving bilateral or multilateral collaborations among organizations in the United States and organizations in other countries, to be funded in parallel by participating agencies of the corresponding countries. US investigators should prepare a proposal according to the instructions below. Collaborating PIs from outside of the United States are referred to Section VIII of this solicitation for further instructions from the appropriate partner funding agency. Non-US partners seeking parallel funding from partner agencies should not request funding in the US budget: they should not request subawards from the US proposer, nor should they submit a lead or non-lead collaborative proposal to NSF.

  • A proposal to NSF should be prepared according to the guidelines above for Research Proposals or Data Sharing Proposals, as appropriate, except as follows. Proposal titles should begin with “CRCNS” followed by a phrase describing the countries involved and the type of proposal, such as “CRCNS US-German Research Proposal:” The countries and proposal types that will be considered for parallel funding are described in Section VIII of this solicitation. The NSF proposal should be submitted by the US partner in the collaboration. The NSF proposal should describe the combined collaborative activities, across all participating countries and investigators, as a unified project.
  • The collaborating PIs, co-PIs, and senior personnel, from all participating countries, must be listed in full at the top of the first page of the Project Description, along with their departmental and organizational affiliations. The NSF Cover Sheet and biographical sketches will include only the investigators affiliated with US organizations. Biographical sketches for PIs, co-PIs, and senior personnel from outside of the United States must be included as supplementary documents in the NSF proposal. They do not need to be in NSF-approved format so long as they are consistent with the length and content requirements specified in the PAPPG.
  • All International Proposals Seeking Parallel Funding must include a Coordination Plan, which should include specific plans for exchange of students and researchers, including timing, duration, and logistical arrangements for visits, and roles of specific project personnel. NSF specifically encourages US students and early-career researchers to spend substantive time abroad collaborating with researchers in foreign organizations. (As with domestic proposals, up to two additional pages are permitted in the Project Description for the Coordination Plan, for a maximum of 17 pages total. If the Project Description, excluding the Coordination Plan, exceeds 15 pages, or is missing required information on Results from Prior Support, the proposal will be returned without review.)
  • The NSF budget pages (in US Dollars) should not include any of the costs of components of the project outside of the United States that are to be funded by partner agencies. Budgets for the non-US components of the project (in the currencies used by the partner agencies) must be prepared according to the instructions of partner agencies, referenced in Section VIII of this solicitation, and included as a supplementary document in the NSF proposal. The range of award sizes described in Section III of this solicitation applies to the combined direct costs of the full project (summed over all CRCNS-funded components of the project, inside and outside of the United States).
  • Statements of current and pending support for investigators outside of the United States, and statements of their facilities, equipment, and other resources, should be submitted as supplementary documents in the NSF proposal. They do not need to be in NSF-approved format so long as they are consistent with the length and content requirements specified in the PAPPG.
  • Supplementary documents for Data Management Plans and, as needed, Postdoctoral Mentoring Plans, Human Subjects Protection, and Vertebrate Animals should cover all components of the collaborative project, inside and outside of the United States. No more than one document of each of these types may be submitted per collaborative project. Page limits for these documents are specified above and in the PAPPG.
  • Information on Collaborators & Other Affiliations for PIs, co-PIs, and senior personnel from outside of the United States should be submitted under Additional Single Copy Documents.

Cloud Computing and High-Throughput Computing Resources (for active funded awards)

US PIs of active funded CRCNS awards are eligible to request cloud computing and/or high-throughput computing (HTC) resources in support of their funded projects as follows:

  • NSF- and NIH-funded PIs may request cloud computing resources to use public clouds such as Amazon Web Services, Google Cloud Platform, IBM Cloud, and Microsoft Azure. NSF-funded PIs may apply for cloud computing resources via CloudBank, an external cloud access entity supported by NSF’s Cloud Access program. NIH-funded PIs may apply for resources via STRIDES, an initiative of the NIH Office of Data Science Strategy (ODSS), part of a plan for implementing the NIH Strategic Plan for Data Science.
  • All US PIs (funded by NSF, NIH, or DOE) may request HTC resources through the Partnership to Advance Throughput Computing (PATh) project supported by NSF. HTC supports the automated execution of workloads that consist of large ensembles of self-contained inter-dependent tasks that may require large amounts of computing power over long periods of time to complete. Available resources include large-scale compute and GPU servers and nearline storage, as described further on the PATh credit accounts web page.

NSF- or NIH-funded PIs requesting cloud computing resources should contact the NSF or NIH cognizant program officer of their CRCNS project by e-mail with a description of their cloud computing request. The description should include, in no more than two pages, items (1), (2), (3), and (5) as described above for the Cloud Computing and High-Throughput Computing Resources supplementary document. Please include “CRCNS CloudAccess” and the NSF award number, or “CRCNS STRIDES” and the NIH award number, in the e-mail subject line. Cloud computing requests will be internally reviewed. As appropriate, the PI(s) will be contacted with further instructions on how to submit an administrative action on their NSF award to access CloudBank, or how to request an administrative supplement to their NIH award for STRIDES.

PIs requesting HTC resources should contact the NSF, NIH, or DOE cognizant program officer of their CRCNS project by e-mail with a description of their HTC request. The description should include, in no more than two pages, items (1), (2), (4), and (5) as described above for the Cloud Computing and High-Throughput Computing Resources supplementary document. Please include “CRCNS HTCAccess” and the award number for the funded project in the e-mail subject line. HTC resource requests will be internally reviewed. NSF will work directly with PATh to provision credits for approved requests.

Proposal Preparation Checklist

Prior to submission, applicants are strongly encouraged to review the proposal preparation checklist provided in the PAPPG, Exhibit II-1, as well as the following list of items specific to the CRCNS program:

[ ] General:

  • [ ] The proposal is not a duplicate of, or substantially similar to, a proposal concurrently under consideration by other programs or study sections of the participating agencies.

[ ] Cover Sheet:

  • [ ] Proposal title includes “CRCNS Research Proposal:” or “CRCNS Data Sharing Proposal:” or, for International Proposals Seeking Parallel Funding, a similar prefix identifying the countries involved and the type of proposal, such as “CRCNS US-German Research Proposal:”
  • [ ] Collaborative proposal status reflects whether or not two or more organizations are seeking US funding to collaborate on a unified project.

[ ] Project Summary:

  • [ ] The statement on broader impacts addresses medical relevance if appropriate.
  • [ ] Proposers seeking cloud computing and/or high-throughput computing resources include the keyword “CloudAccess” and/or “HTCAccess” (one word without space) at the end of the Overview section (before the section on Intellectual Merit).

[ ] Project Description:

  • [ ] The Project Description, excluding the Coordination Plan of up to two additional pages, does not exceed 15 pages.
  • [ ] Results of prior support are included for all applicable PIs and co-PIs, and address sharing of data, software, and/or other resources under prior support.

[ ] Proposal Budget:

  • [ ] The requested budget does not exceed the applicable limits described in Section VIII of this solicitation.
  • [ ] For International Proposals Seeking Parallel Funding, the NSF budget pages do not include any of the costs of components of the project outside of the United States that are to be funded by partner agencies.

[ ] Special Information and Supplementary Documentation:

  • [ ] For International Proposals Seeking Parallel Funding, budgets for any components of the project outside of the United States, to be funded by partner agencies, are included in supplementary documents, in the currencies and according to the instructions of the partner agencies.
  • [ ] Biographical sketches for PIs, co-PIs, and senior personnel from outside of the United States are included as supplementary documents (if applicable).
  • [ ] Letters of collaboration conform to the solicitation-prescribed format (if applicable).
  • [ ] Proposals involving human subjects include a supplementary document, no more than two pages in length.
  • [ ] Proposals involving vertebrate animals include a supplementary document, no more than two pages in length.
  • [ ] Proposals requesting cloud computing or HTC resources include a supplementary document, no more than two pages in length.

Cost Sharing:

Budgets should include travel funds for the PI to attend an annual CRCNS Principal Investigators' meeting.

D. Research.gov/Grants.gov Requirements

For Proposals Submitted Via Research.gov:

To prepare and submit a proposal via Research.gov, see detailed technical instructions available at: https://www.research.gov/research-portal/appmanager/base/desktop?_nfpb=true&_pageLabel=research_node_display&_nodePath=/researchGov/Service/Desktop/ProposalPreparationandSubmission.html . For Research.gov user support, call the Research.gov Help Desk at 1-800-381-1532 or e-mail [email protected] . The Research.gov Help Desk answers general technical questions related to the use of the Research.gov system. Specific questions related to this program solicitation should be referred to the NSF program staff contact(s) listed in Section VIII of this funding opportunity.

For Proposals Submitted Via Grants.gov:

Before using Grants.gov for the first time, each organization must register to create an institutional profile. Once registered, the applicant's organization can then apply for any federal grant on the Grants.gov website. Comprehensive information about using Grants.gov is available on the Grants.gov Applicant Resources webpage: https://www.grants.gov/web/grants/applicants.html . In addition, the NSF Grants.gov Application Guide (see link in Section V.A) provides instructions regarding the technical preparation of proposals via Grants.gov. For Grants.gov user support, contact the Grants.gov Contact Center at 1-800-518-4726 or by email: [email protected] . The Grants.gov Contact Center answers general technical questions related to the use of Grants.gov. Specific questions related to this program solicitation should be referred to the NSF program staff contact(s) listed in Section VIII of this solicitation.

Submitting the Proposal: Once all documents have been completed, the Authorized Organizational Representative (AOR) must submit the application to Grants.gov and verify the desired funding opportunity and agency to which the application is submitted. The AOR must then sign and submit the application to Grants.gov. The completed application will be transferred to Research.gov for further processing.

The NSF Grants.gov Proposal Processing in Research.gov informational page provides submission guidance to applicants and links to helpful resources including the NSF Grants.gov Application Guide , Grants.gov Proposal Processing in Research.gov how-to guide , and Grants.gov Submitted Proposals Frequently Asked Questions . Grants.gov proposals must pass all NSF pre-check and post-check validations in order to be accepted by Research.gov at NSF.

When submitting via Grants.gov, NSF strongly recommends applicants initiate proposal submission at least five business days in advance of a deadline to allow adequate time to address NSF compliance errors and resubmissions by 5:00 p.m. submitting organization's local time on the deadline. Please note that some errors cannot be corrected in Grants.gov. Once a proposal passes pre-checks but fails any post-check, an applicant can only correct and submit the in-progress proposal in Research.gov.

Proposers that submitted via Research.gov may use Research.gov to verify the status of their submission to NSF. For proposers that submitted via Grants.gov, until an application has been received and validated by NSF, the Authorized Organizational Representative may check the status of an application on Grants.gov. After proposers have received an e-mail notification from NSF, Research.gov should be used to check the status of an application.

VI. NSF Proposal Processing And Review Procedures

NSF will coordinate and manage the review of proposals jointly with participating domestic and foreign funding organizations, through a joint panel review process used by all participating funders. Relevant information about proposals and reviews of proposals will be shared between the participating organizations as appropriate. Further information on the processes and requirements of participating funding organizations is detailed in this Section and in Section VIII of this solicitation.

Proposals received by NSF are assigned to the appropriate NSF program for acknowledgement and, if they meet NSF requirements, for review. All proposals are carefully reviewed by a scientist, engineer, or educator serving as an NSF Program Officer, and usually by three to ten other persons outside NSF either as ad hoc reviewers, panelists, or both, who are experts in the particular fields represented by the proposal. These reviewers are selected by Program Officers charged with oversight of the review process. Proposers are invited to suggest names of persons they believe are especially well qualified to review the proposal and/or persons they would prefer not review the proposal. These suggestions may serve as one source in the reviewer selection process at the Program Officer's discretion. Submission of such names, however, is optional. Care is taken to ensure that reviewers have no conflicts of interest with the proposal. In addition, Program Officers may obtain comments from site visits before recommending final action on proposals. Senior NSF staff further review recommendations for awards. A flowchart that depicts the entire NSF proposal and award process (and associated timeline) is included in PAPPG Exhibit III-1.

A comprehensive description of the Foundation's merit review process is available on the NSF website at: https://www.nsf.gov/bfa/dias/policy/merit_review/ .

Proposers should also be aware of core strategies that are essential to the fulfillment of NSF's mission, as articulated in Leading the World in Discovery and Innovation, STEM Talent Development and the Delivery of Benefits from Research - NSF Strategic Plan for Fiscal Years (FY) 2022 - 2026 . These strategies are integrated in the program planning and implementation process, of which proposal review is one part. NSF's mission is particularly well-implemented through the integration of research and education and broadening participation in NSF programs, projects, and activities.

One of the strategic objectives in support of NSF's mission is to foster integration of research and education through the programs, projects, and activities it supports at academic and research institutions. These institutions must recruit, train, and prepare a diverse STEM workforce to advance the frontiers of science and participate in the U.S. technology-based economy. NSF's contribution to the national innovation ecosystem is to provide cutting-edge research under the guidance of the Nation's most creative scientists and engineers. NSF also supports development of a strong science, technology, engineering, and mathematics (STEM) workforce by investing in building the knowledge that informs improvements in STEM teaching and learning.

NSF's mission calls for the broadening of opportunities and expanding participation of groups, institutions, and geographic regions that are underrepresented in STEM disciplines, which is essential to the health and vitality of science and engineering. NSF is committed to this principle of diversity and deems it central to the programs, projects, and activities it considers and supports.

A. Merit Review Principles and Criteria

The National Science Foundation strives to invest in a robust and diverse portfolio of projects that creates new knowledge and enables breakthroughs in understanding across all areas of science and engineering research and education. To identify which projects to support, NSF relies on a merit review process that incorporates consideration of both the technical aspects of a proposed project and its potential to contribute more broadly to advancing NSF's mission "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense; and for other purposes." NSF makes every effort to conduct a fair, competitive, transparent merit review process for the selection of projects.

1. Merit Review Principles

These principles are to be given due diligence by PIs and organizations when preparing proposals and managing projects, by reviewers when reading and evaluating proposals, and by NSF program staff when determining whether or not to recommend proposals for funding and while overseeing awards. Given that NSF is the primary federal agency charged with nurturing and supporting excellence in basic research and education, the following three principles apply:

  • All NSF projects should be of the highest quality and have the potential to advance, if not transform, the frontiers of knowledge.
  • NSF projects, in the aggregate, should contribute more broadly to achieving societal goals. These "Broader Impacts" may be accomplished through the research itself, through activities that are directly related to specific research projects, or through activities that are supported by, but are complementary to, the project. The project activities may be based on previously established and/or innovative methods and approaches, but in either case must be well justified.
  • Meaningful assessment and evaluation of NSF funded projects should be based on appropriate metrics, keeping in mind the likely correlation between the effect of broader impacts and the resources provided to implement projects. If the size of the activity is limited, evaluation of that activity in isolation is not likely to be meaningful. Thus, assessing the effectiveness of these activities may best be done at a higher, more aggregated, level than the individual project.

With respect to the third principle, even if assessment of Broader Impacts outcomes for particular projects is done at an aggregated level, PIs are expected to be accountable for carrying out the activities described in the funded project. Thus, individual projects should include clearly stated goals, specific descriptions of the activities that the PI intends to do, and a plan in place to document the outputs of those activities.

These three merit review principles provide the basis for the merit review criteria, as well as a context within which the users of the criteria can better understand their intent.

2. Merit Review Criteria

All NSF proposals are evaluated through use of the two National Science Board approved merit review criteria. In some instances, however, NSF will employ additional criteria as required to highlight the specific objectives of certain programs and activities.

The two merit review criteria are listed below. Both criteria are to be given full consideration during the review and decision-making processes; each criterion is necessary but neither, by itself, is sufficient. Therefore, proposers must fully address both criteria. (PAPPG Chapter II.C.2.d(i). contains additional information for use by proposers in development of the Project Description section of the proposal). Reviewers are strongly encouraged to review the criteria, including PAPPG Chapter II.C.2.d(i), prior to the review of a proposal.

When evaluating NSF proposals, reviewers will be asked to consider what the proposers want to do, why they want to do it, how they plan to do it, how they will know if they succeed, and what benefits could accrue if the project is successful. These issues apply both to the technical aspects of the proposal and the way in which the project may make broader contributions. To that end, reviewers will be asked to evaluate all proposals against two criteria:

  • Intellectual Merit: The Intellectual Merit criterion encompasses the potential to advance knowledge; and
  • Broader Impacts: The Broader Impacts criterion encompasses the potential to benefit society and contribute to the achievement of specific, desired societal outcomes.

The following elements should be considered in the review for both criteria:

  • What is the potential for the proposed activity to: (a) advance knowledge and understanding within its own field or across different fields (Intellectual Merit); and (b) benefit society or advance desired societal outcomes (Broader Impacts)?
  • To what extent do the proposed activities suggest and explore creative, original, or potentially transformative concepts?
  • Is the plan for carrying out the proposed activities well-reasoned, well-organized, and based on a sound rationale? Does the plan incorporate a mechanism to assess success?
  • How well qualified is the individual, team, or organization to conduct the proposed activities?
  • Are there adequate resources available to the PI (either at the home organization or through collaborations) to carry out the proposed activities?

Broader impacts may be accomplished through the research itself, through the activities that are directly related to specific research projects, or through activities that are supported by, but are complementary to, the project. NSF values the advancement of scientific knowledge and activities that contribute to achievement of societally relevant outcomes. Such outcomes include, but are not limited to: full participation of women, persons with disabilities, and underrepresented minorities in science, technology, engineering, and mathematics (STEM); improved STEM education and educator development at any level; increased public scientific literacy and public engagement with science and technology; improved well-being of individuals in society; development of a diverse, globally competitive STEM workforce; increased partnerships between academia, industry, and others; improved national security; increased economic competitiveness of the United States; and enhanced infrastructure for research and education.

Proposers are reminded that reviewers will also be asked to review the Data Management Plan and the Postdoctoral Researcher Mentoring Plan, as appropriate.

Additional Solicitation Specific Review Criteria

For this solicitation, clinical and technological applications are specifically included among the societally relevant outcomes that could be related to a project's Broader Impacts, in addition to the potential outcomes listed above. An NIH Plan for Enhancing Diverse Perspectives (PEDP) should not be submitted; however, specific plans for fostering diversity, inclusivity, and accessibility are strongly encouraged as part of the Broader Impacts section of the Project Description.

The following additional review criterion reflects this solicitation's central goal of enabling high-quality collaborative research.

Quality and value of collaboration. Factors to be considered are as follows: Is the expertise of the proposers complementary and well-suited to the problems being addressed? Does the collaboration productively bring together new combinations of investigators? Are there new approaches or resources facilitated as a result of the collaboration? Are the specific roles of each collaborating investigator clear? Is the collaborative activity coordinated efficiently and effectively? To what extent will it contribute to the advancement of multiple collaborating disciplines? To what extent will it lead to the development of high-quality resources that will be useful to the research community at large? To what extent will it provide unique collaborative research experiences for participating students and early-career researchers?

For proposals involving international collaborations, reviewers will additionally consider: mutual benefits; true intellectual collaboration with the foreign partner(s); benefits to be realized from the expertise and specialized skills, facilities, sites, and/or resources of the international counterpart; and active research engagement of U.S. students and early-career researchers, where such individuals are engaged in the research.

NIH Review Criteria

The mission of the NIH is to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability. In their evaluations of Intellectual Merit, reviewers will be asked to consider the following criteria that are used by NIH:

Overall Impact. Reviewers will provide an overall impact score to reflect their assessment of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved, in consideration of the following five core review criteria, and additional review criteria (as applicable for the project proposed).

Significance. Does the project address an important problem or a critical barrier to progress in the field? Is the prior research that serves as the key support for the proposed project rigorous? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

Investigator(s). Are the PIs, collaborators, and other researchers well suited to the project? If Early Stage Investigators or those in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the project is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance, and organizational structure appropriate for the project?

Innovation. Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

Approach. Are the overall strategy, methodology, and analyses well-reasoned and appropriate to accomplish the specific aims of the project? Have the investigators included plans to address weaknesses in the rigor of prior research that serves as the key support for the proposed project? Have the investigators presented strategies to ensure a robust and unbiased approach, as appropriate for the work proposed? Are potential problems, alternative strategies, and benchmarks for success presented? If the project is in the early stages of development, will the strategy establish feasibility and will particularly risky aspects be managed? Have the investigators presented adequate plans to address relevant biological variables, such as sex, for studies in vertebrate animals or human subjects?

If the project involves clinical research, are the plans for 1) protection of human subjects from research risks, and 2) inclusion of minorities and members of both sexes/genders, as well as the inclusion of individuals of all ages (including children and older adults), justified in terms of the scientific goals and research strategy proposed?

Environment. Will the scientific environment in which the work will be done contribute to the probability of success? Are the institutional support, equipment and other physical resources available to the investigators adequate for the project proposed? Will the project benefit from unique features of the scientific environment, subject populations, or collaborative arrangements?

Where applicable, the following items will also be considered:

Protections for Human Subjects. For research that involves human subjects but does not involve one of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate the justification for involvement of human subjects and the proposed protections from research risk relating to their participation according to the following five review criteria: 1) risk to subjects, 2) adequacy of protection against risks, 3) potential benefits to the subjects and others, 4) importance of the knowledge to be gained, and 5) data and safety monitoring for clinical trials.

For research that involves human subjects and meets the criteria for one or more of the six categories of research that are exempt under 45 CFR Part 46, the committee will evaluate: 1) the justification for the exemption, 2) human subjects involvement and characteristics, and 3) sources of materials.

Inclusion of Women, Minorities, and Individuals Across the Lifespan. When the proposed project involves human subjects and/or NIH-defined clinical research, the committee will evaluate the proposed plans for inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion (or exclusion) of individuals of all ages (including children and older adults) to determine if it is justified in terms of the scientific goals and research strategy proposed.

Vertebrate Animals. The committee will evaluate the involvement of live vertebrate animals as part of the scientific assessment according to the following criteria: (1) description of procedures involving animals including species, strains, ages, sex, and total number to be used; (2) justifications for the use of animals and for the appropriateness of the species proposed; (3) interventions to minimize discomfort, distress, pain and injury; and (4) justification for euthanasia method if NOT consistent with the American Veterinary Medical Association (AVMA) Guidelines for the Euthanasia of Animals. Reviewers will assess the use of chimpanzees as they would any other application proposing the use of vertebrate animals. For additional information, see http://grants.nih.gov/grants/olaw/VASchecklist.pdf .

Biohazards. Reviewers will assess whether materials or procedures proposed are potentially hazardous to research personnel and/or the environment, and if needed, determine whether adequate protection is proposed.

Budget and Period of Support. Reviewers will consider whether the budget and the requested period of support are fully justified and reasonable in relation to the proposed research.

B. Review and Selection Process

Proposals submitted in response to this program solicitation will be reviewed by Ad hoc Review and/or Panel Review.

Reviewers will be asked to evaluate proposals using two National Science Board approved merit review criteria and, if applicable, additional program specific criteria. A summary rating and accompanying narrative will generally be completed and submitted by each reviewer and/or panel. The Program Officer assigned to manage the proposal's review will consider the advice of reviewers and will formulate a recommendation.

NSF Process: Those proposals selected for funding by NSF will be handled in accordance with standard NSF procedures. After scientific, technical and programmatic review and consideration of appropriate factors, the NSF Program Officer recommends to the cognizant Division Director whether the proposal should be declined or recommended for award. NSF is striving to be able to tell applicants whether their proposals have been declined or recommended for funding within six months. The time interval begins on the date of receipt. The interval ends when the Division Director accepts the Program Officer's recommendation.

A summary rating and accompanying narrative will be completed and submitted by each reviewer. In all cases, reviews are treated as confidential documents. Verbatim copies of reviews, excluding the names of the reviewers, are sent to the Principal Investigator/Project Director by the Program Officer. In addition, the proposer will receive an explanation of the decision to award or decline funding.

In all cases, after programmatic approval has been obtained, the proposals recommended for funding will be forwarded to the Division of Grants and Agreements or the Division of Acquisition and Cooperative Support for review of business, financial, and policy implications. After an administrative review has occurred, Grants and Agreements Officers perform the processing and issuance of a grant or other agreement. Proposers are cautioned that only a Grants and Agreements Officer may make commitments, obligations or awards on behalf of NSF or authorize the expenditure of funds. No commitment on the part of NSF should be inferred from technical or budgetary discussions with a NSF Program Officer. A Principal Investigator or organization that makes financial or personnel commitments in the absence of a grant or cooperative agreement signed by the NSF Grants and Agreements Officer does so at their own risk.

NIH Process: For those proposals that are selected for potential funding by participating NIH Institutes or Centers, the PI will be required to resubmit the proposal in an NIH-approved format directly to the Center for Scientific Review ( http://www.csr.nih.gov/ ) of the NIH. PIs invited to resubmit to NIH will receive further information on resubmission procedures from NIH.

An applicant will not be allowed to increase the proposed budget or change the scientific content of the application in the resubmission to the NIH. NIH budgets may not exceed $250,000 per year in direct costs, and the total direct costs requested for all years may not exceed the total requested on the NSF application. However, in some cases, NIH Institutes may request that the budget be reallocated across the years of the grant to conform to NIH modular budget practices. Indirect costs on any foreign subawards/subcontracts will be limited to eight (8) percent. Applicants will be expected to utilize the Multiple Principal Investigator option at the NIH ( http://grants.nih.gov/grants/multi_PI/ ) as appropriate.

Proposals that are selected for potential funding by participating NIH Institutes or Centers will be subject to the NIH Data Management and Sharing policy ( NIH NOT-OD-21-013 , effective January 25, 2023) intended to promote the sharing of scientific data. Following the resubmission instructions provided by NIH, applicants planning research that results in the generation of scientific data will need to submit a Data Management and Sharing Plan, as described at https://sharing.nih.gov/data-management-and-sharing-policy . As outlined in the NIH Guide Notice Supplemental Policy Information: Allowable Costs for Data Management and Sharing ( NIH NOT-OD-21-015 ), investigators may request funds toward data management and sharing in the budget and budget justification sections of their applications.

These NIH applications will be entered into the NIH IMPAC II system. The results of the review will be presented to the involved Institutes' or Centers' National Advisory Councils for the second level of review. Subsequent to the Council reviews, NIH Institutes and Centers will make their funding determinations and selected awards will be made. Subsequent grant administration procedures for NIH awardees, including those related to New and Early Stage Investigators ( http://grants.nih.gov/grants/new_investigators/ ), will be in accordance with the policies of NIH. Applications selected for NIH funding will use the NIH R01 funding mechanism.

At the end of the project period, renewal applications for projects funded by the NIH are expected to be submitted directly to the NIH as Renewal Applications, rather than as proposals to the CRCNS program. Principal Investigators should contact their NIH Program Officer for additional information. For informational purposes, NIH Principal Investigators may wish to consult the NIH Grants and Funding web site ( https://grants.nih.gov/grants/about_grants.htm ), which provides excellent generic information about all aspects of NIH grantsmanship, including Renewal Applications.

DOE Process: For proposals that are selected for funding consideration by DOE, the DOE will ask the applicant(s) to resubmit the proposal in a DOE-approved format directly to the DOE via the DOE Portfolio Analysis and Management System (PAMS) or Grants.gov, and will provide guidance for doing so. Each of these DOE applications will be accompanied by a cover letter that associates the application with CRCNS. Applicants will not be allowed to increase the proposed budget or change the scientific content of the application in the resubmission to the DOE unless approved or negotiated by the DOE.

BMBF Process: On the basis of the evaluation, suitable project ideas will be selected for funding. Successful applicants will be informed in writing of the result of the selection procedure.

In the second phase of the procedure, applicants whose applications have received a positive evaluation will be invited to present a formal application for funding. A decision will be made after a final evaluation. Forms for funding applications, guidelines, leaflets, information and auxiliary terms and conditions are available on the Internet at http://www.foerderportal.bund.de/ or can be obtained from the project management organization. Applicants are strongly advised to use the electronic application system "easy" to draft (project outlines and) formal applications ( http://www.foerderportal.bund.de/ ).

ANR Process: Taking into consideration the joint panel review recommendation outcome and consultation with the participating funding organizations, ANR will select the projects to be funded. ANR will inform the French applicants of the outcome of the selection through its usual online channel. The grant agreements of the selected projects will be issued in accordance with the ANR standard process and the call-specific funding regulations and requirements published on the ANR website.

BSF Process: BSF requires parallel submission of the proposal by the U.S. and Israeli PIs, according to its submission regulations and using a project description identical to that submitted to the NSF. However, BSF will not conduct an independent selection process; rather, it will review the research programs selected for funding by the NSF and/or NIH that include Israeli PIs and, in most cases, fund them if sufficient resources are available. BSF will notify all applicants of the results and online availability of the review material. BSF submission instructions can be found using the link: https://www.bsf.org.il/funding-opportunities/nsf-bsf-joint-research-grants/the-programs/

NICT Process: NICT and NSF will decide on the projects to be selected after consultation, based on the proposal selection produced by the joint panel review. After this selection is officially approved by the NICT board of directors, NICT will inform Japanese applicants of the outcome. For successful applicants, a funding agreement will be entered into in accordance with the standard NICT regulations on research funding.

AEI Process: Taking into consideration the joint panel review recommendation outcome and consultation with the participating funding organizations, AEI will select the projects that it wishes to fund. AEI will inform the Spanish applicants of the outcome of the selection. The national support to the Spanish parties will be implemented through the State Research Program Call on International Joint Programming R&D Projects [ Proyectos de I+D+I de Programación Conjunta Internacional (PCI) ]. Spanish researchers will abide by national rules according to the PCI call. The Granting Resolution ( Resolución Definitiva de Concesión ) formalizes the agreement between the AEI and the Spanish beneficiaries and for all intents and purposes this Resolution acts as a formal contract between the parties.

ISCIII Process: Applicants eligible for funding will be invited to present a formal application for funding to ISCIII. Forms for funding applications, guidelines, leaflets, information, and auxiliary terms and conditions are available on the Internet at https://sede.isciii.gob.es . Applicants must use the electronic application system to draft (project outlines and) formal applications ( https://sede.isciii.gob.es ). ISCIII will select the project proposals that it wishes to fund, taking into consideration the final outcome of the joint panel review and in consultation with the participating funding organizations. ISCIII will inform the applicants based in institutions located in Spain of the outcome of the selection procedure. After programmatic approval, the grant agreements of the selected projects will be issued in accordance with ISCIII standard funding regulations pursuant to the call of the Strategic Action for Health ( Acción Estrategica en Salud ).

VII. NSF Award Administration Information

A. Notification of the Award

Notification of an NSF award is made to the submitting organization by an NSF Grants and Agreements Officer. Organizations whose proposals are declined will be advised as promptly as possible by the cognizant NSF Program administering the program. Verbatim copies of reviews, not including the identity of the reviewer, will be provided automatically to the Principal Investigator. (See Section VI.B. for additional information on the review process.)

B. Award Conditions

An NSF award consists of: (1) the award notice, which includes any special provisions applicable to the award and any numbered amendments thereto; (2) the budget, which indicates the amounts, by categories of expense, on which NSF has based its support (or otherwise communicates any specific approvals or disapprovals of proposed expenditures); (3) the proposal referenced in the award notice; (4) the applicable award conditions, such as Grant General Conditions (GC-1)*; or Research Terms and Conditions* and (5) any announcement or other NSF issuance that may be incorporated by reference in the award notice. Cooperative agreements also are administered in accordance with NSF Cooperative Agreement Financial and Administrative Terms and Conditions (CA-FATC) and the applicable Programmatic Terms and Conditions. NSF awards are electronically signed by an NSF Grants and Agreements Officer and transmitted electronically to the organization via e-mail.

*These documents may be accessed electronically on NSF's Website at https://www.nsf.gov/awards/managing/award_conditions.jsp?org=NSF . Paper copies may be obtained from the NSF Publications Clearinghouse, telephone (703) 292-8134 or by e-mail from [email protected] .

More comprehensive information on NSF Award Conditions and other important information on the administration of NSF awards is contained in the NSF Proposal & Award Policies & Procedures Guide (PAPPG) Chapter VII, available electronically on the NSF Website at https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg .

Administrative and National Policy Requirements

Build America, Buy America

As expressed in Executive Order 14005, Ensuring the Future is Made in All of America by All of America’s Workers (86 FR 7475), it is the policy of the executive branch to use terms and conditions of Federal financial assistance awards to maximize, consistent with law, the use of goods, products, and materials produced in, and services offered in, the United States.

Consistent with the requirements of the Build America, Buy America Act (Pub. L. 117-58, Division G, Title IX, Subtitle A, November 15, 2021), no funding made available through this funding opportunity may be obligated for an award unless all iron, steel, manufactured products, and construction materials used in the project are produced in the United States. For additional information, visit NSF’s Build America, Buy America webpage.

Special Award Conditions:

Attribution of support in publications must acknowledge the joint program, as well as the funding organization and award number, by including a phrase such as, "as part of the NSF/NIH/DOE/ANR/BMBF/BSF/NICT/AEI Collaborative Research in Computational Neuroscience Program."

C. Reporting Requirements

For all multi-year grants (including both standard and continuing grants), the Principal Investigator must submit an annual project report to the cognizant Program Officer no later than 90 days prior to the end of the current budget period. (Some programs or awards require submission of more frequent project reports). No later than 120 days following expiration of a grant, the PI also is required to submit a final project report, and a project outcomes report for the general public.

Failure to provide the required annual or final project reports, or the project outcomes report, will delay NSF review and processing of any future funding increments as well as any pending proposals for all identified PIs and co-PIs on a given award. PIs should examine the formats of the required reports in advance to assure availability of required data.

PIs are required to use NSF's electronic project-reporting system, available through Research.gov, for preparation and submission of annual and final project reports. Such reports provide information on accomplishments, project participants (individual and organizational), publications, and other specific products and impacts of the project. Submission of the report via Research.gov constitutes certification by the PI that the contents of the report are accurate and complete. The project outcomes report also must be prepared and submitted using Research.gov. This report serves as a brief summary, prepared specifically for the public, of the nature and outcomes of the project. This report will be posted on the NSF website exactly as it is submitted by the PI.

More comprehensive information on NSF Reporting Requirements and other important information on the administration of NSF awards is contained in the NSF Proposal & Award Policies & Procedures Guide (PAPPG) Chapter VII, available electronically on the NSF Website at https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg .

VIII. Agency Contacts And Agency-specific Information

Please note that the program contact information is current at the time of publishing. See program website for any updates to the points of contact.

General inquiries regarding this program should be made to:

For questions related to the use of NSF systems contact:

For questions relating to Grants.gov contact:

  • Grants.gov Contact Center: If the Authorized Organizational Representative (AOR) has not received a confirmation message from Grants.gov within 48 hours of submission of the application, please contact via telephone: 1-800-518-4726; e-mail: [email protected] .

The Spanish Research Agency (AEI) will consider US-Spanish Research Proposals and US-Spanish Data Sharing Proposals submitted to NSF in response to this solicitation. All the information required for Spanish applicants to successfully submit a proposal can be found in the annex to this solicitation available on the AEI-MCIN webpage at https://www.aei.gob.es/noticias/anuncio-convocatoria-proyectos-bilaterales-estados-unidos-participacion-aei-marco . AEI strongly encourages Spanish applicants to contact the national point of contact before the proposal is submitted.

In response to this solicitation, an investigator may participate as PI or co-PI in no more than one proposal involving Spain per review cycle. AEI recommends the signature of a consortium agreement covering financial and intellectual property issues as well as the management and delivery of project activities to all partners involved in bilateral or multilateral projects. Resulting scientific data not subject to intellectual property rights must comply with FAIR principles ( http://doi.org/10.1038/sdata.2016.18 ).

It is not necessary to submit a parallel proposal directly to AEI; nonetheless, a notification of submission should be sent, within one week following the NSF proposal deadline, to the national point of contact:

Esther Chacón, email: [email protected] or [email protected]

The French National Research Agency (ANR) will consider US-French Research Proposals and US-French Data Sharing Proposals, as well as Multilateral Research Proposals and Multilateral Data Sharing Proposals involving the United States and the partnering countries Israel and/or Japan, submitted to NSF in response to this solicitation. ANR will finance projects with a maximum duration of four years. The modalities of participation of French applicants are presented in the annex to this solicitation available on the ANR website ( https://anr.fr/crcns-2024 ). ANR requires the signature of a consortium agreement covering financial and intellectual property issues as well as the management and delivery of project activities. It is not necessary to submit a parallel full proposal to ANR; nonetheless, an annex for French Participants, including a publishable lay summary of the project as well as a financial plan for the French partners, should be submitted to NSF as supplementary material included in the proposal and, in parallel, sent to [email protected] within one week following the NSF proposal deadline.

ANR encourages researchers to contact EBRAINS infrastructure ( [email protected] ) to inquire about modalities for advanced computational resources.

The French applicants are strongly encouraged to contact ANR prior to submission:

Sheyla Mejia, Scientific Coordinator, Biology and Health Department, telephone: +33 1 7809 8014, email: [email protected]

Maurice Tia, Scientific Officer, Information and Communication Sciences and Technologies Department, telephone: +33 1 7273 0690, email: [email protected]

Germany's Federal Ministry of Education and Research (BMBF) will consider US-German Research Proposals and US-German Data Sharing Proposals submitted in response to this solicitation. The durations of these projects are expected to be no greater than three years. Investigators contemplating projects that would require longer durations are advised to discuss their project requirements with the appropriate agency contact(s) before submitting. Collaborating investigators in projects selected for funding that involve Germany will provide assurance to BMBF that a cooperation agreement, covering issues including intellectual property, has been established. It is not necessary to submit a parallel full proposal to BMBF; nonetheless, a financial plan for the German partners must be submitted to NSF as supplementary material included in the proposal. German applicants are referred to the BMBF Richtlinien ( https://www.gesundheitsforschung-bmbf.de/de/16749.php ) for further instructions, and are urged to contact the project management organization for advice on applications:

Katja Hüttner, DLR Projektträger für das BMBF, telephone: +49 228 3821 2177, email: [email protected] , web: http://www.dlr.de/pt

Sophia Schach, DLR Projektträger für das BMBF, telephone: +49 228 3821 1743, email: [email protected] , web: http://www.dlr.de/pt

The U.S.-Israel Binational Science Foundation will consider US-Israeli Research Proposals, US-Israeli Data Sharing Proposals, Multilateral Research Proposals, and Multilateral Data Sharing Proposals submitted in response to this solicitation. No more than five years of support may be requested. A proposal with the same project description as the proposal to NSF must be submitted by the Israeli PI to the BSF. The budget for the Israeli component of the project should be expressed in US Dollars. Submittal instructions are available at: https://www.bsf.org.il/funding-opportunities/nsf-bsf-joint-research-grants/the-programs/

Questions should be directed to:

Yael Dressler, telephone: +972-2-5828239, email: [email protected]

Rachel Haring, telephone: +972-2-5828239, email: [email protected]

DOE budgets may not exceed $400,000 per year in total costs (including direct and indirect costs) requested on the NSF application. The durations of these projects are expected to be no greater than three years.

Further questions may be directed to:

Robinson Pino, Program Manager, Office of Science, Advanced Scientific Computing Research, telephone: (301) 903-1263, email: [email protected]

Japan’s National Institute of Information and Communications Technology (NICT) will consider US-Japanese Research Proposals, US-Japanese Data Sharing Proposals, and Multilateral Research and Data Sharing Proposals involving the United States and the partnering countries France and/or Israel, submitted in response to this solicitation. The durations of these projects are expected to be no greater than three years. In a supplementary document, investigators should provide assurance that an agreement covering issues such as intellectual property has been or will be established within a reasonable time after the notifications of awarded projects.

There are two types of US-Japanese projects: one is under NICT’s extramural Commissioned ICT Research and Development Program, and the other is under NICT’s intramural R&D funding program for NICT researchers. Projects may involve extramural or intramural Japanese investigators, but not both. A proposal with the same project description as the proposal to NSF must be submitted by the Japanese PI to NICT. Japanese applicants should refer to NICT’s solicitation (Japanese language only) for more information.

CRCNS is affiliated with the NIH Blueprint for Neuroscience Research ( http://neuroscienceblueprint.nih.gov/ ), and involves ten participating NIH Institutes and Centers. An NIH Notice ( NOT-MH-24-140 ) is being issued in parallel with this solicitation. Proposals are selected for potential NIH funding on the basis of the common CRCNS joint review process; resubmission of proposals directly to NIH is by invitation only. No NIH awards will exceed $250,000 per year in direct costs.

The CRCNS program supports human research projects such as observational studies and Basic Experimental Studies involving Humans ( BESH ), i.e., studies that meet both the definition of basic research and the NIH definition of a clinical trial . However, Phase I-IV clinical trials with clinical outcomes as the primary outcomes to assess efficacy will not be accepted. Please be aware of NIH’s definition of clinical trials , which has specific requirements for applicants proposing BESH studies. For research projects that 1) involve human subjects and 2) have public health relevance, applicants are strongly encouraged to contact Dr. Siavash Vaziri ( [email protected] ) prior to submitting an application to determine whether it could be supported by NIH through this program.

A search for “CRCNS” in the NIH RePORTER system ( https://reporter.nih.gov/ ) will show a list of CRCNS projects supported by NIH. Further questions may be directed to:

Siavash Vaziri (NIH Chair), Program Director, National Institute of Mental Health, telephone: (301) 443-1576, email: [email protected]

Wen Chen, Program Director, National Center for Complementary and Integrative Health, telephone: (301) 451-3989, email: [email protected]

Amanda DiBattista, Program Director, Neurobiology of Aging and Neurodegeneration Branch, Division of Neuroscience, National Institute on Aging, telephone: (301) 496-9350, email: [email protected]

Qi Duan, Program Director, National Institute of Biomedical Imaging and Bioengineering, telephone: (301) 451-4780, email: [email protected]

Marie Gill, Program Specialist, National Institute of Neurological Disorders and Stroke, telephone: (301) 451-1449, email: [email protected]

John A. Matochik, Program Director, National Institute on Alcohol Abuse and Alcoholism, telephone: (301) 451-7319, email: [email protected]

Brett Miller, Program Director, Eunice Kennedy Shriver National Institute of Child Health and Human Development, telephone: (301) 496-9849, email: [email protected]

Jessica Mollick, Program Officer, National Institute on Drug Abuse, telephone: (301) 827-2949, email: [email protected]

Leslie Osborne, Program Director, National Institute of Neurological Disorders and Stroke, telephone: (301) 496-9964, email: [email protected]

Amy Poremba, Program Director, Central Pathways for Hearing and Balance, National Institute on Deafness and Other Communication Disorders, telephone: (301) 496-1804, email: [email protected]

Merav Sabri, Program Director, Central Processing of Taste and Smell, National Institute on Deafness and Other Communication Disorders, telephone: (301) 827-0908, email: [email protected]

Coryse St Hillaire-Clarke, Program Director, Sensory and Motor Disorders of Aging, Division of Neuroscience, National Institute on Aging, telephone: (301) 496-9350, email: [email protected]

Cheri Wiggs, Program Director, Division of Extramural Research, National Eye Institute, telephone: (301) 451-2020, email: [email protected]

A search in the NSF Award Search will show a list of CRCNS projects supported by NSF. Further questions for NSF program officers may be directed to:

Kenneth Whang, Program Director, Division of Information and Intelligent Systems, telephone: (703) 292-5149, fax: (703) 292-9073, email: [email protected]

Zhilan Feng, Program Director, Division of Mathematical Sciences, telephone: (703) 292-7523, email: [email protected]

Dwight Kravitz, Program Director, Division of Behavioral and Cognitive Sciences, telephone: (703) 292-8740, email: [email protected]

Maija Kukla, Program Director, Office of International Science and Engineering, telephone: (703) 292-4940, email: [email protected]

Floh Thiels, Program Director, Division of Integrative Organismal Systems, telephone: (703) 292-8167, email: [email protected]

Steven Zehnder, Associate Program Director, Division of Chemical, Bioengineering, Environmental, and Transport Systems, telephone: (703) 292-7014, email: [email protected]

Lucy Zhang, Program Director, Division of Civil, Mechanical, and Manufacturing Innovation, telephone: (703) 292-5015, email: [email protected]

IX. Other Information

The NSF website provides the most comprehensive source of information on NSF Directorates (including contact information), programs and funding opportunities. Use of this website by potential proposers is strongly encouraged. In addition, "NSF Update" is an information-delivery system designed to keep potential proposers and other interested parties apprised of new NSF funding opportunities and publications, important changes in proposal and award policies and procedures, and upcoming NSF Grants Conferences . Subscribers are informed through e-mail or the user's Web browser each time new publications are issued that match their identified interests. "NSF Update" also is available on NSF's website .

Grants.gov provides an additional electronic capability to search for Federal government-wide grant opportunities. NSF funding opportunities may be accessed via this mechanism. Further information on Grants.gov may be obtained at https://www.grants.gov .

About The National Science Foundation

The National Science Foundation (NSF) is an independent Federal agency created by the National Science Foundation Act of 1950, as amended (42 USC 1861-75). The Act states the purpose of the NSF is "to promote the progress of science; [and] to advance the national health, prosperity, and welfare by supporting research and education in all fields of science and engineering."

NSF funds research and education in most fields of science and engineering. It does this through grants and cooperative agreements to more than 2,000 colleges, universities, K-12 school systems, businesses, informal science organizations and other research organizations throughout the US. The Foundation accounts for about one-fourth of Federal support to academic institutions for basic research.

NSF receives approximately 55,000 proposals each year for research, education and training projects, of which approximately 11,000 are funded. In addition, the Foundation receives several thousand applications for graduate and postdoctoral fellowships. The agency operates no laboratories itself but does support National Research Centers, user facilities, certain oceanographic vessels and Arctic and Antarctic research stations. The Foundation also supports cooperative research between universities and industry, US participation in international scientific and engineering efforts, and educational activities at every academic level.

Facilitation Awards for Scientists and Engineers with Disabilities (FASED) provide funding for special assistance or equipment to enable persons with disabilities to work on NSF-supported projects. See the NSF Proposal & Award Policies & Procedures Guide Chapter II.E.6 for instructions regarding preparation of these types of proposals.

The National Science Foundation has Telephonic Device for the Deaf (TDD) and Federal Information Relay Service (FIRS) capabilities that enable individuals with hearing impairments to communicate with the Foundation about NSF programs, employment or general information. TDD may be accessed at (703) 292-5090 and (800) 281-8749, FIRS at (800) 877-8339.

The National Science Foundation Information Center may be reached at (703) 292-5111.

ABOUT THE NATIONAL INSTITUTES OF HEALTH

The National Institutes of Health (NIH) mission is to uncover new knowledge that will lead to better health for everyone. NIH works toward that mission by conducting research in its own laboratories; supporting the research of non-Federal scientists in universities, medical schools, hospitals, and research institutions throughout the country and abroad; helping in the training of research investigators; and fostering communication of medical information. The NIH institutes and centers participating in this program contribute to NIH's mission through research efforts aimed at understanding, treating, and preventing disease states that involve or are related to the nervous system.

  • The mission of the National Institute of Neurological Disorders and Stroke (NINDS) is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease. NINDS supports research projects that range from basic studies of the nervous system to Phase III clinical trials. Through the CRCNS program, NINDS will not support definitive clinical trials of therapeutic devices, such as a traditional feasibility study and/or pivotal trial (see https://www.fda.gov/regulatory-information/search-fda-guidance-documents/medical-device-accessories-describing-accessories-and-classification-pathways for the definition of an early feasibility study, feasibility study and pivotal trial). The NINDS is interested in supporting collaborative research in innovative computational analysis, simulation and modeling of physiological and pathological structures and functions of the nervous system, and mechanisms underlying neurological, neuromuscular, and neurovascular disorders.
  • The mission of the National Institute of Mental Health (NIMH) is to transform the understanding and treatment of mental illnesses through basic and clinical research, paving the way for prevention, recovery, and cure. NIMH supports research programs in neuroscience and basic behavioral science, genomics, technology development, translational research, global mental health, and services and intervention; please see: https://www.nimh.nih.gov/research-priorities/research-areas/index.shtml . The NIMH Strategic Plan for Research provides a broad roadmap for the Institute’s research priorities, encompassing a range from fundamental science of the brain and behavior to public health impact. For specifics about the NIMH strategic plan for research, please see: http://www.nimh.nih.gov/research-priorities/index.shtml .
  • National Institute on Drug Abuse (NIDA) supported research is aimed at increasing the understanding of the causes and consequences of substance use disorders (SUDs), and in how to prevent and treat them. NIDA supports a broad research program in basic and clinical neuroscience research ranging from molecular biology to cognition, including studies investigating fundamental behavior and brain circuitry relevant to substance use. NIDA is also interested in research on the co-morbidity of SUDs with other psychiatric disorders, and understanding interactions between substance use and HIV. NIDA is also interested in advancing research on the social determinants of health related to brain development, substance use, and addiction.
  • The National Eye Institute (NEI) supports basic and clinical research aimed at increasing our understanding of the eye and the visual system in normal health and disease.
  • The National Institute on Deafness and Other Communication Disorders (NIDCD) supports biomedical and behavioral research related to normal and disordered processes of hearing, balance, smell, taste, voice, speech and language. Basic and clinical studies of genetic, molecular, cellular, physiological, biochemical, and behavioral aspects of function in health and disease are encouraged.
  • The National Institute of Biomedical Imaging and Bioengineering (NIBIB) supports research and development of new and novel computational methods for modeling, simulation and analysis for the purpose of detecting, treating, and preventing disease. For projects developing computational methods for image analysis and post-processing, where the computation is not linked to the direct testing or generation of a neuroscience hypothesis, please refer to the NIBIB program for image processing: https://www.nibib.nih.gov/research-funding/image-processing-visual-perception-and-display .
  • The National Institute on Alcohol Abuse and Alcoholism (NIAAA) supports basic, clinical and behavioral research to increase the understanding of normal and abnormal biological functions and behavior relating to alcohol use, to improve the diagnosis, prevention, and treatment of alcohol use disorders, and to enhance quality health care to reduce the burden of alcohol abuse and addiction.
  • The National Center for Complementary and Integrative Health (NCCIH) supports research using scientific methods and advanced technologies (e.g., fMRI, MRI, PET) to study a diverse array of complementary medical and health care systems, practices (e.g., mindfulness-based interventions, music-based interventions, force-based manipulations, thermotherapies, meditative movement-based interventions) and natural product interventions (e.g., microbial-based therapeutics including probiotics, microbial metabolites, botanicals and related dietary supplements, and animal-derived peptides and toxins) with the goal of understanding their potential contribution to whole person health, pain, emotional well-being, and other symptoms. In addition, the NCCIH supports the integration of the technologies with multiple physiological studies to understand the connections and interactions across the systems involving brain and the rest of the nervous system such as interoception, and/or the impact of multi-component interventions on multisystem connections and interactions in pre-clinical models. Inclusive in this goal is to support collaborative computational approaches to study genetic, molecular, and neuroimaging, neurobiological and behavioral data that can be combined and brought to bear on understanding the underlying mechanisms of action of these complementary and integrative health approaches.
  • The mission of the National Institute on Aging (NIA) is to seek to understand the nature of aging and the aging process, and diseases and conditions associated with growing older, in order to extend the healthy, active years of life. NIA supports and conducts genetic, biological, clinical, behavioral, social, and economic research on aging. Aging Well in the 21st Century: Strategic Directions for Research on Aging is NIA's roadmap for progress in aging research and outlines the Institute's goals and vision. The NIA Division of Neuroscience supports basic, clinical and epidemiological research to understand the neural and behavioral processes associated with the aging brain. Research on Alzheimer's disease and related dementias of aging is of particular interest.
  • The Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) supports the full spectrum of basic, clinical, and translational research in the biomedical and behavioral neuroscience arenas, particularly as they affect developing systems and rehabilitation.

For the latest information about NIH programs, visit the NIH website at http://www.nih.gov/ .

ABOUT THE U.S. DEPARTMENT OF ENERGY

The mission of the Department of Energy (DOE) is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions. The DOE Office of Science (SC) mission is to deliver scientific discoveries and major scientific tools to transform our understanding of nature and advance the energy, economic and national security of the United States. SC is the Nation’s largest Federal sponsor of basic research in the physical sciences and the lead Federal agency supporting fundamental scientific research for our Nation’s energy future. Within SC, the Advanced Scientific Computing Research (ASCR) program’s mission is to advance applied mathematics and computer science; deliver the most sophisticated computational scientific applications in partnership with disciplinary science; advance computing and networking capabilities; and develop future generations of computing hardware and software tools for science and engineering in partnership with the research community, including U.S. industry. ASCR supports state-of-the-art capabilities that enable scientific discovery through computation.

ABOUT THE FEDERAL MINISTRY OF EDUCATION AND RESEARCH (GERMANY)

Research and development in areas such as chemistry and materials science, semiconductors, laser and plasma technology, together with the latest production processes, are the basis for the new technological developments of tomorrow. The Federal Ministry of Education and Research (BMBF) provides financial support for innovative projects and ideas under targeted research funding programmes. The range covers everything from basic scientific research, environmentally friendly sustainable development, new technologies, information and communication technologies, the life sciences, work design, and structural research funding at institutions of higher education, to innovation support and technology transfer. Research funding supports scientific institutions and enterprises. The BMBF also funds individual researchers via special funding institutions.

ABOUT THE FRENCH NATIONAL RESEARCH AGENCY

The French National Research Agency is a public organization devoted to competitive project-based funding in both fundamental and applied research. Its objectives are to promote scientific and technological development. The ANR mission is to concentrate the research efforts on national societal and economic priorities while maintaining a good balance between fundamental and applied research. It funds all science and technology areas.

ABOUT THE U.S.-ISRAEL BINATIONAL SCIENCE FOUNDATION

The U.S.-Israel Binational Science Foundation (BSF) promotes scientific relations between the U.S. and Israel by supporting collaborative research projects in a wide range of basic and applied scientific fields, for peaceful and non-profit purposes. The foundation is owned equally by the two governments and financed by endowments created by both governments. The BSF is an independent organization governed by a board of governors consisting of equal numbers of U.S. and Israeli members. Since its creation in 1972, it has supported over 5,000 joint U.S.-Israeli research projects. Forty-five Nobel Laureates, seven Turing Laureates, and seven Fields Medal Laureates have participated in BSF-supported projects.

ABOUT THE NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIONS TECHNOLOGY (JAPAN)

As the sole public institution in Japan to specialize in ICT, the National Institute of Information and Communications Technology (NICT) engages in the full spectrum of research and development in ICT from basic to applied research with an integrated perspective. Additionally, NICT supports the wider ICT sector through research funding and promoting collaboration with the academic and business communities in Japan as well as with research institutes overseas.

ABOUT THE STATE RESEARCH AGENCY (SPAIN)

The State Research Agency (AEI) is a Spanish agency responsible for the promotion of scientific and technical research in all areas of knowledge through the competitive and efficient allocation of public resources, the monitoring of actions financed and their impact, and advice on action planning or initiatives through which the R&D policies of the General State Administration are implemented. The Agency is attached to the Ministry of Science, Innovation and Universities through the auspices of the Spanish Secretariat of State for Universities, Research, Development and Innovation.

The National Science Foundation promotes and advances scientific progress in the United States by competitively awarding grants and cooperative agreements for research and education in the sciences, mathematics, and engineering.


Privacy Act And Public Burden Statements

The information requested on proposal forms and project reports is solicited under the authority of the National Science Foundation Act of 1950, as amended. The information on proposal forms will be used in connection with the selection of qualified proposals; and project reports submitted by proposers will be used for program evaluation and reporting within the Executive Branch and to Congress. The information requested may be disclosed to qualified reviewers and staff assistants as part of the proposal review process; to proposer institutions/grantees to provide or obtain data regarding the proposal review process, award decisions, or the administration of awards; to government contractors, experts, volunteers and researchers and educators as necessary to complete assigned work; to other government agencies or other entities needing information regarding proposers or nominees as part of a joint application review process, or in order to coordinate programs or policy; and to another Federal agency, court, or party in a court or Federal administrative proceeding if the government is a party. Information about Principal Investigators may be added to the Reviewer file and used to select potential candidates to serve as peer reviewers or advisory committee members. See System of Record Notices , NSF-50, "Principal Investigator/Proposal File and Associated Records," and NSF-51, "Reviewer/Proposal File and Associated Records.” Submission of the information is voluntary. Failure to provide full and complete information, however, may reduce the possibility of receiving an award.

An agency may not conduct or sponsor, and a person is not required to respond to, an information collection unless it displays a valid Office of Management and Budget (OMB) control number. The OMB control number for this collection is 3145-0058. Public reporting burden for this collection of information is estimated to average 120 hours per response, including the time for reviewing instructions. Send comments regarding the burden estimate and any other aspect of this collection of information, including suggestions for reducing this burden, to:

Suzanne H. Plimpton, Reports Clearance Officer, Policy Office, Division of Institution and Award Support, Office of Budget, Finance, and Award Management, National Science Foundation, Alexandria, VA 22314


Causal assessment in evidence synthesis: A methodological review of reviews

Michal Shimonovich, Anna Pearce, Hilary Thomson, Srinivasa Vittal Katikireddi

MRC/CSO Social & Public Health Sciences Unit, University of Glasgow, Glasgow, UK

Associated Data

Data sharing is not applicable to this article as no new data were created or analysed in this study.

In fields (such as population health) where randomised trials are often lacking, systematic reviews (SRs) can harness diversity in study design, settings and populations to assess the evidence for a putative causal relationship. SRs may incorporate causal assessment approaches (CAAs), sometimes called ‘causal reviews’, but there is currently no consensus on how these should be conducted. We conducted a methodological review of self‐identifying ‘causal reviews’ within the field of population health to establish: (1) which CAAs are used; (2) differences in how CAAs are implemented; (3) how methods were modified to incorporate causal assessment in SRs. Three databases were searched and two independent reviewers selected reviews for inclusion. Data were extracted using a standardised form and summarised using tabulation and narrative synthesis. Fifty‐three reviews incorporated CAAs: 46/53 applied Bradford Hill (BH) viewpoints/criteria, with the remainder taking alternative approaches: Medical Research Council guidance on natural experiments (2/53, 3.8%); realist reviews (2/53, 3.8%); horizontal SRs (1/53, 1.9%); ‘sign test’ of causal mechanisms (1/53, 1.9%); and a causal cascade model (1/53, 1.9%). Though most SRs incorporated BH, there was variation in application and transparency. There was considerable overlap across the CAAs, with a trade‐off between breadth (BH viewpoints considered a greater range of causal characteristics) and depth (many alternative CAAs focused on one viewpoint). Improved transparency in the implementation of CAAs in SRs is needed to ensure their validity and allow robust assessments of causality within evidence synthesis.

What is already known

Despite the potential benefits, there is currently no comprehensive and agreed upon approach for incorporating causal assessment approaches (CAAs) in systematic reviews (SRs) and reviews of reviews (RoRs).

What is new

To our knowledge, this is the first methodological review to establish current practice of CAAs in SRs. Bradford Hill viewpoints (sometimes called criteria) were the most commonly used, but how they were implemented and transparency in reporting implementation varied greatly. There was overlap across the approaches, with some focusing on one or two viewpoints while others considered several elements of causal assessment.

Potential impact for RSM readers outside the authors' field

For CAAs to be incorporated into SRs/RoRs across all fields, investigators must ensure transparency in choice of viewpoints and clarity around implementation, including justification or guidance used to inform operationalisation.

This methodological review offers examples of how CAAs can be implemented to maintain the transparency, robustness, and rigorous approach of SRs.

1. INTRODUCTION

Causal assessment involves researchers and policy makers interrogating the evidence to understand if a cause‐and‐effect relationship exists between an exposure and an outcome. 1 , 2 By bringing together evidence surrounding a research question, evidence synthesis is arguably preferable to relying on an individual study for causal assessment. 3 This is particularly true in population health where evidence is mixed and potential causes are complex. 4

The utility of evidence synthesis, including systematic reviews (SRs), in causal inference depends both on review conduct (which should be done as rigorously and transparently as possible 5 ) as well as what evidence is synthesised. The types of studies included in SRs may affect the certainty of a causal relationship. This may be especially important where the available evidence is predominantly from non‐randomised studies (NRSs) 4 , 6 where there is a high risk of bias due to confounding when compared to randomised controlled trials (RCTs), 7 , 8 as is common in SRs addressing population health questions. 4 Results from NRSs, even those with large sample sizes, 9 may be misleading if not interpreted in light of potential sources of bias 10 and may threaten the potential for SRs to evaluate causality.

The approach to evidence synthesis to evaluate a putative causal link between an exposure and outcome may differ from evaluating an association between an exposure and outcome. 5 , 8 To improve the assessment of causality, methods used in SRs may need to be adapted. 11 While there are no clearly defined and agreed means of adjudicating causality, including within SRs, 11 there are various guidelines and approaches that can be used to assess one or more aspects of causality. 4 Going forward, the guidelines and approaches used to assess causality will be referred to as causal assessment approaches (CAAs), with the Bradford Hill (BH) viewpoints or criteria being particularly influential. They may be incorporated into the evidence synthesis—sometimes referred to as ‘causal reviews’—to help establish if a causal relationship exists. 11

Some CAAs, such as the BH, qualitatively evaluate different characteristics of causal relationships. 3 BH viewpoints address several key characteristics of causal relationships: strength of association, temporality, dose response, consistency, specificity, plausibility, experiment, coherence, and analogy. Similarly, the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology provides a systematic approach to assessing certainty within reviews which indicates confidence that the effect estimated in evidence synthesis is close to the true effect (i.e., the causal effect). 12 While GRADE is not always thought of as a CAA, it has been argued that it incorporates many aspects of the BH viewpoints 13 such as incorporating risk of bias, indirectness and confounding. 14

Other CAAs may be explicitly based on the counterfactual definition of causality. The ‘fundamental issue in causal inference’ of missing, unobserved data means that investigators cannot determine the difference between the observed effect when the individual has been exposed to the potential cause under investigation and the unobserved counterfactual outcome had the individual not been exposed, all other things being equal. 15 Thus, application of the counterfactual definition asks investigators to consider if the unexposed group would have the same risk of the outcome as the exposed group had they also been exposed. 6 Directed acyclic graphs (DAGs) 16 and sufficient component cause (SCC) models (also known as causal pies) incorporate counterfactual principles in their systematic evaluation of, among other things, confounding and multifactorial causes. 17 Epidemiologists have argued that triangulating across different CAAs may help improve evaluation of putative causal relationships. 4 , 18 This might be particularly valuable in population health, where randomised trials are typically not possible.
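To make the counterfactual contrast above concrete, the following is a minimal illustrative sketch, not drawn from the review or any included study: it simulates both potential outcomes for each individual under an assumed causal effect and a single confounder, so the (in practice unobservable) average causal effect can be compared with the naive exposed-versus-unexposed difference that confounding distorts. All variable names and numbers are hypothetical.

```python
# Toy simulation of the counterfactual contrast (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)                              # e.g., deprivation
exposure = rng.binomial(1, 1 / (1 + np.exp(-confounder)))    # exposure more likely if deprived

true_effect = 0.5                                             # assumed causal effect
y0 = confounder + rng.normal(size=n)                          # outcome if unexposed
y1 = y0 + true_effect                                         # outcome if exposed

observed = np.where(exposure == 1, y1, y0)                    # only one potential outcome is seen

naive = observed[exposure == 1].mean() - observed[exposure == 0].mean()
causal = (y1 - y0).mean()

print(f"naive exposed-unexposed difference: {naive:.2f}")     # inflated by confounding
print(f"true average causal effect:         {causal:.2f}")    # 0.5 by construction
```

In real data only one of the two potential outcomes is ever observed for each individual, which is why approaches such as DAGs and SCC models focus on reasoning about confounding and causal structure rather than computing this contrast directly.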

The aim of this methodological review is to understand how CAAs are incorporated into population health SRs and review of systematic reviews (RoRs). We will identify SRs/RoRs that explicitly incorporate CAAs and consider how they have implemented CAAs. We will seek to elucidate any differences in the conduct of SRs/RoRs for causal assessment and consider the implications for investigators interested in using SRs/RoRs to assess causality.

2.1. Review aims and scope

In this paper we use the term ‘causal SR/RoR’ to refer to SRs/RoRs which have self‐identified as assessing causality and have explicitly incorporated a CAA. Our focus on self‐identifying SRs/RoRs that have explicitly incorporated a CAA is largely due to resource and time constraints. SRs/RoRs were included if they referred to causal assessment in the title or abstract and explicitly applied a CAA in the main text. Therefore, we will likely not identify SRs/RoRs that use elements of a CAA but do not explicitly refer to it in the title or abstract. However, as the overall aim is to gain a broad understanding of how CAAs are incorporated into population health SRs/RoRs and offer insight into the variation for researchers wanting to conduct a causal SR/RoR, we believe the SRs/RoRs identified will provide that understanding.

For the purposes of this review, CAA refers to the plans and procedures applied by investigators and may include any guideline, framework, tool or method used by investigators to assess causality. 19 Some CAA examples include BH viewpoints, DAGs, GRADE or causal pies. CAAs may be informed implicitly and explicitly, and to varying degrees, by investigators' philosophical worldviews, study designs and research methodology. 19 The assumptions about a causal relationship may be viewed through a variety of frameworks including, but not limited to: deterministic (an exposure is expected to always produce the outcome and the outcome does not occur without the exposure); probabilistic (an exposure increases the likelihood of an outcome); or multifactorial (an exposure may be a component of a complex cause that is sufficient, but not necessary, to produce the outcome). 20 , 21 For the purposes of this methodological review, we are agnostic under which frameworks authors were operating.

A methodological review analyses study methods. 22 The aim of this methodological review is to identify and describe the various approaches to assessing causality in public/population health SRs/RoRs. We focus on population health, both because of its importance and the challenges in elucidating causal relationships due to the complex relationship structures and reliance on NRSs. 4

Our aim to consider the ways in which CAAs are incorporated into population health SRs and RoRs was addressed using three objectives:

1. Identify and describe the explicitly incorporated CAAs of self‐identifying SRs/RoRs, noting any themes of CAA characteristics that emerge.

2. Narratively describe how CAAs are implemented, highlighting differences and similarities in how different CAAs are implemented and, where possible, how the same CAAs are implemented across different SRs/RoRs.

3. Summarise the ways SR stages are adapted to either identify evidence relevant to the CAA or analyse evidence specific to the CAA.

2.2. Stage 2: Identifying relevant studies

2.2.1. Eligibility criteria

The eligibility criteria for this methodological review were developed according to a protocol for mixed methods: sample, phenomenon of interest, design, evaluation and research type (SPIDER). 23 We excluded ‘research type’ due to limited relevance to our research aims. Because of the variety in CAAs and because we are not limiting our search to specific interventions or outcomes, SPIDER was deemed more appropriate than a protocol based on population, intervention, comparison and outcome (PICO). 24 Explanations and justifications of each protocol category and the corresponding inclusion and exclusion criteria are summarised in Table 1.

Table 1. Inclusion and exclusion criteria for reviews, adapted from the sample, phenomenon of interest, design, evaluation, and research type (SPIDER) protocol for mixed method studies 23 , 92

Each protocol category is listed with its explanation (based on references 34 and 36), how it was operationalised in this methodological review, and the corresponding include and exclude criteria.

  • Sample. Explanation: similar to ‘population’ in PICO, sample refers to the participants included in the studies. Operationalised: the sample in this methodological review refers to the remit of the reviews, rather than the samples of the studies within those reviews; the remit is population or public health, with no restrictions on criteria related to the study design, conditions, characteristics or settings within the reviews. Include: population health research (including public health interventions, health policy interventions, and exposures such as risk factors and determinants); we focus on population health due to the importance of causal assessment in understanding both complex health interventions and potential health risks. Exclude: clinical interventions (including pharmaceutical, surgical or psychological interventions).

  • Phenomenon of interest. Explanation: the phenomenon of interest relates to the aim or focus of the included reviews. Operationalised: the phenomena of interest are reviews that explicitly stated their aim was to identify a causal relationship, including those that identified putative pathways for a causal relationship. Include: reviews that explicitly stated they aimed to identify a putative causal relationship between an exposure and a population health outcome; we focus on explicit evaluations of causality largely due to time and resource constraints. Exclude: explicit mention that the likelihood of a causal relationship was not considered.

  • Design. Explanation: design refers to the study design (including any theoretical frameworks) used to inform the research methods. Operationalised: the review design is limited to systematic reviews (SRs) and reviews of SRs (RoRs). Include: SRs and RoRs; RoRs are included because they use similar methods and often aspire to achieve the transparency of SRs. 17 Because of the variation in how SRs/RoRs are defined, we defined inclusion by self‐identification; determining how closely a review followed SR/RoR principles and methods was beyond the scope of this methodological review. Exclude: non‐systematic reviews, including methodological reviews.

  • Evaluation. Explanation: the term ‘evaluation’ is comparable to ‘outcomes’ in PICO; to accommodate qualitative research, evaluation includes unmeasurable findings. Operationalised: for the purposes of this methodological review, this refers to which approaches are incorporated into the reviews. Include: reviews that have incorporated one or more approaches to causal assessment; we limited SRs/RoRs to those that explicitly incorporated one or more causal assessment approaches (CAAs) because of resource and time constraints. Exclude: no explicit mention that an approach has been incorporated to support causal assessment.

  • Research type. Explanation: the research type refers to either quantitative, qualitative or mixed methods. Operationalised: this protocol category was not utilised, as we did not have restrictions related to the study design of the reviews. Include: not applicable. Exclude: not applicable.

Abbreviation: PICO, population, intervention, comparison and outcome.

Full list of exclusion criteria

  • Reviews of clinical intervention or evaluation studies or other studies not related to population or public health.
  • Reviews that do not self‐identify as having conducted or considered causal assessment.
  • Reviews that do not self‐identify as a SR or RoR.

We excluded reviews that hypothesised, but did not evaluate, possible causal mechanisms, links or pathways, 25 , 26 , 27 , 28 , 29 or reviews that included studies that aimed to assess, or stated that they had assessed, causality but did not implement any causal assessment (see Table  1 ).

2.2.2. Search strategy

The goal of the search was to identify SRs and RoRs in population health that assess causality. We identified reviews in a systematic search of three electronic bibliographic databases conducted in February 2020: EMBASE, Medline, and CINAHL. Our search included keywords related to ‘systematic review’ and ‘causality’ in the title and abstract and, where possible, as subject headings. To limit the search to SRs, one of our key terms was the subject heading, ‘systematic review’. We also included terms such as ‘causal’ or ‘causation’ or ‘causal assessment’ or ‘causal evaluation’ in the title or abstract. As we focused on recent practice in SRs and RoRs used for causal assessment, our search was limited to January 2000–February 4, 2020. The reviews were further limited to English-language reviews, and the population in our search was limited to human subjects. The research team finalised the search strategy in consultation with an information specialist (see Appendix  A for full search strategy).
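As a rough illustration of how the two concept blocks described above can be combined into a single title/abstract query, the sketch below builds a Boolean string in Python. The term lists are abbreviated and partly hypothetical (for example, the exact truncation and phrase variants), and the full strategy is the one given in Appendix A.

```python
# Hypothetical sketch of combining the two search concepts (illustrative only).
review_terms = ['"systematic review"', '"review of reviews"']          # review-type concept
causal_terms = ['causal*', 'causation', '"causal assessment"',
                '"causal evaluation"']                                   # causality concept

def block(terms):
    # OR together synonyms within one concept
    return "(" + " OR ".join(terms) + ")"

# AND across concepts: a record must match both blocks in title/abstract
query = " AND ".join([block(review_terms), block(causal_terms)])
print(query)
# ("systematic review" OR "review of reviews") AND (causal* OR causation OR ...)
```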

2.3. Stage 3: Study selection

Following de‐duplication using Covidence, titles and abstracts were exported to EndNote X9© and screened in two stages: (1) title and abstract and (2) full‐text. At both stages, reviews were independently reviewed by two investigators (MS and: HT, SVK, or AP). A third reviewer was consulted about disagreements at either stage.

2.3.1. Data extraction

Data extraction was completed by MS. A second reviewer (HT, SVK or AP) checked a 10% sample of purposively selected reviews that spanned a range of different CAAs and provided good coverage of all the potential issues that might arise. As most of the outcomes were qualitative descriptions of methods rather than statistical estimates, we did not calculate specific interrater reliability measures. Rather, we aimed to explore interpretation of phenomena through discussion, as is common in qualitative research, particularly focusing on non‐BH CAA methods. 30 The data extraction form (see Appendix  B ) included both structured and free‐text domains and was piloted before finalising. We extracted data on key study information such as type of review, study designs included in the review, and PICO features as well as which CAA was used (e.g., BH viewpoints), key features of causal approaches (e.g., identifying confounders, temporality, etc.) and criteria used to meet each CAA (e.g., specific study design).

2.3.2. Data summary and synthesis

The data were tabulated to facilitate comparison across SRs/RoRs that used a particular CAA as well as comparison across reviews that incorporated different CAAs. The data for each CAA were then summarised narratively to describe the variations in how CAAs were implemented. We tabulated the following information which was considered to be quantifiable: the number of reviews that used each CAA; which BH viewpoints were used; if the viewpoints were defined; how authors determined if viewpoints were met (or in other words, did they identify and apply indicators); how overall support for viewpoints was determined; and how the viewpoints were applied (Table  3 ). We thematically collated free‐text responses, such as the impact of causal approach on SR/RoR stages (Table  4 ), where possible. Both this ‘quantifiable’ information and other qualitative information were synthesised descriptively.
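As an illustration of the tabulation step, the sketch below counts the CAAs used and the BH viewpoints applied across a few hypothetical extraction records; the records and field names are invented for the example and are not the review's extraction data.

```python
# Illustrative tabulation of hypothetical extraction records.
from collections import Counter

extracted = [
    {"review": "Review A", "caa": "Bradford Hill", "viewpoints": ["strength", "temporality", "dose-response"]},
    {"review": "Review B", "caa": "Bradford Hill", "viewpoints": ["strength", "consistency"]},
    {"review": "Review C", "caa": "Realist review", "viewpoints": []},
]

caa_counts = Counter(record["caa"] for record in extracted)
viewpoint_counts = Counter(v for record in extracted for v in record["viewpoints"])

total = len(extracted)
for caa, n in caa_counts.items():
    print(f"{caa}: {n}/{total} ({100 * n / total:.1f}%)")
for viewpoint, n in viewpoint_counts.items():
    print(f"{viewpoint}: {n}")
```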

Table 3. Overview of how Bradford Hill (BH) viewpoints were applied, categorised by five domains

Each row lists a domain, its description, and a summary of results.

Viewpoints used. Description: the viewpoints used in each review. Summary of results:

Strength of association: 44/46, 95.7%

Temporality: 44/46, 95.7%

Dose–response: 43/46, 93.5%

Consistency: 41/46, 89.1%

Plausibility: 38/46, 82.6%

Experiment: 32/46, 69.6%

Coherence: 21/46, 45.7%

Specificity: 18/46, 39.1%

Analogy: 15/46, 32.6%

Viewpoint definition. Description: whether a description, interpretation or definition of each viewpoint is provided. Summary of results: a description, interpretation or definition was provided in 15/46 (32.6%) reviews.
Viewpoint indicators. Description: whether criteria to determine if a viewpoint had been met are reported. Summary of results:

Indicators used and reported: 19/46, 41.3%

Examples of indicators include, but are not limited to, quantitative ranges (e.g., a risk ratio (RR) or odds ratio (OR) between 3.0 and 8.0 for a strong association) or qualitative thresholds (e.g., at least one credible mechanism to explain the association for plausibility)

Overall support for viewpoints. Description: whether the level or degree of support for each viewpoint (e.g., strong, moderate, weak) is reported. Summary of results: 44/46 (95.7%) reviews described the support for each viewpoint; this was done narratively, quantitatively (e.g., number of studies in support of a viewpoint; probability that a viewpoint was met) or some combination of both.
Viewpoint application. Description: viewpoints were applied before or after evidence was synthesised and could be applied to all studies as a collective, to studies individually, or to groups of studies (e.g., by study design, by exposure/outcome relationship); some reviews applied viewpoints in more than one way. Summary of results:

36/46 (78.3%) reviews assessed each BH viewpoint by applying them across the evidence (i.e., after studies were synthesised with studies considered collectively)

13/46 (28.3%) applied the viewpoints across the exposure/outcome relationship(s) under study (i.e., after studies were synthesised, but with studies or different exposure/outcomes considered individually)

12/46 (26.1%) applied the viewpoints to each included study (i.e., before synthesis with studies considered separately)

4/46 (8.7%) reviews assessed if viewpoints were met by applying them separately to different study designs

1/46 (2.2%) review applied the viewpoints to studies deemed high‐quality only

It was unclear how 2/46 (4.3%) reviews applied viewpoints to evidence

Note: The domain descriptions and corresponding reviews are summarised.

Table 4. Impact of causal assessment approaches (CAAs) on the conduct of systematic review (SR) stages

Each review stage is listed with the number of reviews (out of 53) that incorporated their CAA into that stage, and a description of how the CAA was incorporated.

  • Review aims and objectives (39/53, 73.6%): Most reviews explicitly stated that one of their review aims and objectives was to assess (statistically and/or narratively) the evidence for a causal relationship. How causality was assessed varied across reviews, which might focus on statistically analysing the evidence and/or on narratively assessing it.

  • Review design (41/53, 77.4%): Most reviews included their CAA as a specific part of their overall review design for identifying, synthesising, and analysing evidence. Including the CAA in an SR study design suggests that causal assessment was an a priori consideration for the SR, an important fact to consider when critically appraising SRs (which was beyond the scope of this review).

  • Inclusion/exclusion criteria (14/53, 26.4%): Of the 14 reviews that designed criteria to reflect incorporating CAAs, 10 reviews (all of which utilised BH viewpoints, except for one realist review) included studies that considered potential causal pathways or excluded studies that did not. Another four reviews (all but one using alternative CAAs) limited studies to those deemed most useful for assessing causality: observational studies utilising analytical methods to account for confounding, study designs that provide experimental evidence, and studies testing two competing causal mechanisms.

  • Search terms (3/53, 5.7%): A few reviews designed their search strategies to specifically identify studies that support their CAA. Common terms included: causality, experiment, instrumental variable, regression discontinuity, and mechanism.

  • Data extraction (14/53, 26.4%): All 14 reviews extracted information that supported the CAA, such as the confounding variables that were conditioned upon in their included studies, study design information that may strengthen causal inference, or possible causal mechanisms.

  • Synthesis (51/53, 96.2%): The 44 reviews that utilised BH viewpoints used synthesised evidence to understand whether viewpoints were met, though not all explained how evidence was used or the criteria for a viewpoint to be met. There was a mix of synthesised evidence being narratively assessed only, statistically analysed only, or both narratively and statistically assessed to determine whether viewpoints were met (see Section ‘Indicators used for meeting Bradford Hill viewpoints’ for more detail). Additional detail on how evidence was synthesised for all CAAs is provided in the sections that follow.

  • Conclusion (53/53, 100%): All reviews considered causality in their conclusions, but it was unclear in three reviews whether a conclusion regarding a causal relationship was drawn. For reviews utilising BH viewpoints, the overall certainty of the conclusion reflected the overall certainty of the viewpoints being met (see Section ‘Support for Bradford Hill viewpoints’). Twenty‐five reviews stated that they had found some evidence (with varying degrees of certainty) for a causal relationship between the exposure and outcome under study, while 18 found limited or no evidence for a causal relationship. Nine reviews stated that they were not able to draw definitive conclusions about a causal relationship and that further evidence was needed.

3.1. Included reviews

Figure  1 shows the flowchart of the searches. 31 The search resulted in 1345 references. Out of 1339 de‐duplicated screened references, 140 full texts were assessed and 53 reviews were included (five were RoRs, 32 , 33 , 34 , 35 , 36 all of which used BH viewpoints).

Figure 1. PRISMA flow diagram with primary reasons for excluding full-text reviews.

The review characteristics, including the exposure topic area, CAA(s) used, and critical appraisal tool(s) applied by the review are provided in Appendix  C . Forty‐six reviews (46/53, 86.7%) applied BH viewpoints, 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 , 77 with a further seven using ‘alternative’ approaches: two (2/53, 3.8%) incorporated the Medical Research Council (MRC) guidance on natural experiments 78 , 79 ; two reviews (2/53, 3.8%) utilised realist reviews as a CAA 80 , 81 ; one (1/53, 1.9%) utilised horizontal SRs 82 ; one (1/53, 1.9%) incorporated ‘sign testing’ of causal mechanisms 83 ; and another one (1/53, 1.9%) used a causal cascade model. 84

The complete list of CAAs identified (objective 1) and descriptions of how CAAs were implemented (objective 2) can be found in Table  2 . We provide additional detail comparing implementation of BH viewpoints in Table  3 . Because most other CAAs were only used by one or two reviews, we were only able to compare implementation for BH viewpoints. A comparison of how realist reviews and MRC guidance on natural experiments were implemented was described narratively in Sections  3.2.2 and 3.2.3 .

Overview of description of causal assessment approaches (CAAs) and how they were incorporated into systematic reviews (SRs) and reviews of reviews (RoRs)

CAANumber of reviewsDescription of CAAHow CAA was incorporated into SRs
Bradford Hill (BH) viewpoints46 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , BH viewpoints, also known as criteria, are a set of nine characteristics to consider when assessing a causal relationship. The nine viewpoints are: strength of association, consistency, specificity, temporality, dose–response, plausibility, coherence, experiment, and analogyThe most commonly used CAA, there was considerable variation in which BH viewpoints were used and how they were operationalised. There was also variation in transparency and clarity about how the viewpoints were incorporated and used in causal assessment
Medical research council (MRC) guidance on natural experiments2 , The MRC guidance on natural experiments posits that certain study designs and analytic methods are more suitable to assess causality than others, and suggests that results from different studies be compared. The MRC guidance on natural experiments highlights study design, including carefully defining control groups to establish exchangeability with exposed individuals and testing underpinning methodological assumptions as important for establishing causality. It also draws attention to some methods (such as difference‐in‐differences, regression discontinuity designs, and instrumental variable analysis) which can address unmeasured, as well as measured, confoundersTwo reviews , used MRC guidance on natural experiments. One limited their scope to studies that incorporated methods deemed to be of high quality in MRC guidance on natural experiments, while the other review considered study design and analytic methods during evidence synthesis
Realist reviews2 , Realist reviews are an established CAA with an existing set of guidelines for incorporating realist synthesis principles into SRs. One of the main aims of realist reviews is understand contexts and reasons for a causal relationship. Realist reviews often incorporate different forms of evidence, including theoretical evidenceBoth realist reviews narratively assessed causal mechanisms that may explain the relationship under study, and both determined that further evidence is needed to understand possible mechanisms
Horizontal systematic review1 This CAA was developed by the review authors to collate evidence of causal effects across a range of study designs and risk (identified for having varying properties, such as threat of confounding, measurement error or proximity to the outcome on the causal pathwayThe authors considered evidence from observational studies (that accounted for confounding and reverse causation), genetic studies using Mendelian randomisation, and RCTs for four risk factors Separate meta‐analyses were conducted for each risk factor and by study design. The meta‐analysis results were compared across risk factors, considering the differing sources and level of bias across the different methods
Sign test hypotheses1 This approach, interrogates the evidence for reverse causation (such that the outcome is in fact the cause of the exposure)The authors interrogated the evidence for two the direction of causation between the exposure and outcome to establish whether there was evidence for one direction being stronger than the other
Causal cascade method1 Based on logic model developed to illustrate the ‘framework of causal relationships’, the authors conducted a Bayesian meta‐analyses on the heterogeneity across RCTs

Authors hypothesised reasons for heterogeneity found in RCTs evaluating breast cancer screening on mortality—including attendance rates, the accuracy of screening tests, and social class. The logic model in Figure  of the review illustrates the framework of causal relationship and includes the key cascade components (attendance rates and sensitivity) that may account for differences in two outcomes (advanced breast cancer and breast cancer mortality).

The authors then considered the trial evidence across these inter‐related factors to assess whether heterogeneity in the evidence base could be explained by them. Based on the assumptions in the logic model and the included studies, the review estimated the relative risk of advanced‐stage breast cancer and breast cancer mortality at three different attendance rates and three sensitivity levels (a total of nine scenarios). Overall, they found that attendance rate and sensitivity may explain statistical heterogeneity across trials.

Note : The review topics, in terms of exposures, varied: sixteen (16/53, 30.2%) reviews focused on occupational health 32 , 33 , 34 , 35 , 60 , 61 , 62 , 63 , 64 , 65 , 69 , 70 , 73 , 74 , 75 ; eleven (11/53, 20.8%) on environmental health 40 , 44 , 46 , 47 , 49 , 59 , 68 , 71 , 78 , 79 ; nine (9/53, 17.0%) on nutritional health 43 , 51 , 52 , 53 , 54 , 55 , 57 , 77 , 81 ; four (4/53, 7.5%) on smoking 42 , 45 , 66 , 72 ; four (4/53, 7.5%) on mental health 48 , 56 , 67 , 82 ; three (3/53, 5.7%) on alcohol consumption 36 , 38 , 39 ; two (2/53, 3.8%) on child health 50 , 58 ; two (2/53, 3.8%) on health inequalities 41 , 83 ; one (1/53, 1.9%) on diagnostics 84 ; and one (1/53, 1.9%) on respiratory diseases. 37

Abbreviation: RCTs, randomised controlled trials.

3.2.1. BH viewpoints

While the majority of reviews applied BH viewpoints to assess causality, there was considerable variation in how they were implemented. As described in Section 2.3.1, we extracted information to evaluate how the implementation of BH viewpoints varied, which we categorised into five key domains: (1) viewpoints used; (2) viewpoint definition; (3) viewpoint indicators (i.e., how the viewpoint was assessed as being 'met'); (4) assessment of overall support for viewpoints; and (5) whether viewpoints were considered across the body of evidence or in another way (e.g., across a single study or relationship). An overview of each domain can be found in Table 3.

BH viewpoints assessed

Twelve (12/46, 26.1%) 38 , 40 , 41 , 46 , 52 , 53 , 55 , 59 , 65 , 69 , 70 , 71 , 72 , 76 SRs/RoRs used all nine BH viewpoints. Coherence, specificity and analogy were the least commonly assessed, featuring in fewer than half of the reviews. Three (3/46, 6.5%) SRs 33 , 34 , 57 combined coherence and plausibility. In all but three SRs (3/46, 6.5%) 41 , 53 , 57 it was unclear why certain viewpoints were excluded.

Definitions of BH viewpoints

Clearly defining viewpoints is important for the transparent implementation of BH viewpoints, but fewer than half (16/46, 34.8%) of reviews 32 , 33 , 34 , 35 , 36 , 38 , 43 , 46 , 53 , 54 , 55 , 57 , 58 , 70 , 72 , 76 did so explicitly, and the definitions varied. For example, consistency was defined by Fenton and colleagues 43 as variation across different study designs, while others 32 , 33 , 34 , 35 , 57 described consistency as variation across populations and settings. Norman and colleagues defined consistency as observing a comparable association across various study designs, populations, settings and regions. 58 Livesey and colleagues' review was the only one to consider sources of heterogeneity when evaluating consistency across studies. 53

Indicators used for meeting BH viewpoints

Viewpoint indicators (i.e., criteria to determine if viewpoints are met) are useful for understanding differences in how BH viewpoints were used in causal assessment. However, fewer than half (19/46, 41.3%) of reviews reported what criteria were used to determine if a viewpoint was met. For those that did provide indicators, there was considerable variation. For example, the indicators for assessing strength of association, though all quantitative, varied widely: risk ratio (RR) greater than 1.20 52 ; RR greater than 0.9 for protective factors and greater than 1.25 for harmful factors 55 ; RR or odds ratio (OR) between 3.0 and 8.0 42 ; hazard ratio (HR) greater than or equal to 3.0 45 ; OR greater than 4.0 50 , 60 , 61 , 62 , 63 , 64 , 73 , 74 , 75 ; RR greater than 5.0 46 ; or a greater than 10% increase in risk that was statistically significant. 32 None of the reviews that provided indicators for strength of association considered confounding adjustment, including residual or unmeasured confounding, when assessing whether strength of association was met. This is important as bias may fully explain a large association (and small associations may not be entirely explained by bias). However, some of the reviews (9/46, 19.6%) 44 , 47 , 49 , 50 , 56 , 58 , 59 , 66 , 69 broadly considered the findings from individual studies, or the synthesised evidence, in the context of confounding and bias, which in some reviews was also referred to as 'alternative explanations'.
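To illustrate how such quantitative indicators could be made explicit and reproducible, the minimal Python sketch below encodes a few of the cut-offs cited above as threshold checks. The thresholds are taken from the examples in this paragraph; the StudyEstimate structure and function names are hypothetical and are not drawn from any included review.

    from dataclasses import dataclass

    @dataclass
    class StudyEstimate:
        measure: str       # "RR", "OR", or "HR"
        value: float       # point estimate on the ratio scale
        significant: bool  # whether the study reported statistical significance

    def strength_met_rr_gt_1_20(est: StudyEstimate) -> bool:
        # e.g., one review required a risk ratio greater than 1.20
        return est.measure == "RR" and est.value > 1.20

    def strength_met_or_gt_4(est: StudyEstimate) -> bool:
        # e.g., several reviews required an odds ratio greater than 4.0
        return est.measure == "OR" and est.value > 4.0

    def strength_met_10_percent_significant(est: StudyEstimate) -> bool:
        # e.g., one review required a >10% increase in risk that was statistically significant
        return est.value > 1.10 and est.significant

    example = StudyEstimate(measure="RR", value=1.35, significant=True)
    print(strength_met_rr_gt_1_20(example))              # True
    print(strength_met_or_gt_4(example))                 # False (estimate is not an OR)
    print(strength_met_10_percent_significant(example))  # True

Writing indicators down at this level of precision makes plain that the same estimate can 'meet' one review's indicator while failing another's, which is one reason transparent reporting of indicators matters.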

While strength of association relied on quantitative indicators, some indicators for other viewpoints were less definitive. Five reviews 38 , 42 , 46 , 52 , 55 provided indicators for the plausibility viewpoint (out of 38 reviews that included plausibility in their assessment). Two reviews 52 , 55 determined that plausibility was met if at least one credible, hypothetical mechanism explained the association (e.g., empirical studies demonstrating a relationship), though neither clarified what was meant by 'credible'. Similarly, Hughes and colleagues determined that the relationship under study was plausible if there were positive animal or mechanistic data. 46 On the other hand, rather than focusing on hypothetical explanations for an association, two other reviews 38 , 42 noted that an association between the exposure and outcome under study in human studies was sufficient evidence for plausibility. None of the SRs/RoRs explained why certain indicators were used, making it challenging to discern the underlying reasons for the variation in indicators used for a given viewpoint (e.g., the range of indicators for strength of association).

Support for BH viewpoints

Viewpoint indicators describe the criteria necessary to determine if each viewpoint was met (at the study level), while the overall support for each viewpoint reflects the extent to which each viewpoint was met (based on the body of evidence). Most reviews (44/46, 95.7%) 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 , 77 set out to assess the level of support provided in the evidence reviewed for the viewpoints (or viewpoint indicators, where applicable) being met. Two (2/46, 4.3%) reviews 48 , 65 did not report whether the viewpoints were met. Assessments were done narratively or quantitatively, with seventeen reviews (17/44, 38.6%) 32 , 33 , 34 , 38 , 41 , 43 , 45 , 46 , 52 , 54 , 60 , 61 , 66 , 70 , 73 , 74 , 75 using both narrative and quantitative approaches to assess the support for viewpoints being met. Seventeen (17/44, 38.6%) reviews 36 , 37 , 39 , 44 , 47 , 49 , 50 , 51 , 53 , 56 , 57 , 59 , 67 , 68 , 69 , 71 , 76 provided narrative‐only assessments. Most SRs/RoRs (27/44, 61.4% of those that assessed support for viewpoints) included at least one quantitative assessment, including:

  • strong/moderate/weak: fourteen reviews (14/44, 31.8%) 33 , 34 , 38 , 42 , 45 , 58 , 60 , 61 , 62 , 63 , 64 , 73 , 74 , 75
  • yes, no, strong, poor, none (1/44, 2.3%) 70
  • conclusive, inconclusive, null (1/44, 2.3%) 32
  • +++ evidence from several well‐designed studies, ++ evidence from several studies but with important limitations; + emerging evidence from a few studies or conflicting results from several studies, — criterion not met (1/44, 2.3%) 55
  • supportive, not applicable/not examined, no association/negative (1/44, 2.3%) 66
  • high, moderate, doubtful/low, unclear (1/44, 2.3%) 72
  • yes/no (5/44, 11.4%) 40 , 43 , 52 , 54 , 77
  • number of studies that supported each viewpoint: two reviews (2/44, 4.5%) 35 , 41
  • probability that each viewpoint was met (1/44, 2.3%) 46

Application of BH viewpoints to evidence

Most SRs/RoRs using BH viewpoints (36/46, 78.3%) 33 , 34 , 36 , 38 , 39 , 40 , 42 , 43 , 44 , 45 , 46 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 assessed each BH viewpoint by applying them across the body of evidence (i.e., after studies were synthesised, with studies considered collectively), while another thirteen (13/46, 28.3%) reviews 35 , 37 , 40 , 41 , 43 , 48 , 53 , 54 , 55 , 58 , 60 , 62 , 64 applied the viewpoints across the exposure/outcome relationship(s) under study (i.e., after studies were synthesised, with studies considered collectively by exposure/outcome relationship). Twelve SRs/RoRs 35 , 38 , 48 , 54 , 60 , 61 , 62 , 63 , 64 , 66 , 73 , 74 , 75 applied the viewpoints to each included study (i.e., before synthesis, with studies considered separately). Four (4/46, 8.7%) SRs/RoRs 32 , 54 , 57 , 58 applied viewpoints separately to different study designs, while one (1/46, 2.2%) SR 47 applied the viewpoints only to studies deemed higher quality in terms of causal assumptions. It was unclear how two reviews 65 , 77 applied viewpoints to the evidence.

3.2.2. MRC guidance on natural experiments

Two SRs 78 , 79 used the MRC guidance on natural experiments 85 to conduct causal assessment. This CAA involves identifying observational studies that appropriately and comprehensively address bias, deeming them most suitable for assessing causality. The guidance focusses predominantly on natural experiment study designs and other analytical approaches that compare outcomes pre‐ and post‐intervention, partly to discern whether the exposure preceded the outcome. The guidance favours analytical methods that address observable and measurable sources of confounding (e.g., matching, regression adjustment, and propensity scores) as well as unmeasured or residual confounding (e.g., difference‐in‐differences, instrumental variables, and regression discontinuity).
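As a concrete illustration of the difference‐in‐differences logic mentioned above, the short Python sketch below contrasts the pre/post change in an exposed group with that in a control group. The numbers are invented placeholders, not data from either review.

    # Difference-in-differences: the exposed group's pre/post change minus the
    # control group's pre/post change, which removes shared time trends under
    # the parallel-trends assumption. All values below are invented.
    pre_exposed, post_exposed = 12.0, 9.0    # outcome rate before/after in the exposed group
    pre_control, post_control = 11.5, 11.0   # outcome rate before/after in the control group

    did_estimate = (post_exposed - pre_exposed) - (post_control - pre_control)
    print(f"difference-in-differences estimate: {did_estimate:.2f}")  # -2.50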

Martin and colleagues 78 identified studies of the relationship between the built characteristics of an environment and obesity that applied any of the analytical methods described by the MRC guidance on natural experiments to address observable or unobservable confounders. 78 They found that the observed associations in studies using methods to address particular sources of bias (e.g., longitudinal studies, which are better suited to considering the temporal ordering of variables) were comparable with those that did not (such as cross‐sectional studies, which cannot always establish temporality). The comparable results appear to increase the validity of observational studies in determining strength of association.

Molenberg and colleagues, 79 on the other hand, did not limit their search to studies that incorporated these analytical methods. Instead, they extracted evidence from the included studies that, based on the MRC guidance on natural experiments, used methods that may help elucidate the possible causal relationship between infrastructural interventions to promote cycling and cycling outcomes. Specifically, they noted which studies considered multiple comparison groups to test the robustness of findings (e.g., the effect of infrastructural interventions on cycling for cyclists vs. non‐cyclists) and the use of complementary research methodologies (e.g., trends from surveys). They also aimed to consider the effect of changes in the infrastructural intervention on a neutral outcome expected to be independent of the intervention (i.e., a falsification outcome), though they did not identify any studies that used falsification outcomes. Thus, this application of the MRC guidance on natural experiments appears to reflect the principles of three BH viewpoints: temporality (focusing on study designs that ensure the exposure preceded the outcome), experiment (study designs focused on comparing pre‐ and post‐intervention), and specificity (falsification outcomes).

3.2.3. Realist reviews

A realist review is an evidence synthesis strategy used to investigate the context and mechanisms through which an exposure‐outcome relationship operates. 86 Realist reviews aim to provide a more iterative approach to examining complex interventions than traditional SRs, which have been criticised for being too inflexible. In doing so, the included realist reviews appear to focus on the BH viewpoint of plausibility. Two SRs utilised the realist review approach to assess causality. 80 , 81 DeBono and colleagues evaluated the relationship between participation in the US food stamp programme and obesity, 81 while Blair and colleagues 80 applied a realist review to understand the causal mechanisms through which neighbourhoods impact depression. Both SRs underscored the goal of realist reviews to explore and explain the causal mechanism of the relationship under study, which both did in part by extracting the posited causal pathways from the included studies and then narratively assessing the evidence for the different pathways. Neither SR found strong evidence for any of the proposed mechanisms.

3.2.4. Horizontal SR

Kuper and colleagues implemented a novel approach to causal assessment across a body of evidence for a range of risk factors, which they called a 'horizontal SR'. They examined the relationship between four risk factors (depression, exercise, C reactive protein, and diabetes) and coronary heart disease, using various study designs. 82 Within and across the risk factors, they compared findings across study designs which addressed confounding and reverse causality to different degrees and in different ways: observational studies with multivariable adjustments; studies using genetic variants as instrumental variables (Mendelian randomisation); and RCTs. 82

For each risk factor, they conducted a meta‐analysis by study design type and then compared the meta‐analysis results for the three risk factors with an unknown causal role (depression, exercise, and C reactive protein) against the meta‐analysis results for the risk factor they designated an established cause (diabetes). The comparison of observational studies suggested that diabetes and C reactive protein had a causal role in coronary heart disease, while, according to the authors, the observational evidence for exercise and depression was more susceptible to bias and thus their causal effect on coronary heart disease was inconclusive. Evidence from Mendelian randomisation studies and RCTs was only available for C reactive protein, where it appears that C reactive protein did not have a causal role, making it difficult to compare results and thus make any causal inferences.
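The core quantitative step in such a horizontal comparison is pooling effect estimates within each study design so that the pooled results can be set side by side. The Python sketch below shows a minimal fixed‐effect (inverse‐variance) pooling by design; the study values are invented placeholders, and this is not the authors' actual analysis, which also addressed heterogeneity and other sources of bias.

    # Minimal inverse-variance (fixed-effect) pooling of log effect estimates,
    # grouped by study design, so pooled estimates can be compared across designs.
    # All study values below are invented for illustration.
    import math
    from collections import defaultdict

    studies = [
        # (study design, log effect estimate, standard error)
        ("observational", math.log(1.40), 0.10),
        ("observational", math.log(1.25), 0.12),
        ("mendelian_randomisation", math.log(1.02), 0.15),
        ("rct", math.log(0.95), 0.08),
    ]

    def pool_by_design(data):
        grouped = defaultdict(list)
        for design, log_est, se in data:
            grouped[design].append((log_est, se))
        pooled = {}
        for design, rows in grouped.items():
            weights = [1.0 / se ** 2 for _, se in rows]          # inverse-variance weights
            pooled_log = sum(w * b for w, (b, _) in zip(weights, rows)) / sum(weights)
            pooled_se = math.sqrt(1.0 / sum(weights))
            pooled[design] = (math.exp(pooled_log), pooled_se)   # back to the ratio scale
        return pooled

    for design, (estimate, se) in pool_by_design(studies).items():
        print(f"{design}: pooled ratio = {estimate:.2f} (SE of log estimate = {se:.2f})")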

In identifying studies that address bias and study designs comparable to experimental evidence, this CAA utilised the principles of the BH viewpoints of strength of association (considered across different study designs) and experiment. Unlike the reviews applying BH viewpoints, in a horizontal review the size of the association identified from observational studies is considered in the context of whether confounding and other forms of bias have been appropriately accounted for. Kuper and colleagues also considered other forms of bias, including measurement bias and publication bias. The authors also appear to consider specificity: by looking at different risk factors they may implicitly, and perhaps unintentionally, suggest there is no evidence for specificity. They also appear to account for consistency within the horizontal SR, as they not only evaluate effect estimates across different study designs and risk factors but also explicitly review explanations for statistical heterogeneity; in addition, they evaluate temporality by considering reverse causality.

3.2.5. Sign test hypotheses

An SR by Kroger and colleagues explored the relationship between socioeconomic status and health by comparing two competing hypotheses that could explain the putative causal relationship. 83 The health selection hypothesis suggests that differences in health status cause differences in socioeconomic status (i.e., reverse causality), while the social causation hypothesis suggests that the resources available to people with higher socioeconomic status lead to better health. To determine which mechanism is more likely to be causal, Kroger and colleagues conducted a sign test to compare the probabilities of health selection versus social causation based on the conclusions of the included studies. This CAA reflects an approach to testing for temporality by testing the reverse direction of the pathway between the exposure and outcome.

The authors ran three meta‐regressions: one for all studies providing support to the health selection hypothesis; one for all studies in support of the social causation hypothesis; and one for all studies that found equal support for both hypotheses (i.e., the null hypothesis). They regressed the preference for the three theories against study characteristics, including the age, education and income of the included studies' samples, which were found to be somewhat predictive of support for a given theory. Overall, they did not find a consensus in support of either theory. Thus, it appears that strength of association is implemented in the context of understanding temporality. This CAA uses temporality, also used in BH viewpoints, to assess reverse causation.
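For readers unfamiliar with the sign test itself, the small Python sketch below shows the underlying calculation: an exact two‐sided binomial test of whether studies favour one direction of causation more often than chance would predict. The study counts are invented, and this is not a reproduction of Kroger and colleagues' analysis, which additionally used meta‐regression.

    # Exact two-sided sign test under H0: each (non-tied) study is equally likely
    # to favour health selection or social causation. Counts below are invented.
    from math import comb

    def sign_test_p(k_favouring_a: int, k_favouring_b: int) -> float:
        n = k_favouring_a + k_favouring_b        # ties are excluded before this point
        k = min(k_favouring_a, k_favouring_b)
        # lower tail P(X <= k) for X ~ Binomial(n, 0.5); doubled for a two-sided test
        lower_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
        return min(1.0, 2 * lower_tail)

    # e.g., 18 studies favouring health selection vs. 12 favouring social causation
    print(round(sign_test_p(18, 12), 3))  # ~0.36: no clear preference for either direction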

3.2.6. Causal cascade model

One SR implemented the principles of DAGs ('conditional independence for the parameters and variables implicated', p5 84 ) and developed a Bayesian causal model illustrating the 'framework of causal relationships' (p3). 84 Chen and colleagues aimed to understand the heterogeneity in advanced breast cancer risk and breast cancer mortality across breast cancer screening trials. They focused on two hypothesised reasons for variation in trials examining breast cancer mortality: attendance rates in screening trials and the sensitivity of mammography to breast cancer (i.e., the incidence rate of interval cancer relative to the expected incidence rate). In other words, their aim was to elucidate the statistical heterogeneity in advanced breast cancer risk within breast cancer screening trials given these two possible explanations. They considered the impact of different combinations of attendance rates (90%, 60%, and 30%) and sensitivity rates (95%, 75%, and 55%) on breast cancer risk and mortality rates. They found that both attendance rates and sensitivity explained the heterogeneity across trials. This CAA overlaps with the BH viewpoint of consistency, which is concerned with heterogeneity across the evidence.
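To make the nine‐scenario structure concrete, the toy Python sketch below enumerates the 3 x 3 grid of attendance and sensitivity values and applies a simple linear dilution to a hypothetical screening effect. It is purely illustrative of how the two cascade components can generate heterogeneity; it is not the review's Bayesian model, and the underlying relative risk is an assumed placeholder.

    # Toy illustration of the nine attendance x sensitivity scenarios: both factors
    # dilute a hypothetical screening effect towards the null, generating
    # between-scenario heterogeneity. This is NOT the review's Bayesian analysis.
    attendance_rates = [0.90, 0.60, 0.30]
    sensitivities = [0.95, 0.75, 0.55]
    rr_ideal = 0.70  # assumed relative risk with full attendance and perfect sensitivity

    for attendance in attendance_rates:
        for sensitivity in sensitivities:
            coverage = attendance * sensitivity          # share of cancers effectively screened
            rr = 1.0 - coverage * (1.0 - rr_ideal)       # simple linear dilution towards RR = 1
            print(f"attendance={attendance:.0%}, sensitivity={sensitivity:.0%} -> RR={rr:.2f}")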

3.3. CAAs impact on conduct of SR stages

In this section we considered whether and how CAAs impacted different stages of SR conduct (objective 3): objective of the review; description of the study design; inclusion and exclusion criteria; search strategy; data extraction; and evidence synthesis and conclusion. 87 The key findings and adaptations made in each stage are summarised below in Table  4 .

There were seven SRs/RoRs 33 , 39 , 47 , 49 , 58 , 71 , 77 (all utilising BH viewpoints) where causal assessment does not appear to have been incorporated into the research objectives, review design, search strategies, inclusion criteria or data extraction. One SR 74 appears to have incorporated a CAA at each SR stage.

4. DISCUSSION

4.1. Overview of how CAAs are incorporated into population health SRs (objectives 1 and 2)

Though there was some variation in how it was implemented, the most common CAA used by SRs/RoRs was BH viewpoints, which are considered among the most influential and comprehensive approaches to causal assessment. 88 , 89 Other CAAs included realist reviews and MRC guidance on natural experiments, both of which have existing implementation guidance. 85 , 86 The remaining CAAs (horizontal SR, sign‐test hypothesis, and causal cascade model) were developed by SR authors, though the causal cascade model incorporated principles of DAGs. A common theme across these alternative CAAs was that most focused on one or two key aspects of causal assessment (e.g., one of the BH viewpoints). The overlap across CAAs also suggests that insights into implementing viewpoints can be drawn from reviews utilising BH viewpoints as well as from reviews utilising alternative CAAs, as both may offer useful lessons for a given viewpoint. The comparison across CAAs suggests that while it may be preferable for some SRs/RoRs to take an in‐depth look at one characteristic of causal assessment, in other SRs/RoRs it may be preferable to consider many, depending on the focus and priorities of the review. Reviews that focus on one or two BH viewpoints (as opposed to several or all viewpoints) may find it easier to provide greater transparency about how the given viewpoints were implemented.

We found considerable variation in how BH viewpoints were used, including in their transparency, which contributed to a broader understanding of how CAAs were implemented (objective 2). Transparent reporting of methods is a key component of SRs, and a lack of transparency in how CAAs were implemented in SRs/RoRs might result in assessments of causality not being reproducible, which undermines the strength of a SR/RoR. Based on our assessment of SRs/RoRs using BH viewpoints, the transparency of how viewpoints were implemented can be improved by (1) providing reasons for why certain viewpoints were used or omitted, (2) offering clear viewpoint definitions and indicators, and (3) utilising a variety of approaches for assessing support for viewpoints and applying viewpoints.

Firstly, as only three reviews explained why certain viewpoints were excluded, 42 , 54 , 57 we are unsure whether variation in which viewpoints were used reflected differences in the viewpoints' perceived relevance for causal assessment or in which viewpoints were more easily understood and applied. Moreover, only one‐third of reviews defined their included viewpoints, while just 40% indicated how viewpoints were met. Limited clarity about why certain indicators were used makes it difficult to understand why there was, for example, a broad range for what was considered a 'large' effect estimate (strength of association) or what would be considered a 'credible' mechanism (plausibility). Finally, different approaches for assessing support for viewpoints and applying viewpoints improved overall transparency. Reviews that, for example, used both narrative and quantitative support for viewpoints provided a more comprehensive assessment of the extent to which viewpoints were met than those that provided only quantitative or only narrative assessments of support. Relatedly, reviews that implemented viewpoints across different study groupings (e.g., across all synthesised studies, across studies synthesised by exposure, and across individual studies) appear to consider causality more comprehensively than those that did not. Despite its importance, only a few reviews stood out as examples of a rigorous and transparent application of BH viewpoints. Four reviews 38 , 46 , 54 , 70 defined the viewpoints, provided indicators and used both narrative and quantitative rankings to describe the certainty or likelihood of viewpoints having been met, with one of them explaining why certain viewpoints were not included. 54

4.1.1. Impact of incorporating CAAs on conducting SRs (objective 3)

Explicitly incorporating causal assessment into review objectives, and CAAs into review study design, as most reviews did (see Table 4), are examples of how researchers can conduct causal SRs with clear research goals and explicit use of causal inference. 1 To a lesser degree, CAAs also impacted the search strategy, inclusion criteria and data extraction. It may be that so few reviews (3/53) 43 , 74 , 78 designed their search to specifically identify terms such as 'causal mechanism' or 'causality' because doing so creates a low‐sensitivity search. An alternative approach appears to be a highly sensitive search combined with inclusion and exclusion criteria designed to identify the studies most relevant to causal assessment, which about one quarter of reviews did (14/53, 26.4%). For example, the horizontal review and the MRC guidance on natural experiments review by Martin and colleagues designed their inclusion criteria to ensure their reviews included studies that address bias as well as experimental evidence. Similarly, one quarter of reviews extracted information from included studies that supported causal assessment. Most CAAs were incorporated into the synthesis process. This includes using the evidence to assess whether BH viewpoints were met, synthesising evidence to understand causal mechanisms (realist reviews), sign‐testing the evidence for reverse causality, and testing the evidence for statistical heterogeneity (causal cascade model). Finally, all reviews drew conclusions regarding causal relationships, suggesting this is a key component of a causal SR.

4.2. Strengths and weaknesses of methodological review

This methodological review is the first we have identified that summarises the use of causal approaches in SRs in current practice. It builds on literature exploring the use of SRs in causal assessment 11 , 90 and aiming to improve transparency and robustness around causal assessment. 1 , 91 Our findings are consistent with criticisms of causal SRs that there is no consensus on how to conduct a causal SR, though we found that this variety may in fact strengthen causal assessment in SRs. The range of CAAs, and the variety in how a given CAA was implemented (both within BH viewpoints and across CAAs that utilised one or two BH viewpoints), provide many examples of causal SR conduct that may be of use to different causal SRs with different areas of focus. In other words, it may be more relevant (given the exposure/outcome relationship under study, the type of evidence available, or the main point of disagreement in the literature) for some reviews to focus on the BH viewpoint of experiment or temporality and for others to focus on several viewpoints in less detail.

This review has several limitations. The primary limitation of this methodological review was that the search was not sufficiently sensitive to identify reviews that did not use causal language in their title or abstract. Thus, we missed reviews that either implicitly applied causal approaches (such as sensitivity analyses for unmeasured confounding) or explicitly applied causal approaches but did not reference them in the title or abstract. 7 We did not limit the search to specific population health topics, such as sexual health or men's health, as we aimed to include a broad range of population health SRs. That is, we designed the review to help us explore the range of possible CAAs across a broad area rather than exploring in greater detail issues of causal assessment specific to a particular topic. It is possible that we have overlooked useful insights from SRs of NRSs in subject areas outside population health. In addition, we focused our search on SRs/RoRs as they are considered the gold standard of evidence synthesis, so we may have missed additional CAAs used by non‐SRs/RoRs. Relatedly, due to the limited number of reviews utilising alternative CAAs, we were only able to describe differences in how BH viewpoints were implemented. Moreover, we did not critically appraise the reviews and thus did not account for the quality of SRs in our consideration of how causal approaches were applied.

5. IMPLICATIONS FOR FUTURE CONDUCT OF CAUSAL SRS

The range of CAAs, including the variation across reviews that applied BH viewpoints, offers examples of how the same characteristic of causality can be implemented in different ways. Alternative CAAs that focus on one or two viewpoints appear to go into greater detail on those viewpoints than reviews incorporating BH viewpoints, both in how the viewpoint is implemented and in the transparency with which that implementation is reported. However, reviews incorporating BH viewpoints (even though most did not use all nine viewpoints) appear to consider a broader range of characteristics of causality than the alternative CAAs we identified. Investigators aiming to conduct causal SRs may need to consider which balance of depth and breadth is most appropriate for their consideration of a putative causal relationship.

Investigators should consider a range of CAAs and choose the approach that provides the greatest insight into whether a causal relationship exists; this is especially true of BH viewpoints. This finding is consistent with an earlier theoretical comparison of BH viewpoints with other CAAs to elucidate the viewpoints' theoretical underpinnings; our findings suggest that alternative CAAs offer practical examples for improving the way individual viewpoints are implemented. For instance, the causal cascade model approach to evaluating heterogeneity, and the horizontal review approach to evaluating the impact of different study designs and the potential biases associated with each, are potentially valuable for implementing the BH viewpoints of consistency and strength of association. Formal testing (horizontal review, sign‐testing mechanisms) and comprehensive evaluation (realist review) of putative mechanisms are necessary to increase transparency around assumptions of plausibility. The MRC guidance on natural experiments lays out the analytical methods and study designs useful for implementing experiment. Falsification outcomes, as used in the MRC guidance on natural experiments, or comparing the associations of different exposure/outcome groupings, as the horizontal review and sign‐testing CAAs did, may be useful approaches to evaluating specificity. Moreover, coherence and analogy were two of the three most infrequently used viewpoints. Though we are unsure why they were excluded, as they were used by fewer than half of reviews and were the only viewpoints that did not overlap with the alternative CAAs, their utility in causal assessment is not clearly supported.

SRs/RoRs applying BH viewpoints varied in how the viewpoints were implemented and in the transparency of reporting on that implementation. We found that transparent reporting of why viewpoints were implemented in a certain way (or not considered at all) is potentially as important as how viewpoints were implemented. That is, it may be more useful to understand why viewpoints have been excluded than to apply all nine viewpoints. Clarity, such as in defining viewpoints and providing criteria for how they may be met, also increases transparency in reporting BH viewpoints. Where possible, we believe a more comprehensive approach to implementing the viewpoints is preferable. For example, reviews that describe the support for each viewpoint both narratively and quantitatively (e.g., strong/moderate/weak) offer greater transparency about how support for each viewpoint was considered. Transparent reporting of how viewpoints were implemented may also clarify inconsistencies in how BH viewpoints were used.

6. CONCLUSION

This methodological review has evaluated how SRs/RoRs that assess causality ('causal reviews') in population health research are conducted and reported. It contributes to the literature aimed at improving causal assessment in SRs, for which there are currently no established guidelines. While our goal was not to propose guidelines, our findings suggest overlap between the alternative CAAs and BH viewpoints, such that alternative CAAs appear to emphasise one or two viewpoints. This indicates that alternative CAAs should be used to inform, and improve, how BH viewpoints are implemented. Moreover, as there are also no guidelines for incorporating BH viewpoints, the most commonly applied CAA, we identified five key areas where reviews can be transparent: reasons for excluding viewpoints; viewpoint definitions; viewpoint indicators; support for viewpoints; and application of viewpoints. The more transparent and clear reviews are about how CAAs are implemented, the greater the clarity is likely to be about how CAAs impact different SR stages, which was not always evident in the reviews we examined. Overall, we found that clarity, transparency and engagement with other CAAs are the key approaches to conducting a causal SR.

AUTHOR CONTRIBUTIONS

MS led (and AP, HT, and SVK supervised) conceptualization, methodology, investigation, analysis, and writing ‐ original draft. AP, HT, and SVK also validated findings and contributed to writing ‐ review & editing.

FUNDING INFORMATION

Michal Shimonovich received funding from the Medical Research Council and the Medical, Veterinary and Life Sciences School at the University of Glasgow for her PhD. The author received no additional financial support for the research, authorship, and/or publication of this article.

CONFLICT OF INTEREST

The authors declare there is no conflict of interest.

Supporting information

Appendix S1 Supporting Information.

Shimonovich M, Pearce A, Thomson H, Katikireddi SV. Causal assessment in evidence synthesis: a methodological review of reviews. Res Syn Meth. 2022;13(4):405‐423. doi: 10.1002/jrsm.1569

Funding information Chief Scientist Office, Grant/Award Number: SPHSU17; Medical Research Council, Grant/Award Numbers: MC_ST_U18004, MRC MC_UU_00022/2; NRS Senior Clinical Fellowship, Grant/Award Number: SCAF/15/02; Wellcome Trust, Grant/Award Number: 205412/Z/16/Z

DATA AVAILABILITY STATEMENT
