Research-Methodology

Deductive Approach (Deductive Reasoning)

A deductive approach is concerned with “developing a hypothesis (or hypotheses) based on existing theory, and then designing a research strategy to test the hypothesis” [1].

Gulati adds that if a causal relationship or link seems to be implied by a particular theory or case example, it might be true in many cases, and a deductive design can test whether this relationship or link also obtains in more general circumstances [2].

The deductive approach can be explained by means of hypotheses, which are derived from the propositions of a theory. In other words, the deductive approach is concerned with deducing conclusions from premises or propositions.

Deduction begins with an expected pattern “that is tested against observations, whereas induction begins with observations and seeks to find a pattern within them” [3] .

Advantages of Deductive Approach

Deductive approach offers the following advantages:

  • Possibility to explain causal relationships between concepts and variables
  • Possibility to measure concepts quantitatively
  • Possibility to generalize research findings to a certain extent

The alternative to the deductive approach is the inductive approach. The table below guides the choice of approach depending on circumstances:

| Criterion | Deductive approach preferred | Inductive approach preferred |
| --- | --- | --- |
| Wealth of literature | Abundance of sources | Scarcity of sources |
| Time availability | Short time available to complete the study | No shortage of time to complete the study |
| Risk | Risk is to be avoided | Risk is accepted; no theory may emerge at all |

Choice between deductive and inductive approaches

Deductive research approach explores a known theory or phenomenon and tests if that theory is valid in given circumstances. It has been noted that “the deductive approach follows the path of logic most closely. The reasoning starts with a theory and leads to a new hypothesis. This hypothesis is put to the test by confronting it with observations that either lead to a confirmation or a rejection of the hypothesis” [4] .

Moreover, deductive reasoning can be explained as “reasoning from the general to the particular” [5], whereas inductive reasoning works in the opposite direction. In other words, the deductive approach involves formulating hypotheses and subjecting them to testing during the research process, while inductive studies do not begin with hypotheses.

Application of Deductive Approach (Deductive Reasoning) in Business Research

In studies with a deductive approach, the researcher formulates a set of hypotheses at the start of the research. Relevant research methods are then chosen and applied to test the hypotheses and confirm or reject them.


Generally, studies using a deductive approach proceed through the following stages:

  • Deducing a hypothesis from theory.
  • Formulating the hypothesis in operational terms and proposing a relationship between two specific variables.
  • Testing the hypothesis with relevant method(s): quantitative methods such as regression and correlation analysis, or descriptive statistics (mean, mode, median).
  • Examining the outcome of the test, and thus confirming or rejecting the theory. When analysing the outcome of tests, it is important to compare research findings with the literature review findings.
  • Modifying the theory when the hypothesis is not confirmed.

My e-book, The Ultimate Guide to Writing a Dissertation in Business Studies: A Step-by-Step Assistance, contains discussions of theory and application of research approaches. The e-book also explains all stages of the research process, starting from the selection of the research area to writing a personal reflection. Important elements of dissertations such as research philosophy, research design, methods of data collection, data analysis and sampling are explained in this e-book in simple words.

John Dudovskiy


[1] Wilson, J. (2010) Essentials of Business Research: A Guide to Doing Your Research Project. SAGE Publications, p. 7.

[2] Gulati, P. M. (2009) Research Management: Fundamental and Applied Research. Global India Publications, p. 42.

[3] Babbie, E. R. (2010) The Practice of Social Research. Cengage Learning, p. 52.

[4] Snieder, R. & Larner, K. (2009) The Art of Being a Scientist: A Guide for Graduate Students and their Mentors. Cambridge University Press, p. 16.

[5] Pelissier, R. (2008) Business Research Made Easy. Juta & Co., p. 3.


Inductive Vs Deductive Research


Inductive and deductive research are two contrasting approaches used in research to develop and test theories.

Inductive Research

  • Definition: Inductive research starts with specific observations or real examples of events, trends, or social processes. From these observations, researchers identify patterns and develop broader generalizations or theories.
  • Observation: Begin with detailed observations of the world.
  • Pattern Recognition: Identify patterns or regularities in these observations.
  • Theory Formation: Develop theories or hypotheses based on the identified patterns.
  • Conclusion: Make generalizations that can be applied to broader contexts.
  • Example: A researcher observes that students who study in groups tend to perform better on exams. From this pattern, they might develop a theory that group study is more effective than studying alone.

Deductive Research

  • Definition: Deductive research starts with a theory or hypothesis and then designs a research strategy to test this hypothesis. It moves from the general to the specific.
  • Theory: Begin with an existing theory or hypothesis.
  • Hypothesis Development: Formulate a hypothesis based on the theory.
  • Data Collection: Collect data to test the hypothesis.
  • Analysis: Analyze the data to determine whether it supports or refutes the hypothesis.
  • Conclusion: Draw conclusions that confirm or challenge the initial theory.
  • Example: A researcher starts with the hypothesis that “students who study for more than 3 hours a day perform better on exams.” They then collect data to see if this hypothesis holds true.
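The deductive example above can be sketched as a toy analysis. The exam scores below are invented for illustration; a real study would apply a formal significance test (e.g. a two-sample t-test) rather than only comparing means:

```python
# Invented exam scores for the hypothetical study described above.
import statistics

more_than_3h = [78, 82, 75, 90, 85]  # students studying > 3 hours a day
up_to_3h = [65, 70, 60, 72, 68]      # students studying <= 3 hours a day

# Analysis step: compare group means.
diff = statistics.mean(more_than_3h) - statistics.mean(up_to_3h)
print(f"mean difference: {diff:.1f} points")  # prints "mean difference: 15.0 points"

# Conclusion step: a positive difference is consistent with the hypothesis;
# zero or a negative difference would challenge the initial theory.
```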

Key Differences

  • Inductive: Moves from specific observations to broader generalizations (bottom-up approach).
  • Deductive: Moves from a general theory to specific observations or experiments (top-down approach).
  • Inductive: Theories are developed based on observed patterns.
  • Deductive: Theories are tested through empirical observation.
  • Inductive: Useful in exploring new phenomena or generating new theories.
  • Deductive: Effective for testing existing theories or hypotheses.

Both inductive and deductive research approaches are crucial in the development and testing of theories. The choice between them depends on the research goal: inductive for exploring and generating new theories, and deductive for testing existing ones.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


What is Deductive Research? Meaning, Stages & Examples

Emmanuel

Introduction

Deductive research is a scientific approach that is used to test a theory or hypothesis through observations and empirical evidence. This research method is commonly used in disciplines such as physics, biology, psychology, and sociology, among others. In this article, we will explore the meaning of deductive research, its components, and some examples of its application.

What is Deductive Research?

Deductive research starts with a theory, forms a hypothesis, and tests it through observation and evidence. In other words, it involves starting with a general theory and then making a specific prediction based on that theory. This prediction is called a hypothesis, and it is tested through observations and data analysis. Deductive research aims to support or disprove theories with evidence, advancing scientific knowledge through experimentation and analysis.


Components of Deductive Research

Deductive research is composed of several components, which are as follows:

  • Theory Development

The first step in deductive research is to develop a theory that explains a particular phenomenon. This theory is based on existing research and knowledge, and it provides a general framework for understanding the phenomenon.

  • Hypothesis Formulation

Next, the researcher formulates a hypothesis based on the theory. The hypothesis tests the theory’s validity through a verifiable prediction: a statement that can be confirmed or refuted through careful observation and thorough data analysis.

  • Data Collection

To test the hypothesis in deductive research, the researcher must actively collect data. Depending on the nature of the phenomenon, methods such as surveys, experiments, or observations are used.

  • Data Analysis

The fourth step in deductive research is to analyze the data that have been collected. This involves using statistical methods to determine whether the data support the hypothesis. If the data support the hypothesis, they provide evidence that the theory is valid. If the data do not support the hypothesis, the theory may need to be revised or rejected.

To conclude deductive research, analyze data and draw a conclusion that either supports or rejects the hypothesis. Researchers may need to revise the theory based on the results or conduct further research to confirm them.

What are the Main Characteristics of Deductive Research?

Deductive research starts with a theory or hypothesis, which is then tested through observations or experiments that can support or undermine it.

Another important characteristic of deductive research is that it uses objective and empirical methods to gather and analyze data. This means that researchers rely on measurable data and observations to test their hypotheses, rather than relying on personal opinions or subjective judgments.

Deductive research uses deductive reasoning. It starts with a theory, makes predictions, and tests them through observation or experimentation.


Stages of the Deductive Research Process

The main stages of the deductive research process include theory development, hypothesis formulation, data collection, data analysis, and conclusion. Researchers start by identifying a problem or question, reviewing the literature, and building a theoretical framework. They formulate a hypothesis and collect and analyze data to test it. They draw a conclusion based on whether the data supports the hypothesis or not.

Difference between Deductive and Inductive Research

Deductive and inductive research are two different approaches to scientific research with distinct differences. Researchers use a top-down approach called deductive research, beginning with a theory or hypothesis and collecting data to test it. Inductive research, by contrast, starts with observations and builds a theory or hypothesis from them, using a bottom-up approach.

One of the main differences between the two approaches is the order in which they proceed. Deductive research begins with a theory, then tests it with experiments. Inductive research starts with data, then forms a theory.

Another difference is the degree of certainty associated with the results. Deductive research aims to prove or disprove a specific hypothesis, so the results tend to be more definitive. Inductive research generates theories based on observations, resulting in more tentative and revisable outcomes compared to deductive research.

To choose between deductive and inductive research, consider the research question and resources. Deductive research suits experiments, while inductive research suits exploring novel or complex systems.

Steps in Deductive Research

A. Formulating a research question: Identify a research question based on existing knowledge and a literature review.

B. Developing a hypothesis: Develop a specific hypothesis or set of hypotheses based on the research question and relevant theories.

C. Designing the research: Plan the research design including sampling, data collection methods, and research instruments.

  • Sampling: Identify the population and sampling technique to be used in the study.
  • Data collection methods: Choose appropriate data collection methods such as surveys, experiments, or observations.
  • Research instruments: Develop or select appropriate research instruments such as questionnaires or tests.

D. Analyzing the data: Collect and analyze the data using appropriate statistical techniques to test the hypothesis.

E. Drawing conclusions: Based on the analysis of the data, draw conclusions that either support or reject the hypothesis or suggest the need for further research.

Examples of Deductive Research

  • The study tests the hypothesis that “exposure to violent video games increases aggressive behavior in children.”

In this study, the researcher would formulate the research question based on existing literature and knowledge. They would develop a hypothesis by proposing that exposure to violent video games increases aggressive behavior in children.

The researcher would first select a group of children, decide on the method of collecting data (such as surveys or observations), and create research tools (like a questionnaire) to study the potential impact of violent video games on aggressive behavior.

The researcher would then collect and analyze the data, using appropriate statistical techniques to test the hypothesis. If the results of the analysis support the hypothesis, the researcher would conclude that exposure to violent video games increases aggressive behavior in children. If the results do not support the hypothesis, the researcher would conclude that there is no significant relationship between exposure to violent video games and aggressive behavior.

  • A test of the hypothesis that “employees who work in a positive work environment have higher levels of job satisfaction compared to employees who work in a negative work environment.”

In this study, the researcher would formulate the research question based on existing literature and knowledge. They would then develop a hypothesis that a positive work environment is strongly correlated with higher levels of job satisfaction.

Next, the researcher would design the study, including selecting a sample of employees, determining the data collection method (e.g., survey, observation), and developing the research instruments (e.g., a questionnaire to assess the work environment and job satisfaction).

The researcher would then collect and analyze the data, using appropriate statistical techniques to test the hypothesis. If the results of the analysis support the hypothesis, the researcher would conclude that working in a positive work environment leads to higher levels of job satisfaction. If the results do not support the hypothesis, the researcher would conclude that there is no significant relationship between work environment and job satisfaction.

  • Testing the hypothesis that “increasing the amount of exercise a person does leads to a decrease in body weight.”

In this study, the researcher would formulate the research question based on existing literature and knowledge. They would then develop a hypothesis, in this case, the hypothesis that increasing exercise leads to a decrease in body weight.

Therefore, the researcher would design the study, including selecting a sample of participants, determining the data collection method (e.g., survey, observation), and developing the research instruments (e.g., a questionnaire to assess exercise habits and body weight).

The researcher would then collect and analyze the data, using appropriate statistical techniques to test the hypothesis. If the results of the analysis support the hypothesis, the researcher would conclude that increasing exercise leads to a decrease in body weight. If the results do not support the hypothesis, the researcher would conclude that there is no significant relationship between exercise and body weight.

  • A study tests the hypothesis that “students who attend schools with a higher teacher-to-student ratio have higher levels of academic achievement compared to students who attend schools with a lower teacher-to-student ratio.”

In this study, the researcher would formulate the research question based on existing literature and knowledge. They would formulate a hypothesis that a higher teacher-to-student ratio correlates with greater academic achievement.

The researcher would design the study, including selecting a sample of students, determining the data collection method (e.g., observation, survey), and developing the research instruments (e.g., a questionnaire to assess teacher-to-student ratio and academic achievement).

The researcher would then collect and analyze the data. If the results of the analysis support the hypothesis, the researcher would conclude that a higher teacher-to-student ratio leads to higher levels of academic achievement. If the results do not support the hypothesis, the researcher would conclude that there is no significant relationship between teacher-to-student ratio and academic achievement.

  • A hypothesis test on “smoking is a risk factor for lung cancer.”

In this study, the researcher would formulate the research question based on existing literature and knowledge. They would then develop a hypothesis, in this case, the hypothesis that smoking is a risk factor for lung cancer.

Then, the researcher would design the study, including selecting a sample of participants, determining the data collection method (e.g., medical records, interviews), and developing the research instruments (e.g., a questionnaire to assess smoking status and lung cancer diagnosis).

The researcher would then collect and analyze the data, using appropriate statistical techniques to test the hypothesis. If the results of the analysis support the hypothesis, the researcher would conclude that smoking is a risk factor for lung cancer. If the results do not support the hypothesis, the researcher would conclude that there is no significant relationship between smoking and lung cancer.


Advantages of Deductive Research

  • Clearly defined research question: Deductive research starts with a clearly defined research question, which helps to keep the study focused and targeted.
  • Testable hypotheses: The use of hypotheses in deductive research allows for the testing of specific predictions, which can produce more reliable and valid results.
  • Structured approach: Deductive research is a structured approach that uses a logical and systematic process to test hypotheses, making it easier to replicate and build upon previous research.
  • Clear conclusions: Deductive research actively generates clear and concise conclusions by analyzing data, thus providing valuable insights to inform policy decisions and guide future research.
  • Efficiency: Deductive research can be more efficient in terms of time and resources since it focuses on testing specific hypotheses rather than exploring unknown phenomena.

Limitations of Deductive Research

  • Limited scope: Deductive research may have a limited scope since it starts with a specific hypothesis and may miss other relevant factors that could impact the research question.
  • Biases: The use of hypotheses in deductive research can lead to confirmation biases, where researchers may only look for evidence that supports their hypothesis while ignoring evidence that contradicts it.
  • Limited generalizability: Since deductive research is based on specific hypotheses and tests, the results may only be generalizable to a specific population, time, or setting.
  • Lack of flexibility: Deductive research is a structured approach, which may limit the researcher’s ability to explore new and unexpected findings that arise during the study.
  • Reductionism: Deductive research can be reductionistic since it often breaks down complex phenomena into smaller, more manageable parts, potentially overlooking the complex interrelationships among different factors.

In conclusion, deductive research involves a systematic process of identifying a research question, formulating a hypothesis, designing and implementing a research plan, collecting and analyzing data, and drawing conclusions based on the results. By following this process, researchers can gain insights into the relationships between variables and develop a deeper understanding of the phenomenon they are studying.



Inductive vs Deductive Reasoning | Difference & Examples

Published on 4 May 2022 by Raimo Streefkerk. Revised on 10 October 2022.

The main difference between inductive and deductive reasoning is that inductive reasoning aims at developing a theory, while deductive reasoning aims at testing an existing theory.

Inductive reasoning moves from specific observations to broad generalisations, and deductive reasoning the other way around.

Both approaches are used in various types of research, and it’s not uncommon to combine them in one large study.



Inductive research approach

When there is little to no existing literature on a topic, it is common to perform inductive research, because there is no theory to test. The inductive approach consists of three stages:

1. Observation
  • A low-cost airline flight is delayed
  • Dogs A and B have fleas
  • Elephants depend on water to exist

2. Seeking patterns
  • Another 20 flights from low-cost airlines are delayed
  • All observed dogs have fleas
  • All observed animals depend on water to exist

3. Developing a theory
  • Low-cost airlines always have delays
  • All dogs have fleas
  • All biological life depends on water to exist

Limitations of an inductive approach

A conclusion drawn on the basis of an inductive method can never be proven, but it can be invalidated.

Example: You observe 1,000 flights from low-cost airlines. All of them experience a delay, which is in line with your theory. However, you can never prove that flight 1,001 will also be delayed. Still, the larger your dataset, the more reliable the conclusion.


Deductive research approach

When conducting deductive research, you always start with a theory (often the result of earlier inductive research). Reasoning deductively means testing such a theory. If there is no theory yet, you cannot conduct deductive research.

The deductive research approach consists of four stages:

1. Start with an existing theory and formulate a hypothesis
  • If passengers fly with a low-cost airline, then they will always experience delays
  • All pet dogs in my apartment building have fleas
  • All land mammals depend on water to exist

2. Collect data to test the hypothesis
  • Collect flight data of low-cost airlines
  • Test all dogs in the building for fleas
  • Study all land mammal species to see if they depend on water

3. Analyse the results
  • 5 out of 100 flights of low-cost airlines are not delayed
  • 10 out of 20 dogs didn’t have fleas
  • All land mammal species depend on water

4. Support or reject the hypothesis
  • 5 out of 100 flights of low-cost airlines are not delayed → reject hypothesis
  • 10 out of 20 dogs didn’t have fleas → reject hypothesis
  • All land mammal species depend on water → support hypothesis
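The reject/support logic for the airline example can be made concrete with a tiny sketch (the counts come from the text; the point is the logic, not the code):

```python
# From the text: 5 out of 100 low-cost flights were NOT delayed.
delayed_flights, total_flights = 95, 100

# The hypothesis was universal: low-cost flights are ALWAYS delayed.
# A universal claim is rejected by even a single counterexample.
hypothesis_holds = delayed_flights == total_flights
print("support hypothesis" if hypothesis_holds else "reject hypothesis")
# prints "reject hypothesis"
```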

Limitations of a deductive approach

The conclusions of deductive reasoning can only be true if all the premises set in the inductive study are true and the terms are clear.

  • All dogs have fleas (premise)
  • Benno is a dog (premise)
  • Benno has fleas (conclusion)

Combining inductive and deductive research

Many scientists conducting a larger research project begin with an inductive study (developing a theory). The inductive study is followed up with deductive research to confirm or invalidate the conclusion.

In the examples above, the conclusion (theory) of the inductive study is also used as a starting point for the deductive study.

Frequently asked questions about inductive vs deductive reasoning

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.



Deductive / Quantitative research approach


What is the quantitative research approach?

The deductive or quantitative research approach is concerned with situations in which data can be analyzed in terms of numbers. The researcher primarily uses postpositivist claims for developing knowledge (i.e., cause-and-effect thinking, reduction to specific variables, hypotheses and questions, and the use of measurement and observation), employs strategies of inquiry such as experiments and questionnaires, and collects data via predetermined instruments that yield statistical analysis. Its results are therefore readily analyzed and interpreted. The most common quantitative research methods are experiments and questionnaires.
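As a small illustration of the kind of numeric summary this approach produces, here is a sketch with invented 1–5 Likert-scale questionnaire responses (all values are assumptions for the example):

```python
# Invented Likert-scale (1-5) questionnaire responses, for illustration only.
import statistics

responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

# Descriptive statistics: the basic quantitative summary of the data.
print("mean:", statistics.mean(responses))      # 3.8
print("median:", statistics.median(responses))  # 4.0
print("mode:", statistics.mode(responses))      # 4
```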


Inductive and Deductive Reasoning – Examples & Limitations

Published by Alvin Nicolas on August 14th, 2021. Revised on October 26, 2023.

“Deductive reasoning is the procedure for utilising the given information. On the other hand, inductive reasoning is the procedure of achieving it” (Henry Mayhew). Inductive and deductive reasoning take assumptions and incidents into account. Here is all you need to know about inductive vs deductive reasoning.

Deductive reasoning begins with an assumption and moves from generalised premises to a specific conclusion. Inductive reasoning, on the other hand, begins with observations and moves from specific incidents to a generalised conclusion.

Deductive Reasoning Strategy

The deductive reasoning method starts with a theory-driven hypothesis that directs data collection and investigation.

If there is an existing theory to explain a specific  topic , you form a hypothesis based on that theory that guides  data collection and analysis. You might design a particular survey to collect information about a set of variables in your  hypothesis  which you then evaluate.


1. Existing theory

  • All mangoes are fruits (general statement)
  • All dancers know how to sing. (general statement)

2. Hypothesis

  • All fruits have seeds (general statement)
  • Seema knows how to sing (specific statement)

3. Data Collection

  • Collect different types of fruits to see if they have seeds.
  • Collect information about singers and dancers to find out how many of them can both sing and dance.

4. Conclusion

  • Mangoes have seeds (specific conclusion: confirmation)
  • Hence, Seema is a dancer (rejection: that Seema can sing does not establish that she is a dancer)

Limitations of Deductive Reasoning

Deductive reasoning considers no other evidence except the given premises. The conclusion is drawn from those premises, so the basic premises need to be true to draw a valid conclusion about the theory.

  • All mangoes are fruits. (Premise)
  • All fruits have seeds. (Premise)
  • Mangoes have seeds. (Conclusion)

From the first and second premise in the above example, the third statement is concluded.


Inductive Reasoning Strategy 

The inductive reasoning method starts with a research-based question and data collection to develop a hypothesis and theory.

If you have a research question that is a testable concept, you proceed with data collection about the connection between two or more observations. Based on those observations, you form a hypothesis which you use to evaluate and develop a theory.

1. Research Question

  • Does regular exercise promote maximum weight loss?
  • Are all cats brown?

2. Observation

  • Regular exercise enables maximum weight loss.
  • All cats in this area are brown.

3. Observing Patterns

  • You might begin by collecting relevant data from a group of individuals, using surveys and interviews to understand their views and experiences.
  • You can examine cats from different areas to find out whether all of them are brown.

4. Confirmation or Rejection

  • Regular exercise promotes maximum weight loss if it is incorporated with a balanced diet and a healthy lifestyle.
  • All cats in this particular area are brown, but all cats are not generally brown.

Limitations of Inductive Reasoning

An inductive argument may be logically coherent yet still untrue in reality, so its conclusions are liable to be wrong.

Suppose we pick two raw fruits from a fruit basket and assume that all the fruits in the basket are raw. The assumption appears reasonable, but it may not be true: there might be ripe fruits in the basket along with the raw ones.
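The fruit-basket example can be sketched as a few lines of code. The basket contents are made up; the point is that a generalisation induced from a small sample can fail for the whole basket.

```python
# Sketch of the fruit-basket example: a claim induced from a small
# sample can fail for the whole basket. Contents are made up.

basket = ["raw", "raw", "ripe", "ripe", "ripe", "ripe"]

sample = basket[:2]                       # we happen to pick two raw fruits
claim = all(f == "raw" for f in sample)   # induction: "all fruits are raw"
truth = all(f == "raw" for f in basket)   # reality check over the whole basket

print(claim)   # True  -- the claim looks justified from the sample
print(truth)   # False -- ripe fruits were in the basket all along
```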

Inductive Vs. Deductive Reasoning Comparative Analysis 

  • Inductive reasoning draws a generalised conclusion from specific observations. Those observations may change or remain constant, so the conclusion may or may not turn out to be true.
  • Deductive reasoning may seem simple, but it goes wrong if a given premise is wrong. It concludes only from the given premises and proofs, so the logic must be applied properly; otherwise the entire process of reasoning, and its conclusion, fails.
  • We use inductive and deductive reasoning in everyday life and develop beliefs based on our experiences. Both types of reasoning depend on evidence, which helps us get closer to the truth.
  • “Never join a group of book burners. You can merely hide evidence, but you can’t suppress the existing truth.” Dwight D. Eisenhower
  • One of the main differences between the two methods is that an inductive conclusion is not logically guaranteed and may change: it can be strong or weak, but never definitively true or false.
  • Inductive reasoning gives us a new piece of information derived from the information already available. Deductive reasoning does not: its conclusion is already contained in the existing facts.
  • In deductive reasoning, we already know that our basic information is a fact, and it leads to a new outcome which may be another fact. Whereas in inductive reasoning, the basic information is a hypothesis, and it may sometimes lead to a false conclusion.
  • Inductive and deductive reasoning both depend on facts and evidence in the search for truth, which can be accepted or rejected. Lawyers and scientists use facts followed by evidence to prove their statements. We can predict a situation from the collected information, but no one can prove exactly what happened in the past without strong proof. Example – Arthur Conan Doyle made inductive reasoning prominent: his iconic character Sherlock Holmes uses an inductive approach in his investigations in the quest for truth, which may or may not support a specific incident.
  • Superstitious beliefs are inductive. For instance, a cat crossing someone’s path is considered a bad omen, and people take another path to avoid bad luck. This can occasionally be true by coincidence, but we cannot treat it as a logical fact.
  • This happens when someone ignores the good-luck experiences and remembers only the associated bad ones, to prove an assumption without any strong evidence.
  • The legal system rests largely on inductive reasoning: a lawyer’s arguments try to relate the facts to the evidence to support an assumption, and those arguments can be either strong or weak.
  • One of the main disadvantages of inductive reasoning is that it does not guarantee accuracy; the conclusion is liable to be wrong, although the method is useful when you have incomplete information. Example – “All pigeons are white” is an assumption that needs evidence-based research. Because you haven’t seen every pigeon, you need to study them, and the conclusion may prove the assumption partly wrong: some pigeons are white, but not all.

Inductive reasoning enables you to develop general ideas from specific observations. Investigating a question in depth enriches your thinking, lets you question assumptions, and helps you build an argument which, while not always true, broadens your perception and knowledge.

Deductive reasoning is the opposite of inductive reasoning. As the word “deduce” suggests, it derives a specific idea from a general theory. It keeps you dependent on the existing theory and leaves little room for your own perception, because you are studying an already-proven theory in detail.

“Inductive reasoning makes man master of his environment; it is an achievement.”  Mohammed Iqbal

This makes it clear that we can’t acquire full knowledge at once. Inductive reasoning gives you the ability to question an assumption, investigate it, and gather evidence to support or reject it based on the outcome. Deductive reasoning, on the other hand, follows existing facts and premises without personal assumptions.

Hence, both kinds of reasoning methods are used in research and have their specific limitations and advantages depending on the pieces of evidence, facts, and research procedure.

Frequently Asked Questions

What is meant by inductive reasoning?

Inductive reasoning involves drawing general conclusions from specific observations or evidence. It moves from specific instances to broader theories without guaranteeing absolute truth. It’s common in scientific research and helps formulate hypotheses based on patterns and trends observed.

What is the difference between inductive and deductive reasoning?

Inductive reasoning involves drawing general conclusions from specific observations or instances. It goes from specific to general. Deductive reasoning starts with a general statement or hypothesis and examines the possibilities to reach a specific, logical conclusion. It goes from general to specific. Both are critical to scientific and everyday thinking.

What is an example of inductive reasoning?

After observing that the sun rises in the east every morning for years, one concludes that the sun always rises in the east. This is inductive reasoning: drawing a general conclusion from consistent specific observations, even though the observations are not exhaustive or absolute proof of the conclusion.

What are examples of deductive reasoning?

All men are mortal. Socrates is a man. Therefore, Socrates is mortal. This is deductive reasoning: starting with a general principle and applying it to a specific case to reach a logical conclusion. It proceeds from general to specific, ensuring that the conclusion is certain if the premises are true.

What is inductive reasoning in simple words?

Inductive reasoning is making broad generalisations from specific observations. Essentially, you observe particular instances and then draw a probable conclusion about the entire group. For example, if all swans you’ve seen are white, you might conclude all swans are white, even if you haven’t seen every swan.

Frequently asked questions

How do you use deductive reasoning in research?

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research.

In research, you might have come across something called the hypothetico-deductive method. It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.
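The hypothetico-deductive loop can be sketched as: state a hypothesis, deduce a testable prediction, and confront it with data. The hypothesis and weight-loss figures below are invented for illustration.

```python
# Rough sketch of the hypothetico-deductive method: deduce a testable
# prediction from a hypothesis, then check it against data.
# All figures are invented for illustration.
from statistics import mean

# Hypothesis: regular exercise promotes greater weight loss.
# Deduced prediction: exercisers lose more weight on average.
exercisers = [4.1, 3.8, 5.0, 4.4]       # kg lost (made-up data)
non_exercisers = [1.2, 2.0, 1.5, 1.8]   # kg lost (made-up data)

prediction_holds = mean(exercisers) > mean(non_exercisers)
verdict = "supported" if prediction_holds else "rejected"
print(verdict)  # supported (for this invented data)
```

A real study would use a proper statistical test rather than a bare comparison of means; the sketch only shows the shape of the loop: theory → prediction → data → confirmation or rejection.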

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

Content validity shows you how accurately a test or other measurement method taps  into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)
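The referral mechanism behind snowball sampling can be illustrated with a small simulation. The referral graph and names below are entirely invented; the point is that selection is non-random because only people connected to the seed participants can ever be reached.

```python
# Illustrative simulation of snowball sampling's referral chain.
# The referral graph is invented; it shows why selection is non-random:
# only people connected to the seeds can be reached.

referrals = {
    "seed_1": ["p2", "p3"],
    "p2": ["p4"],
    "p3": [],
    "p4": ["p5"],
    "p5": [],
    "isolated": [],  # no one refers this person, so they are never sampled
}

def snowball(seeds, target_size):
    sample, queue = [], list(seeds)
    while queue and len(sample) < target_size:
        person = queue.pop(0)
        if person not in sample:
            sample.append(person)
            queue.extend(referrals.get(person, []))
    return sample

sample = snowball(["seed_1"], target_size=10)
print(sample)                # ['seed_1', 'p2', 'p3', 'p4', 'p5']
print("isolated" in sample)  # False: unreachable members are never sampled
```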

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
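The quota procedure just described can be sketched in a few lines: fix the quota for each subgroup in advance, then fill the quotas from whoever is conveniently available. The subgroups, quotas, and stream of volunteers are all made up for illustration.

```python
# Sketch of quota sampling: fixed per-subgroup quotas filled from a
# convenience stream. Subgroups, quotas, and volunteers are made up.

quotas = {"under_30": 3, "over_30": 2}  # estimated proportions -> target counts

# Convenience stream: people in the order they happen to show up (non-random).
volunteers = [
    ("a", "under_30"), ("b", "over_30"), ("c", "under_30"),
    ("d", "under_30"), ("e", "under_30"), ("f", "over_30"), ("g", "over_30"),
]

sample = {group: [] for group in quotas}
for name, group in volunteers:
    if len(sample[group]) < quotas[group]:
        sample[group].append(name)
    if all(len(members) == quotas[g] for g, members in sample.items()):
        break  # every quota is filled; stop recruiting

print(sample)  # {'under_30': ['a', 'c', 'd'], 'over_30': ['b', 'f']}
```

Note that volunteer “e” is skipped once the under-30 quota is full: the subgroup proportions are controlled, but within each subgroup the selection remains non-random.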

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity; the others are face validity, content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides either to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodological approach that investigates research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or when the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. It often builds on exploratory research, which is typically one of the first stages in the research process and serves as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data, but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data points might be missing, duplicated, outlying, incorrectly formatted, or irrelevant. You’ll start by screening and diagnosing your data. Then, you’ll often standardize and accept or remove data points to make your dataset consistent and valid.
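The screening steps above can be sketched in plain Python. This is a minimal illustration with hypothetical records and a made-up plausibility range, not a prescribed cleaning procedure:

```python
# Screen a small hypothetical dataset for common "dirty data" issues:
# exact duplicates, missing values, and implausible (outlying) values.
records = [
    {"id": 1, "weight_kg": 70.2},
    {"id": 2, "weight_kg": None},    # missing value
    {"id": 3, "weight_kg": 68.5},
    {"id": 3, "weight_kg": 68.5},    # duplicate entry
    {"id": 4, "weight_kg": 7000.0},  # likely a data-entry error
]

# 1. Remove exact duplicates while preserving order.
seen, deduped = set(), []
for r in records:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        deduped.append(r)

# 2. Drop records with missing values.
complete = [r for r in deduped if r["weight_kg"] is not None]

# 3. Flag values outside a plausible range (here: 30-200 kg).
flagged = [r for r in complete if not 30 <= r["weight_kg"] <= 200]
```

Each step shrinks or annotates the dataset, so you can document exactly which records were changed and why.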

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or Type II error in your conclusion. These erroneous conclusions can have serious practical consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, you compare the results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.
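As a rough sketch of the idea (hypothetical state → city → household hierarchy, plain Python; the fixed seed is only so the example is reproducible), sampling at successive stages might look like:

```python
# Multistage cluster sampling sketch: sample states, then cities within
# the chosen states, then households within the chosen cities.
import random

random.seed(5)  # reproducible example

# Hypothetical hierarchy: state -> city -> list of households.
states = {
    "state_%d" % s: {
        "city_%d_%d" % (s, c): ["hh_%d_%d_%d" % (s, c, h) for h in range(50)]
        for c in range(10)
    }
    for s in range(8)
}

sample = []
for state in random.sample(sorted(states), 2):          # stage 1: 2 states
    cities = states[state]
    for city in random.sample(sorted(cities), 3):       # stage 2: 3 cities per state
        sample.extend(random.sample(cities[city], 10))  # stage 3: 10 households per city
```

Each stage only needs a sampling frame for the groups already selected, which is why no complete list of all households is required.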

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .
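To illustrate with toy data (the helper functions below are hand-rolled versions of the standard formulas, not a specific library’s API): multiplying every y-value by 10 makes the fitted line ten times steeper while leaving r unchanged.

```python
# Two datasets with identical correlation coefficients can have very
# different regression slopes: scaling y changes the slope, not r.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def ols_slope(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    return cov / vx

x = [1, 2, 3, 4, 5]
y1 = [2.0, 4.1, 5.9, 8.2, 9.8]
y2 = [v * 10 for v in y1]  # same pattern, much steeper line

# r is identical for (x, y1) and (x, y2), but the slope differs by 10x.
```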

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data come from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B, but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy.

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
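A quick simulation can make the distinction concrete (the true value, noise size, and calibration offset are hypothetical): averaging many readings cancels random error, but a constant instrument offset survives averaging.

```python
# Simulate random vs. systematic error for a hypothetical true weight.
import random
from statistics import mean

random.seed(2)  # reproducible example
true_value = 70.0

# Random error only: readings scatter symmetrically around the truth.
random_readings = [true_value + random.gauss(0, 1.0) for _ in range(10000)]

# Systematic error: a miscalibrated scale adds a constant +2.5 offset.
biased_readings = [true_value + 2.5 + random.gauss(0, 1.0) for _ in range(10000)]

random_error = mean(random_readings) - true_value      # averages out near 0
systematic_error = mean(biased_readings) - true_value  # stays near +2.5
```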

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “explanatory variable” is sometimes preferred over “independent variable” because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
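For example (with hypothetical independent variables), crossing the levels with itertools.product enumerates every condition in a 2 × 3 factorial design:

```python
# A 2x3 factorial design: every level of one independent variable is
# combined with every level of the other, giving 2 * 3 = 6 conditions.
from itertools import product

caffeine = ["placebo", "200mg"]  # hypothetical IV 1 (2 levels)
sleep = ["4h", "6h", "8h"]       # hypothetical IV 2 (3 levels)

conditions = list(product(caffeine, sleep))
# e.g., ("placebo", "4h"), ("placebo", "6h"), ..., ("200mg", "8h")
```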

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
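A minimal sketch of this in Python (a hypothetical sample of 20 participants; the fixed seed is only so the example is reproducible):

```python
# Randomly assign numbered participants to a control and an experimental
# group by shuffling the IDs and splitting the shuffled list in half.
import random

random.seed(42)  # reproducible example

participant_ids = list(range(1, 21))  # 20 participants, numbered 1-20
random.shuffle(participant_ids)

control = sorted(participant_ids[:10])
experimental = sorted(participant_ids[10:])
```

Because the split happens after shuffling, every participant has an equal chance of landing in either group.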

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable:

  • It’s caused by the independent variable.
  • It influences the dependent variable.
  • When it’s taken into account, the statistical correlation between the independent and dependent variables is weaker than when it isn’t considered.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing the population size by your target sample size.
  • Choose every k th member of the population as your sample.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .
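As a small illustration (hypothetical population list; a random starting point keeps every member eligible), with N = 1000 and a target sample of 100 the interval is k = 10:

```python
# Systematic sampling: pick a random starting point, then take every
# kth member of the (non-cyclically ordered) population list.
import random

random.seed(7)  # reproducible example

population = [f"person_{i}" for i in range(1, 1001)]  # N = 1000
sample_size = 100
k = len(population) // sample_size  # sampling interval: 1000 / 100 = 10

start = random.randrange(k)  # random start between 0 and k - 1
sample = population[start::k]
```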

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
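A proportional-allocation sketch of this two-step process (hypothetical strata and sizes; other allocation schemes also exist):

```python
# Stratified sampling: divide subjects into strata, then draw a simple
# random sample from each stratum (20% of each stratum shown here).
import random
from collections import defaultdict

random.seed(3)  # reproducible example

# Hypothetical subjects tagged with a stratum (educational attainment).
subjects = (
    [("subj_%d" % i, "high_school") for i in range(60)]
    + [("subj_%d" % i, "bachelor") for i in range(60, 90)]
    + [("subj_%d" % i, "graduate") for i in range(90, 100)]
)

strata = defaultdict(list)
for name, stratum in subjects:
    strata[stratum].append(name)

fraction = 0.2  # sample 20% of each stratum
sample = []
for stratum, members in strata.items():
    n = round(len(members) * fraction)
    sample.extend(random.sample(members, n))
```

Sampling within each stratum guarantees that even the smallest subgroup (here, "graduate") is represented.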

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
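A single-stage example (hypothetical schools as clusters): randomly select whole clusters, then collect data from every unit within them.

```python
# Single-stage cluster sampling: randomly choose 4 of 20 hypothetical
# schools, then include every pupil in the chosen schools.
import random

random.seed(11)  # reproducible example

schools = {
    "school_%d" % i: ["s%d_pupil_%d" % (i, j) for j in range(30)]
    for i in range(20)
}

chosen = random.sample(sorted(schools), 4)  # randomly pick 4 clusters
sample = [pupil for school in chosen for pupil in schools[school]]
```

Only a list of schools is needed up front, not a list of every pupil, which is what makes the method cheap for spread-out populations.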

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
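In Python, random.sample draws without replacement, giving every member an equal chance of selection (hypothetical numbered population):

```python
# Simple random sampling: draw 500 members from a numbered population
# of 10,000; each member has an equal chance of being selected.
import random

random.seed(1)  # reproducible example

population = list(range(1, 10001))
sample = random.sample(population, 500)  # sampling without replacement
```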

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have a clear rank order but the spacing between response options can’t be assumed to be even.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements and a continuum of response options, usually with 5 or 7 possibilities, to capture their degree of agreement.
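A small scoring sketch (hypothetical items on a 1–5 response scale; item_3 is assumed to be negatively worded and therefore reverse-scored before the items are combined):

```python
# Combine Likert-type item responses into an overall scale score:
# reverse-score negatively worded items, then sum across items.
responses = {"item_1": 4, "item_2": 5, "item_3": 2, "item_4": 4}
reverse_scored = {"item_3"}  # hypothetical negatively worded item

def item_score(item, value, max_points=5):
    # Reverse a 1..5 response: 1 -> 5, 2 -> 4, and so on.
    return (max_points + 1 - value) if item in reverse_scored else value

total = sum(item_score(i, v) for i, v in responses.items())
```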

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
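One way to make "how likely a pattern could have arisen by chance" concrete is a permutation test: shuffle the group labels many times and count how often a difference at least as large as the observed one appears. The data below are made up for illustration, not from any cited study.

```python
import random

random.seed(42)

# Hypothetical outcome scores for a treatment and a control group.
treatment = [8, 9, 7, 10, 9, 8]
control = [6, 7, 5, 8, 6, 7]

observed_diff = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation test: shuffle group labels many times and see how often
# a difference at least as large arises purely by chance.
pooled = treatment + control
n_treat = len(treatment)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = (sum(pooled[:n_treat]) / n_treat
            - sum(pooled[n_treat:]) / (len(pooled) - n_treat))
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / trials  # small p: the pattern is unlikely to be chance
print(f"observed difference: {observed_diff:.2f}, p = {p_value:.4f}")
```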

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
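A minimal sketch of randomization (the subject IDs are hypothetical): shuffling the subject list before splitting it into groups means each subject's confounding characteristics are distributed across groups by chance alone.

```python
import random

random.seed(7)

# Hypothetical subject IDs; random assignment balances confounders on average.
subjects = [f"S{i:02d}" for i in range(1, 21)]
random.shuffle(subjects)

# Split the shuffled list in half: first half -> treatment, second half -> control.
half = len(subjects) // 2
treatment_group = subjects[:half]
control_group = subjects[half:]

print(len(treatment_group), len(control_group))  # 10 10
```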

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .
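Three of these probability sampling methods can be sketched with Python's `random` module; the numbered sampling frame and strata below are assumptions for illustration.

```python
import random

random.seed(1)

# Hypothetical sampling frame: a numbered list of 100 population members.
population = list(range(1, 101))

# Simple random sampling: every member has an equal chance of selection.
srs = random.sample(population, 10)

# Systematic sampling: pick a random start, then take every k-th member.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: sample proportionally within each stratum.
strata = {"under_60": population[:60], "60_plus": population[60:]}
stratified = [member
              for group in strata.values()
              for member in random.sample(group, len(group) // 10)]

print(len(srs), len(systematic), len(stratified))  # 10 10 10
```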

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .
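A small simulation (with made-up exam scores) makes the parameter/statistic distinction and the resulting sampling error concrete:

```python
import random
import statistics

random.seed(3)

# A full (hypothetical) population of 1,000 exam scores.
population = [random.gauss(70, 10) for _ in range(1000)]
parameter = statistics.mean(population)   # parameter: describes the population

# Draw a sample and compute the corresponding statistic.
sample = random.sample(population, 50)
statistic = statistics.mean(sample)       # statistic: describes the sample

# Sampling error: the gap between the sample statistic and the parameter.
sampling_error = statistic - parameter
print(f"parameter={parameter:.2f}  statistic={statistic:.2f}  "
      f"error={sampling_error:+.2f}")
```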

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study | Cross-sectional study
Repeated observations | Observations at a single point in time
Observes the same sample multiple times | Observes different samples (a “cross-section”) in the population
Follows changes in participants over time | Provides a snapshot of society at a given point in time

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



1-2 The Nature of Evidence: Inductive and Deductive Reasoning

Renate Kahlke; Jonathan Sherbino; and Sandra Monteiro

Different philosophies of science (Chapter 1-1) inform different approaches to research, characterized by different goals, assumptions, and types of data. In this chapter, we discuss how post-positivist, interpretivist, constructivist, and critical philosophies form a foundation for two broad approaches to generating knowledge: quantitative and qualitative research. There are several often-discussed distinctions between these two approaches, stemming from their philosophical roots, specifically, a commitment to objectivity versus subjective interpretation, numbers versus words, generalizability versus transferability, and deductive versus inductive reasoning. While these distinctions often hold between qualitative and quantitative research, there are always exceptions. These exceptions demonstrate that quantitative and qualitative research approaches have more in common than superficial descriptions imply.

Key Points of the Chapter

By the end of this chapter the learner should be able to:

  • Describe quantitative research methodologies
  • Describe qualitative research methodologies
  • Compare and contrast these two approaches to interpreting data

Rayna (they/them) stared at the blinking cursor on their screen. They had recently been invited to revise and resubmit their first qualitative research manuscript. Most of the editor and reviewer comments had been relatively easy to handle, but when Rayna reached Reviewer 2’s comments, they were caught off guard. The reviewer acknowledged that they came from a quantitative background, but then went on to write: “I am worried about the generalizability of this study, with a sample of only 25 residents. Wouldn’t it be better to do a survey to get more perspectives?”

Rayna hadn’t seen any other qualitative studies talk about generalizability, so they weren’t entirely sure how to address this comment. Maybe the study won’t be valuable if the results aren’t generalizable! Reviewer 2 might be right that the sample size is too small! They immediately panic-email one of their co-authors, who replies:

Hi Rayna, We get these kinds of reviews all the time. Don’t worry, it’s not a flaw in your study. That said, I think we can build our field’s knowledge about qualitative research approaches by explaining some of these concepts. Start with the one by Monteiro et al. and then read the one by Wright and colleagues. These are both good primers and will give you some language to help respond to this reviewer.

With a sigh of relief, Rayna reads through the attached papers (1-8) and continues their mission to craft a response.

Deeper Dive on this Concept

Qualitative and quantitative research are often talked about as two different ways of thinking and generating knowledge. We contrast a reliance on words with a reliance on numbers, a focus on subjectivity with a focus on objectivity. And to some extent these approaches are different, in precisely the ways described. However, many experienced researchers on both sides of the line will tell you that these differences only go so far.

Drawing on Chapter 1-1, there are different philosophies of science that inform researchers and their research products. Generally, quantitative research is associated with post-positivism; researchers seek to be objective and reduce their bias in order to ensure that their results are as close as possible to the truth. Post-positivists, unlike positivists, acknowledge that a singular truth is impossible to define. However, truth can be defined within a spectrum of probability. Quantitative researchers generate and test hypotheses to make conclusions about a theory they have developed. They conduct experiments that generate numerical data that  define the extent to which an observation is “true.” The strength of the numerical data  suggests whether the observation can be generalized beyond the study population. This process is called deductive reasoning because it starts with a general theory that narrows to a hypothesis that is tested against observations that support (or refute) the theory.

Qualitative researchers, on the other hand, tend to be guided by interpretivism, constructivism, critical theory or other perspectives that value subjectivity. These analytic approaches do not address bias because bias assumes a misrepresentation of “truth” during collection or analysis of data. Subjectivity emphasizes the position and perspective assumed during analysis, articulating that there is no external objective truth, uninformed by context. ( See Chapter 1-1 .) Qualitative methods seek to deeply understand a phenomenon, using the rich meaning provided by words, symbols, traditions, structures, identity constructs and power dynamics (as opposed to simply numbers). Rather than testing a hypothesis, they generate knowledge by inductively generating new insights or theory (i.e. observations are collected and analyzed to build a theory). These insights are contextual, not universal.  Qualitative researchers translate their results within a rich description of context so that readers can assess the similarities and differences between contexts and determine the extent to which the study results are transferable to their setting.

While these distinctions can be helpful in distinguishing quantitative and qualitative research broadly, they also create false divisions. The relationships between these two approaches are more complex, and nuances are important to bear in mind, even for novices, lest we exacerbate hierarchies and divisions between different types of knowledge and evidence.

As an example, the interpretation of quantitative results is not always clear and obvious – findings do not always support or refute a hypothesis. Thus, both qualitative and quantitative researchers need to be attentive to their data. While quantitative research is generally thought to be deductive, quantitative researchers often do a bit of inductive reasoning to find meaning in data that hold surprises. Conversely, qualitative data are stereotypically analyzed inductively, making meaning from the data rather than proving a hypothesis. However, many qualitative researchers apply existing theories or theoretical frameworks, testing the relevance of existing theory in a new context or seeking to explain their data using an existing framework. These approaches are often characterized as deductive qualitative work.

As another example, quantitative researchers use numbers, but these numbers aren’t always meaningful without words. In surveys, interpretation of numerical responses may not be possible without analyzing them alongside free-text responses. And while qualitative researchers rarely use numbers, they do need to think through the frequency with which certain themes appear in their dataset. An idea that only appears once in a qualitative study may have great value to the analysis, but it is also important that researchers acknowledge the views that are most prevalent in a given qualitative dataset.

Key Takeaways

  • Quantitative research is generally associated with post-positivism. Since researchers seek to get as close to ‘the truth’ as they can, they value objectivity and seek generalizable results. They generate hypotheses and use deductive reasoning and numerical data to support or refute their hypotheses. Replicating patterns of data to validate theories and interpretations is one way to evaluate ‘the truth’.
  • Qualitative research is generally associated with worldviews that value subjectivity. Since qualitative researchers seek to understand the interaction between person, place, history, power, gender and other elements of context, they value subjectivity (e.g. interpretivism, constructivism, critical theory). Qualitative research does not seek generalizability of findings (i.e. a universal, decontextualized result), rather it produces results that are inextricably linked to the context of the data and analysis. Data tend to take the form of words, rather than numbers, and are analyzed inductively.
  • Qualitative and quantitative research are not opposites. Qualitative and quantitative research are often marked by a set of apparently clear distinctions, but there are always nuances and exceptions. Thus, these approaches should be understood as complementary, rather than diametrically opposed.

Vignette Conclusion

Rayna smiled and read over their paper once more. They had included an explanation of qualitative research, and how it’s about depth of information and nuance of experience. The depth of data generated per participant is significant, so fewer people are typically recruited in qualitative studies. Rayna also articulated that while the interpretivist approach they used for analysis isn’t focused on generalizing the results beyond a specific context, they were able to make an argument for how the results can transfer to other, similar contexts. Cal provided a bunch of margin comments in their responses to the reviews, highlighting how savvy Rayna had been in addressing the concerns of Reviewer 2 without betraying their epistemic roots. The editor certainly was right – adding more justification and explanation had made the paper stronger, and would likely help others better understand qualitative work. Now the only challenge left was figuring out how to upload the revised documents to the journal submission portal…

  • Wright S, O’Brien BC, Nimmon L, Law M, Mylopoulos M. Research design considerations. Journal of Graduate Medical Education. 2016;8(1):97-98. doi: 10.4300/JGME-D-15-00566.1
  • Monteiro S, Sullivan GM, Chan TM. Generalizability theory made simple (r): an introductory primer to G-studies. Journal of graduate medical education. 2019 Aug;11(4):365-70. doi: 10.4300/JGME-D-19-00464.1
  • Goldenberg MJ. On evidence and evidence-based medicine: lessons from the philosophy of science. Social science & medicine. 2006 Jun 1;62(11):2621-32. doi: 10.1016/j.socscimed.2005.11.031
  • Varpio L, MacLeod A. Philosophy of science series: Harnessing the multidisciplinary edge effect by exploring paradigms, ontologies, epistemologies, axiologies, and methodologies. Academic Medicine. 2020 May 1;95(5):686-9. doi: 10.1097/ACM.0000000000003142
  • Morse JM, Mitcham C. Exploring qualitatively-derived concepts: Inductive—deductive pitfalls. International journal of qualitative methods. 2002 Dec;1(4):28-35. https://doi.org/10.1177/160940690200100404
  • Armat MR, Assarroudi A, Rad M, Sharifi H, Heydari A. Inductive and deductive: Ambiguous labels in qualitative content analysis. The Qualitative Report. 2018;23(1):219-21. https://www.proquest.com/docview/2122314268
  • Tavakol M, Sandars J. Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part I. Medical Teacher. 2014 Sep 1;36(9):746-56. doi: 10.3109/0142159X.2014.915298
  • Tavakol M, Sandars J. Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part II. Medical teacher. 2014 Oct 1;36(10):838-48. doi: 10.3109/0142159X.2014.915297

About the authors

Renate Kahlke, McMaster University

Renate Kahlke is an Assistant Professor within the Division of Education & Innovation, Department of Medicine. She is a scientist within the McMaster Education Research, Innovation and Theory (MERIT) Program.

Jonathan Sherbino

Sandra Monteiro

Sandra Monteiro is an Associate Professor within the Department of Medicine, Division of Education and Innovation, Faculty of Health Sciences, McMaster University. She holds a joint appointment within the Department of Health Research Methods, Evidence, and Impact, Faculty of Health Sciences, McMaster University.

1-2 The Nature of Evidence: Inductive and Deductive Reasoning Copyright © 2022 by Renate Kahlke; Jonathan Sherbino; and Sandra Monteiro is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Inductive vs Deductive Research: Difference of Approaches

Inductive vs deductive research: understand the differences between these two approaches to reasoning and how they guide your research.

The terms “inductive” and “deductive” are often used in logic, reasoning, and science. Scientists use both inductive and deductive research methods as part of the scientific method.

Famous fictional detectives like Sherlock Holmes are often associated with deduction, even though that’s not always what Holmes does (more on that later). Some writing classes include both inductive and deductive essays.

But what’s the difference between inductive vs deductive research? The difference often lies in whether the argument proceeds from the general to the specific or the specific to the general. 

Both methods are used in different types of research, and it’s not unusual to use both in one project. In this article, we’ll describe each in simple yet defined terms.

Content Index:

  • What is inductive research?
  • Stages of the inductive research process
  • What is deductive research?
  • Stages of the deductive research process
  • Difference between inductive vs deductive research

Inductive research is a method in which the researcher collects and analyzes data to develop theories, concepts, or hypotheses based on patterns and observations seen in the data. 

It uses a “bottom-up” method in which the researcher starts with specific observations and then moves on to more general theories or ideas. Inductive research is often used in exploratory studies or when not much research has been done on a topic before.


The three steps of the inductive research process are:

  • Observation: 

The first step of inductive research is to make detailed observations of the studied phenomenon. This can be done in many ways, such as through surveys, interviews, or direct observation.

  • Pattern Recognition: 

Once the data has been collected, the next step is to examine it in detail, looking for patterns, themes, and relationships. The goal is to find insights and trends that can inform the first categories and ideas.

  • Theory Development: 

At this stage, the researcher creates initial categories or concepts based on the patterns and themes that emerged from the data analysis. This means grouping the data by similarity and difference to build a framework for understanding the phenomenon under study.


These three steps are often repeated in a cycle, so the researcher can refine their analysis and deepen their understanding of the phenomenon over time. Inductive research aims to develop new theories and ideas based on the data rather than testing existing theories, as in deductive research.
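The pattern-recognition step above can be sketched in a few lines of Python; the codes assigned to interview excerpts below are hypothetical. Tallying how often each code recurs helps surface the patterns from which categories and theory are later built.

```python
from collections import Counter

# Hypothetical codes assigned to interview excerpts during qualitative coding.
coded_segments = [
    "time pressure", "peer support", "time pressure", "burnout",
    "peer support", "time pressure", "feedback quality", "burnout",
]

# Pattern recognition: tally how often each code recurs across the dataset.
theme_counts = Counter(coded_segments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")  # most frequent codes first
```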

Deductive research is a type of research in which the researcher starts with a theory, hypothesis, or generalization and then tests it through observations and data collection.

It uses a top-down method in which the researcher starts with a general idea and then tests it through specific observations. Deductive research is often used to confirm a theory or test a well-known hypothesis.

The five steps in the process of deductive research are:

  • Formulation of a hypothesis: 

The first step in deductive research is to formulate a hypothesis: an educated prediction about how the variables are related. The hypothesis is usually built on existing theories or previous research.

  • Design of a research study: 

The next step is designing a research study to test the hypothesis. This means choosing a research method, determining what needs to be measured, and deciding how to collect and analyze the data.

  • Collecting data: 

Once the research design is set, different methods, such as surveys, experiments, or observational studies, are used to gather data. Usually, a standard protocol is used to collect the data to ensure it is correct and consistent.

  • Analysis of data: 

In this step, the collected data are analyzed to determine whether they support or refute the hypothesis. Statistical methods are typically used to identify patterns and relationships between the variables.

  • Drawing conclusions: 

The last step is drawing conclusions from the analysis of the data. If the hypothesis is supported, it can be used to make generalizations about the population being studied. If the hypothesis is not supported, the researcher may need to develop a new one and start the process again.

The five steps of deductive research are repeated, and researchers may need to return to earlier steps if they find new information or new ways of looking at things. In contrast to inductive research, deductive research aims to test theories or hypotheses that have already been made.
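The five steps above can be sketched end to end with a simple numerical example. Everything here is hypothetical: the hypothesis, the survey responses, and the 5% threshold; the one-sided proportion z-test is just one of many analyses that could fill the fourth step:

```python
import math

# Step 1 - Hypothesis (hypothetical): "More than half of users prefer
# the new interface." H0: p = 0.5, H1: p > 0.5.
p0 = 0.5

# Step 2 - Research design: a survey where each response is coded
# 1 (prefers the new interface) or 0 (does not).
# Step 3 - Data collection (simulated responses for illustration):
responses = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

# Step 4 - Analysis: one-sided z-test for a proportion
# (normal approximation to the binomial).
n = len(responses)
p_hat = sum(responses) / n
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(Z > z)

# Step 5 - Conclusion: reject H0 at the 5% level if p_value < 0.05.
conclusion = "reject H0" if p_value < 0.05 else "fail to reject H0"
print(f"p_hat={p_hat:.2f}, z={z:.2f}, p={p_value:.4f} -> {conclusion}")
```

Note how each research step maps onto a line of the sketch; in practice the design step would also fix the sample size and test before any data are seen.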

The main differences between inductive and deductive research lie in how the research is conducted, its goal, and how the data are analyzed. Inductive research is exploratory, flexible, and based on qualitative analysis of observations. Deductive research, by contrast, is confirmatory, structured, and based on quantitative analysis.

Here are the main differences between inductive and deductive research in more detail:


  • Bottom-up vs. top-down approach: In inductive research, the researcher starts with data and observations, then uses patterns in the data to develop theories or generalizations, building from specific observations to more general theories. In deductive research, the researcher starts with a theory or hypothesis, then tests it through observations and data collection, using specific observations to test a general theory.

  • Develops theories vs. tests theories: In inductive research, theories or generalizations are derived from what has been observed; the goal is to create theories that explain and make sense of the data. Deductive research aims to test existing theories or hypotheses against real-world observations; the researcher gathers data to support or refute the theory or hypothesis.

  • Exploratory vs. confirmatory studies: Inductive research is often used to learn more about a phenomenon or area of interest when there is limited previous research on the subject, since new theories and ideas can emerge from the data. Deductive research is used when researchers want to test a well-established theory or hypothesis; it works best when the researcher has a clear research question and a specific hypothesis to test.

  • Flexible vs. structured: Inductive research is flexible and adaptable to new findings, because researchers can revise their theories and hypotheses as the data come in; it works best when the research question is unclear or unexpected results arise. Deductive research is structured and systematic, following a predetermined research design and method; a clear plan makes data collection and analysis more objective and consistent.

  • Qualitative vs. quantitative analysis: Inductive research relies more on qualitative analysis, such as textual or visual analysis, to find patterns and themes in the data. Deductive research relies more on quantitative methods, such as statistical analysis, to test the theory or hypothesis and draw objective conclusions.


Inductive research and deductive research are two different types of research with different starting points, goals, methods, and ways of looking at the data.

Inductive research uses specific observations and patterns to come up with new theories. On the other hand, deductive research starts with a theory or hypothesis and tests it through observations.

Both approaches have advantages as well as disadvantages and can be used in different types of research depending on the question and goals.

QuestionPro is a responsive online platform for surveys and research that can be used for both inductive and deductive research. It has many tools and features to help you collect and analyze data, such as customizable survey templates, advanced survey logic, and real-time reporting.

With QuestionPro, researchers can do surveys, send them out, analyze the results, and draw conclusions that help them make decisions and learn more about their fields.

The platform has advanced data analysis and reporting tools that can be used with both qualitative and quantitative methods of data analysis.

Whether researchers do inductive or deductive research, QuestionPro can help them design, run, and analyze their projects completely and powerfully. So sign up now for a free trial! 


Enago Academy

Inductive and Deductive Reasoning — Strategic approach for conducting research


Karl questioned his research approach before finalizing the hypothesis of his research study. He laid out a plan and procedure spanning broad assumptions to detailed methods of data collection, analysis, and interpretation, wondering how to reason about his findings.

His supervisor offered insights into his role in driving the research question by developing a research approach. The supervisor said, “In all disciplines, research plays an essential role in allowing researchers to expand their theoretical knowledge of the field and to verify and justify existing theories. A well-planned research approach will help one understand and build the relationship between the theory and the objective of the research study.”

Obediently, Karl jotted down the keywords to research on the internet, understand the reasoning, and define the research approach for his research study.


What Is a Research Approach?

A research approach is a procedure selected by a researcher to collect, analyze, and interpret data. Based on the methods of data collection and data analysis, research approach methods are of three types: quantitative, qualitative, and mixed methods. However, considering the general plan and procedure for conducting a study, the research approach is divided into three categories:

1. Inductive Approach

The inductive approach begins with a researcher collecting data that is relevant to the research study. Post-data collection, a researcher will analyze this data broadly, looking for patterns in the data to develop a theory that could explain the patterns. Therefore, an inductive approach starts with a set of observations and then moves toward developing a theory.

2. Deductive Approach

The deductive approach is the reverse of the inductive approach. It always starts with a theory, such as one or more general statements or premises, and reaches a logical conclusion. Scientists use this type of reasoning approach to prove their research hypothesis .

3. Abductive Approach

This type of reasoning approach addresses the weaknesses associated with the deductive and inductive approaches. Following the abductive approach, a researcher starts the process with surprising facts or puzzles: empirical phenomena that cannot be explained by existing theories. Abductive reasoning helps the researcher explain these facts or puzzles. Despite its popularity, the abductive approach is challenging to implement, and researchers are often advised to use the traditional deductive or inductive approaches.

Inductive vs. Deductive Reasoning

Inductive reasoning, also called induction, constructs or evaluates general propositions derived from specific examples. Deductive reasoning is the process of reasoning from general statements to reach a logical conclusion.

Arguments in inductive reasoning are strong or weak:

·         Strong arguments are cogent if the premises are true.

·         Weak arguments are uncogent.

·         Conclusions may be incorrect even with strong arguments and true premises.

Arguments in deductive reasoning are valid or invalid:

·         If the logic is correct, then the argument is valid.

·         If there is no theory, then deductive reasoning cannot be conducted.

·         Conclusions are proven valid if the premises are true.
Example of Inductive Reasoning:

Most men are right-handed. John is a man. Therefore, John must be right-handed.

Example of Deductive Reasoning:

All men are mortal. John is a man. Therefore, John is mortal.


What Is Inductive Reasoning?

Inductive reasoning moves from specific observations to broader generalizations and theories. This is the reverse of the hypothetico-deductive scientific method, which tests hypotheses against experimental findings. Inductive reasoning makes generalizations by observing patterns and drawing inferences.

Inductive reasoning produces strong and weak arguments. When the premises are true, the conclusion of a strong argument is likely to be true; such an argument is termed cogent. A weak argument's conclusion may be false even if the premises it is based upon are true. An argument is uncogent if it is weak or a premise is false.

Types of Inductive Reasoning

1. Inductive Generalization

Inductive generalization uses observations about a sample to conclude the population from which the sample was chosen. In simple terms, you use statistical results from samples to make statements about populations. One can evaluate large samples or random sampling using inductive generalizations.
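As a concrete sketch, here is how an inductive generalization from a sample to a population might look numerically. The sample figures are hypothetical, and the 95% interval uses the standard normal approximation:

```python
import math

# Hypothetical sample: 1 = respondent satisfied, 0 = not satisfied
sample = [1] * 42 + [0] * 18   # 42 of 60 respondents satisfied

n = len(sample)
p_hat = sum(sample) / n                  # sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error
margin = 1.96 * se                       # 95% margin of error

low, high = p_hat - margin, p_hat + margin
print(f"Inductive generalization: population satisfaction is roughly "
      f"{p_hat:.0%}, 95% CI [{low:.1%}, {high:.1%}]")
```

The interval makes the inductive leap explicit: the statement about the population is probable, not certain, and a larger or more random sample narrows the interval.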

2. Statistical Generalization

Statistical generalization uses specific numbers to create statements about populations. This generalization is a subtype of inductive generalization, and it is also termed statistical syllogism.

3. Causal Reasoning

Causal reasoning links cause and effect between different aspects of the research study. A causal reasoning statement starts with a premise about two events that occur together, then argues for a specific direction of causality (or rules out other directions), and concludes with a causal statement about the relationship between the two things.

4. Sign Reasoning

Sign reasoning makes correlational connections between different things. It rests on a correlational relationship in which neither thing causes the other to occur; rather, sign reasoning proposes that one event may be a ‘sign’ that another event will occur.

5. Analogical Reasoning

Analogical reasoning concludes something based on its similarities to another thing. It links two things together and then concludes that an attribute of one also holds for the other. Analogical reasoning can be literal or figurative; a literal comparison usually makes for a stronger argument.

Stages of Inductive Research Approach

  • Begin with an observation
  • Seek patterns in the observation
  • Develop a theory or preliminary conclusion based on the patterns observed

Limitations of an Inductive Approach

A conclusion drawn based on inductive reasoning cannot be proven completely, but it can be invalidated.

What Is Deductive Reasoning?

Deductive reasoning starts with one or more general statements to derive a logical conclusion. Moreover, while conducting deductive research, a researcher starts with a theory. This theory could be derived from inductive reasoning. The approach of deductive reasoning is used to test the stated theory. If the general statement or theory is true, the conclusion derived is valid and vice-versa.

Deductive reasoning produces arguments that may be valid or invalid. If the logic is correct, conclusions flow from the general statement or theory and the arguments are valid. Researchers use deductive reasoning to prove their hypotheses. However, if there is no theory yet, then one cannot conduct deductive research.

Types of Deductive Reasoning

There are three common types of deductive reasoning:

1. Syllogism

Syllogism takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. For example —

If brakes fail, the car does not stop.

If the car does not stop, it will cause an accident.

Therefore, brake failure causes an accident.

2. Modus Ponens

Modus ponens is another type of deductive reasoning that follows a pattern that affirms the condition of the reasoning. For example —

If a person is born after 1997, then they are Gen Z.

Ryan was born in 1998.

Therefore, Ryan belongs to Gen Z.

This type of reasoning affirms the previous statement. Meanwhile, the first premise sets the conditional statement to be affirmed.

3. Modus Tollens

Modus tollens is yet another type of deductive reasoning known as ‘the law of contrapositive’. It is the opposite of modus ponens because it negates the condition of the reasoning. For example —

If a person is born after 1997, then they are Gen Z.

Bruce is not Gen Z.

Therefore, Bruce was not born after 1997.

Stages of Deductive Research Approach

  • Begin with an existing theory and create a problem statement
  • Formulate a hypothesis based on the existing theory
  • Collect and analyze data to test the hypothesis
  • Decide if you could reject or accept the hypothesis
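The two inference patterns introduced earlier, modus ponens and modus tollens, can be expressed as small functions. This is an illustrative sketch (the function names are ours, not from any library), reusing the Gen Z example:

```python
def modus_ponens(p_implies_q: bool, p: bool) -> bool:
    """Given P -> Q and P, conclude Q (affirming the antecedent)."""
    assert p_implies_q and p, "both premises must hold"
    return True  # Q follows necessarily

def modus_tollens(p_implies_q: bool, q: bool) -> bool:
    """Given P -> Q and not Q, conclude not P (denying the consequent).
    Returns the truth value of P."""
    assert p_implies_q and not q, "premises: conditional holds and Q is false"
    return False  # P must be false

# P: born after 1997; Q: is Gen Z. The conditional P -> Q is taken as true.
ryan_is_gen_z = modus_ponens(p_implies_q=True, p=(1998 > 1997))
bruce_born_after_1997 = modus_tollens(p_implies_q=True, q=False)

print(ryan_is_gen_z, bruce_born_after_1997)
```

Both functions assert their premises first, mirroring the point in the text that a deductive conclusion is only as good as the truth of its premises.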

Limitations of the Deductive Approach

The conclusions drawn from deductive reasoning can only be true if the theory set in the inductive study is true and the terms are clear.

Combination of Inductive and Deductive Reasoning in Research

When researchers conduct a large research project, they often begin with an inductive study. This inductive reasoning helps them construct an effective working theory. After the inductive phase, deductive reasoning can confirm and conclude the working theory. This helps researchers structure the project and mitigates the risk of research bias in the study.

After doing thorough research in understanding inductive and deductive reasoning, Karl concluded that:

  • Inductive reasoning is known for constructing hypotheses based on existing knowledge and predictions.
  • Deductive reasoning could be used to test an inductive research approach.
  • People tend to rely on information that is easily accessible and available. While theorizing a research hypothesis, this tendency can introduce biases into the study.
  • Inductive reasoning could cause biases which can distort the proper application of inductive argument.
  • A good scientific research study must be highly focused and requires both inductive and deductive research approaches.

After a few hours of focused research, Karl understood his supervisor’s approach to creating a well-planned research hypothesis for his research study. Karl dived deeper and understood that he had only touched the tip of an iceberg, and there is much more to induce and deduce before he holds his doctorate!

Have you ever encountered a situation like Karl’s? Trying to understand which research approach to use? Did you find this blog informative? Do write to us or comment below and tell us what you feel!




Chapter 1: Introduction to Research Methods

1.7 Deductive Approaches to Research

Researchers taking a deductive approach take the steps described for inductive research and reverse their order. They start with a social theory that they find compelling and then test its implications with data; i.e., they move from a more general level to a more specific one. A deductive approach to research is the one that people typically associate with scientific investigation. The researcher studies what others have done, reads existing theories of whatever phenomenon he or she is studying, and then tests hypotheses that emerge from those theories (Figure 1.5).


Figure 1.5: Steps involved with a deductive approach to research.  This table is from Principles of Sociological Inquiry , which was adapted by the Saylor Academy without attribution to the original authors or publisher, as requested by the licensor, and is licensed under a CC BY-NC-SA 3.0 License .

Text Attributions

This chapter has been adapted from Chapter 2.3 in Principles of Sociological Inquiry , which was adapted by the Saylor Academy without attribution to the original authors or publisher, as requested by the licensor, and is licensed under a CC BY-NC-SA 3.0 License .

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


The potential of working hypotheses for deductive exploratory research

  • Open access
  • Published: 08 December 2020
  • Volume 55, pages 1703–1725 (2021)


  • Mattia Casula   ORCID: orcid.org/0000-0002-7081-8153 1 ,
  • Nandhini Rangarajan 2 &
  • Patricia Shields   ORCID: orcid.org/0000-0002-0960-4869 2  


While hypotheses frame explanatory studies and provide guidance for measurement and statistical tests, deductive, exploratory research does not have a framing device like the hypothesis. To this end, this article examines the landscape of deductive, exploratory research and offers the working hypothesis as a flexible, useful framework that can guide and bring coherence across the steps in the research process. The working hypothesis conceptual framework is introduced, placed in a philosophical context, defined, and applied to public administration and comparative public policy. In doing so, this article explains: the philosophical underpinning of exploratory, deductive research; how the working hypothesis informs the methodologies and evidence collection of deductive, exploratory research; the nature of micro-conceptual frameworks for deductive exploratory research; and how the working hypothesis informs data analysis when exploratory research is deductive.


1 Introduction

Exploratory research is generally considered to be inductive and qualitative (Stebbins 2001 ). Exploratory qualitative studies adopting an inductive approach do not lend themselves to a priori theorizing and building upon prior bodies of knowledge (Reiter 2013 ; Bryman 2004 as cited in Pearse 2019 ). Juxtaposed against quantitative studies that employ deductive confirmatory approaches, exploratory qualitative research is often criticized for lack of methodological rigor and tentativeness in results (Thomas and Magilvy 2011 ). This paper focuses on the neglected topic of deductive, exploratory research and proposes working hypotheses as a useful framework for these studies.

To emphasize that certain types of applied research lend themselves more easily to deductive approaches, to address the downsides of exploratory qualitative research, and to ensure qualitative rigor in exploratory research, a significant body of work on deductive qualitative approaches has emerged (see, for example, Gilgun 2005, 2015; Hyde 2000; Pearse 2019). According to Gilgun (2015, p. 3), the use of conceptual frameworks derived from comprehensive reviews of literature and a priori theorizing were common practices in qualitative research prior to the publication of Glaser and Strauss’s (1967) The Discovery of Grounded Theory. Gilgun (2015) coined the term Deductive Qualitative Analysis (DQA) to arrive at some sort of “middle-ground” such that the benefits of a priori theorizing (structure) and allowing room for new theory to emerge (flexibility) are reaped simultaneously. According to Gilgun (2015, p. 14), “in DQA, the initial conceptual framework and hypotheses are preliminary. The purpose of DQA is to come up with a better theory than researchers had constructed at the outset (Gilgun 2005, 2009). Indeed, the production of new, more useful hypotheses is the goal of DQA”.

DQA provides a greater level of structure for both the experienced and the novice qualitative researcher (see, for example, Pearse 2019; Gilgun 2005). According to Gilgun (2015, p. 4), “conceptual frameworks are the sources of hypotheses and sensitizing concepts”. Sensitizing concepts frame the exploratory research process and guide the researcher’s data collection and reporting efforts. Pearse (2019) discusses the usefulness of deductive thematic analysis and pattern matching for guiding DQA in business research. Gilgun (2005) discusses the usefulness of DQA for family research.

Given these rationales for DQA in exploratory research, the overarching purpose of this paper is to contribute to that growing corpus of work on deductive qualitative research. This paper is specifically aimed at guiding novice researchers and student scholars to the working hypothesis as a useful a priori framing tool. The applicability of the working hypothesis as a tool that provides more structure during the design and implementation phases of exploratory research is discussed in detail. Examples of research projects in public administration that use the working hypothesis as a framing tool for deductive exploratory research are provided.

In the next section, we introduce the three types of research purposes. Second, we examine the nature of the exploratory research purpose. Third, we provide a definition of the working hypothesis. Fourth, we explore the philosophical roots of methodology to see where exploratory research fits. Fifth, we connect the discussion to the dominant research approaches (quantitative, qualitative and mixed methods) to see where deductive exploratory research fits. Sixth, we examine the nature of theory and the role of the hypothesis in theory, contrasting formal hypotheses and working hypotheses. Seventh, we provide examples of student and scholarly work that illustrate how working hypotheses are developed and operationalized. Lastly, we synthesize the discussion with concluding remarks.

2 Three types of research purposes

The literature identifies three basic types of research purposes—explanation, description and exploration (Babbie 2007 ; Adler and Clark 2008 ; Strydom 2013 ; Shields and Whetsell 2017 ). Research purposes are similar to research questions; however, they focus on project goals or aims instead of questions.

Explanatory research answers the “why” question (Babbie 2007, pp. 89–90), by explaining “why things are the way they are”, and by looking “for causes and reasons” (Adler and Clark 2008, p. 14). Explanatory research is closely tied to hypothesis testing. Theory is tested using deductive reasoning, which goes from the general to the specific (Hyde 2000, p. 83). Hypotheses provide a frame for explanatory research, connecting the research purpose to other parts of the research process (variable construction, choice of data, statistical tests). They help provide alignment or coherence across stages in the research process and provide ways to critique the strengths and weaknesses of the study. For example, were the hypotheses grounded in the appropriate arguments and evidence in the literature? Are the concepts embedded in the hypotheses appropriately measured? Was the best statistical test used? When the analysis is complete (hypothesis is tested), the results generally answer the research question (the evidence supported or failed to support the hypothesis) (Shields and Rangarajan 2013).

Descriptive research addresses the “What” question and is not primarily concerned with causes (Strydom 2013; Shields and Tajalli 2006). It lies at the “midpoint of the knowledge continuum” (Grinnell 2001, p. 248) between exploration and explanation. Descriptive research is used in both quantitative and qualitative research. A field researcher might want to “have a more highly developed idea of social phenomena” (Strydom 2013, p. 154) and develop thick descriptions using inductive logic. In science, categorization and classification systems such as the periodic table of chemistry or the taxonomies of biology inform descriptive research. These baseline classification systems are a type of theorizing and allow researchers to answer questions like “what kind” of plants and animals inhabit a forest. The answer to this question would usually be displayed in graphs and frequency distributions. This is also the data presentation system used in the social sciences (Ritchie and Lewis 2003; Strydom 2013). For example, if a scholar asked, what are the needs of homeless people? A quantitative approach would include a survey that incorporated a “needs” classification system (preferably based on a literature review). The data would be displayed as frequency distributions or as charts. Description can also be guided by inductive reasoning, which draws “inferences from specific observable phenomena to general rules or knowledge expansion” (Worster 2013, p. 448). Theory and hypotheses are generated using inductive reasoning, which begins with data and the intention of making sense of it by theorizing. Inductive descriptive approaches would use a qualitative, naturalistic design (open-ended interview questions with the homeless population). The data could provide a thick description of the homeless context. For deductive descriptive research, categories serve a purpose similar to hypotheses for explanatory research. If developed with thought and a connection to the literature, categories can serve as a framework that informs measurement and links to data collection mechanisms and data analysis. Like hypotheses, they can provide horizontal coherence across the steps in the research process.

Table 1 demonstrates these connections for deductive, descriptive and explanatory research. The arrow at the top highlights the horizontal, across-the-research-process view we emphasize. This article makes the case that the working hypothesis can serve the same purpose as the hypothesis for deductive, explanatory research and categories for deductive descriptive research. The cells for exploratory research are filled in with question marks.

The remainder of this paper focuses on exploratory research and the answers to questions found in the table:

What is the philosophical underpinning of exploratory, deductive research?

What is the micro-conceptual framework for deductive exploratory research? [As is clear from the article title, we introduce the working hypothesis as the answer.]

How does the working hypothesis inform the methodologies and evidence collection of deductive exploratory research?

How does the working hypothesis inform data analysis of deductive exploratory research?

3 The nature of exploratory research purpose

Explorers enter the unknown to discover something new. The process can be fraught with struggle and surprises. Effective explorers creatively resolve unexpected problems. While we typically think of explorers as pioneers or mountain climbers, exploration is very much linked to the experience and intention of the explorer. Babies explore as they take their first steps. The exploratory purpose resonates with these insights. Exploratory research, like reconnaissance, is a type of inquiry that is in its preliminary or early stages (Babbie 2007). It is associated with discovery, creativity and serendipity (Stebbins 2001). But the person doing the discovering also defines the activity or claims the act of exploration. It “typically occurs when a researcher examines a new interest or when the subject of study itself is relatively new” (Babbie 2007, p. 88). Hence, exploration has an open character that emphasizes “flexibility, pragmatism, and the particular, biographically specific interests of an investigator” (Maanen et al. 2001, p. v). These three purposes form a type of hierarchy. An area of inquiry is initially explored. This early work lays the ground for description, which in turn becomes the basis for explanation. Quantitative, explanatory studies dominate contemporary high impact journals (Twining et al. 2017).

Stebbins ( 2001 ) makes the point that exploration is often seen as something like a poor stepsister to confirmatory or hypothesis testing research. He has a problem with this because we live in a changing world and what is settled today will very likely be unsettled in the near future and in need of exploration. Further, exploratory research “generates initial insights into the nature of an issue and develops questions to be investigated by more extensive studies” (Marlow 2005 , p. 334). Exploration is widely applicable because all research topics were once “new.” Further, all research topics have the possibility of “innovation” or ongoing “newness”. Exploratory research may be appropriate to establish whether a phenomenon exists (Strydom 2013 ). The point here, of course, is that the exploratory purpose is far from trivial.

Stebbins’ Exploratory Research in the Social Sciences (2001) is the only book devoted to the nature of exploratory research as a form of social science inquiry. He views it as a “broad-ranging, purposive, systematic prearranged undertaking designed to maximize the discovery of generalizations leading to description and understanding of an area of social or psychological life” (p. 3). It is science conducted in a way distinct from confirmation. According to Stebbins (2001, p. 6) the goal is the discovery of potential generalizations, which can become future hypotheses and eventually theories that emerge from the data. He focuses on inductive logic (which stimulates creativity) and qualitative methods. He does not want exploratory research limited to the restrictive formulas and models he finds in confirmatory research. He links exploratory research to Glaser and Strauss’s (1967) flexible, immersive Grounded Theory. Strydom’s (2013) analysis of contemporary social work research methods books echoes Stebbins’ (2001) position. Stebbins’ book is an important contribution, but it limits the potential scope of this flexible and versatile research purpose. If we accepted his conclusion, we would delete the “Exploratory” row from Table 1.

Note that explanatory research can yield new questions, which lead to exploration. Inquiry is a process where inductive and deductive activities can occur simultaneously or in a back and forth manner, particularly as the literature is reviewed and the research design emerges. Footnote 1 Strict typologies such as explanation/description/exploration or inductive/deductive can obscure these larger connections and processes. We draw insight from Dewey’s (1896) vision of inquiry as depicted in his seminal “Reflex Arc” article. He notes that “stimulus” and “response”, like other dualities (inductive/deductive), exist within a larger unifying system. Yet the terms have value. “We need not abandon terms like stimulus and response, so long as we remember that they are attached to events based upon their function in a wider dynamic context, one that includes interests and aims” (Hildebrand 2008, p. 16). So too, in methodology, typologies such as deductive/inductive capture useful distinctions with practical value and are widely used in the methodology literature.

We argue that there is a role for deductive, and even confirmatory, logic in exploratory research. We maintain that all types of research logics and methods should be in the toolbox of exploratory research. First, as stated above, it makes no sense on its face to identify an extremely flexible purpose that is idiosyncratic to the researcher and then restrict its use to qualitative, inductive, non-confirmatory methods. Second, Stebbins’ (2001) work focused on social science, ignoring the policy sciences. Exploratory research can be ideal for immediate practical problems faced by policy makers, who could find a framework of some kind useful. Third, deductive, exploratory research is more intentionally connected to previous research: some kind of initial framing device is located or designed using the literature. This may be very important for new scholars who are developing research skills and exploring their field and profession. Stebbins’ insights are most pertinent for experienced scholars. Fourth, frameworks and deductive logic are useful for comparative work because some degree of consistency across cases is built into the design.

As we have seen, the hypotheses of explanatory research and the categories of descriptive research are the dominant frames of social science and policy science. We certainly concur that neither of these frames makes a lot of sense for exploratory research; they would tend to tie it down. We see the problem as a missing framework, or missing way to frame deductive, exploratory research, in the methodology literature. Inductive exploratory research would not work for many case studies that are trying to use evidence to make an argument. What exploratory deductive case studies need is a framework that incorporates flexibility. This is even more true for comparative case studies. A framework of this sort could be usefully applied to policy research (Casula 2020a), particularly evaluative policy research, and to applied research generally. We propose the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It can be used as an evaluative criterion, particularly for process evaluation, and is useful for student research because students can develop theorizing skills using the literature.

Table  1 included a column specifying the philosophical basis for each research purpose. Shifting gears to the philosophical underpinning of methodology provides useful additional context for examination of deductive, exploratory research.

4 What is a working hypothesis?

The working hypothesis is first and foremost a hypothesis: a statement of expectation that is tested in action. The term “working” suggests that these hypotheses are provisional and subject to change, and that the possibility of finding contradictory evidence is real. In addition, a “working” hypothesis is active; it is a tool in an ongoing process of inquiry. If one begins with a research question, the working hypothesis could be viewed as a statement or group of statements that answer the question. It “works” to move purposeful inquiry forward. “Working” also implies some sort of community; mostly we work together in relationships to achieve some goal.

Working hypothesis is a term found in earlier literature. Indeed, both pioneering pragmatists, John Dewey and George Herbert Mead, used the term working hypothesis in important nineteenth-century works. For both Dewey and Mead, the notion of a working hypothesis has a self-evident quality and is applied in a big picture context. Footnote 2

Most notably, Dewey (1896), in one of his most pivotal early works (“Reflex Arc”), used “working hypothesis” to describe a key concept in psychology: “The idea of the reflex arc has upon the whole come nearer to meeting this demand for a general working hypothesis than any other single concept (italics added)” (p. 357). The notion was developed more fully 42 years later in Logic: The Theory of Inquiry, where Dewey developed a notion of the working hypothesis that operated on a smaller scale. He defines working hypotheses as a “provisional, working means of advancing investigation” (Dewey 1938, p. 142). Dewey’s definition suggests that working hypotheses would be useful toward the beginning of a research project (e.g., exploratory research).

Mead (1899) used working hypothesis in the title of an American Journal of Sociology article, “The Working Hypothesis and Social Reform” (italics added). He notes that a scientist’s foresight goes beyond testing a hypothesis.

Given its success, he may restate his world from this standpoint and get the basis for further investigation that again always takes the form of a problem. The solution of this problem is found over again in the possibility of fitting his hypothetical proposition into the whole within which it arises. And he must recognize that this statement is only a working hypothesis at the best, i.e., he knows that further investigation will show that the former statement of his world is only provisionally true, and must be false from the standpoint of a larger knowledge, as every partial truth is necessarily false over against the fuller knowledge which he will gain later (Mead 1899 , p. 370).

Cronbach (1975) developed a notion of the working hypothesis consistent with inductive reasoning, but for him the working hypothesis is a product or result of naturalistic inquiry. He makes the case that naturalistic inquiry is highly context dependent, and that therefore the results, or seeming generalizations, that come from a study should be viewed as “working hypotheses”, which “are tentative both for the situation in which they [were] first uncovered and for other situations” (as cited in Gobo 2008, p. 196).

A quick Google Scholar search using the term “working hypothesis” shows that it is widely used in twentieth and twenty-first century science, particularly in titles. In these articles, the working hypothesis is treated as a conceptual tool that furthers investigation in its early or transitioning phases. We could find no explicit links to exploratory research; the exploratory nature of the problem is expressed implicitly. Terms such as “speculative” (Habib 2000, p. 2391) or “rapidly evolving field” (Prater et al. 2007, p. 1141) capture the exploratory nature of a study. The authors might describe how a topic is “new” or reference “change”: “As a working hypothesis, the picture is only new, however, in its interpretation” (Milnes 1974, p. 1731). In a study of soil genesis, Arnold (1965, p. 718) notes “Sequential models, formulated as working hypotheses, are subject to further investigation and change”. Any 2020 article dealing with COVID-19 and respiratory distress would be preliminary almost by definition (Ciceri et al. 2020).

5 Philosophical roots of methodology

According to Kaplan (1964, p. 23) “the aim of methodology is to help us understand, in the broadest sense not the products of scientific inquiry but the process itself”. Methods contain philosophical principles that distinguish them from other “human enterprises and interests” (Kaplan 1964, p. 23). Contemporary research methodology is generally classified as quantitative, qualitative and mixed methods. Leading scholars of methodology have associated each with a philosophical underpinning: positivism (or post-positivism), interpretivism (or constructivism) and pragmatism, respectively (Guba 1987; Guba and Lincoln 1981; Schrag 1992; Stebbins 2001; Mackenzie and Knipe 2006; Atieno 2009; Levers 2013; Morgan 2007; O’Connor et al. 2008; Johnson and Onwuegbuzie 2004; Twining et al. 2017). This section summarizes how the literature describes these philosophies and how they inform contemporary methodology and its literature.

Positivism and its more contemporary version, post-positivism, maintain an objectivist ontology, or assume an objective reality which can be uncovered (Levers 2013; Twining et al. 2017). Footnote 3 Time and context free generalizations are possible, and “real causes of social scientific outcomes can be determined reliably and validly” (Johnson and Onwuegbuzie 2004, p. 14). Further, “explanation of the social world is possible through a logical reduction of social phenomena to physical terms”. Positivism uses an empiricist epistemology, which “implies testability against observation, experimentation, or comparison” (Whetsell and Shields 2015, pp. 420–421). Correspondence theory, a tenet of positivism, asserts that “to each concept there corresponds a set of operations involved in its scientific use” (Kaplan 1964, p. 40).

The interpretivist, constructivist or post-modernist approach is a reaction to positivism. It uses a relativist ontology and a subjectivist epistemology (Levers 2013). In this world of multiple realities, context free generalities are impossible, as is the separation of facts and values. Causality, explanation, prediction and experimentation depend on assumptions about the correspondence between concepts and reality, which in the absence of an objective reality is impossible. Empirical research can yield “contextualized emergent understanding rather than the creation of testable theoretical structures” (O’Connor et al. 2008, p. 30). The distinctively different world views of positivist/post-positivist and interpretivist philosophy are at the core of many controversies in the methodology, social and policy science literature (Casula 2020b).

With its focus on dissolving dualisms, pragmatism steps outside the objective/subjective debate. Instead, it asks, “what difference would it make to us if the statement were true” (Kaplan 1964 , p. 42). Its epistemology is connected to purposeful inquiry. Pragmatism has a “transformative, experimental notion of inquiry” anchored in pluralism and a focus on constructing conceptual and practical tools to resolve “problematic situations” (Shields 1998 ; Shields and Rangarajan 2013 ). Exploration and working hypotheses are most comfortably situated within the pragmatic philosophical perspective.

6 Research approaches

Empirical investigation relies on three types of methodology—quantitative, qualitative and mixed methods.

6.1 Quantitative methods

Quantitative methods use deductive logic and formal hypotheses or models to explain, predict, and eventually establish causation (Hyde 2000; Kaplan 1964; Johnson and Onwuegbuzie 2004; Morgan 2007). Footnote 4 The correspondence between the conceptual and empirical worlds makes measurement possible. Measurement assigns numbers to objects, events or situations; it allows for standardization and subtle discrimination, and it lets researchers draw on the power of mathematics and statistics (Kaplan 1964, pp. 172–174). Using the power of inferential statistics, quantitative research employs research designs which eliminate competing hypotheses. It is high in external validity, or the ability to generalize to the whole. The research results are relatively independent of the researcher (Johnson and Onwuegbuzie 2004).

Quantitative methods depend on the quality of measurement, a priori conceptualization, and adherence to the underlying assumptions of inferential statistics. Critics charge that hypotheses and frameworks needlessly constrain inquiry (Johnson and Onwuegbuzie 2004, p. 19). Hypothesis testing quantitative methods support the explanatory purpose.

6.2 Qualitative methods

Qualitative researchers who embrace the post-modern, interpretivist view Footnote 5 question everything about the nature of quantitative methods (Willis et al. 2007). Rejecting the possibility of objectivity, the correspondence between ideas and measures, and the constraints of a priori theorizing, they focus on forming “unique impressions and understandings of events rather than to generalize the findings” (Kolb 2012, p. 85). Characteristics of traditional qualitative research include “induction, discovery, exploration, theory/hypothesis generation and the researcher as the primary ‘instrument’ of data collection” (Johnson and Onwuegbuzie 2004, p. 18). The data of qualitative methods are generated via interviews, direct observation, focus groups and analysis of written records or artifacts.

Qualitative methods provide for understanding and “description of people’s personal experiences of phenomena”. They enable descriptions of detailed “phenomena as they are situated and embedded in local contexts”. Researchers use naturalistic settings to “study dynamic processes” and explore how participants interpret experiences. Qualitative methods have an inherent flexibility, allowing researchers to respond to changes in the research setting. They are particularly good at narrowing in on the particular and, on the flip side, have limited external validity (Johnson and Onwuegbuzie 2004, p. 20). Instead of specifying a suitable sample size to draw conclusions, qualitative research uses the notion of saturation (Morse 1995).

Saturation is used in grounded theory, a widely used and respected interpretivist qualitative research method. Introduced by Glaser and Strauss (1967), this “grounded on observation” (Patten and Newhart 2000, p. 27) methodology focuses on “the creation of emergent understanding” (O’Connor et al. 2008, p. 30). It uses the constant comparative method, whereby researchers develop theory from data as they code and analyze at the same time. Data collection, coding and analysis, along with theoretical sampling, are systematically combined to generate theory (Kolb 2012, p. 83). The qualitative methods discussed here support exploratory research.

A close look at the two philosophies and their assumptions about quantitative and qualitative research suggests two contradictory world views. The literature has labeled these contradictory views the incompatibility theory, which sets up a quantitative versus qualitative tension similar to the seeming separation of art and science or facts and values (Smith 1983a, b; Guba 1987; Smith and Heshusius 1986; Howe 1988). The incompatibility theory does not make sense in practice. Yin (1981, 1992, 2011, 2017), a prominent case study scholar, showcases a deductive research methodology that crosses these boundaries, using both quantitative and qualitative evidence when appropriate.

6.3 Mixed methods

Turning the incompatibility theory on its head, mixed methods research “combines elements of qualitative and quantitative research approaches … for the broad purposes of breadth and depth of understanding and corroboration” (Johnson et al. 2007, p. 123). It does this by partnering with philosophical pragmatism. Footnote 6 Pragmatism is productive because “it offers an immediate and useful middle position philosophically and methodologically; it offers a practical and outcome-oriented method of inquiry that is based on action and leads, iteratively, to further action and the elimination of doubt; it offers a method for selecting methodological mixes that can help researchers better answer many of their research questions” (Johnson and Onwuegbuzie 2004, p. 17). What is theory for the pragmatist? “Any theoretical model is, for the pragmatist, nothing more than a framework through which problems are perceived and subsequently organized” (Hothersall 2019, p. 5).

Brendel (2009) constructed a simple framework to capture the core elements of pragmatism. Brendel’s four “p”s (practical, pluralism, participatory and provisional) help to show the relevance of pragmatism to mixed methods. Pragmatism is purposeful and concerned with practical consequences. The pluralism of pragmatism overcomes the quantitative/qualitative dualism: it allows for multiple perspectives (including positivism and interpretivism) and thus gets around the incompatibility problem. Inquiry should be participatory, or inclusive of the many views of participants; hence it is consistent with multiple realities and is also tied to the common concern of a problematic situation. Finally, all inquiry is provisional. This is compatible with experimental methods and hypothesis testing, and consistent with the back and forth of inductive and deductive reasoning. Mixed methods support exploratory research.

Advocates of mixed methods research note that it overcomes the weaknesses and employs the strengths of quantitative and qualitative methods. Quantitative methods provide precision; the pictures and narrative of qualitative techniques add meaning to the numbers. Quantitative analysis can provide a big picture and establish relationships, and its results have great generalizability. On the other hand, the “why” behind the explanation is often missing and can be filled in through in-depth interviews, making a deeper and more satisfying explanation possible. Mixed methods bring the benefits of triangulation, or multiple sources of evidence that converge to support a conclusion. They can entertain a “broader and more complete range of research questions” (Johnson and Onwuegbuzie 2004, p. 21) and can move between inductive and deductive methods. Case studies use multiple forms of evidence and are a natural context for mixed methods.

One thing that seems to be missing from the mixed methods literature and from explicit designs is a place for conceptual frameworks. For example, Heyvaert et al. (2013) examined nine mixed methods studies and found an explicit framework in only two (transformative and pragmatic) (p. 663).

7 Theory and hypotheses: where is and what is theory?

Theory is key to deductive research; in essence, empirical deductive methods test theory. Hence, we shift our attention to theory and the role and functions of hypotheses in theory. Oppenheim and Putnam (1958) note that “by a ‘theory’ (in the widest sense) we mean any hypothesis, generalization or law (whether deterministic or statistical) or any conjunction of these” (p. 25). Van Evera (1997) uses a similar and more complex definition: “theories are general statements that describe and explain the causes or effects of classes of phenomena. They are composed of causal laws or hypotheses, explanations, and antecedent conditions” (p. 8). Sutton and Staw (1995, p. 376), in the highly cited article “What Theory is Not”, assert that hypotheses should contain logical arguments for “why” the hypothesis is expected; hypotheses need an underlying causal argument before they can be considered theory. The point of this discussion is not to define theory but to establish the importance of hypotheses in theory.

Explanatory research is implicitly relational (A explains B). The hypotheses of explanatory research lay bare these relationships, and popular definitions of hypotheses capture this relational component. For example, the Cambridge Dictionary defines a hypothesis as “an idea or explanation for something that is based on known facts but has not yet been proven”. Vocabulary.com’s definition emphasizes explanation: a hypothesis is “an idea or explanation that you then test through study and experimentation”. According to Wikipedia a hypothesis is “a proposed explanation for a phenomenon”. Other definitions remove the relational or explanatory reference. The Oxford English Dictionary defines a hypothesis as a “supposition or conjecture put forth to account for known facts”. Science Buddies defines a hypothesis as a “tentative, testable answer to a scientific question”. According to the Longman Dictionary a hypothesis is “an idea that can be tested to see if it is true or not”. The Urban Dictionary states a hypothesis is “a prediction or educated-guess based on current evidence that is yet to be tested”. We argue that the hypotheses of exploratory research, working hypotheses, are not bound by relational expectations. It is this flexibility that distinguishes the working hypothesis.

Sutton and Staw (1995) maintain that hypotheses “serve as crucial bridges between theory and data, making explicit how the variables and relationships that follow from a logical argument will be operationalized” (p. 376, italics added). Writing in the highly rated journal Computers and Education, Twining et al. (2017) created guidelines for qualitative research as a way to improve soundness and rigor. They identified the lack of alignment between theoretical stance and methodology as a common problem in qualitative research, along with a lack of alignment between methodology, design, instruments of data collection, and analysis. The authors created a guidance summary, which emphasized the need to enhance coherence throughout the elements of research design (Twining et al. 2017, p. 12). Perhaps the bridging function of the hypothesis mentioned by Sutton and Staw (1995) is obscured and often missing in qualitative methods. Working hypotheses can be a tool to overcome this problem.

For reasons similar to those used by mixed methods scholars, we look to classical pragmatism and the ideas of John Dewey to inform our discussion of theory and working hypotheses. Dewey (1938) treats theory as a tool of empirical inquiry and uses a map metaphor (p. 136). Theory is like a map that helps a traveler navigate the terrain, and it should be judged by its usefulness. “There is no expectation that a map is a true representation of reality. Rather, it is a representation that allows a traveler to reach a destination (achieve a purpose). Hence, theories should be judged by how well they help resolve the problem or achieve a purpose” (Shields and Rangarajan 2013, p. 23). Note that we explicitly link theory to the research purpose. Theory is never treated as an unimpeachable Truth; rather, it is a helpful tool that organizes inquiry, connecting data and problem. Dewey’s approach also expands the definition of theory to include abstractions (categories) outside of causation and explanation. The micro-conceptual frameworks Footnote 7 introduced in Table 1 are a type of theory. We define conceptual frameworks as the “way the ideas are organized to achieve the project’s purpose” (Shields and Rangarajan 2013, p. 24). Micro-conceptual frameworks do this at a level of analysis very close to the data; they can direct operationalization and ways to assess measurement or evidence at the level of the individual research study. Again, the research purpose plays a pivotal role in the functioning of theory (Shields and Tajalli 2006).

8 Working hypothesis: methods and data analysis

We move on to answer the remaining questions in Table 1. We have established that exploratory research is extremely flexible and idiosyncratic. Given this, we proceed with a few examples and draw out lessons for developing an exploratory purpose, building a framework and, from there, identifying data collection techniques and the logics of hypothesis testing and analysis. Early on we noted the value of the working hypothesis framework for student empirical research and applied research. The next section uses a master’s student’s work to illustrate the usefulness of working hypotheses as a way to incorporate the literature and structure inquiry. This graduate student was also a mature professional with a research question that emerged from his job; his project is thus an example of applied research.

Master of Public Administration student Swift (2010) worked for a public agency and was responsible for that agency’s sexual harassment training. The agency needed to evaluate its training but had never done so before. He also had never attempted a significant empirical research project. Both of these conditions suggest exploration as a possible approach. He was interested in evaluating the training program, and hence the project had a normative sense. Given his job, he already knew a lot about the problem of sexual harassment and sexual harassment training. What he did not know much about was doing empirical research, reviewing the literature or building a framework to evaluate the training (working hypotheses). He wanted a framework that was flexible and comprehensive. In his research, he discovered Lundvall’s (2006) knowledge taxonomy, summarized in four simple ways of knowing (know-what, know-how, know-why, know-who). He asked whether his agency’s training provided participants with these kinds of knowledge. Lundvall’s categories of knowing became the basis of his working hypotheses. The taxonomy is well suited for working hypotheses because it is simple and easy to understand intuitively; it can also be tailored to the unique problematic situation of the researcher. Swift (2010, pp. 38–39) developed four basic working hypotheses:

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH3: Capital Metro provides adequate know-why knowledge in its sexual harassment training.

WH4: Capital Metro provides adequate know-who knowledge in its sexual harassment training.

From here he needed to determine what would constitute the different kinds of knowledge. For example, what constitutes “know-what” knowledge for sexual harassment training? This is where his knowledge and experience working in the field, as well as the literature, come into play. According to Lundvall et al. (1988, p. 12) “know-what” knowledge is about facts and raw information. Swift (2010) learned through the literature that laws and rules were the basis for the mandated sexual harassment training. He read about specific anti-discrimination laws and the subsequent rules and regulations derived from them. These laws and rules used specific definitions and were enacted within a historical context. Laws, rules, definitions and history became the “facts” of know-what knowledge for his working hypothesis. To make this clear, he created sub-hypotheses that explicitly took these into account. See how Swift (2010, p. 38) constructed the sub-hypotheses below. Each sub-hypothesis was defended using material from the literature (Swift 2010, pp. 22–26). The sub-hypotheses can also be easily tied to evidence; for example, he could document that the training covered anti-discrimination laws.

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH1a: The sexual harassment training includes information on anti-discrimination laws (Title VII).

WH1b: The sexual harassment training includes information on key definitions.

WH1c: The sexual harassment training includes information on Capital Metro’s Equal Employment Opportunity and Harassment policy.

WH1d: Capital Metro provides training on sexual harassment history.

Know-how knowledge refers to the ability to do something and involves skills (Lundvall and Johnson 1994, p. 12). It is a kind of expertise in action. The literature and his experience allowed Swift to identify skills, such as how to file a claim or how to document incidents of sexual harassment, as important “know-how” knowledge that should be included in sexual harassment training. Again, these were depicted as sub-hypotheses.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH2a: Training is provided on how to file and report a claim of harassment.

WH2b: Training is provided on how to document sexual harassment situations.

WH2c: Training is provided on how to investigate sexual harassment complaints.

WH2d: Training is provided on how to follow additional harassment policy procedures and protocols.

Note that the working hypotheses do not specify a relationship but rather are simple declarative sentences. If “know-how” knowledge was found in the sexual harassment training, he would be able to find evidence that participants learned about how to file a claim (WH2a). The working hypothesis provides the bridge between theory and data that Sutton and Staw (1995) found missing in exploratory work. The sub-hypotheses are designed to be refined enough that the researchers would know what to look for and tailor their hunt for evidence. Figure  1 captures the generic sub-hypothesis design.

Figure 1: A common structure used in the development of working hypotheses

When expected evidence is linked to the sub-hypotheses, data, framework and research purpose are aligned. This can be laid out in a planning document that operationalizes the data collection in something akin to an architect’s blueprint. This is where the scholar explicitly develops the alignment between purpose, framework and method (Shields and Rangarajan 2013 ; Shields et al. 2019b ).

Table 2 operationalizes Swift's working hypotheses (and sub-hypotheses). The table provides clues as to what kind of evidence is needed to determine whether the hypotheses are supported. In this case, Swift used interviews with participants and trainers as well as a review of program documents. Column one repeats the sub-hypothesis, column two specifies the data collection method (here, interviews with participants/managers and review of program documents) and column three specifies the unique questions that focus the investigation. For example, the interview questions are provided. In the less precise world of qualitative data, evidence supporting a hypothesis can have varying degrees of strength. This too can be specified.
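The alignment between sub-hypotheses, data sources, and focusing questions can be sketched as a simple data structure. The sketch below is a hypothetical illustration only: the wording of the sub-hypotheses and questions is abbreviated from the text, not taken from Swift's actual planning table, and the helper `evidence_checklist` is an invented name.

```python
# Hypothetical operationalization table: each row links one sub-hypothesis
# to the data sources and questions that will be used to look for evidence.
plan = [
    {
        "sub_hypothesis": "WH1a: Training covers anti-discrimination laws (Title VII).",
        "data_sources": ["participant interviews", "program documents"],
        "questions": ["Did the training discuss Title VII?"],
    },
    {
        "sub_hypothesis": "WH2a: Training covers how to file and report a claim.",
        "data_sources": ["participant interviews", "trainer interviews"],
        "questions": ["Were you shown how to file a harassment claim?"],
    },
]

def evidence_checklist(plan):
    """Flatten the plan into (sub-hypothesis, source, question) rows,
    mirroring the three-column layout of an alignment table."""
    return [
        (row["sub_hypothesis"], src, q)
        for row in plan
        for src in row["data_sources"]
        for q in row["questions"]
    ]
```

Laying the plan out this way makes the alignment auditable: every sub-hypothesis must name at least one data source and at least one question, so gaps between framework and fieldwork surface before data collection begins.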

For Swift's example, neither the statistics of explanatory research nor the open-ended questions of interpretivist, inductive exploratory research is used. The deductive logic of inquiry here is somewhat intuitive and similar to that of a detective (Ulriksen and Dadalauri 2016). It is also a logic used in international law (Worster 2013). It should be noted that the working hypothesis and the corresponding data collection protocol do not stop inquiry and fieldwork outside the framework. The interviews could reveal an unexpected problem with Swift's training program. The framework provides a loose and perhaps useful way to identify and make sense of data that do not fit expectations. Researchers using working hypotheses should be sensitive to interesting findings that fall outside their framework. These could be used in future studies, to refine theory, or, in this case, to provide suggestions for improving sexual harassment training. The sensitizing concepts mentioned by Gilgun (2015) are free to emerge and should be encouraged.

Something akin to working hypotheses is hidden in plain sight in the professional literature. Take, for example, Kerry Crawford's (2017) book Wartime Sexual Violence. Here she explores basic changes in the way "advocates and decision makers think about and discuss conflict-related sexual violence" (p. 2). She focused on a subsequent shift from silence to action. The shift occurred as wartime sexual violence was reframed as a "weapon of war". The new frame captured the attention of powerful members of the security community who demanded, initiated, and paid for institutional and policy change. Crawford (2017) examines the legacy of this key reframing. She develops a six-stage model of potential international responses to incidents of wartime sexual violence. This model is fairly easily converted to working hypotheses and sub-hypotheses. Table 3 shows her model as a set of (non-relational) working hypotheses. She applied this model to gather evidence across cases (e.g., the US response to sexual violence in the Democratic Republic of the Congo) to show the official level of response to sexual violence. Each case study chapter examined evidence to establish whether the case fit the pattern formalized in the working hypotheses. The framework proved very useful in her comparative context, allowing for consistent analysis across cases. Her analysis of the three cases went well beyond the material covered in the framework. She freely incorporated useful inductively informed data in her analysis and discussion. The framework, however, allowed for alignment within and across cases.

9 Conclusion

In this article we argued that exploratory research is also well suited to deductive approaches. By examining the landscape of deductive, exploratory research, we proposed the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It has the potential to guide and bring coherence across the steps of the research process. After presenting the nature of the exploratory research purpose and how it differs from two other types of research purpose identified in the literature (explanation and description), we focused on answering four questions in order to show the link between micro-conceptual frameworks and research purposes in a deductive setting. The answers to the four questions are summarized in Table 4.

Firstly, we argued that the working hypothesis and exploration are situated within the pragmatic philosophical perspective. Pragmatism allows for pluralism in theory and data collection techniques, which is compatible with the flexible exploratory purpose. Secondly, after introducing and discussing the four core elements of pragmatism (practical, pluralism, participatory, and provisional), we explained how the working hypothesis informs the methodologies and evidence collection of deductive exploratory research through a presentation of the benefits of triangulation provided by mixed methods research. Thirdly, as is clear from the article title, we introduced the working hypothesis as the micro-conceptual framework for deductive exploratory research. We argued that the hypotheses of exploratory research, which we call working hypotheses, are distinguished from those of explanatory research, since they do not require a relational component and are not bound by relational expectations. A working hypothesis is extremely flexible and idiosyncratic; depending on the research question, it can be viewed as a statement or group of statements of expectations tested in action. Using examples, we concluded by explaining how working hypotheses inform data collection and analysis for deductive exploratory research.

Crawford's (2017) example showed how the structure of working hypotheses provides a framework for comparative case studies. Her criteria for analysis were specified ahead of time and used to frame each case. Thus, her comparisons were systematized across cases. Further, the framework ensured a connection between the data analysis and the literature review. Yet the flexible, working nature of the hypotheses allowed unexpected findings to be discovered.

The evidence required to test working hypotheses is directed by the research purpose and potentially includes both quantitative and qualitative sources. Thus, all types of evidence, including quantitative methods, should be part of the toolbox of deductive, exploratory research. We showed how the working hypothesis, as a flexible exploratory framework, resolves many seeming dualisms pervasive in the research methods literature.

To conclude, this article has provided an in-depth examination of working hypotheses, taking into account philosophical questions and the larger formal research methods literature. By discussing working hypotheses as applied theoretical tools, we demonstrated that they fill a unique niche in the methods literature, since they provide a way to enhance alignment in deductive, exploratory studies.

In practice, quantitative scholars often run multivariate analyses on databases to find out whether there are correlations. Hypotheses are tested because the statistical software does the math, not because the scholar has an a priori, relational expectation (hypothesis) well grounded in the literature and supported by cogent arguments. Hunches are just fine. This is clearly an inductive approach to research and part of the larger process of inquiry.

In 1958, the philosophers of science Oppenheim and Putnam used the notion of the working hypothesis in their title "Unity of Science as a Working Hypothesis." They, too, use it as a big-picture concept: the expectation that "unity of science in this sense can be fully realized constitutes an over-arching meta-scientific hypothesis, which enables one to see a unity in scientific activities that might otherwise appear disconnected or unrelated" (p. 4).

It should be noted that the positivism described in the research methods literature does not resemble philosophical positivism as developed by philosophers like Comte (Whetsell and Shields 2015). In the research methods literature, "positivism means different things to different people….The term has long been emptied of any precise denotation …and is sometimes affixed to positions actually opposed to those espoused by the philosophers from whom the name derives" (Schrag 1992, p. 5). For purposes of this paper, we are capturing a few essential ways positivism is presented in the research methods literature. This helps us to position the "working hypothesis" and "exploratory" research within the larger context of contemporary research methods. We are not arguing that the positivism presented here is anything more. The incompatibility theory discussed later is an outgrowth of this research methods literature…

It should be noted that quantitative researchers often use inductive reasoning. They do this with existing data sets when they run correlations or regression analysis as a way to find relationships. They ask, what does the data tell us?

Qualitative researchers are also associated with phenomenology, hermeneutics, naturalistic inquiry and constructivism.

See Feilzer ( 2010 ), Howe ( 1988 ), Johnson and Onwuegbuzie ( 2004 ), Morgan ( 2007 ), Onwuegbuzie and Leech ( 2005 ), Biddle and Schafft ( 2015 ).

The term conceptual framework is applicable in a broad context (see Ravitch and Riggan 2012 ). The micro-conceptual framework narrows to the specific study and informs data collection (Shields and Rangarajan 2013 ; Shields et al. 2019a ) .

Adler, E., Clark, R.: How It’s Done: An Invitation to Social Research, 3rd edn. Thompson-Wadsworth, Belmont (2008)


Arnold, R.W.: Multiple working hypothesis in soil genesis. Soil Sci. Soc. Am. J. 29 (6), 717–724 (1965)


Atieno, O.: An analysis of the strengths and limitation of qualitative and quantitative research paradigms. Probl. Educ. 21st Century 13 , 13–18 (2009)

Babbie, E.: The Practice of Social Research, 11th edn. Thompson-Wadsworth, Belmont (2007)

Biddle, C., Schafft, K.A.: Axiology and anomaly in the practice of mixed methods work: pragmatism, valuation, and the transformative paradigm. J. Mixed Methods Res. 9 (4), 320–334 (2015)

Brendel, D.H.: Healing Psychiatry: Bridging the Science/Humanism Divide. MIT Press, Cambridge (2009)

Bryman, A.: Qualitative research on leadership: a critical but appreciative review. Leadersh. Q. 15 (6), 729–769 (2004)

Casula, M.: Under which conditions is cohesion policy effective: proposing a Hirschmanian approach to EU structural funds. Reg. Fed. Stud. https://doi.org/10.1080/13597566.2020.1713110 (2020a)

Casula, M.: Economic Growth and Cohesion Policy Implementation in Italy and Spain. Palgrave Macmillan, Cham (2020b)

Ciceri, F., et al.: Microvascular COVID-19 lung vessels obstructive thromboinflammatory syndrome (MicroCLOTS): an atypical acute respiratory distress syndrome working hypothesis. Crit. Care Resusc. 15 , 1–3 (2020)

Crawford, K.F.: Wartime sexual violence: From silence to condemnation of a weapon of war. Georgetown University Press (2017)

Cronbach, L.: Beyond the two disciplines of scientific psychology. Am. Psychol. 30, 116–127 (1975)

Dewey, J.: The reflex arc concept in psychology. Psychol. Rev. 3 (4), 357 (1896)

Dewey, J.: Logic: The Theory of Inquiry. Henry Holt & Co, New York (1938)

Feilzer, Y.: Doing mixed methods research pragmatically: implications for the rediscovery of pragmatism as a research paradigm. J. Mixed Methods Res. 4 (1), 6–16 (2010)

Gilgun, J.F.: Qualitative research and family psychology. J. Fam. Psychol. 19 (1), 40–50 (2005)

Gilgun, J.F.: Methods for enhancing theory and knowledge about problems, policies, and practice. In: Briar, K., Orme, J., Ruckdeschel, R., Shaw, I. (eds.) The Sage Handbook of Social Work Research, pp. 281–297. Sage, Thousand Oaks (2009)

Gilgun, J.F.: Deductive Qualitative Analysis as Middle Ground: Theory-Guided Qualitative Research. Amazon Digital Services LLC, Seattle (2015)

Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine, Chicago (1967)

Gobo, G.: Re-Conceptualizing Generalization: Old Issues in a New Frame. In: Alasuutari, P., Bickman, L., Brannen, J. (eds.) The Sage Handbook of Social Research Methods, pp. 193–213. Sage, Los Angeles (2008)


Grinnell, R.M.: Social Work Research and Evaluation: Quantitative and Qualitative Approaches. F.E. Peacock Publishers, New York (2001)

Guba, E.G.: What have we learned about naturalistic evaluation? Eval. Pract. 8 (1), 23–43 (1987)

Guba, E., Lincoln, Y.: Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. Jossey-Bass Publishers, San Francisco (1981)

Habib, M.: The neurological basis of developmental dyslexia: an overview and working hypothesis. Brain 123 (12), 2373–2399 (2000)

Heyvaert, M., Maes, B., Onghena, P.: Mixed methods research synthesis: definition, framework, and potential. Qual. Quant. 47 (2), 659–676 (2013)

Hildebrand, D.: Dewey: A Beginners Guide. Oneworld Oxford, Oxford (2008)

Howe, K.R.: Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Edu. Res. 17 (8), 10–16 (1988)

Hothersall, S.J.: Epistemology and social work: enhancing the integration of theory, practice and research through philosophical pragmatism. Eur. J. Social Work 22 (5), 860–870 (2019)

Hyde, K.F.: Recognising deductive processes in qualitative research. Qual. Market Res. Int. J. 3 (2), 82–90 (2000)

Johnson, R.B., Onwuegbuzie, A.J.: Mixed methods research: a research paradigm whose time has come. Educ. Res. 33 (7), 14–26 (2004)

Johnson, R.B., Onwuegbuzie, A.J., Turner, L.A.: Toward a definition of mixed methods research. J. Mixed Methods Res. 1 (2), 112–133 (2007)

Kaplan, A.: The Conduct of Inquiry. Chandler, Scranton (1964)

Kolb, S.M.: Grounded theory and the constant comparative method: valid research strategies for educators. J. Emerg. Trends Educ. Res. Policy Stud. 3 (1), 83–86 (2012)

Levers, M.J.D.: Philosophical paradigms, grounded theory, and perspectives on emergence. Sage Open 3 (4), 2158244013517243 (2013)

Lundvall, B.A.: Knowledge management in the learning economy. In: Danish Research Unit for Industrial Dynamics Working Paper Working Paper, vol. 6, pp. 3–5 (2006)

Lundvall, B.-Å., Johnson, B.: Knowledge management in the learning economy. J. Ind. Stud. 1 (2), 23–42 (1994)

Lundvall, B.-Å., Jenson, M.B., Johnson, B., Lorenz, E.: Forms of Knowledge and Modes of Innovation—From User-Producer Interaction to the National System of Innovation. In: Dosi, G., et al. (eds.) Technical Change and Economic Theory. Pinter Publishers, London (1988)

Van Maanen, J., Manning, P., Miller, M.: Series editors' introduction. In: Stebbins, R. (ed.) Exploratory Research in the Social Sciences, pp. v–vi. Sage, Thousand Oaks (2001)

Mackenzie, N., Knipe, S.: Research dilemmas: paradigms, methods and methodology. Issues Educ. Res. 16 (2), 193–205 (2006)

Marlow, C.R.: Research Methods for Generalist Social Work. Thomson Brooks/Cole, New York (2005)

Mead, G.H.: The working hypothesis in social reform. Am. J. Sociol. 5 (3), 367–371 (1899)

Milnes, A.G.: Structure of the Pennine Zone (Central Alps): a new working hypothesis. Geol. Soc. Am. Bull. 85 (11), 1727–1732 (1974)

Morgan, D.L.: Paradigms lost and pragmatism regained: methodological implications of combining qualitative and quantitative methods. J. Mixed Methods Res. 1 (1), 48–76 (2007)

Morse, J.: The significance of saturation. Qual. Health Res. 5 (2), 147–149 (1995)

O’Connor, M.K., Netting, F.E., Thomas, M.L.: Grounded theory: managing the challenge for those facing institutional review board oversight. Qual. Inq. 14 (1), 28–45 (2008)

Onwuegbuzie, A.J., Leech, N.L.: On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. Int. J. Soc. Res. Methodol. 8 (5), 375–387 (2005)

Oppenheim, P., Putnam, H.: Unity of science as a working hypothesis. In: Minnesota Studies in the Philosophy of Science, vol. II, pp. 3–36 (1958)

Patten, M.L., Newhart, M.: Understanding Research Methods: An Overview of the Essentials, 2nd edn. Routledge, New York (2000)

Pearse, N.: An illustration of deductive analysis in qualitative research. In: European Conference on Research Methodology for Business and Management Studies, pp. 264–VII. Academic Conferences International Limited (2019)

Prater, D.N., Case, J., Ingram, D.A., Yoder, M.C.: Working hypothesis to redefine endothelial progenitor cells. Leukemia 21 (6), 1141–1149 (2007)

Ravitch, B., Riggan, M.: Reason and Rigor: How Conceptual Frameworks Guide Research. Sage, Beverley Hills (2012)

Reiter, B.: The epistemology and methodology of exploratory social science research: Crossing Popper with Marcuse. In: Government and International Affairs Faculty Publications. Paper 99. http://scholarcommons.usf.edu/gia_facpub/99 (2013)

Ritchie, J., Lewis, J.: Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage, London (2003)

Schrag, F.: In defense of positivist research paradigms. Educ. Res. 21 (5), 5–8 (1992)

Shields, P.M.: Pragmatism as a philosophy of science: a tool for public administration. Res. Public Adm. 4, 195–225 (1998)

Shields, P.M., Rangarajan, N.: A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. New Forums Press (2013)

Shields, P.M., Tajalli, H.: Intermediate theory: the missing link in successful student scholarship. J. Public Aff. Educ. 12 (3), 313–334 (2006)

Shields, P., & Whetsell, T.: Public administration methodology: A pragmatic perspective. In: Raadshelders, J., Stillman, R., (eds). Foundations of Public Administration, pp. 75–92. New York: Melvin and Leigh (2017)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part I). Sotsiologicheskie issledovaniya 10 , 39–47 (2019a)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part 2). Sotsiologicheskie issledovaniya 11 , 40–51 (2019b)

Smith, J.K.: Quantitative versus qualitative research: an attempt to clarify the issue. Educ. Res. 12 (3), 6–13 (1983a)

Smith, J.K.: Quantitative versus interpretive: the problem of conducting social inquiry. In: House, E. (ed.) Philosophy of Evaluation, pp. 27–52. Jossey-Bass, San Francisco (1983b)

Smith, J.K., Heshusius, L.: Closing down the conversation: the end of the quantitative-qualitative debate among educational inquirers. Educ. Res. 15 (1), 4–12 (1986)

Stebbins, R.A.: Exploratory Research in the Social Sciences. Sage, Thousand Oaks (2001)


Strydom, H.: An evaluation of the purposes of research in social work. Soc. Work/Maatskaplike Werk 49 (2), 149–164 (2013)

Sutton, R.I., Staw, B.M.: What theory is not. Adm. Sci. Q. 40, 371–384 (1995)

Swift, J., III: Exploring Capital Metro's sexual harassment training using Dr. Bengt-Åke Lundvall's taxonomy of knowledge principles. Applied Research Project, Texas State University. https://digital.library.txstate.edu/handle/10877/3671 (2010)

Thomas, E., Magilvy, J.K.: Qualitative rigor or research validity in qualitative research. J. Spec. Pediatric Nurs. 16 (2), 151–155 (2011)

Twining, P., Heller, R.S., Nussbaum, M., Tsai, C.C.: Some guidance on conducting and reporting qualitative studies. Comput. Educ. 107 , A1–A9 (2017)

Ulriksen, M., Dadalauri, N.: Single case studies and theory-testing: the knots and dots of the process-tracing method. Int. J. Soc. Res. Methodol. 19 (2), 223–239 (2016)

Van Evera, S.: Guide to Methods for Students of Political Science. Cornell University Press, Ithaca (1997)

Whetsell, T.A., Shields, P.M.: The dynamics of positivism in the study of public administration: a brief intellectual history and reappraisal. Adm. Soc. 47 (4), 416–446 (2015)

Willis, J.W., Jost, M., Nilakanta, R.: Foundations of Qualitative Research: Interpretive and Critical Approaches. Sage, Beverley Hills (2007)

Worster, W.T.: The inductive and deductive methods in customary international law analysis: traditional and modern approaches. Georget. J. Int. Law 45 , 445 (2013)

Yin, R.K.: The case study as a serious research strategy. Knowledge 3 (1), 97–114 (1981)

Yin, R.K.: The case study method as a tool for doing evaluation. Curr. Sociol. 40 (1), 121–137 (1992)

Yin, R.K.: Applications of Case Study Research. Sage, Beverley Hills (2011)

Yin, R.K.: Case Study Research and Applications: Design and Methods. Sage Publications, Beverley Hills (2017)


Acknowledgements

The authors contributed equally to this work. The authors would like to thank Quality & Quantity’ s editors and the anonymous reviewers for their valuable advice and comments on previous versions of this paper.

Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement. There are no funders to report for this submission.

Author information

Authors and Affiliations

Department of Political and Social Sciences, University of Bologna, Strada Maggiore 45, 40125, Bologna, Italy

Mattia Casula

Texas State University, San Marcos, TX, USA

Nandhini Rangarajan & Patricia Shields


Corresponding author

Correspondence to Mattia Casula .

Ethics declarations

Conflict of interest.

No potential conflict of interest was reported by the author.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Casula, M., Rangarajan, N. & Shields, P. The potential of working hypotheses for deductive exploratory research. Qual Quant 55 , 1703–1725 (2021). https://doi.org/10.1007/s11135-020-01072-9


Accepted : 05 November 2020

Published : 08 December 2020

Issue Date : October 2021

DOI : https://doi.org/10.1007/s11135-020-01072-9


  • Exploratory research
  • Working hypothesis
  • Deductive qualitative research


The Brain Network for Deductive Reasoning: A Quantitative Meta-analysis of 28 Neuroimaging Studies

Jérôme Prado, Angad Chadha, James R. Booth


Reprint requests should be sent to Jérôme Prado, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, or via [email protected]

Issue date 2011 Nov.

Over the course of the past decade, contradictory claims have been made regarding the neural bases of deductive reasoning. Researchers have been puzzled by apparent inconsistencies in the literature. Some have even questioned the effectiveness of the methodology used to study the neural bases of deductive reasoning. However, the idea that neuroimaging findings are inconsistent is not based on any quantitative evidence. Here, we report the results of a quantitative meta-analysis of 28 neuroimaging studies of deductive reasoning published between 1997 and 2010, combining 382 participants. Consistent areas of activation across studies were identified using the multilevel kernel density analysis method. We found that results from neuroimaging studies are more consistent than has previously been assumed. Overall, studies consistently report activations in specific regions of a left fronto-parietal system, as well as in the left basal ganglia. This brain system can be decomposed into three subsystems that are specific to particular types of deductive arguments: relational, categorical, and propositional. These dissociations explain inconsistencies in the literature. However, they are incompatible with the notion that deductive reasoning is supported by a single cognitive system relying either on visuospatial or rule-based mechanisms. Our findings provide critical insight into the cognitive organization of deductive reasoning and need to be accounted for by cognitive theories.

INTRODUCTION

Deductive reasoning is the process of drawing conclusions that are guaranteed to follow from given premises. Perhaps because deductions are an essential element of cognitive development ( Nunes et al., 2007 ; Halberda, 2003 , 2006 ; Markovits, Schleifer, & Fortier, 1989 ) and human thinking ( Stanovich & West, 2000 ), the study of deductive reasoning has been central to the reasoning literature for over 50 years ( Evans, 2005 ). In particular, much emphasis has been placed on identifying the mental representations that underlie standard deductive tasks, such as relational arguments in (1), categorical arguments in (2), and propositional arguments in (3).

(1) Relational:
A is to the left of B.
B is to the left of C.
Therefore, A is to the left of C.

(2) Categorical:
All As are Bs.
All Bs are Cs.
Therefore, all As are Cs.

(3) Propositional:
If there is an A, then there is a B.
There is an A.
Therefore, there is a B.

Psychological theories have often been interpreted as providing mutually exclusive hypotheses regarding the nature of the mental representations that support deductive reasoning ( Johnson-Laird, 1999 ). For example, researchers have been divided on whether deductive arguments such as those above rely on visuospatial or rule-based mechanisms. On the one hand, proponents of the Mental Model Theory (MMT) have argued that deductive reasoning is a nonverbal process that involves the construction and manipulation of a spatial representation of the problem premises ( Johnson-Laird, 1983 , 2001 ). On the other hand, advocates of the formal rule approach (FRA) assume that deductions call upon the retrieval and application of rules to a propositional representation of the problem premises ( Braine & O’Brien, 1998 ; Rips, 1994 ). Overall, testing between these two kinds of models has allowed significant progress to be made in the field. However, evidence for the involvement of both visuospatial and rule-based mechanisms can be found in the cognitive literature ( Johnson-Laird, 2001 ; Braine & O’Brien, 1998 ), and the exact nature of the mental representations underlying deductive reasoning remains debated.

Just as for other cognitive domains in which competing theories make divergent predictions about underlying mental representations ( Henson, 2005 ), the emergence of neuroimaging techniques two decades ago held the promise of using information about the neural correlates of deductive reasoning to inform the debate between the MMT and the FRA. For example, in the first series of experiments that investigated the neural bases of deductive reasoning ( Goel & Dolan, 2001 ; Goel, Buchel, Frith, & Dolan, 2000 ; Goel, Gold, Kapur, & Houle, 1997 , 1998 ), Goel and colleagues made a neurological hypothesis that still serves as framework for neuroimaging studies of reasoning ( Prado, Van Der Henst, & Noveck, 2010 ; Reverberi et al., 2007 ; Knauff, Mulack, Kassubek, Salih, & Greenlee, 2002 ). They posited that if deductive reasoning relies on visuospatial mechanisms (as claimed by the MMT), then the brain regions involved in visuospatial processing should be activated during deductive tasks. In other words, one should observe reasoning-related activations in parietal and/or occipital regions of the brain that are known to be engaged in tasks with a visuospatial component ( Sack, 2009 ; Kosslyn & Thompson, 2003 ). However, if deductive reasoning is a rule-based process (as claimed by the FRA), then the brain regions that are involved in rule-based syntax processing should be preferentially engaged during deductive tasks. In this case, one might expect to measure enhanced activity in the left inferior frontal gyrus (IFG), a brain region that has been claimed to be critical for rule-governed grammar processing in natural language ( Grodzinsky & Santi, 2008 ; Ullman, 2006 ; Friederici & Kotz, 2003 ). 
Such predictions are based on the idea that enhanced activity in the same brain region under two different conditions (e.g., deductive reasoning tasks and visuospatial tasks) implies a common cognitive function (e.g., visuospatial processing), a frequent assumption in neuroimaging studies (but see Poldrack, 2006 , for a critique of this logic).

Since the seminal studies by Goel et al. (1997 , 1998) , a growing body of data has been collected on the neural bases of deductive reasoning. To date, however, there is no consensus on whether these data support the MMT or the FRA ( Goel, 2007 ). Indeed, there seems to be enough variability in the location, extent, and strength of brain activations across studies that deductive reasoning has been alternatively associated with activations in visuospatial regions ( Knauff et al., 2002 ; Goel & Dolan, 2001 ), linguistic/syntactic regions ( Reverberi et al., 2007 , 2010 ), or neither of those ( Monti, Parsons, & Osherson, 2009 ; Kroger, Nystrom, Cohen, & Johnson-Laird, 2008 ; Monti, Osherson, Martinez, & Parsons, 2007 ). Two explanations have been advanced to explain this apparent lack of consistency. On the one hand, some have suggested that studies vary substantially in the efficiency of their experimental designs, materials, and statistical analyses. For example, some studies might have missed or incorrectly identified the critical processes involved in deduction because they used non-adequate baseline conditions, relied on overly simple arguments, exposed the subjects to too much training, or did not appropriately model brain activity ( Reverberi et al., 2007 , 2010 ; Monti et al., 2007 , 2009 ). This line of argumentation does not consider that inconsistencies in deductive reasoning studies are meaningful per se but rather that they reflect flaws in the experimental design or analysis method. On the other hand, it has also been proposed that differences in the patterns of brain activity across studies may indicate meaningful differences that are not necessarily accounted for by the main psychological theories of reasoning ( Prado, Noveck, & Van Der Henst, 2010 ; Reverberi et al., 2010 ; Goel, 2007 ). For example, studies diverge in the type of deductive argument (i.e., relational, categorical, propositional) that reasoners have to evaluate. 
Both the MMT and the FRA have often been interpreted as universal theories because they can account for virtually all types of reasoning ( Johnson-Laird, 1999 ; Rips, 1994 ; Hagert, 1984 ). However, reasoners may be able to use a broad range of strategies to solve reasoning problems ( Roberts & Newton, 2005 ), and some proponents of the FRA have acknowledged that certain types of arguments might preferentially trigger certain strategies ( Braine & O’Brien, 1998 , p.194). This is consistent with two recent neuroimaging studies that showed differences in the neural correlates of (1) relational and propositional arguments ( Prado, Van Der Henst et al., 2010 ) and (2) categorical and propositional arguments ( Reverberi et al., 2010 ). However, the extent to which this hypothesis can account for variability in the location of brain activations across studies remains to be tested.

The present study constitutes the first quantitative, coordinate-based meta-analysis of neuroimaging studies of deductive reasoning. It makes use of the multilevel kernel density analysis (MKDA) method (Wager, Lindquist, Nichols, Kober, & Van Snellenberg, 2009) to analyze data from 28 studies conducted between 1997 and 2010, combining data from 382 participants. This study has two empirical objectives. The first is to quantify the extent to which the patterns of brain activity associated with deductive reasoning are consistent across studies and tasks. Although some have argued that such patterns are largely inconsistent (Monti et al., 2007), this claim is not based on quantitative evidence, and the variability may be more apparent than real. Specifically, the present meta-analysis statistically evaluates the consistency of the patterns of brain activity associated with deductive reasoning over and above variations in experimental factors such as differences in baseline condition, materials, practice, analysis method, and argument type. The second aim is to test whether some of the variability in neural activations can be accounted for by a fundamental difference between studies, namely, the type of deductive argument that participants are asked to evaluate. In other words, the present meta-analysis tests the specificity of certain brain regions for certain types of argument over and above differences in other experimental factors. Our study affords a unique glimpse into more than 10 years of neuroimaging research on deductive reasoning, and the implications of our findings for cognitive theories of reasoning are discussed.

Study Selection

A recent qualitative review of the neuroimaging literature conducted by Goel (2007) served as the starting point for our meta-analysis. All of the studies included in this review were considered in the present meta-analysis as long as they reported the Montreal Neurological Institute (MNI) or Talairach coordinates of the activation peaks in a contrast of a reasoning condition versus a baseline condition (e.g., Goel & Dolan, 2003 , was not included because the authors do not report the results of such a contrast). However, because Goel’s review only included studies published up to April 2007, we searched for additional neuroimaging studies of deductive reasoning published from May 2007 to September 2010. This search was conducted in the PubMed and ScienceDirect databases.

Overall, 28 published, peer-reviewed fMRI and PET studies on the neural substrates of deductive reasoning were included in the present meta-analysis. Much like Goel’s review article, studies were categorized by type of argument (relational, categorical, and propositional). However, whereas the aforementioned review was qualitative, the present meta-analysis used a quantitative approach. It is important to note that the included studies differ in aspects other than the type of deductive argument employed. For example, some categorical reasoning studies make use of arguments with abstract content (e.g., All As are Bs, All Bs are Cs, therefore All As are Cs), whereas others employ arguments with concrete semantic content (e.g., All poodles are pets, All pets have names, therefore All poodles have names). Similarly, relational reasoning studies have used both linguistic (e.g., Tom is taller than Bill, Bill is taller than John, therefore Tom is taller than John) and nonlinguistic (e.g., A > B, B > C, therefore A > C) materials. Finally, propositional reasoning studies report activations associated with different kinds of inferences, such as modus ponens (i.e., If P then Q; P; therefore Q), modus tollens (i.e., If P then Q; not Q; therefore not P), and disjunction elimination (i.e., P or Q; not P; therefore Q). Studies also differ in factors such as imaging technique (e.g., PET vs. fMRI), experimental design (e.g., block vs. event-related), input modality (e.g., visual vs. auditory), period analyzed (e.g., second premise vs. conclusion), and amount of practice given. Although we acknowledge that these factors might lead to differences in the neural bases of deductive reasoning across studies, there were too few studies to examine each factor separately.
Moreover, one of the main goals of the present meta-analysis was to examine the consistency of the patterns of brain activity associated with deductive reasoning over and above variations in the aforementioned factors.

Studies were included in the present meta-analysis based on three main requirements: (1) the studies’ participants were healthy adults, (2) the studies reported MNI or Talairach coordinates for a contrast of a deductive reasoning condition versus a baseline condition, and (3) the studies reported activation coordinates for the whole brain. Many studies reported activation coordinates for a contrast of reasoning versus baseline where the baseline was an unrelated task (e.g., sentence comprehension task, matching task; Goel et al., 1997 ), a reasoning argument in which the premises could not be integrated (e.g., If P then Q ; R ; therefore R ; Reverberi et al., 2007 ), or a lower-level baseline (e.g., fixation cross; Knauff, Fangmeier, Ruff, & Johnson-Laird, 2003 ). Other studies designated a simple reasoning condition as their baseline condition, thus reporting a contrast of complex argument versus simple argument ( Monti et al., 2007 ; Prado & Noveck, 2007 ). Both of these types of studies were included. From each study, contrasts corresponding most closely to a comparison between a reasoning condition and a baseline condition were selected. The studies included in the meta-analysis, together with their respective contrasts, are listed in Table 1 .
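
The three inclusion criteria above can be summarized as a simple filter predicate. The sketch below is purely illustrative; the dictionary field names are hypothetical and not taken from the authors' database:

```python
# Hedged sketch of the three inclusion criteria as a filter predicate.
# Field names are hypothetical, chosen for illustration only.
def meets_inclusion_criteria(study):
    return (
        study["participants"] == "healthy adults"               # criterion 1
        and study["coordinate_space"] in {"MNI", "Talairach"}   # criterion 2
        and study["contrast"] == "reasoning vs. baseline"       # criterion 2
        and study["coverage"] == "whole brain"                  # criterion 3
    )

candidates = [
    {"participants": "healthy adults", "coordinate_space": "MNI",
     "contrast": "reasoning vs. baseline", "coverage": "whole brain"},
    {"participants": "healthy adults", "coordinate_space": "Talairach",
     "contrast": "reasoning vs. baseline", "coverage": "ROI only"},
]
included = [s for s in candidates if meets_inclusion_criteria(s)]
```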

Studies and Contrasts Included in the Meta-analysis

Study Type of Argument Scanning Method Stimuli Modality n No. of Coord. Contrasts Included
Categorical PET Visual, linguistic 10 3 1. Deduction versus baseline
Categorical PET Visual, linguistic 12 4 1. Syllogism versus baseline
Relational PET Visual, linguistic 12 8 1. Spatial relational versus baseline
2. Nonspatial relational versus baseline
Categorical PET Visual, linguistic 10 8 1. Logic versus meaning
Categorical fMRI Visual, linguistic 11 31 1. Main effect of reasoning
2. Content versus preparation
3. No-content versus preparation
Propositional PET Visual, nonlinguistic 8 19 1. Posttest versus pretest
Propositional PET Visual, linguistic 10 24 1. Deduction versus probabilistic reasoning
Relational fMRI Visual, linguistic 14 36 1. Main effect of reasoning
2. Concrete reasoning versus concrete baseline
3. Abstract reasoning versus abstract baseline
Relational fMRI Visual, nonlinguistic 15 30 1. Transitive inference task versus height comparison
2. Transitive inference task versus passive visual task
Propositional and relational fMRI Auditory, linguistic 12 18 1. Relational versus baseline or conditional reasoning versus baseline
Relational fMRI Auditory, linguistic 12 26 1. Visuospatial versus rest interval
2. Visual versus rest interval
3. Spatial versus rest interval
4. Control versus rest interval
5. All inferences versus rest interval
Categorical fMRI Visual, linguistic 16 25 1. Main effect of reasoning
2. Deductive reasoning versus baseline
Relational fMRI Visual, linguistic 14 19 1. Familiar environment reasoning versus familiar environment baseline
2. Unfamiliar environment reasoning versus unfamiliar environment baseline
Relational fMRI Visual, nonlinguistic 16 17 1. Inference-by-sequence interaction
2. Simple effects of transitive inference
3. Transitive inference network
Propositional fMRI Visual, linguistic 16 10 1. Modus Ponens versus baseline
2. versus baseline
Propositional fMRI Visual, linguistic 12 41 1. Descriptive reasoning versus baseline
2. Social exchange reasoning versus baseline
Relational fMRI Visual, nonlinguistic 12 24 Reasoning problems:
1. Premise processing phase
2. Premise integration phase
3. Reasoning validation phase
Experiment 1 Propositional fMRI Visual, linguistic 10 42 1. Complex versus simple deductions (block and pseudoword trials)
2. Complex versus simple deductions (block trials)
3. Complex versus simple deductions (pseudoword trials)
Experiment 2 Propositional fMRI Visual, linguistic 12 35 1. Complex versus simple deductions (house and face trials)
2. Complex versus simple deductions (house trials)
3. Complex versus simple deductions (face trials)
Propositional fMRI Visual, linguistic 14 8 1. Integrable versus nonintegrable (conditional and disjunctive arguments)
Propositional fMRI Visual, linguistic 20 16 1. Verification task (2-mismatch versus 1-mismatch versus 0-mismatch)
2. Falsification task (2-mismatch versus 1-mismatch versus 0-mismatch)
Categorical fMRI Visual, linguistic 16 24 Logic problems versus math problems:
1. Type of problem
2. Level of difficulty
3. Type × Difficulty interaction
Categorical fMRI Visual and auditory, linguistic 11 14 1. Premise 2: Reasoning versus control
2. Conclusion: Reasoning versus control
Relational fMRI Auditory, nonlinguistic 12 18 Reasoning problems:
1. Premise processing phase
2. Premise integration phase
3. Reasoning validation phase
Relational fMRI Visual, linguistic 17 10 1. Main effect of reasoning (reasoning versus baseline)
Propositional fMRI Visual, linguistic 15 26 1. Inference versus grammar for logic arguments
Categorical fMRI Visual, linguistic 26 9 1. Integration effect for syllogistic problems
Propositional fMRI Visual, linguistic 26 4 1. Integration effect for conditional problems
Relational fMRI Visual, linguistic 13 12 1. Relational syllogism (integration effect)
Propositional fMRI Visual, linguistic 13 12 1. (integration effect)
Relational fMRI Visual, nonlinguistic 16 11 1. Transitive inference (inference versus direct)
2. Relational encoding (specific relations versus general relations)

n = number of subjects; No. of Coord. = number of coordinates.

This article did not report coordinates separately for propositional and relational reasoning; its coordinates were therefore included only in the overall density analysis.

To uncover the brain regions that are consistently activated across reasoning studies, we employed the MKDA method (Wager et al., 2009). This density analysis method has been successfully used in other meta-analyses of brain imaging studies (Wang, Conder, Blitzer, & Shinkareva, 2010; Kober et al., 2008). Its test statistic is the probability of activation of a given voxel, with the null hypothesis that all of the reported activation coordinates are randomly distributed through the gray matter of the brain. A significant result therefore indicates that more reported activation coordinates lie near the specified voxel than would be expected by chance. The MKDA technique also allows for nested analysis of data: Multiple activation coordinates are nested within a contrast, and multiple contrasts are nested within a study. This nesting prevents the results from being driven by studies that report a large number of activation peaks or have a large number of contrasts. Additionally, MKDA allows for weighting of studies with respect to their sample size and type of effects analysis (fixed vs. random). Specifically, studies with a large number of participants and random effects designs are given more weight than studies with fewer participants or fixed effects designs.
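
As a rough illustration of this logic (and emphatically not the authors' implementation), the weighted density statistic can be sketched as follows. The 10 mm spherical kernel follows the description in the text, and the δ·√N weighting follows Kober et al. (2008); the toy grid size, voxel size, and data structures are assumptions, and gray-matter masking and the nesting of contrasts within studies are omitted for brevity:

```python
# Illustrative sketch of the MKDA density statistic (not the authors' code).
import numpy as np

def indicator_map(peaks, grid_shape, voxel_size=2.0, radius=10.0):
    """Binary map: True wherever a reported peak lies within `radius` mm."""
    coords = np.indices(grid_shape).reshape(3, -1).T * voxel_size  # voxel -> mm
    ind = np.zeros(int(np.prod(grid_shape)), dtype=bool)
    for peak in peaks:
        ind |= np.linalg.norm(coords - np.asarray(peak), axis=1) <= radius
    return ind.reshape(grid_shape)

def mkda_statistic(contrasts, grid_shape=(20, 20, 20)):
    """Weighted proportion of comparison maps activating each voxel.

    Each contrast is a dict with 'peaks' (list of (x, y, z) mm coordinates),
    'n' (sample size), and 'delta' (1.0 random effects, 0.75 fixed effects).
    """
    # Weights: delta * sqrt(N), as in Kober et al. (2008)
    weights = np.array([c["delta"] * np.sqrt(c["n"]) for c in contrasts])
    maps = np.stack(
        [indicator_map(c["peaks"], grid_shape).astype(float) for c in contrasts]
    )
    # Weighted average of binary indicator maps = weighted proportion P
    return np.tensordot(weights, maps, axes=1) / weights.sum()
```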

The analyses were performed in MATLAB 2009 with the MKDA tool package developed by Wager et al. (2009; psych.colorado.edu/~tor/). Peaks from each study were convolved with a spherical smoothing kernel with a radius of 10 mm. The MKDA statistic at each voxel, P, represents the weighted proportion of contrasts that contained activated voxels within 10 mm of the specified voxel:

P = [ Σ_{c=1}^{I} δ_c √N_c I_c ] / [ Σ_{c=1}^{I} δ_c √N_c ],

where I_c is the binary indicator map for comparison map c. Each study was weighted by the number of participants (N) and the type of analysis (δ); c is the index, ranging from 1 to the number of comparison maps I. A δ value (weight) of 1.00 is assigned to studies that use random effects analysis, whereas studies that use fixed effects analysis are assigned a δ value of 0.75 (Kober et al., 2008). A Monte Carlo simulation with 5000 iterations was used to determine the threshold for statistical significance, that is, the density value that exceeds the whole-brain maximum in 95% of the Monte Carlo maps. Correction for multiple comparisons was performed using the family-wise error rate at p < .05.
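
The Monte Carlo maximum-statistic correction described above can be sketched in the same spirit. The version below is heavily simplified (it redistributes activated voxels uniformly at random rather than relocating smoothed peaks within a gray-matter mask), so it illustrates only the max-statistic logic; all names and parameter values are assumptions:

```python
# Simplified sketch of the Monte Carlo family-wise error threshold.
import numpy as np

def fwe_threshold(weights, n_voxels, n_active, n_iter=5000, alpha=0.05, seed=0):
    """Upper (1 - alpha) quantile of the null distribution of the map maximum.

    weights  : delta_c * sqrt(N_c) for each comparison map
    n_active : number of "activated" voxels per comparison map, which are
               redistributed uniformly at random on each iteration
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    maxima = np.empty(n_iter)
    for i in range(n_iter):
        null_maps = np.zeros((len(w), n_voxels))
        for c in range(len(w)):
            idx = rng.choice(n_voxels, size=n_active[c], replace=False)
            null_maps[c, idx] = 1.0
        density = w @ null_maps / w.sum()   # null MKDA map
        maxima[i] = density.max()           # whole-map maximum
    # Significance threshold: value exceeded by chance in only alpha of maps
    return np.quantile(maxima, 1.0 - alpha)
```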

In what follows, we first present the results of the overall density analysis, identifying the regions most consistently reported in neuroimaging studies of deductive reasoning across all types of deductive arguments. We then report the results of density analyses performed separately for each type of deductive argument, together with direct contrasts comparing each argument type with the others. All analyses are conducted at a voxelwise significance threshold of p < .05, FWE corrected across the whole brain.

All Deductive Arguments

Despite the apparent variability in the location of activation peaks across studies (Figure 1A), the density analysis performed on the entire corpus of neuroimaging studies revealed that several brain regions were consistently activated across studies (see Figure 1B and Table 2). Significant clusters of activation were located in the left IFG, left medial frontal gyrus (MeFG), bilateral middle frontal gyrus (MFG), bilateral precentral gyrus (PG), bilateral posterior parietal cortex (PPC), and left basal ganglia (BG).

Figure 1

Overall density analysis. (A) Location of the activation peaks from the 28 studies included in the meta-analysis. (B) MKDA map representing the regions most consistently activated in neuroimaging studies of deductive reasoning (irrespective of the type of deductive argument). Peaks and activations are overlaid on 3-D renderings and slices of the MNI-normalized anatomical brain.

Regions Consistently Activated across All Neuroimaging Studies of Deductive Reasoning

Anatomical Location ~BA Talairach Coordinates Volume (mm³)
X Y Z
L. IFG 46 −45 35 10 24
L. putamen −16 1 11 120
L. IFG 46 −44 26 13 8
L. IFG 45 −49 22 16 16
L. IFG 9 −48 10 24 344
L. IFG 9 −46 15 23 16
L. IFG 9 −44 5 33 96
L. angular gyrus 39 −37 −59 38 3904
L. angular gyrus 39 −39 −57 36 3240
L. precuneus 7 −24 −71 40 664
L. middle frontal gyrus 9 −44 12 35 8
L. middle frontal gyrus 8 −42 10 42 408
R. middle frontal gyrus 9 39 10 40 72
L. precentral gyrus 6 −42 −3 39 8
R. precuneus 7 21 −69 38 32
L. precentral gyrus 6 −39 −6 50 1136
L. precentral gyrus 6 −40 −6 48 776
L. precentral gyrus 6 −33 −8 53 280
L. middle frontal gyrus 6 −35 −5 55 80
L. MeFG 6 −5 3 51 1904
L. middle frontal gyrus 6 −44 4 45 8
R. precuneus 7 20 −69 41 8
R. superior parietal lobule 7 24 −64 42 56
L. precentral gyrus 4 −33 −14 49 8
L. precentral gyrus 4 −33 −14 53 8
R. middle frontal gyrus 6 24 −9 58 456

L. = left; R. = right; ~BA = approximate Brodmann’s area.

Relational Arguments

We found that studies employing relational arguments display consistent activation in three brain regions: the right MFG, left MeFG, and bilateral PPC (see Figure 2A and Table 3). Direct contrasts between studies revealed that only the bilateral PPC and right MFG are both (1) more consistently associated with relational arguments than categorical arguments (Figure 3A) and (2) more consistently associated with relational arguments than propositional arguments (Figure 3B). Indeed, a contrast of relational arguments versus categorical arguments showed activation in bilateral PPC (BA 39; left: x = −33, y = −61, z = 36; right: x = 22, y = −62, z = 40) and right MFG (BA 6; x = 22, y = −5, z = 60). A contrast of relational arguments versus propositional arguments also showed activation in bilateral PPC (BA 7; x = −24, y = −69, z = 41; x = 20, y = −69, z = 41) and right MFG (BA 6; x = 22, y = −7, z = 56). It is important to note that, of the 11 studies included in the meta-analysis, 5 used nonlinguistic materials as stimuli (see Table 1). To ensure that the activation of the PPC observed in our meta-analysis was not driven by these five studies, we conducted the same density analyses without them. The bilateral PPC was still significantly activated even when these studies were eliminated.

Figure 2

Density analyses performed separately for studies employing relational, categorical, and propositional arguments. (A) MKDA map representing the regions most consistently activated in neuroimaging studies employing relational arguments. (B) MKDA map representing the regions most consistently activated in neuroimaging studies employing categorical arguments. (C) MKDA map representing the regions most consistently activated in neuroimaging studies employing propositional arguments. Activations are overlaid on 3-D renderings and slices of the MNI-normalized anatomical brain.

Regions Consistently Activated in Studies Employing Relational Arguments, Categorical Arguments, and Propositional Arguments

Anatomical Location ~BA Talairach Coordinates Volume (mm³)
X Y Z
R. middle frontal gyrus 6 22 −7 56 1184
L. MeFG 6 −2 −1 53 144
L. intraparietal sulcus 40 −37 −56 38 304
L. angular gyrus 39 −33 −61 38 152
R. superior parietal lobule 7 24 −62 40 120
L. precuneus 7 −24 −69 41 360
R. precuneus 7 20 −69 41 8
L. precuneus 7 −9 −67 41 8
L. IFG 9/44 −47 12 23 936
L. precentral gyrus 4 −39 −15 47 88
R. caudate head 8 6 3 1296
L. putamen −16 7 7 40
L. putamen −12 0 9 104
L. angular gyrus 39 −37 −57 34 992
L. precentral gyrus 6 −42 −6 46 904
L. MeFG 6 −2 1 53 8

Figure 3

Density analyses for the contrasts of relational arguments versus categorical arguments and relational arguments versus propositional arguments. (A) MKDA map representing the regions more consistently activated in studies that employ relational arguments than in studies that employ categorical arguments. (B) MKDA map representing the regions more consistently activated in studies that employ relational arguments than in studies that employ propositional arguments. Activations are overlaid on 3-D renderings and slices of the MNI-normalized anatomical brain.

Categorical Arguments

When only studies employing categorical arguments are included in the density analysis, three brain regions are found to be consistently activated in the literature: left IFG, left PG, and bilateral BG (see Figure 2B and Table 3). However, direct contrasts between studies revealed that only the left IFG and bilateral BG are both (1) more consistently associated with categorical arguments than relational arguments (Figure 4A) and (2) more consistently associated with categorical arguments than propositional arguments (Figure 4B). Indeed, the contrast of categorical arguments versus relational arguments revealed activation of the left IFG (BA 9/44; x = −48, y = 10, z = 24), bilateral BG (left: x = −16, y = 7, z = 7; right: x = 8, y = 10, z = 2), and left PG (BA 4; x = −41, y = −15, z = 47). The contrast of categorical arguments versus propositional arguments revealed activation of the left IFG (BA 9/44; x = −46, y = 10, z = 22) and bilateral BG (left: x = −21, y = 11, z = 7; right: x = 12, y = 6, z = 1).

Figure 4

Density analyses for the contrasts of categorical arguments versus relational arguments and categorical arguments versus propositional arguments. (A) MKDA map representing the regions more consistently activated in studies that employ categorical arguments than in studies that employ relational arguments. (B) MKDA map representing the regions more consistently activated in studies that employ categorical arguments than in studies that employ propositional arguments. Activations are overlaid on 3-D renderings and slices of the MNI-normalized anatomical brain.

Propositional Arguments

A density analysis conducted on studies employing propositional arguments revealed activation of the left PPC, left PG, and MeFG (see Figure 2C and Table 3 ). Direct contrasts between studies revealed that only the left PG is both (1) more consistently associated with propositional arguments than categorical arguments ( Figure 5A ) and (2) more consistently associated with propositional arguments than relational arguments ( Figure 5B ). A contrast between studies employing propositional arguments versus studies employing categorical arguments revealed activations in the left PPC (BA 39; x = −39, y = −59, z = 32) and left PG (BA 6; x = −46, y = −4, z = 45). A contrast between studies employing propositional arguments versus studies employing relational arguments only revealed activation in the left PG (BA 6; x = −44, y = −4, z = 46).

Figure 5

Density analyses for the contrasts of propositional arguments versus categorical arguments and propositional arguments versus relational arguments. (A) MKDA map representing the regions more consistently activated in studies that employ propositional arguments than in studies that employ categorical arguments. (B) MKDA map representing the regions more consistently activated in studies that employ propositional arguments than in studies that employ relational arguments. Activations are overlaid on 3-D renderings and slices of the MNI-normalized anatomical brain.

Using the MKDA method (Wager et al., 2009), the present meta-analysis combines data from 28 functional neuroimaging studies to uncover the brain regions that are consistently activated during deductive reasoning. It has been argued that neuroimaging studies of deductive reasoning have generated mostly inconsistent results that are somewhat difficult to interpret (Monti et al., 2007). Although this has led some to question the methodology used to investigate the neural bases of deduction (Monti et al., 2007; Reverberi et al., 2007), this claim is based only on a qualitative survey of the literature. The present quantitative meta-analysis demonstrates that the results gathered from the neuroimaging literature are far more consistent than has been assumed. Over and above differences in type of deductive argument and methodology used, deductive reasoning studies consistently report activation in a mostly left-lateralized brain system, which includes the left lateral (IFG, MFG, PG) and medial (MeFG) frontal cortices, the left parietal cortex (PPC), and the left BG. We further show that this left hemisphere brain system can be broken down into several subsystems that are specific to particular types of deductive arguments, for example, the PPC for relational arguments, the IFG for categorical arguments, and the PG for propositional arguments. Finally, we demonstrate that two other cortical regions located in the right hemisphere (i.e., PPC and MFG) are also engaged in deductive reasoning, but only consistently in studies employing relational arguments as materials. Overall, our results provide evidence that the brain system underlying deductive reasoning depends on the type of deductive argument. We argue that these findings provide critical insight into the cognitive organization of deductive reasoning and need to be accounted for by cognitive theories.

Deductive Reasoning Mainly Engages a Left-lateralized Fronto-parieto-basal Ganglia Brain System

Across all types of deductive arguments, the meta-analysis reveals that five left-lateralized regions (IFG, MFG, PG, PPC, and BG) and one medial region (MeFG) are consistently activated in deductive reasoning studies. Patient studies have long supported a left hemisphere dominance for reasoning, whether the brain lesions are in the prefrontal cortex (Goel et al., 2007), temporal cortex (Langdon & Warrington, 2000; Read, 1981), or widespread throughout the entire hemisphere (Golding, 1981). For example, Goel et al. showed that patients with damage to the left prefrontal cortex are less accurate in evaluating the validity of determinate arguments than normal controls and patients with damage to the right prefrontal cortex. The present meta-analysis indicates that the neuroimaging literature is in keeping with this longstanding neuropsychological literature. As noted elsewhere (Goel, 2007), the involvement of the left hemisphere in reasoning is broadly consistent with Gazzaniga’s “left brain interpreter” hypothesis (Roser & Gazzaniga, 2006). According to this hypothesis, the general role of the left hemisphere is to construct a coherent representation of reality by generating hypotheses about missing information in the environment (Roser & Gazzaniga, 2006). For example, when confronted with the premises of a deductive argument, the left hemisphere might recognize its logical structure and generate a hypothesis regarding its conclusion (Goel, 2007). Although this provides a relatively parsimonious explanation of the involvement of left brain regions in deductive reasoning, there are undoubtedly intrahemispheric differences in the functions supported by these regions. In fact, our meta-analysis revealed that this left hemisphere brain system could be broken down into different subsystems that are specific to different types of deductive arguments.
Below, we describe these subsystems (and additional systems in the right hemisphere), discuss the potential role of each region in deductive reasoning, and interpret the relevance of these findings for cognitive theories of reasoning.

Relational Arguments Are Associated with Activations in Bilateral PPC and Right MFG

We found that studies that use relational arguments consistently show activations in the bilateral PPC and right MFG. Furthermore, relational arguments are more consistently associated with activations in these regions than propositional or categorical arguments (although the right MFG cluster was only reliable in the contrast of relational vs. propositional arguments). The parietal cortex is a functionally heterogeneous structure (Culham & Kanwisher, 2001), but there is a consensus that the PPC is predominantly involved in spatial cognition (Marshall & Fink, 2001; Colby & Goldberg, 1999). Although bilateral activations of the PPC are often observed in neuroimaging studies of visuospatial tasks, there is a right hemisphere dominance for visuospatial processing. Indeed, TMS studies show that disruptions of the right PPC, but not the left, are associated with deficits in visuospatial cognition (Muri et al., 2002; Sack, Hubl, et al., 2002) and visuospatial imagery (Sack, Camprodon, Pascual-Leone, & Goebel, 2005; Sack, Sperling, et al., 2002). The lack of left IFG activation and the consistent involvement of the right PPC in tasks utilizing relational arguments are not easily accounted for by the FRA. Rather, this finding seems to lend support to the MMT, which posits that deductive reasoning is a nonverbal process that requires a visuospatial representation of the premises (Johnson-Laird, 2001).

The visuospatial nature of relational reasoning might be explained by the relative ease with which linear orderings can be mapped onto a single, analogical dimension. For example, the premises John is older than Tom and Tom is older than Bill can be easily mapped onto a linear continuum that represents the characters’ ages (i.e., John–Tom–Bill). Consistent with this hypothesis, studies have shown that the number of premises that need to be considered to evaluate the conclusion of a relational argument is inversely related to the difficulty of the problem. For example, given the problem A is larger than B, B is larger than C, C is larger than D , and D is larger than E , participants take longer to evaluate the conclusion B is larger than C than the conclusion B is larger than D ( Prado, Van der Henst, & Noveck, 2008 ; Potts, 1972 , 1974 ). This “distance” effect is consistent with the claim that participants construct an integrated representation of the problem premises (i.e., A–B–C–D–E): Two items that are close on this representation (e.g., BC) are less easily distinguishable than two items that are further away (e.g., BD). We have recently found that items at the beginning of a relational ordering (e.g., A, B) are automatically associated with the left side of space, whereas items at the end of the ordering (e.g., D, E) are automatically associated with the right side of space ( Prado et al., 2008 ). This further supports the idea that the mental representations that underlie relational arguments are strongly visuospatial. The results of the present meta-analysis, associating relational arguments with the PPC, are consistent with this behavioral research.
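
The integrated linear representation and the resulting distance effect can be illustrated with a toy model; the ordering and the symbolic-distance measure below are illustrative assumptions, not a model from the cited studies:

```python
# Toy model of an integrated linear representation A-B-C-D-E.
# Under the distance effect, larger symbolic distances between two items
# predict easier (faster) comparisons.
ORDER = ["A", "B", "C", "D", "E"]

def symbolic_distance(x, y):
    """Number of steps separating two items on the integrated continuum."""
    return abs(ORDER.index(x) - ORDER.index(y))

# "B larger than D" (distance 2) is evaluated faster than
# "B larger than C" (distance 1), the classic distance effect.
```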

Categorical Arguments Are Associated with Activation of the Left IFG and Left BG

Unlike relational reasoning, categorical reasoning was not found to be consistently linked to the right PPC across studies. Rather, the meta-analysis associates two other brain regions with categorical reasoning: the left IFG (BA 9/44) and the left BG (putamen and caudate nucleus). We show that these regions are consistently activated in studies that use categorical arguments and that their engagement is more often observed for categorical arguments than relational or propositional arguments. A large body of lesion and neuroimaging studies suggests that the left IFG supports rule-governed syntax processing in natural language (Grodzinsky & Santi, 2008; Ullman, 2006; Friederici & Kotz, 2003). Furthermore, this region is often found to be coactivated with the BG in studies that investigate grammar processing (Friederici, 2002; Moro et al., 2001; Embick, Marantz, Miyashita, O’Neil, & Sakai, 2000; Ni et al., 2000) and musical syntax (Tillmann, Janata, & Bharucha, 2003; Maess, Koelsch, Gunter, & Friederici, 2001). The consistent involvement of these regions in studies that use categorical arguments (and the lack of right PPC engagement) seems more consistent with the idea that reasoning is a linguistic/syntactic process than with the claim that it is a visuospatial process. In other words, this finding seems more consistent with the FRA than the MMT.

Why would categorical arguments involve a different type of mental representation than relational arguments? It is possible that categorical arguments are more difficult to represent with visuospatial models than relational arguments. Unlike relational premises, categorical premises contain items that represent sets of objects rather than single elements. Such items cannot be mapped onto a single analogical dimension (e.g., age for the relational premise John is older than Tom) and would, thus, require more elaborate visuospatial representations. Furthermore, as noted by Favrel and Barrouillet (2000), categorical premises are often ambiguous because they can be compatible with different mental models. For example, the premise All As are Bs might be represented with a model in which As are identical to Bs or with a model in which As are included in Bs. Overall, it might be more difficult to construct a visuospatial representation of a categorical premise than a visuospatial representation of a relational premise. Most reasoners might, thus, choose to rely on a propositional representation of the premises when faced with a categorical argument. This hypothesis is consistent with behavioral research. Unlike for relational arguments, the number of premises that must be considered to evaluate the conclusion of a categorical argument is positively related to the difficulty of the problem. For example, given the problem All As are Bs, All Bs are Cs, All Cs are Ds, and All Ds are Es, participants take longer to evaluate the conclusion All Bs are Ds than the conclusion All Bs are Cs (Favrel & Barrouillet, 2000; Barrouillet, 1996). This “reverse” distance effect is inconsistent with the idea that participants form a unified spatial representation of the premises.
It rather suggests that they apply sequential rules of inference to an atomic representation of the premises (i.e., the more premises reasoners have to consider, the more rules they have to apply; Favrel & Barrouillet, 2000 ). Our present finding that categorical arguments are consistently associated with the activation of regions involved in rule-based processing in natural language (i.e., the left IFG and BG) is consistent with this claim.
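The sequential rule-application account just described can be made concrete with a toy sketch (our illustration, not part of the meta-analysis or of Favrel & Barrouillet's materials): if each set-inclusion premise is stored separately and conclusions are derived by repeatedly applying the transitivity rule (All X are Y; All Y are Z; therefore All X are Z), the number of rule applications — and hence predicted difficulty — grows with the number of premises that must be considered.

```python
def steps_to_derive(premises, conclusion):
    """Count applications of the transitivity rule needed to derive
    `conclusion` from chained set-inclusion `premises`.
    Premises and the conclusion are (subset, superset) pairs,
    e.g., ("A", "B") stands for "All As are Bs".
    Returns None if the conclusion cannot be derived."""
    known = dict(premises)          # subset -> immediate superset
    start, target = conclusion
    current, steps = start, 0
    while current in known:
        current = known[current]
        steps += 1
        if current == target:
            return steps
    return None

premises = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
print(steps_to_derive(premises, ("B", "C")))  # 1 rule application
print(steps_to_derive(premises, ("B", "D")))  # 2 rule applications
```

On this sketch, evaluating All Bs are Ds requires one more rule application than All Bs are Cs, which is the direction of the "reverse" distance effect; a unified spatial array of the premises would predict no such cost.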

Propositional Arguments Are Associated with Activation of the Left PPC, Left PG, and MeFG

Three brain regions are associated with propositional arguments in our meta-analysis: the left PPC, the left PG, and the MeFG. However, only the left PG is more consistently found in studies employing propositional arguments than in studies employing categorical or relational arguments. It is difficult to determine whether the engagement of these three regions in propositional reasoning supports the MMT or the FRA. First, as mentioned in the previous section, one might argue that activation of the PPC is broadly consistent with the MMT, especially in the absence of left IFG activation, as is the case here. However, the PPC cluster associated with propositional arguments is left-lateralized and centered around the angular gyrus, a region that has been linked to verbal (although not syntactic) processing ( Booth, Coch, Fischer, & Dawson, 2007 ; Fiez & Petersen, 1998 ). Second, the MeFG has been implicated in the maintenance of abstract rules in memory ( Bunge, Kahn, Wallis, Miller, & Wagner, 2003 ), and activations along the medial wall of the frontal cortex have been tentatively linked to the FRA in previous studies ( Monti et al., 2007 , 2009 ). Third, the engagement of the left PG might reflect some combination of attentional and motor processes that are involved in the reasoning tasks ( Acuna, Eliassen, Donoghue, & Sanes, 2002 ) but does not seem to speak to the debate between the MMT and the FRA per se. Overall, there seems to be some heterogeneity in the neural processes that are engaged across propositional reasoning studies. This might be explained by the heterogeneity of the propositional reasoning tasks themselves. For example, neuroimaging studies have used arguments that contained conditional propositions ( Prado, Van Der Henst, et al., 2010 ; Reverberi et al., 2007 , 2010 ; Prado & Noveck, 2007 ; Noveck, Goel, & Smith, 2004 ; Houde et al., 2000 ), disjunctive propositions ( Reverberi et al., 2007 ), or a combination of both ( Monti et al., 2007 , 2009 ).
Even within conditional reasoning tasks, studies vary in the type of arguments participants are presented with: Some studies employ the modus ponens form ( If P then Q ; P; therefore Q ; Reverberi et al., 2007 ; Noveck et al., 2004 ), whereas others focus on the more difficult modus tollens form ( If P then Q ; not Q; therefore not P ; Prado, Van Der Henst, et al., 2010 ; Reverberi et al., 2010 ; Monti et al., 2007 ; Noveck et al., 2004 ). Although both the FRA and the MMT can account for virtually all forms of propositional reasoning ( Braine & O’Brien, 1998 ; Rips, 1994 ; Johnson-Laird, Byrne, & Schaeken, 1992 ), it is possible that some propositional arguments preferentially involve syntactic processes, whereas others are more strongly associated with visuospatial processes. For example, a previous study has shown that modus ponens is associated with the left PPC, whereas modus tollens engages the left IFG ( Noveck et al., 2004 ). Unfortunately, there are not enough studies in the literature to allow us to investigate different forms of propositional reasoning in our meta-analysis. Nonetheless, our study shows that propositional arguments are consistently associated with both parietal (left PPC) and frontal (left PG and MeFG) regions. Such a pattern could be consistent with both the FRA (i.e., activation of the MeFG) and the MMT (i.e., activation of the left PPC). Future studies are needed to determine whether the contribution of these regions can be teased apart, thus providing a clearer picture of the role of rule-based and visuospatial processes in propositional reasoning.
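The two argument forms can be made concrete with a small sketch (our illustration, not from any of the studies reviewed): an argument is logically valid if its conclusion is true under every truth assignment that makes all of its premises true, a criterion that can be checked exhaustively for two propositional variables.

```python
from itertools import product

def is_valid(premises, conclusion):
    """Check validity by exhaustive truth-table evaluation.
    `premises` is a list of functions of (p, q) returning bool;
    `conclusion` is a single such function."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b  # material conditional "If a then b"

# Modus ponens: If P then Q; P; therefore Q
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))        # True
# Modus tollens: If P then Q; not Q; therefore not P
print(is_valid([lambda p, q: implies(p, q), lambda p, q: not q],
               lambda p, q: not p))    # True
```

Both forms are valid under this check; the behavioral difficulty of modus tollens relative to modus ponens thus reflects the psychological processes involved (e.g., handling negation), not a difference in logical status.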

Deductive Reasoning Relies on a Fractionated Neural System

It is important to note that most of our interpretations so far rely on the idea that a cognitive process (e.g., visuospatial processing) can be inferred from activation in a particular brain region (e.g., the right PPC). This logic, based on the idea that brain organization is modular to some extent, can provide useful insights into the cognitive processes that are involved in a given task ( Poldrack, 2006 ). However, the logic is undermined by the fact that there is rarely a one-to-one mapping between a brain region and a cognitive function. For example, because the left PPC has been linked to both spatial and non-spatial processes ( Husain & Nachev, 2007 ), it is difficult to know whether activation of this region provides evidence for one or the other of these processes. For this reason, our interpretations of the patterns of brain activation observed in this study (as is the case for many neuroimaging studies) are limited in that they cannot provide definitive evidence for either visuospatial or syntactic processing in deductive reasoning.

However, our findings allow us to draw a much stronger conclusion that is highly relevant to cognitive theories of reasoning: There is consistent evidence in the neuroimaging literature that deductive reasoning does not rely on a unitary brain system. Rather, our meta-analysis demonstrates clear dissociations between the neural representations of relational, categorical, and propositional arguments. This is inconsistent with any cognitive theory that would posit that the same cognitive mechanism underlies these three forms of reasoning, such as the MMT ( Johnson-Laird, 1999 ) or a parsimonious interpretation of the FRA ( Rips, 1994 ; Hagert, 1984 ). Instead, the neuroimaging literature is consistent with the notion that visuospatial representations and rules of inference are both available to reasoners. The engagement of one or the other of these mechanisms is likely to depend upon intraindividual as well as interindividual factors ( Roberts & Newton, 2005 ). Our meta-analysis suggests that the type of deductive argument is a critical intraindividual factor. Importantly, evidence for such a claim is not unique to neuroimaging studies but can be found in the cognitive literature as well. For example, studies have found that relational reasoning performance is affected by taxing visuospatial working memory resources ( Vandierendonck & De Vooght, 1997 ). In contrast, propositional reasoning performance is correlated with measures of verbal, but not visuospatial, working memory ( Handley, Capon, Copp, & Harper, 2002 ). This suggests that relational and propositional reasoning rely on visuospatial and linguistic mechanisms, respectively. Furthermore, participants differ in the strategies they use to make deductions ( Roberts & Newton, 2005 ). For example, when evaluating categorical arguments, some participants report using spatial strategies whereas others indicate using verbal strategies ( Bacon, Handley, & Newstead, 2005 ; Ford, 1995 ).
In line with this observation, the engagement of the PPC in deductive tasks has been found to depend upon interindividual differences in visuospatial skills ( Fangmeier, Knauff, Ruff, & Sloutsky, 2006 ; Ruff, Knauff, Fangmeier, & Spreer, 2003 ). Overall, there is evidence in the cognitive literature that deductive behavior is not easily explained by a parsimonious mechanism involving either rule-based or visuospatial processes ( Roberts & Newton, 2005 ). The neuroimaging literature is in line with such an observation.

Relationship of the Present Work to Previous Accounts on the Neural Bases of Deductive Reasoning

The present study constitutes the first quantitative meta-analysis of neuroimaging studies of deductive reasoning. However, there have been some previous attempts to review and interpret the findings from this literature, sometimes with conflicting conclusions. For example, although both Goel (2007) and Monti et al. (2007) noted a high degree of variability in the location of the activated regions across studies, these authors differ substantially in their interpretation of these discrepancies. On the one hand, Goel concludes that there might be no unitary neural system for reasoning, but instead “a fractionated system that is dynamically configured in response to certain task and environmental cues” (p. 440). On the other hand, Monti et al. (2007 , 2009) defend the idea that, despite the large variability in neural activation across studies, there are two “core” regions of deduction: the left rostro-lateral pFC (RLPFC; BA 10) and the medial superior frontal gyrus (MeFG; BA 8). Our study reveals that (1) the brain network for reasoning depends upon the type of deductive task and (2) neither the left RLPFC nor the MeFG is consistently activated across studies. These results seem more consistent with Goel’s account than with Monti et al.’s account. It is interesting to note that Monti et al. demonstrate the engagement of the left RLPFC and MeFG in two relatively complex propositional tasks. In these tasks, participants had to evaluate arguments in which disjunctions or conjunctions were embedded in conditional sentences (e.g., If the block is either red or square then it is not large; The block is not large; therefore The block is not red ). Such arguments are likely to require the joint consideration of multiple logical relations (e.g., the conditional relation If (p or q) then r; not r; therefore not (p or q) and the disjunction not (p or q); therefore not p ). 
Studies have found that the left RLPFC is particularly activated when distinct mental representations have to be integrated and considered simultaneously ( Ramnani & Owen, 2004 ; Christoff et al., 2001 ). Rather than constituting a generic system for deductive reasoning, the left RLPFC might thus be specifically engaged when several logical relations have to be considered at the same time. Future studies might further investigate this possibility.

Our meta-analysis shows that deductive reasoning is subserved by several neural systems located in both cortical (frontal and parietal cortices) and subcortical (BG) structures. We demonstrate that these systems are highly sensitive to the type of deductive argument processed: bilateral PPC and right MFG for relational arguments, left IFG and BG for categorical arguments, and left PG for propositional arguments. This is inconsistent with the idea that deductive reasoning is a unitary cognitive mechanism that relies either on visuospatial or rule-based representations ( Johnson-Laird, 1999 ). Instead, our findings suggest that reasoners can make use of both kinds of representations depending on the type of argument they are presented with. Our meta-analysis indicates that the neural regions that underlie deductive reasoning are sensitive to the type of argument, but their engagement is likely to be modulated by other factors as well. As Goel (2007) pointed out, neuroimaging studies have also shown that the neural bases of deduction depend upon factors such as the semantic content of the premises, the absence/presence of conflicting information in the argument, or the degree of certainty of the conclusion. In addition to such intraindividual factors, brain imaging research also suggests that there are some interindividual differences in the degree to which visuospatial or rule-based representations are recruited. In summary, more than a decade of neuroimaging research suggests that it may be time to move beyond the question of whether deductive reasoning is a visuospatial or rule-based process because both representations are likely to be available to reasoners. Future behavioral and neuroimaging studies might instead focus on understanding how these representations are selected and how this selection is influenced by interindividual and intraindividual factors.

Acknowledgments

This research was supported by a grant from the National Institute of Child Health and Human Development (HD059177) to James R. Booth.

  • Acuna B, Eliassen J, Donoghue J, Sanes J. Frontal and parietal lobe activation during transitive inference in humans. Cerebral Cortex. 2002;12:1312–1321. doi: 10.1093/cercor/12.12.1312. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Bacon A, Handley SJ, Newstead SE. Verbal and spatial strategies in reasoning. In: Roberts MJ, Newton EJ, editors. Methods of thought: Individual differences in reasoning strategies. Hove, UK: Psychology Press; 2005. pp. 80–105. [ Google Scholar ]
  • Barrouillet P. Transitive inferences from set inclusion relations and working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1996;6:1408–1422. [ Google Scholar ]
  • Booth J, Coch D, Fischer K, Dawson G. Brain bases of learning and development of language and reading. Human behavior, learning and the developing brain. 2007:279–300. [ Google Scholar ]
  • Braine M, O’Brien D. Mental logic. Mahwah, NJ: Erlbaum; 1998. [ Google Scholar ]
  • Bunge SA, Kahn I, Wallis JD, Miller EK, Wagner AD. Neural circuits subserving the retrieval and maintenance of abstract rules. Journal of Neurophysiology. 2003;90:3419–3428. doi: 10.1152/jn.00910.2002. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Canessa N, Gorini A, Cappa SF, Piattelli-Palmarini M, Danna M, Fazio F, et al. The effect of social content on deductive reasoning: An fMRI study. Human Brain Mapping. 2005;26:30–43. doi: 10.1002/hbm.20114. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Christoff K, Prabhakaran V, Dorfman J, Zhao Z, Kroger JK, Holyoak KJ, et al. Rostrolateral prefrontal cortex involvement in relational integration during reasoning. Neuroimage. 2001;14:1136–1149. doi: 10.1006/nimg.2001.0922. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Colby CL, Goldberg ME. Space and attention in parietal cortex. Annual Review of Neuroscience. 1999;22:319–349. doi: 10.1146/annurev.neuro.22.1.319. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Culham JC, Kanwisher NG. Neuroimaging of cognitive functions in human parietal cortex. Current Opinion in Neurobiology. 2001;11:157–163. doi: 10.1016/s0959-4388(00)00191-4. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Embick D, Marantz A, Miyashita Y, O’Neil W, Sakai KL. A syntactic specialization for Broca’s area. Proceedings of the National Academy of Sciences, USA. 2000;97:6150–6154. doi: 10.1073/pnas.100098897. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Evans JSBT. Deductive reasoning. In: Holyoak KJ, Morrison RJ, editors. The Cambridge handbook of thinking and reasoning. New York: Cambridge University Press; 2005. pp. 169–184. [ Google Scholar ]
  • Fangmeier T, Knauff M. Neural correlates of acoustic reasoning. Brain Research. 2009;1249:181–190. doi: 10.1016/j.brainres.2008.10.025. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Fangmeier T, Knauff M, Ruff CC, Sloutsky V. FMRI evidence for a three-stage model of deductive reasoning. Journal of Cognitive Neuroscience. 2006;18:320–334. doi: 10.1162/089892906775990651. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Favrel J, Barrouillet P. On the relation between representations constructed from text comprehension and transitive inference production. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2000;26:187–203. doi: 10.1037//0278-7393.26.1.187. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Fiez JA, Petersen SE. Neuroimaging studies of word reading. Proceedings of the National Academy of Sciences, USA. 1998;95:914–921. doi: 10.1073/pnas.95.3.914. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Ford M. Two modes of mental representation and problem solution in syllogistic reasoning. Cognition. 1995;54:1–71. doi: 10.1016/0010-0277(94)00625-u. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Friederici AD. Towards a neural basis of auditory sentence processing. Trends in Cognitive Sciences. 2002;6:78–84. doi: 10.1016/s1364-6613(00)01839-8. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Friederici AD, Kotz SA. The brain basis of syntactic processes: Functional imaging and lesion studies. Neuroimage. 2003;20(Suppl. 1):S8–S17. doi: 10.1016/j.neuroimage.2003.09.003. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V. Anatomy of deductive reasoning. Trends in Cognitive Sciences. 2007;11:435–441. doi: 10.1016/j.tics.2007.09.003. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Buchel C, Frith C, Dolan RJ. Dissociation of mechanisms underlying syllogistic reasoning. Neuroimage. 2000;12:504–514. doi: 10.1006/nimg.2000.0636. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Dolan RJ. Functional neuroanatomy of three-term relational reasoning. Neuropsychologia. 2001;39:901–909. doi: 10.1016/s0028-3932(01)00024-0. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Dolan RJ. Explaining modulation of reasoning by belief. Cognition. 2003;87:B11–B22. doi: 10.1016/s0010-0277(02)00185-3. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Dolan RJ. Differential involvement of left prefrontal cortex in inductive and deductive reasoning. Cognition. 2004;93:B109–B121. doi: 10.1016/j.cognition.2004.03.001. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Gold B, Kapur S, Houle S. The seats of reason? An imaging study of deductive and inductive reasoning. NeuroReport. 1997;8:1305–1310. doi: 10.1097/00001756-199703240-00049. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Gold B, Kapur S, Houle S. Neuroanatomical correlates of human reasoning. Journal of Cognitive Neuroscience. 1998;10:293–302. doi: 10.1162/089892998562744. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Makale M, Grafman J. The hippocampal system mediates logical reasoning about familiar spatial environments. Journal of Cognitive Neuroscience. 2004;16:654–664. doi: 10.1162/089892904323057362. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Goel V, Stollstorff M, Nakic M, Knutson K, Grafman J. A role for right ventrolateral prefrontal cortex in reasoning about indeterminate relations. Neuropsychologia. 2009;47:2790–2797. doi: 10.1016/j.neuropsychologia.2009.06.002. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Goel V, Tierney M, Sheesley L, Bartolo A, Vartanian O, Grafman J. Hemispheric specialization in human prefrontal cortex for resolving certain and uncertain inferences. Cerebral Cortex. 2007;17:2245–2250. doi: 10.1093/cercor/bhl132. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Golding E. The effect of unilateral brain lesion on reasoning. Cortex. 1981;17:31–40. doi: 10.1016/s0010-9452(81)80004-4. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Grodzinsky Y, Santi A. The battle for Broca’s region. Trends in Cognitive Sciences. 2008;12:474–480. doi: 10.1016/j.tics.2008.09.001. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Hagert G. Modeling mental models: Experiments in cognitive modeling spatial reasoning. In: O’Shea T, editor. Advances in artificial intelligence. Amsterdam: North-Holland; 1984. pp. 389–398. [ Google Scholar ]
  • Halberda J. The development of a word-learning strategy. Cognition. 2003;87:B23–B34. doi: 10.1016/s0010-0277(02)00186-5. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Halberda J. Is this a dax which I see before me? Use of the logical argument disjunctive syllogism supports word-learning in children and adults. Cognitive Psychology. 2006;53:310–344. doi: 10.1016/j.cogpsych.2006.04.003. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Handley SJ, Capon A, Copp C, Harper C. Conditional reasoning and the Tower of Hanoi: The role of spatial and verbal working memory. British Journal of Psychology. 2002;93:501–518. doi: 10.1348/000712602761381376. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Heckers S, Zalesak M, Weiss A, Ditman T, Titone D. Hippocampal activation during transitive inference in humans. Hippocampus. 2004;14:153–162. doi: 10.1002/hipo.10189. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Henson R. What can functional neuroimaging tell the experimental psychologist? Quarterly Journal of Experimental Psychology A. 2005;58:193–233. doi: 10.1080/02724980443000502. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Houde O, Zago L, Mellet E, Moutier S, Pineau A, Mazoyer B, et al. Shifting from the perceptual brain to the logical brain: The neural impact of cognitive inhibition training. Journal of Cognitive Neuroscience. 2000;12:721–728. doi: 10.1162/089892900562525. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Husain M, Nachev P. Space and the parietal cortex. Trends in Cognitive Sciences. 2007;11:30–36. doi: 10.1016/j.tics.2006.10.011. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Johnson-Laird PN. Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press; 1983. [ Google Scholar ]
  • Johnson-Laird PN. Formal rules versus mental models in reasoning. In: Sternberg RJ, editor. The nature of cognition. Cambridge, MA: MIT Press; 1999. [ Google Scholar ]
  • Johnson-Laird PN. Mental models and deduction. Trends in Cognitive Sciences. 2001;5:432–442. doi: 10.1016/s1364-6613(00)01751-4. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Johnson-Laird PN, Byrne RM, Schaeken W. Propositional reasoning by model. Psychological Review. 1992;99:418–439. doi: 10.1037/0033-295x.99.3.418. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Knauff M, Fangmeier T, Ruff CC, Johnson-Laird PN. Reasoning, models, and images: Behavioral measures and cortical activity. Journal of Cognitive Neuroscience. 2003;15:559–573. doi: 10.1162/089892903321662949. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Knauff M, Mulack T, Kassubek J, Salih HR, Greenlee MW. Spatial imagery in deductive reasoning: A functional MRI study. Brain Research, Cognitive Brain Research. 2002;13:203–212. doi: 10.1016/s0926-6410(01)00116-1. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Kober H, Barrett LF, Joseph J, Bliss-Moreau E, Lindquist K, Wager TD. Functional grouping and cortical-subcortical interactions in emotion: A meta-analysis of neuroimaging studies. Neuroimage. 2008;42:998–1031. doi: 10.1016/j.neuroimage.2008.03.059. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kosslyn SM, Thompson WL. When is early visual cortex activated during visual mental imagery? Psychological Bulletin. 2003;129:723–746. doi: 10.1037/0033-2909.129.5.723. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Kroger JK, Nystrom LE, Cohen JD, Johnson-Laird PN. Distinct neural substrates for deductive and mathematical processing. Brain Research. 2008;1243:86–103. doi: 10.1016/j.brainres.2008.07.128. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Langdon D, Warrington EK. The role of the left hemisphere in verbal and spatial reasoning tasks. Cortex. 2000;36:691–702. doi: 10.1016/s0010-9452(08)70546-x. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Maess B, Koelsch S, Gunter TC, Friederici AD. Musical syntax is processed in Broca’s area: An MEG study. Nature Neuroscience. 2001;4:540–545. doi: 10.1038/87502. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Markovits H, Schleifer M, Fortier L. Development of elementary deductive reasoning in young children. Developmental Psychology. 1989;25:787–793. [ Google Scholar ]
  • Marshall JC, Fink GR. Spatial cognition: Where we were and where we are. Neuroimage. 2001;14:S2–S7. doi: 10.1006/nimg.2001.0834. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Monti MM, Osherson DN, Martinez MJ, Parsons LM. Functional neuroanatomy of deductive inference: A language-independent distributed network. Neuroimage. 2007;37:1005–1016. doi: 10.1016/j.neuroimage.2007.04.069. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Monti MM, Parsons LM, Osherson DN. The boundaries of language and thought in deductive inference. Proceedings of the National Academy of Sciences, USA. 2009;106:12554–12559. doi: 10.1073/pnas.0902422106. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Moro A, Tettamanti M, Perani D, Donati C, Cappa SF, Fazio F. Syntax and the brain: Disentangling grammar by selective anomalies. Neuroimage. 2001;13:110–118. doi: 10.1006/nimg.2000.0668. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Muri RM, Buhler R, Heinemann D, Mosimann UP, Felblinger J, Schlaepfer TE, et al. Hemispheric asymmetry in visuospatial attention assessed with transcranial magnetic stimulation. Experimental Brain Research. 2002;143:426–430. doi: 10.1007/s00221-002-1009-9. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Ni W, Constable RT, Mencl WE, Pugh KR, Fulbright RK, Shaywitz SE, et al. An event-related neuroimaging study distinguishing form and content in sentence processing. Journal of Cognitive Neuroscience. 2000;12:120–133. doi: 10.1162/08989290051137648. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Noveck IA, Goel V, Smith KW. The neural basis of conditional reasoning with arbitrary content. Cortex. 2004;40:613–622. doi: 10.1016/s0010-9452(08)70157-6. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Nunes T, Bryant P, Evans D, Bell D, Gardner S, Gardner A, et al. The contribution of logical reasoning to the learning of mathematics in primary school. British Journal of Developmental Psychology. 2007;25:147–166. [ Google Scholar ]
  • Osherson D, Perani D, Cappa S, Schnur T, Grassi F, Fazio F. Distinct brain loci in deductive versus probabilistic reasoning. Neuropsychologia. 1998;36:369–376. doi: 10.1016/s0028-3932(97)00099-7. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Parsons LM, Osherson DN. New evidence for distinct right and left brain systems for deductive versus probabilistic reasoning. Cerebral Cortex. 2001;11:954–965. doi: 10.1093/cercor/11.10.954. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Poldrack RA. Can cognitive processes be inferred from neuroimaging data? Trends in Cognitive Sciences. 2006;10:59–63. doi: 10.1016/j.tics.2005.12.004. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Potts G. Information processing strategies used in the encoding of linear orderings. Journal of Verbal Learning and Verbal Behavior. 1972;11:727–740. [ Google Scholar ]
  • Potts G. Storing and retrieving information about ordered relationships. Journal of Experimental Psychology: General. 1974;103:431–439. [ Google Scholar ]
  • Prado J, Noveck IA. Overcoming perceptual features in logical reasoning: A parametric functional magnetic resonance imaging study. Journal of Cognitive Neuroscience. 2007;19:642–657. doi: 10.1162/jocn.2007.19.4.642. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Prado J, Noveck IA, Van Der Henst J-B. Overlapping and distinct neural representations of numbers and verbal transitive series. Cerebral Cortex. 2010;20:720–729. doi: 10.1093/cercor/bhp137. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Prado J, Van der Henst J-B, Noveck IA. Spatial associations in relational reasoning: Evidence for a SNARC-like effect. Quarterly Journal of Experimental Psychology (Colchester) 2008;61:1143–1150. doi: 10.1080/17470210801954777. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Prado J, Van Der Henst J-B, Noveck IA. Recomposing a fragmented literature: How conditional and relational arguments engage different neural systems for deductive reasoning. Neuroimage. 2010;51:1213–1221. doi: 10.1016/j.neuroimage.2010.03.026. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Ramnani N, Owen AM. Anterior prefrontal cortex: Insights into function from anatomy and neuroimaging. Nature Reviews Neuroscience. 2004;5:184–194. doi: 10.1038/nrn1343. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Read DE. Solving deductive-reasoning problems after unilateral temporal lobectomy. Brain and Language. 1981;12:116–127. doi: 10.1016/0093-934x(81)90008-0. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Reverberi C, Cherubini P, Frackowiak RSJ, Caltagirone C, Paulesu E, Macaluso E. Conditional and syllogistic deductive tasks dissociate functionally during premise integration. Human Brain Mapping. 2010 doi: 10.1002/hbm.20947. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Reverberi C, Cherubini P, Rapisarda A, Rigamonti E, Caltagirone C, Frackowiak RSJ, et al. Neural basis of generation of conclusions in elementary deduction. Neuroimage. 2007;38:752–762. doi: 10.1016/j.neuroimage.2007.07.060. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Rips L. The psychology of proof. Cambridge, MA: MIT Press; 1994. [ Google Scholar ]
  • Roberts MJ, Newton EJ. Methods of thought: Individual differences in reasoning strategies. Hove, UK: Psychology Press; 2005. [ Google Scholar ]
  • Rodriguez-Moreno D, Hirsch J. The dynamics of deductive reasoning: An fMRI investigation. Neuropsychologia. 2009;47:949–961. doi: 10.1016/j.neuropsychologia.2008.08.030. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Roser ME, Gazzaniga MS. The interpreter in human psychology. In: Preuss TM, Kaas JH, editors. The evolution of primate nervous systems. Oxford, UK: Academic Press; 2006. pp. 503–508. [ Google Scholar ]
  • Ruff CC, Knauff M, Fangmeier T, Spreer J. Reasoning and working memory: Common and distinct neuronal processes. Neuropsychologia. 2003;41:1241–1253. doi: 10.1016/s0028-3932(03)00016-2. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Sack AT. Parietal cortex and spatial cognition. Behavioural Brain Research. 2009;202:153–161. doi: 10.1016/j.bbr.2009.03.012. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Sack AT, Camprodon JA, Pascual-Leone A, Goebel R. The dynamics of interhemispheric compensatory processes in mental imagery. Science. 2005;308:702–704. doi: 10.1126/science.1107784. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Sack AT, Hubl D, Prvulovic D, Formisano E, Jandl M, Zanella FE, et al. The experimental combination of rTMS and fMRI reveals the functional relevance of parietal cortex for visuospatial functions. Brain Research, Cognitive Brain Research. 2002;13:85–93. doi: 10.1016/s0926-6410(01)00087-8. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Sack AT, Sperling JM, Prvulovic D, Formisano E, Goebel R, Di Salle F, et al. Tracking the mind’s image in the brain: II. Transcranial magnetic stimulation reveals parietal asymmetry in visuospatial imagery. Neuron. 2002;35:195–204. doi: 10.1016/s0896-6273(02)00745-6. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Stanovich K, West R. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences. 2000;23:645–665. doi: 10.1017/s0140525x00003435. discussion 665-726. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Tillmann B, Janata P, Bharucha JJ. Activation of the inferior frontal cortex in musical priming. Brain Research, Cognitive Brain Research. 2003;16:145–161. doi: 10.1016/s0926-6410(02)00245-8. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Ullman MT. Is Broca’s area part of a basal ganglia thalamocortical circuit? Cortex. 2006;42:480–485. doi: 10.1016/s0010-9452(08)70382-4. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Vandierendonck A, De Vooght G. Working memory constraints on linear reasoning with spatial and temporal contents. Quarterly Journal of Experimental Psychology A. 1997;50:803–820. doi: 10.1080/713755735. [ DOI ] [ PubMed ] [ Google Scholar ]
  • Wager TD, Lindquist MA, Nichols TE, Kober H, Van Snellenberg JX. Evaluating the consistency and specificity of neuroimaging data using meta-analysis. Neuroimage. 2009;45(1 Suppl):S210–S221. doi: 10.1016/j.neuroimage.2008.10.061. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Wang J, Conder JA, Blitzer DN, Shinkareva SV. Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies. Human Brain Mapping. 2010;31:1459–1468. doi: 10.1002/hbm.20950. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Wendelken C, Bunge SA. Transitive inference: Distinct contributions of rostrolateral prefrontal cortex and the hippocampus. Journal of Cognitive Neuroscience. 2010;22:837–847. doi: 10.1162/jocn.2009.21226. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]


FURTHER READING

  1. What Is Deductive Reasoning?

    Deductive reasoning is commonly used in scientific research, and it's especially associated with quantitative research. In research, you might have come across something called the hypothetico-deductive method.

  2. Inductive vs. Deductive Research Approach

    Deductive research approach. When conducting deductive research, you always start with a theory. This is usually the result of inductive research. Reasoning deductively means testing these theories. Remember that if there is no theory yet, you cannot conduct deductive research. The deductive research approach consists of four stages:

  3. Deductive Approach (Deductive Reasoning ...

    Generally, studies using a deductive approach follow these stages: deducing a hypothesis from theory; formulating the hypothesis in operational terms and proposing a relationship between two specific variables; and testing the hypothesis with the application of relevant method(s). These are quantitative methods such as regression and correlation analysis, mean, mode, median, and others.
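    The stages in this excerpt can be sketched in code. The theory, variables, and data below are invented purely for illustration (a hypothetical claim that study hours drive exam scores); a real study would use collected observations and a statistical significance test rather than a simple threshold on the correlation coefficient:

    ```python
    # A minimal sketch of the deductive stages, with hypothetical data.
    import random
    from statistics import mean

    random.seed(42)

    # Stage 1: deduce a hypothesis from theory.
    # Theory (hypothetical): more study hours lead to higher exam scores.
    # Hypothesis: study_hours and exam_score are positively correlated.

    # Stage 2: operationalise the two variables with (simulated) observations.
    study_hours = [random.uniform(0, 10) for _ in range(50)]
    exam_scores = [5 * h + random.gauss(0, 5) for h in study_hours]

    # Stage 3: test the hypothesis with a quantitative method
    # (Pearson correlation, computed from its definition).
    def pearson_r(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    r = pearson_r(study_hours, exam_scores)

    # Stage 4: confirm or reject. A strong positive r supports the
    # hypothesis; a real study would also assess statistical significance.
    print(f"Pearson r = {r:.2f}")
    print("hypothesis supported" if r > 0.5 else "hypothesis not supported")
    ```

    Note how the direction of reasoning is fixed before any data are examined: the hypothesis comes from the theory, and the observations serve only to confirm or reject it.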

  4. Deductive Reasoning

    Deductive reasoning is a logical process in which a conclusion is drawn from a set of premises or propositions that are assumed or known to be true. The process of deductive reasoning starts with a general statement or premise, and then moves towards a specific conclusion that logically follows from the initial statement.

  5. Inductive Vs Deductive Research

    Deductive: Effective for testing existing theories or hypotheses. Conclusion. Both inductive and deductive research approaches are crucial in the development and testing of theories. The choice between them depends on the research goal: inductive for exploring and generating new theories, and deductive for testing existing ones.

  6. What is Deductive Research? Meaning, Stages & Examples

    Deductive research is a scientific approach that is used to test a theory or hypothesis through observations and empirical evidence. This research method is commonly used in disciplines such as physics, biology, psychology, and sociology, among others. ... Deductive research uses deductive reasoning. It starts with a theory, makes predictions ...

  7. Inductive vs Deductive Reasoning

    Deductive research approach. When conducting deductive research, you always start with a theory (the result of inductive research). Reasoning deductively means testing these theories. If there is no theory yet, you cannot conduct deductive research. The deductive research approach consists of four stages:

  8. Deductive / Quantitative research approach

    The deductive or quantitative research approach is concerned with situations in which data can be analyzed in terms of numbers. The researcher primarily uses post-positivist claims for developing knowledge (i.e., cause-and-effect thinking, ...

  9. Inductive and Deductive Reasoning

    Deductive reasoning is the opposite of inductive reasoning. As the word 'deduce' itself suggests, it derives a specific idea from a general hypothesis. It makes you dependent on existing theory and does not allow you to reason purely from your own perceptions, because you start from an already established theory and study it in detail.

  10. How do you use deductive reasoning in research?

    Deductive reasoning is commonly used in scientific research, and it's especially associated with quantitative research. In research, you might have come across something called the hypothetico-deductive method. It's the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

  11. Conducting and Writing Quantitative and Qualitative Research

    Quantitative research as a deductive hypothesis-testing design. ... uses variable-based models from individual cases and findings are stated in quantified sentences derived by deductive reasoning [24]. In quantitative research, a phenomenon is investigated in terms of the relationship between an independent variable and a dependent variable which ...

  12. 1-2 The Nature of Evidence: Inductive and Deductive Reasoning

    While quantitative research is generally thought to be deductive, quantitative researchers often do a bit of inductive reasoning to find meaning in data that hold surprises. Conversely, qualitative data are stereotypically analyzed inductively, making meaning from the data rather than proving a hypothesis.

  13. Inductive vs Deductive Research: Difference of Approaches

    The terms "inductive" and "deductive" are often used in logic, reasoning, and science. Scientists use both inductive and deductive research methods as part of the scientific method. ... Relies more on quantitative analysis: Deductive research uses more quantitative methods, like statistical analysis, to test and confirm the theory or ...

  14. Inductive and Deductive Reasoning

    Based on the methods of data collection and data analysis, research approach methods are of three types: quantitative, qualitative, and mixed methods. However, considering the general plan and procedure for conducting a study, the research approach is divided into three categories: ... Combination of Inductive and Deductive Reasoning in Research.

  15. 1.7 Deductive Approaches to Research

    Researchers taking a deductive approach take the steps described for inductive research and reverse their order. They start with a social theory that they find compelling and then test its implications with data; i.e., they move from a more general level to a more specific one. A deductive approach to research is the one that people typically ...

  16. PDF Compare and Contrast Inductive and Deductive Research Approaches By L

    ... perspectives. The researcher often focuses on a single phenomenon to gather as much information as possible about that particular phenomenon (Creswell & Plano Clark, 2007). How data are collected: in quantitative research, data can be collected from many participants at many research sites.

  17. The potential of working hypotheses for deductive exploratory research

    Exploratory research is generally considered to be inductive and qualitative (Stebbins 2001). Exploratory qualitative studies adopting an inductive approach do not lend themselves to a priori theorizing and building upon prior bodies of knowledge (Reiter 2013; Bryman 2004, as cited in Pearse 2019). Juxtaposed against quantitative studies that employ deductive confirmatory approaches, exploratory ...

  18. How different are qualitative and quantitative research?

    Of course there are some differences between quantitative and qualitative research. First, they are based on different kinds of reasoning. Quantitative research is based on deductive reasoning. The researcher formulates a hypothesis and then conducts experiments to test that hypothesis and so reach (or deduce) a conclusion.

  19. The Brain Network for Deductive Reasoning: A Quantitative Meta-analysis

    The Brain Network for Deductive Reasoning: A Quantitative Meta-analysis of 28 Neuroimaging Studies. Jérôme Prado, Angad ... more than a decade of neuroimaging research suggests that it may be time to move beyond the question of whether deductive reasoning is a visuospatial or rule-based process because both representations are likely to be ...