Grad Coach

Quantitative Data Analysis 101

The lingo, methods and techniques, explained simply.

By: Derek Jansen (MBA) and Kerryn Warren (PhD) | December 2020

Quantitative data analysis is one of those things that often strikes fear in students. It’s totally understandable – quantitative analysis is a complex topic, full of daunting lingo, like medians, modes, correlation and regression. Suddenly we’re all wishing we’d paid a little more attention in math class…

The good news is that while quantitative data analysis is a mammoth topic, gaining a working understanding of the basics isn’t that hard, even for those of us who avoid numbers and math. In this post, we’ll break quantitative analysis down into simple, bite-sized chunks so you can approach your research with confidence.


Overview: Quantitative Data Analysis 101

  • What (exactly) is quantitative data analysis?
  • When to use quantitative analysis
  • How quantitative analysis works

The two “branches” of quantitative analysis

  • Descriptive statistics 101
  • Inferential statistics 101
  • How to choose the right quantitative methods
  • Recap & summary

What is quantitative data analysis?

Despite being a mouthful, quantitative data analysis simply means analysing data that is numbers-based – or data that can be easily “converted” into numbers without losing any meaning.

For example, category-based variables like gender, ethnicity, or native language could all be “converted” into numbers without losing meaning – for example, English could equal 1, French 2, etc.
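To make this idea concrete, here’s a minimal Python sketch of that kind of “conversion” (the categories and numeric codes are illustrative, not from any real dataset):

```python
# Minimal sketch: "converting" a category-based (nominal) variable to numbers.
# The codes are arbitrary labels - the numbers carry no order or magnitude.
languages = ["English", "French", "English", "Spanish", "French"]

codes = {"English": 1, "French": 2, "Spanish": 3}
numeric = [codes[lang] for lang in languages]

print(numeric)  # [1, 2, 1, 3, 2]
```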

This contrasts against qualitative data analysis, where the focus is on words, phrases and expressions that can’t be reduced to numbers. If you’re interested in learning about qualitative analysis, check out our post and video here.

What is quantitative analysis used for?

Quantitative analysis is generally used for three purposes.

  • Firstly, it’s used to measure differences between groups. For example, the popularity of different clothing colours or brands.
  • Secondly, it’s used to assess relationships between variables. For example, the relationship between outdoor temperature and voter turnout.
  • And thirdly, it’s used to test hypotheses in a scientifically rigorous way. For example, a hypothesis about the impact of a certain vaccine.

Again, this contrasts with qualitative analysis, which can be used to analyse people’s perceptions and feelings about an event or situation. In other words, things that can’t be reduced to numbers.

How does quantitative analysis work?

Well, since quantitative data analysis is all about analysing numbers, it’s no surprise that it involves statistics. Statistical analysis methods form the engine that powers quantitative analysis, and these methods can vary from pretty basic calculations (for example, averages and medians) to more sophisticated analyses (for example, correlations and regressions).

Sounds like gibberish? Don’t worry – we’ll explain all of it as we go. Importantly, you don’t need to be a statistician or math whiz to pull off a good quantitative analysis; we’ll break down all the technical mumbo jumbo into plain language.


As I mentioned, quantitative analysis is powered by statistical analysis methods. There are two main “branches” of statistical methods that are used – descriptive statistics and inferential statistics. In your research, you might only use descriptive statistics, or you might use a mix of both, depending on what you’re trying to figure out. In other words, depending on your research questions, aims and objectives. I’ll explain how to choose your methods later.

So, what are descriptive and inferential statistics?

Well, before I can explain that, we need to take a quick detour to explain some lingo. To understand the difference between these two branches of statistics, you need to understand two important words. These words are population and sample.

First up, population. In statistics, the population is the entire group of people (or animals or organisations or whatever) that you’re interested in researching. For example, if you were interested in researching Tesla owners in the US, then the population would be all Tesla owners in the US.

However, it’s extremely unlikely that you’re going to be able to interview or survey every single Tesla owner in the US. Realistically, you’ll likely only get access to a few hundred, or maybe a few thousand, owners using an online survey. This smaller group of accessible people whose data you actually collect is called your sample.

So, to recap – the population is the entire group of people you’re interested in, and the sample is the subset of the population that you can actually get access to. In other words, the population is the full chocolate cake, whereas the sample is a slice of that cake.

So, why is this sample-population thing important?

Well, descriptive statistics focus on describing the sample, while inferential statistics aim to make predictions about the population, based on the findings within the sample. In other words, we use one group of statistical methods – descriptive statistics – to investigate the slice of cake, and another group of methods – inferential statistics – to draw conclusions about the entire cake. There I go with the cake analogy again…

With that out of the way, let’s take a closer look at each of these branches in more detail.

Descriptive statistics vs inferential statistics

Branch 1: Descriptive Statistics

Descriptive statistics serve a simple but critically important role in your research – to describe your data set – hence the name. In other words, they help you understand the details of your sample. Unlike inferential statistics (which we’ll get to soon), descriptive statistics don’t aim to make inferences or predictions about the entire population – they’re purely interested in the details of your specific sample.

When you’re writing up your analysis, descriptive statistics are the first set of stats you’ll cover, before moving on to inferential statistics. But, that said, depending on your research objectives and research questions, they may be the only type of statistics you use. We’ll explore that a little later.

So, what kind of statistics are usually covered in this section?

Some common statistical measures used in this branch include the following:

  • Mean – this is simply the mathematical average of a range of numbers.
  • Median – this is the midpoint in a range of numbers when the numbers are arranged in numerical order. If the data set contains an odd number of values, the median is the value right in the middle of the set. If it contains an even number of values, the median is the midpoint between the two middle values.
  • Mode – this is simply the most commonly occurring number in the data set.
  • Standard deviation – this metric indicates how dispersed a range of numbers is. In cases where most of the numbers are quite close to the average, the standard deviation will be relatively low. Conversely, in cases where the numbers are scattered all over the place, the standard deviation will be relatively high.
  • Skewness – as the name suggests, skewness indicates how symmetrical a range of numbers is. In other words, do they tend to cluster into a smooth bell curve shape in the middle of the graph, or do they skew to the left or right?

Feeling a bit confused? Let’s look at a practical example using a small data set.

Descriptive statistics example data

On the left-hand side is the data set. This details the bodyweight of a sample of 10 people. On the right-hand side, we have the descriptive statistics. Let’s take a look at each of them.

First, we can see that the mean weight is 72.4 kilograms. In other words, the average weight across the sample is 72.4 kilograms. Straightforward.

Next, we can see that the median is very similar to the mean (the average). This suggests that this data set has a reasonably symmetrical distribution (in other words, a relatively smooth, centred distribution of weights, clustered towards the centre).

In terms of the mode, there is no mode in this data set. This is because each number is present only once and so there cannot be a “most common number”. If there were two people who were both 65 kilograms, for example, then the mode would be 65.

Next up is the standard deviation. A value of 10.6 indicates that there’s quite a wide spread of numbers. We can see this quite easily by looking at the numbers themselves, which range from 55 to 90 – quite a stretch from the mean of 72.4.

And lastly, the skewness of -0.2 tells us that the data is very slightly negatively skewed. This makes sense since the mean and the median are slightly different.

As you can see, these descriptive statistics give us some useful insight into the data set. Of course, this is a very small data set (only 10 records), so we can’t read into these statistics too much. Also, keep in mind that this is not a list of all possible descriptive statistics – just the most common ones.
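If you’d like to get hands-on with this, here’s a small Python sketch that computes the same kinds of descriptive statistics. Since the original data table isn’t reproduced above, the ten weights below are assumed values chosen to match the example’s mean (72.4 kg) and range (55–90 kg) – the standard deviation and skewness will therefore differ slightly from the figures quoted.

```python
import statistics

# Assumed sample of 10 bodyweights (kg) - illustrative values, not the
# original data (chosen to match its mean of 72.4 and its 55-90 range).
weights = [55, 60, 63, 65, 71, 75, 78, 82, 85, 90]

mean = statistics.mean(weights)        # 72.4
median = statistics.median(weights)    # midpoint of the sorted values
modes = statistics.multimode(weights)  # every value occurs once -> no single mode
stdev = statistics.stdev(weights)      # sample standard deviation

# Fisher-Pearson skewness: ~0 = symmetrical, negative = left (negative) skew
n = len(weights)
m2 = sum((x - mean) ** 2 for x in weights) / n
m3 = sum((x - mean) ** 3 for x in weights) / n
skewness = m3 / m2 ** 1.5

print(f"mean={mean}, median={median}, modes={modes}")
print(f"stdev={stdev:.1f}, skewness={skewness:.2f}")
```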

But why do all of these numbers matter?

While these descriptive statistics are all fairly basic, they’re important for a few reasons:

  • Firstly, they help you get both a macro and micro-level view of your data. In other words, they help you understand both the big picture and the finer details.
  • Secondly, they help you spot potential errors in the data – for example, if an average is way higher than you’d expect, or responses to a question are highly varied, this can act as a warning sign that you need to double-check the data.
  • And lastly, these descriptive statistics help inform which inferential statistical techniques you can use, as those techniques depend on the skewness (in other words, the symmetry and normality) of the data.

Simply put, descriptive statistics are really important, even though the statistical techniques used are fairly basic. All too often at Grad Coach, we see students skimming over the descriptives in their eagerness to get to the more exciting inferential methods, and then ending up with some very flawed results.

Don’t be a sucker – give your descriptive statistics the love and attention they deserve!


Branch 2: Inferential Statistics

As I mentioned, while descriptive statistics are all about the details of your specific data set – your sample – inferential statistics aim to make inferences about the population. In other words, you’ll use inferential statistics to make predictions about what you’d expect to find in the full population.

What kind of predictions, you ask? Well, there are two common types of predictions that researchers try to make using inferential stats:

  • Firstly, predictions about differences between groups – for example, height differences between children grouped by their favourite meal or gender.
  • And secondly, relationships between variables – for example, the relationship between body weight and the number of hours a week a person does yoga.

In other words, inferential statistics (when done correctly) allow you to connect the dots and make predictions about what you expect to see in the real-world population, based on what you observe in your sample data. For this reason, inferential statistics are used for hypothesis testing – in other words, to test hypotheses that predict changes or differences.

Inferential statistics are used to make predictions about what you’d expect to find in the full population, based on the sample.

Of course, when you’re working with inferential statistics, the composition of your sample is really important. In other words, if your sample doesn’t accurately represent the population you’re researching, then your findings won’t necessarily be very useful.

For example, if your population of interest is a mix of 50% male and 50% female, but your sample is 80% male, you can’t make inferences about the population based on your sample, since it’s not representative. This area of statistics is called sampling, but we won’t go down that rabbit hole here (it’s a deep one!) – we’ll save that for another post.

What statistics are usually used in this branch?

There are many, many different statistical analysis methods within the inferential branch and it’d be impossible for us to discuss them all here. So we’ll just take a look at some of the most common inferential statistical methods so that you have a solid starting point.

First up are t-tests. T-tests compare the means (the averages) of two groups of data to assess whether they’re statistically significantly different. In other words, is the difference between the two group means large enough that it’s unlikely to be down to random chance?

This type of testing is very useful for understanding just how similar or different two groups of data are. For example, you might want to compare the mean blood pressure between two groups of people – one that has taken a new medication and one that hasn’t – to assess whether they are significantly different.
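As an illustration, here’s how that comparison might look in Python using SciPy’s independent-samples t-test. The blood pressure readings are made-up numbers, not real trial data:

```python
from scipy import stats

# Hypothetical systolic blood pressure readings (mmHg) for two groups.
medication = [118, 122, 115, 120, 117, 119, 121, 116]
control    = [128, 131, 125, 129, 133, 127, 130, 126]

# Independent-samples t-test: is the difference between the two group
# means statistically significant?
t_stat, p_value = stats.ttest_ind(medication, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (commonly < 0.05) suggests the difference is unlikely
# to be due to sampling chance alone.
```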

Kicking things up a level, we have ANOVA, which stands for “analysis of variance”. This test is similar to a t-test in that it compares the means of various groups, but ANOVA allows you to analyse multiple groups, not just two. So it’s basically a t-test on steroids…
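Here’s a sketch of the same idea with three groups, using SciPy’s one-way ANOVA (again, the scores are made-up numbers):

```python
from scipy import stats

# Hypothetical test scores for three groups.
group_a = [72, 75, 78, 71, 74]
group_b = [80, 83, 79, 85, 82]
group_c = [68, 70, 65, 69, 71]

# One-way ANOVA: do the means of the three groups differ significantly?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```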

Next, we have correlation analysis. This type of analysis assesses the relationship between two variables. In other words, if one variable increases, does the other variable also increase, decrease or stay the same? For example, if the average temperature goes up, do average ice cream sales increase too? Intuitively, we’d expect some sort of relationship between these two variables, but correlation analysis allows us to measure that relationship scientifically.

Lastly, we have regression analysis – this is quite similar to correlation in that it assesses the relationship between variables, but it goes a step further by modelling how one or more variables predict or explain another variable, not just whether they move together. That said, regression by itself can’t prove cause and effect. Does the one variable actually cause the other one to move, or do they just happen to move together naturally thanks to another force? Just because two variables correlate doesn’t necessarily mean that one causes the other – answering that question requires careful study design, not just a statistical technique.
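Here’s a minimal sketch of a simple linear regression using SciPy. The temperature and sales figures are invented purely for illustration:

```python
from scipy import stats

# Hypothetical data: average daily temperature (C) and ice cream sales.
temps = [14, 16, 19, 22, 25, 28, 30, 33]
sales = [120, 135, 160, 180, 210, 240, 255, 290]

# Simple linear regression: model sales as a linear function of temperature.
result = stats.linregress(temps, sales)
print(f"sales ~= {result.slope:.1f} * temp + {result.intercept:.1f}")
print(f"r = {result.rvalue:.3f}, p = {result.pvalue:.4f}")
# A strong fit shows association; it does not by itself prove that
# temperature causes sales to change.
```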

Stats overload…

I hear you. To make this all a little more tangible, let’s take a look at an example of a correlation in action.

Here’s a scatter plot demonstrating the correlation (relationship) between weight and height. Intuitively, we’d expect there to be some relationship between these two variables, which is what we see in this scatter plot. In other words, the results tend to cluster together in a diagonal line from bottom left to top right.

Sample correlation
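If you’d like to produce something like this scatter plot yourself, here’s a minimal sketch using SciPy and Matplotlib – the height and weight pairs are made up:

```python
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical height (cm) and weight (kg) pairs.
height = [152, 158, 163, 168, 172, 177, 181, 186]
weight = [51, 56, 61, 64, 69, 74, 78, 84]

# Pearson correlation coefficient: +1 = perfect positive relationship.
r, p = stats.pearsonr(height, weight)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")

plt.scatter(height, weight)
plt.xlabel("Height (cm)")
plt.ylabel("Weight (kg)")
plt.title(f"Height vs weight (r = {r:.2f})")
plt.show()
```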

As I mentioned, these are just a handful of inferential techniques – there are many, many more. Importantly, each statistical method has its own assumptions and limitations.

For example, some methods only work well with normally distributed data (these are known as parametric methods), while others – non-parametric methods – don’t rely on that assumption. And that’s exactly why descriptive statistics are so important – they’re the first step to knowing which inferential techniques you can and can’t use.
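As a sketch of what checking that assumption can look like in practice, here’s one common approach using SciPy’s Shapiro–Wilk normality test (the sample values are invented):

```python
from scipy import stats

# Hypothetical measurements.
sample = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.4, 5.6, 4.8, 5.2]

# Shapiro-Wilk test: the null hypothesis is that the data come from
# a normal distribution.
stat, p = stats.shapiro(sample)

if p > 0.05:
    print("No evidence against normality -> parametric tests (e.g. a t-test)")
else:
    print("Data look non-normal -> consider a non-parametric alternative "
          "(e.g. Mann-Whitney U instead of a t-test)")
```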

Remember that every statistical method has its own assumptions and limitations, so you need to be aware of these.

How to choose the right analysis method

To choose the right statistical methods, you need to think about two important factors:

  • The type of quantitative data you have (specifically, the level of measurement and the shape of the data), and
  • Your research questions and hypotheses.

Let’s take a closer look at each of these.

Factor 1 – Data type

The first thing you need to consider is the type of data you’ve collected (or the type of data you will collect). By data types, I’m referring to the four levels of measurement – namely, nominal, ordinal, interval and ratio. If you’re not familiar with this lingo, check out the video below.

Why does this matter?

Well, because different statistical methods and techniques require different types of data. This is one of the “assumptions” I mentioned earlier – every method has its assumptions regarding the type of data.

For example, some techniques work with categorical data (for example, yes/no type questions, or gender or ethnicity), while others work with continuous numerical data (for example, age, weight or income) – and, of course, some work with multiple data types.

If you try to use a statistical method that doesn’t support the data type you have, your results will be largely meaningless. So, make sure that you have a clear understanding of what types of data you’ve collected (or will collect). Once you have this, you can then check which statistical methods would support your data types.

If you haven’t collected your data yet, you can work in reverse and look at which statistical method would give you the most useful insights, and then design your data collection strategy to collect the correct data types.

Another important factor to consider is the shape of your data. Specifically, does it have a normal distribution (in other words, is it a bell-shaped curve, centred in the middle) or is it very skewed to the left or the right? Again, different statistical techniques work for different shapes of data – some are designed for symmetrical data while others are designed for skewed data.

This is another reminder of why descriptive statistics are so important – they tell you all about the shape of your data.

Factor 2: Your research questions

The next thing you need to consider is your specific research questions, as well as your hypotheses (if you have some). The nature of your research questions and research hypotheses will heavily influence which statistical methods and techniques you should use.

If you’re just interested in understanding the attributes of your sample (as opposed to the entire population), then descriptive statistics are probably all you need. For example, if you just want to assess the means (averages) and medians (centre points) of variables in a group of people.

On the other hand, if you aim to understand differences between groups or relationships between variables and to infer or predict outcomes in the population, then you’ll likely need both descriptive statistics and inferential statistics.

So, it’s really important to get very clear about your research aims and research questions, as well as your hypotheses, before you start looking at which statistical techniques to use.

Never shoehorn a specific statistical technique into your research just because you like it or have some experience with it. Your choice of methods must align with all the factors we’ve covered here.

Time to recap…

You’re still with me? That’s impressive. We’ve covered a lot of ground here, so let’s recap the key points:

  • Quantitative data analysis is all about analysing number-based data (which includes categorical and numerical data) using various statistical techniques.
  • The two main branches of statistics are descriptive statistics and inferential statistics. Descriptives describe your sample, whereas inferentials make predictions about what you’ll find in the population.
  • Common descriptive statistical methods include the mean (average), median, standard deviation and skewness.
  • Common inferential statistical methods include t-tests, ANOVA, correlation and regression analysis.
  • To choose the right statistical methods and techniques, you need to consider the type of data you’re working with, as well as your research questions and hypotheses.


Psst… there’s more (for free)

This post is part of our dissertation mini-course, which covers everything you need to get started with your dissertation, thesis or research project. 



Quantitative Data Analysis: A Comprehensive Guide

By: Ofem Eteng | Published: May 18, 2022


A healthcare giant successfully introduces the most effective drug dosage through rigorous statistical modeling, saving countless lives. A marketing team predicts consumer trends with uncanny accuracy, tailoring campaigns for maximum impact.


These trends and dosages are not just any numbers but are a result of meticulous quantitative data analysis. Quantitative data analysis offers a robust framework for understanding complex phenomena, evaluating hypotheses, and predicting future outcomes.

In this blog, we’ll walk through the concept of quantitative data analysis, the steps required, its advantages, and the methods and techniques that are used in this analysis. Read on!

What is Quantitative Data Analysis?

Quantitative data analysis is a systematic process of examining, interpreting, and drawing meaningful conclusions from numerical data. It involves the application of statistical methods, mathematical models, and computational techniques to understand patterns, relationships, and trends within datasets.

Quantitative data analysis methods typically work with algorithms, mathematical analysis tools, and software to gain insights from the data, answering questions such as how many, how often, and how much. Data for quantitative data analysis is usually collected from closed-ended surveys, questionnaires, polls, etc. The data can also be obtained from sales figures, email click-through rates, number of website visitors, and percentage revenue increase.

Quantitative Data Analysis vs Qualitative Data Analysis

When we talk about data, we immediately think about patterns, relationships, and connections between datasets – in short, analyzing the data. When it comes to data analysis, therefore, there are broadly two types – Quantitative Data Analysis and Qualitative Data Analysis.

Quantitative data analysis revolves around numerical data and statistics, which are suitable for functions that can be counted or measured. In contrast, qualitative data analysis includes description and subjective information – for things that can be observed but not measured.

Let us differentiate between Quantitative Data Analysis and Qualitative Data Analysis for a better understanding.

Data Preparation Steps for Quantitative Data Analysis

Quantitative data has to be gathered and cleaned before proceeding to the analysis stage. Below are the steps to prepare data for quantitative analysis:

  • Step 1: Data Collection

Before beginning the analysis process, you need data. Data can be collected through rigorous quantitative research, which includes methods such as structured interviews, surveys, questionnaires, and polls.

  • Step 2: Data Cleaning

Once the data is collected, begin the data cleaning process by scanning through the entire dataset for duplicates, errors, and omissions. Keep a close eye out for outliers (data points that are significantly different from the majority of the dataset) because they can skew your analysis results if they are not addressed.

This data-cleaning process ensures data accuracy, consistency and relevancy before analysis.
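As an illustration of this step, here’s a minimal pandas sketch – the column names, values, and the age-cap rule are all assumptions made for the example:

```python
import pandas as pd

# Illustrative raw survey data - not from any real study.
df = pd.DataFrame({
    "respondent": [1, 2, 2, 3, 4, 5],
    "age": [29, 35, 35, None, 41, 230],  # a duplicate, a gap, an outlier
})

df = df.drop_duplicates()        # remove duplicate rows
df = df.dropna(subset=["age"])   # drop rows with missing age

# Treat implausible values as errors (here: any age above 120).
df = df[df["age"] <= 120]

print(df)
```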

  • Step 3: Data Analysis and Interpretation

Now that you have collected and cleaned your data, it is time to carry out the quantitative analysis. There are two methods of quantitative data analysis, which we will discuss in the next section.

However, if you have data from multiple sources, collecting and cleaning it can be a cumbersome task. This is where Hevo Data steps in. With Hevo, extracting, transforming, and loading data from source to destination becomes a seamless task, eliminating the need for manual coding. This not only saves valuable time but also enhances the overall efficiency of data analysis and visualization, empowering users to derive insights quickly and with precision.

Hevo is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources and load data to the destinations but also transform and enrich your data to make it analysis-ready.

Start for free now!

Now that you are familiar with what quantitative data analysis is and how to prepare your data for analysis, the focus will shift to the purpose of this article, which is to describe the methods and techniques of quantitative data analysis.

Methods and Techniques of Quantitative Data Analysis

Broadly, quantitative data analysis employs two techniques to extract meaningful insights from datasets. The first method is descriptive statistics, which summarizes and portrays essential features of a dataset, such as the mean, median, and standard deviation.

Inferential statistics, the second method, extrapolates insights and predictions from a sample dataset to make broader inferences about an entire population, such as hypothesis testing and regression analysis.

An in-depth explanation of both methods is provided below:

  • Descriptive Statistics
  • Inferential Statistics

1) Descriptive Statistics

Descriptive statistics, as the name implies, are used to describe a dataset. They help you understand the details of your data by summarizing it and finding patterns in the specific data sample. They provide absolute numbers obtained from a sample but do not necessarily explain the rationale behind those numbers, and they are mostly used for analyzing single variables. The methods used in descriptive statistics include:

  • Mean: This calculates the numerical average of a set of values.
  • Median: This is used to get the midpoint of a set of values when the numbers are arranged in numerical order.
  • Mode: This is used to find the most commonly occurring value in a dataset.
  • Percentage: This is used to express how a value or group of respondents within the data relates to a larger group of respondents.
  • Frequency: This indicates the number of times a value is found.
  • Range: This shows the highest and lowest values in a dataset.
  • Standard Deviation: This is used to indicate how dispersed a range of numbers is, meaning, it shows how close all the numbers are to the mean.
  • Skewness: It indicates how symmetrical a range of numbers is, showing if they cluster into a smooth bell curve shape in the middle of the graph or if they skew towards the left or right.
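Here is a compact pandas sketch covering most of these measures – the ratings are made-up survey responses:

```python
import pandas as pd

# Made-up 1-5 satisfaction ratings.
scores = pd.Series([4, 5, 5, 3, 4, 5, 2, 4, 5, 3])

print(scores.mean())                 # mean
print(scores.median())               # median
print(scores.mode().tolist())        # mode(s)
print(scores.max() - scores.min())   # range
print(scores.std())                  # standard deviation
print(scores.skew())                 # skewness
print(scores.value_counts())         # frequency of each rating
print(scores.value_counts(normalize=True) * 100)  # percentage breakdown
```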

2) Inferential Statistics

In quantitative analysis, the expectation is to turn raw numbers into meaningful insights. Descriptive statistics explains the details of a specific dataset using numbers, but it does not explain the motives behind those numbers; hence the need for further analysis using inferential statistics.

Inferential statistics aim to make predictions or highlight possible outcomes from the analyzed data obtained from descriptive statistics. They are used to generalize results, make predictions about differences between groups, show relationships that exist between multiple variables, and test hypotheses that predict changes or differences.

There are various statistical analysis methods used within inferential statistics; a few are discussed below.

  • Cross Tabulations: Cross tabulation, or crosstab, is used to show the relationship that exists between two variables and is often used to compare results by demographic groups. It uses a basic tabular form to draw inferences between different data sets and contains data that is mutually exclusive or has some connection with each other. Crosstabs help understand the nuances of a dataset and the factors that may influence a data point (see the pandas sketch after this list).
  • Regression Analysis: Regression analysis estimates the relationship between a set of variables. It shows the correlation between a dependent variable (the variable or outcome you want to measure or predict) and any number of independent variables (factors that may impact the dependent variable). Therefore, the purpose of the regression analysis is to estimate how one or more variables might affect a dependent variable to identify trends and patterns to make predictions and forecast possible future trends. There are many types of regression analysis, and the model you choose will be determined by the type of data you have for the dependent variable. The types of regression analysis include linear regression, non-linear regression, binary logistic regression, etc.
  • Monte Carlo Simulation: Monte Carlo simulation, also known as the Monte Carlo method, is a computerized technique of generating models of possible outcomes and showing their probability distributions. It considers a range of possible outcomes and then tries to calculate how likely each outcome is to occur. Data analysts use it to perform advanced risk analyses to help forecast future events and make decisions accordingly (a toy simulation also follows this list).
  • Analysis of Variance (ANOVA): This is used to test the extent to which two or more groups differ from each other. It compares the mean of various groups and allows the analysis of multiple groups.
  • Factor Analysis: A large number of variables can be reduced into a smaller number of factors using the factor analysis technique. It works on the principle that multiple separate observable variables correlate with each other because they are all associated with an underlying construct. It helps in reducing large datasets into smaller, more manageable samples.
  • Cohort Analysis: Cohort analysis can be defined as a subset of behavioral analytics that operates from data taken from a given dataset. Rather than looking at all users as one unit, cohort analysis breaks down data into related groups for analysis, where these groups or cohorts usually have common characteristics or similarities within a defined period.
  • MaxDiff Analysis: This is a quantitative data analysis method used to gauge customers’ preferences when making a purchase, and which parameters rank higher than others in the process.
  • Cluster Analysis: Cluster analysis is a technique used to identify structures within a dataset. Cluster analysis aims to be able to sort different data points into groups that are internally similar and externally different; that is, data points within a cluster will look like each other and different from data points in other clusters.
  • Time Series Analysis: This is a statistical analytic technique used to identify trends and cycles over time. It is simply the measurement of the same variables at different times, like weekly and monthly email sign-ups, to uncover trends, seasonality, and cyclic patterns. By doing this, the data analyst can forecast how variables of interest may fluctuate in the future. 
  • SWOT analysis: This is a quantitative data analysis method that assigns numerical values to indicate the strengths, weaknesses, opportunities, and threats of an organization, product, or service, showing a clearer picture of the competition and fostering better business strategies.
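As promised above, here’s a minimal pandas sketch of a cross tabulation – the survey columns and values are assumptions made for illustration:

```python
import pandas as pd

# Illustrative survey responses.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "response": ["Yes", "No", "Yes", "Yes", "No", "No", "Yes", "No"],
})

# Crosstab: counts of each response, broken down by gender.
print(pd.crosstab(df["gender"], df["response"]))
```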
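And here’s the toy Monte Carlo simulation mentioned above: it estimates a probability by simulating many random outcomes. This is a deliberately simple dice example rather than a real risk model:

```python
import random

# Estimate the probability that the sum of two dice is at least 10.
# (The exact answer is 6/36, about 0.167.)
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if random.randint(1, 6) + random.randint(1, 6) >= 10
)
print(f"Estimated P(sum >= 10) ~ {hits / trials:.3f}")
```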

How to Choose the Right Method for your Analysis?

Choosing between descriptive statistics and inferential statistics can often be confusing. You should consider the following factors before choosing the right method for your quantitative data analysis:

1. Type of Data

The first consideration in data analysis is understanding the type of data you have. Different statistical methods have specific requirements based on these data types, and using the wrong method can render results meaningless. The choice of statistical method should align with the nature and distribution of your data to ensure meaningful and accurate analysis.

2. Your Research Questions

When deciding on statistical methods, it’s crucial to align them with your specific research questions and hypotheses. The nature of your questions will influence whether descriptive statistics alone, which reveal sample attributes, are sufficient or if you need both descriptive and inferential statistics to understand group differences or relationships between variables and make population inferences.

Pros and Cons of Quantitative Data Analysis

Pros of Quantitative Data Analysis

1. Objectivity and Generalizability:

  • Quantitative data analysis offers objective, numerical measurements, minimizing bias and personal interpretation.
  • Results can often be generalized to larger populations, making them applicable to broader contexts.

Example: A study using quantitative data analysis to measure student test scores can objectively compare performance across different schools and demographics, leading to generalizable insights about educational strategies.

2. Precision and Efficiency:

  • Statistical methods provide precise numerical results, allowing for accurate comparisons and prediction.
  • Large datasets can be analyzed efficiently with the help of computer software, saving time and resources.

Example: A marketing team can use quantitative data analysis to precisely track click-through rates and conversion rates on different ad campaigns, quickly identifying the most effective strategies for maximizing customer engagement.

3. Identification of Patterns and Relationships:

  • Statistical techniques reveal hidden patterns and relationships between variables that might not be apparent through observation alone.
  • This can lead to new insights and understanding of complex phenomena.

Example: A medical researcher can use quantitative analysis to pinpoint correlations between lifestyle factors and disease risk, aiding in the development of prevention strategies.

Cons of Quantitative Data Analysis

1. Limited Scope:

  • Quantitative analysis focuses on quantifiable aspects of a phenomenon, potentially overlooking important qualitative nuances, such as emotions, motivations, or cultural contexts.

Example: A survey measuring customer satisfaction with numerical ratings might miss key insights about the underlying reasons for their satisfaction or dissatisfaction, which could be better captured through open-ended feedback.

2. Oversimplification:

  • Reducing complex phenomena to numerical data can lead to oversimplification and a loss of richness in understanding.

Example: Analyzing employee productivity solely through quantitative metrics like hours worked or tasks completed might not account for factors like creativity, collaboration, or problem-solving skills, which are crucial for overall performance.

3. Potential for Misinterpretation:

  • Statistical results can be misinterpreted if not analyzed carefully and with appropriate expertise.
  • The choice of statistical methods and assumptions can significantly influence results.

This blog discusses the steps, methods, and techniques of quantitative data analysis. It also gives insights into the methods of data collection, the type of data one should work with, and the pros and cons of such analysis.

Gain a better understanding of data analysis with these essential reads:

  • Data Analysis and Modeling: 4 Critical Differences
  • Exploratory Data Analysis Simplified 101
  • 25 Best Data Analysis Tools in 2024

Carrying out successful data analysis requires prepping the data and making it analysis-ready. That is where Hevo steps in.

Want to give Hevo a try? Sign up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. You may also have a look at Hevo’s pricing, which will assist you in selecting the best plan for your requirements.

Share your experience of understanding Quantitative Data Analysis in the comment section below! We would love to hear your thoughts.

Ofem Eteng

Ofem is a freelance writer specializing in data-related topics, with expertise in translating complex concepts and a focus on data science, analytics, and emerging technologies.


  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Methodology
  • Economic History
  • Economic Systems
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Theory
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Politics and Law
  • Public Policy
  • Public Administration
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry


15 Data Analysis I: Overview of Data Analysis Strategies

Julia Brannen is Professor of Sociology of the Family, Thomas Coram Research Unit, Institute of Education, University of London. Her substantive research has focused on the family lives of parents, children, and young people in Britain and Europe, working families and food, and intergenerational relations. She is an academician of the Academy of Social Sciences in the UK. She has a particular interest in methodology, including mixed methods, biographical and narrative approaches, and comparative research. Co-founder of The International Journal of Social Research Methodology, she co-edited the journal for 17 years with Rosalind Edwards, and she is an associate editor of the Journal of Mixed Methods Research. An early exponent of MMR, in 1992 she edited Mixing Methods: Qualitative and Quantitative Research (London: Gower). She has written many books, journal articles, and contributions to methodological texts. Recent books include The Handbook of Social Research Methods (Sage, 2010), Work, Family and Organizations in Transition: A European Perspective (Policy, 2009), and Transitions to Parenthood in Europe: A Comparative Life Course Perspective (Policy, 2012).

Rebecca O’Connell is a Senior Research Officer at the Thomas Coram Research Unit (TCRU), Institute of Education, University of London, UK. She is a social anthropologist whose research interests focus on the intersection of care and work, particularly foodwork and childcare. She is currently Principal Investigator on two studies: ‘Families, Food and Work: taking a long view’, a multi-methods longitudinal study funded by the Department of Health and the Economic and Social Research Council (ESRC); and ‘Families and Food in Hard Times’, a subproject of the ESRC’s National Centre for Research Methods ‘Novella’ node, which is led by Professor Ann Phoenix. Professor Julia Brannen is co-investigator on both studies. From May 2014, Rebecca leads a study of ‘Families and food poverty in three European countries in an age of austerity’, a five-year project funded by the European Research Council. Rebecca is also co-convenor of the British Sociological Association Food Study Group.

  • Published: 19 January 2016

This chapter enables the reader to consider issues that are likely to affect the analysis of multimethod and mixed methods research (MMMR). It identifies the ways in which data from multimethod and mixed methods research can be integrated in principle and gives detailed examples of different strategies in practice. Furthermore, it examines a particular type of MMMR and discusses an exemplar study in which national survey data are analyzed alongside a longitudinal qualitative study whose sample is drawn from the same national survey. By working through three analytic issues, it shows the complexities and challenges involved in integrating qualitative and quantitative data: issues about linking data sets, similarity (or not) of units of analysis, and concepts and meaning. It draws some conclusions and sets out some future directions for MMMR.

Introduction

This chapter begins with a consideration of the conditions under which integration is possible (or not). A number of factors that need to be considered before a researcher can decide that integration is possible are briefly discussed. This discussion is followed by a consideration of Caracelli and Greene’s (1993) analysis strategies. Examples of mixed method studies that involve these strategies are described, including the ways they attempt to integrate different data, in particular by transforming data, examining typologies and outlier cases, and merging data sets. It is shown that these strategies are not always standalone but can merge into each other. The chapter concludes with a discussion of an extended example: the ways in which a study we carried out, Families, Food and Work (2009–2014), sought to combine analysis of relevant questions from different large-scale data sets with data from a qualitative study of how working parents and children negotiate food and eating (O’Connell & Brannen, 2015).

Issues to Consider Before Conducting Mixed Method Research and Analysis

Before embarking on multimethod and mixed methods research (MMMR), the researcher should consider a number of issues, all of which need to be revisited during the analysis of the data.

The first concerns the ontological and epistemological assumptions underpinning the choice of methods used to generate the data. Working from the principle that the choice of method is not made in a philosophical void, the data should be thought about in relation to the epistemological assumptions underpinning the aspect of the research problem/question being addressed (see, e.g., Barbour, 1999). Thus, in terms of best practice, researchers may be well advised to consider what kind of knowledge they seek to generate. Most multimethod and mixed methods researchers, while not necessarily thinking of themselves as pragmatists in a philosophical sense, adopt a pragmatic approach (Bryman, 2008). Pragmatism dominates in MMMR (Onwuegbuzie & Leech, 2005), especially among those from more applied fields of the social sciences (in which MMMR has been most widespread). However, pragmatism in this context connotes its common-sense meaning, sidelining philosophical issues so that MMMR strategies are employed as a matter of pragmatics (Bryman, 2008). Some might argue that if a study addresses different questions that require different types of knowledge, then the data cannot be integrated unproblematically in the analysis phase. However, it depends on what one means by “integration,” as we later discuss.

The second issue concerns the level of reality under study. Some research questions are about understanding social phenomena at the micro level while others are concerned with social phenomena at the macro level. Researchers in the former group stress the agency of those they study, focusing on individuals’ subjective interpretations and perspectives, and have allegiances to interpretivist and postmodernist epistemologies. Those working at the macro level are concerned with identifying larger scale patterns and trends and seek to hypothesize or create structural explanations, which may call on realist epistemologies. However, all researchers aim to focus to some extent on the relation between individuals and society. If one is to transcend conceptually the micro and the macro levels, then methods must be developed to reflect this transcendence (Kelle, 2001). For example, in qualitative research that focuses on individuals’ perspectives, it is important to set those perspectives in their social structural and historical contexts. Whether those who apply a paradigm of rationality will apply both qualitative and quantitative methods will depend on the extent to which they seek to produce different levels and types of explanation. This will mean interrogating the linkages between the data analyses made at these levels.

The third issue relates to the kinds of human experience and social action that the study’s research questions are designed to address. For example, if one is interested in life experiences over long periods of time, researchers will employ life story or other narrative methods. In this case, they need to take into account the way stories are framed, in particular how temporal perspectives, the purposes of the narrator, and the manner of telling shape the stories. The data the researchers will collect are therefore narrative data. Hence how these stories fit, for example, with quantitative data collected as part of a MMMR approach will require close interrogation in the analysis of the two data sets, taking into account both interpretive and realist historical approaches.

The fourth issue to consider is whether the data are primary or secondary and, in the latter case, whether they are subjected to secondary analysis. Secondary data are by definition collected by other people, and access to them may not be straightforward. If the data have already been coded and the original data are not available, the types of secondary analysis possible will be limited. Moreover, the preexistence of these data may influence the timetabling of the MMMR project and may also shape the questions that are framed in any subsequent qualitative phase and in the data analysis. Depending on the nature and characteristics of the data, one data set may prove intrinsically more interesting; thus more time and attention may be given to its analysis. A related issue therefore concerns the possibilities for operationalizing the concepts employed in relation to the different parts of the MMMR inquiry. Preexisting data, especially those of a quantitative type, may make it difficult to reconceptualize the problem. At a practical level, the questions asked in a survey may relate poorly to those that fit the MMMR inquiry, as we later illustrate. Since one does not know what one does not know, it may be only at later stages that researchers working across disciplines and methodologies come to realize which questions cannot be addressed and which data are missing.

The fifth issue relates to the environments in which researchers are located. For example, are the research and the researcher operating within the same research setting, for example, the same discipline, the same theoretical and methodological tradition, or the same policy and social context? MMMR fits with the political currency accorded to “practical inquiry” that speaks to policy and policymakers and that informs practice, as distinct from scientific research (Hammersley, 2000). However, with respect to policy, this has to be set in the context of the continued policy importance afforded to large-scale data, the increased scale of these data sets, and the growth in the availability of official administrative data. In turn, these trends have been matched by the increased capacity of computing power to manage and analyze these data (Brannen & Moss, 2013) and the increased pressure on social scientists to specialize in high-level quantitative data analysis. As more such data accrue, the apparent demand for quantitative analysis increases (Brannen & Moss, 2013). However, MMMR strategies are also often employed alongside such quantitative analysis, especially in policy-driven research. For example, in cross-national research, governmental organizations require comparative data to assess how countries are doing in a number of different fields, a process that has become an integral part of performance monitoring. But, equally, there is a requirement for policy analysis and inquiries into how policies work in particular local conditions. Such micro-level analysis will require methods like documentary analysis, discourse analysis, case study designs, and intensive research approaches. Furthermore, qualitative data are thought useful to “bring alive” research for policy and practitioner audiences (O’Cathain, 2009).

Another aspect of environment relates to the sixth issue, concerning the constitution of the research team and the extent to which it is inter- or transdisciplinary. Research teams can be understood as “communities of practice” (Denscombe, 2008). While paradigms are pervasive ways of dividing social science research, as Morgan (2007) argues, we need to think in terms of shared beliefs within communities of researchers. This requires an ethic of “precarity” to prevail (Ettlinger, 2007, p. 319), through which researchers are open to others’ ideas and can relinquish entrenched positions. However, the success of communities of practice will depend on the political context, their composition, and whether they are democratic (Hammersley, 2005). Thus in the analysis of MMMR it is important to be cognizant of the power relations within such communities of practice, since they will influence the researcher’s room for maneuvering in determining the directions and outputs of the data analysis. At the same time, these political issues affect analysis and dissemination even in research teams whose members share disciplinary approaches.

Finally, there are the methodological preferences, skills, and specialisms of the researcher, all of which have implications for the quality of the data and the data analysis. MMMR offers the opportunity to learn about a range of methods and thus to be open to new ways of addressing research questions. Broadening one’s methodological repertoire guards against “trained incapacities,” as Reiss (1968) termed the issue—the entrenchment of researchers in particular types of research paradigms, as well as questions, research methods, and types of analysis.

The Context of Inquiry: Research Questions and Research Design

The rationale for MMMR must be clear both in the phase of the project’s research design (the context of the inquiry) and in the analysis phase (the context of justification). At the research design phase, researchers wrestle with such fundamental methodological questions as what kinds of knowledge they seek to generate, for example whether to describe and understand a social phenomenon or to explain it. Do we wish to do both, that is, to understand and explain? In the latter case, the research strategy will typically translate into employing a mix of qualitative and quantitative methods, which some argue is the defining characteristic of mixed method research (MMR) (Tashakkori & Creswell, 2007).

If a MMR strategy is employed, this generally implies that a number of research questions will address a substantive issue. MMMR is also justified in terms of its capacity to address different aspects of a research question. This in turn leads researchers to consider how to frame their research questions and how these determine the methods chosen. Typically, research questions are formulated in the research proposal. However, they should also be amenable to adaptation (Harrits, 2011, citing Dewey, 1991); adaptations may be necessary as researchers respond to the actual conditions of the inquiry. According to Law (2004), research is an “assemblage,” that is, something not fixed in shape but incorporating tacit knowledge, research skills, resources, and political agendas that are “constructed” as they are woven together (p. 42). Methodology should be rebuilt during the research process in a way that responds to research needs and the conditions encountered—what Seltzer-Kelly, Westwood, and Pena-Guzman (2012) term “a constructivist stance at the methodological level” (p. 270). This can also happen at the phase when data are analyzed.

Developing a coherent methodology, with a close link between the research question and the research strategy, holds out the best hope of meeting a project’s objectives and answering its questions (Woolley, 2009, p. 8). Thus Yin (2006) would say that to carry out an MMMR analysis it is essential to have an integrated set of research questions. However, it is not easy to determine what constitutes coherence. For example, a research question concerning the link between the quality of children’s diet in the general population and whether mothers are in paid employment may be considered a very different, and not necessarily complementary, question to one about the conditions under which the children of working mothers are fed. Thus we have to consider here how tightly or loosely the research questions interconnect.

The framing of the research question influences the method chosen, which, in turn, influences the choice of analytic method. Thus in our study of children’s food, which examined the link between children’s diet and maternal employment, we examined a number of large-scale data sets and carried out statistical analyses on these, while in studying the conditions under which children in working families get fed, we carried out qualitative case analysis on a subset of households selected from one of the large-scale data sets.

The Context of Justification: The Analysis Phase

In the analysis phase of MMMR, the framing of the research questions becomes critical, affecting when, to what extent, and in what ways data from different methods are integrated. We have to consider, for example, the temporal ordering of methods. Quantitative data on a research topic may already be available and the results already analyzed; this analysis may then influence the questions to be posed in the qualitative phase of inquiry.

Thus it is also necessary to consider the compatibility between the units of analysis in the quantitative phase and the qualitative phase of the study, for example, between variables studied in a survey and the analytic units studied in a qualitative study. Are we seeking analytic units that are equivalent (but not similar), or are we seeking to analyze a different aspect of a social phenomenon? If the latter, how do the two analyses relate? This may become more critical if the same population is covered in both the qualitative and quantitative phases. What happens when a nested or integrated sampling strategy is employed, as in the case of a large-scale survey analysis and a qualitative analysis based on a subsample of the survey?
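
As a minimal illustration of the nested case, the sketch below links a hypothetical qualitative subsample back to its parent survey through a shared identifier, so that survey variables and qualitative case material can be examined side by side. All column names and values here are invented for illustration, not drawn from any actual data set.

```python
import pandas as pd

# Hypothetical survey records and a qualitative subsample drawn from them;
# "hh_id" is an assumed shared household identifier, not a real survey variable.
survey = pd.DataFrame({
    "hh_id": [101, 102, 103, 104],
    "diet_score": [62, 48, 75, 55],
    "mother_employed": [1, 0, 1, 1],
})
qual_cases = pd.DataFrame({
    "hh_id": [102, 104],
    "case_summary": ["father main cook", "shared food work"],
})

# An inner merge keeps only the households present in both components, so
# survey variables sit alongside the qualitative case material for each unit.
linked = survey.merge(qual_cases, on="hh_id", how="inner")
print(linked)
```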

A number of frameworks have been suggested for integrating data produced by quantitative and qualitative methods (Brannen, 1992; Caracelli & Greene, 1993; Greene, Caracelli, & Graham, 1989). While these may provide a guide to the variety of ways to integrate data, they should not be used as fixed templates. Indeed, they may provide a basis for reflection after the analysis has been completed. These frameworks typically distinguish the following modes of integration:

Corroboration—in which one set of results based on one method is confirmed by those gained through the application of another method.

Elaboration or expansion—in which qualitative data analysis may exemplify how patterns based on quantitative data analysis apply in particular cases. Here the use of one type of data analysis adds to the understanding gained by another.

Initiation—in which the use of a first method sparks new hypotheses or research questions that can be pursued using a different method.

Complementarity—in which qualitative and quantitative results are regarded as different beasts but are meshed together so that each data analysis enhances the other (Mason, 2006). The data analyses from the two methods are juxtaposed and generate complementary insights that together create a bigger picture.

Contradiction—in which qualitative and quantitative findings conflict. Exploring contradictions between different types of data assumed to reflect the same phenomenon may lead to an interrogation of the methods and to discounting one method in favor of another (in terms of assessments of validity or reliability). Alternatively, the researcher may simply juxtapose the contradictions for others to explore in further research. More commonly, one type of data may be presented and assumed to be “better,” rather than seeking to explain the contradictions in relation to some ontological reality (Denzin & Lincoln, 2005; Greene et al., 1989).

As Hammersley (2005) points out, all these ways of combining different data analyses to some extent make assumptions that there is some reality out there to be captured, despite the caveats expressed about how each method constructs the data differently. Thus, just as seeking to corroborate data may not lead us down the path of “validation,” so too the complementarity rationale for mixing methods may not complete the picture either. There may be no meeting point between epistemological positions. As Hammersley (2008) suggests, there is a need for a dialogue between them in the recognition that absolute certainty is never justified and that “we must treat knowledge claims as equally doubtful or that we should judge them on grounds other than their likely truth” (p. 51).

Multimethod and Mixed Methods Research Analysis Strategies: Examples of Studies

Caracelli and Greene (1993) suggest analysis strategies for integrating qualitative and quantitative data. In practice these strategies are not always standalone but blur into each other. Moreover, as Bryman (2008) has observed, it is relatively rare for mixed method researchers to give full rationales for their MMMR designs. The strategies can involve data transformation, in which, for example, qualitative data are treated quantitatively. They may involve typology development, in which cases are categorized into patterns and outlier cases are scrutinized. They may involve data merging, in which both data sets are treated in similar ways, for instance by creating similar variables or equivalent units of analysis across data sets. In this section, drawing on the categorization of Caracelli and Greene, we give some examples of studies in which qualitative and quantitative data are integrated in these different ways (Table 15.1). These are not intended to be exhaustive, nor are the studies pure examples of these strategies.

Qualitative Data Are Transformed into Quantitative Data or Vice Versa

In survey research, it is commonplace to transform qualitative data into quantitative data in order to test how respondents understand questions; this is termed cognitive testing. The aim here is to find a fit between responses given in the survey and in the qualitative testing. For example, most personality scales are based on prior clinical research. An example of data transformation on a larger scale is taken from a program of research on the wider benefits of adult learning (Hammond, 2005). The rationale for the study was that the research area was underresearched and the research questions relatively unformulated (p. 241). Qualitative research was carried out to identify variables to test on an existing national longitudinal data set. The qualitative phase involved biographical interviews with adult learners. The quantitative data consisted of data from an existing UK cohort study (the 1958 National Child Development Study). A main justification for using these latter data concerned the further exploitation of data that are expensive to collect. The qualitative component was conceived as a “mapping” exercise carried out to inform the research design and the implementation of the quantitative phase, that is, the identification of variables for quantitative analysis (Hammond, 2005, p. 243). This approach has parallels with qualitative pilot work carried out as a prologue to a survey, although the qualitative material was also analyzed in its own right. However, while the qualitative data were used with the aim of finding common measures that fit with the quantitative inquiry, Hammond also insisted that the qualitative data not be used to explain quantitatively derived outcomes but to interrogate them further (Hammond, 2005, p. 244). Inevitably, contradictions between the respective findings arose. For example, Hammond reported that the effect of adult learning on life satisfaction (the transformed measure) found in the National Child Development Study cohort analysis was greater for men than for women, while women reported themselves in the biographical interview responses to be positive about the courses they had taken. On this issue, the biographical interviews were regarded as being “more sensitive” than the quantitative measure. Hammond also suggested that the interview data showed that an improved sense of well-being (another transformed measure) experienced by the respondents in the present was not necessarily incompatible with having a negative view of the future. The quantitative data conflated satisfaction with “life so far” and with “life in the future.” Contradictions were also explained in terms of the lack of representativeness of the qualitative study (the samples did not overlap). In addition, it is possible that the researcher gave priority to the biographical interviews and placed more trust in this approach. Another possibly relevant factor was that the researcher had no stake in creating or shaping the quantitative data set. In any event, the biographical interviews were conducted before the quantitative analyses and were used to influence decisions about which analyses to focus on in the quantitative work. Hence the qualitative data threw up hypotheses that the quantitative data were used to reject or support.
What is interesting about using qualitative data to generate hypotheses for testing against quantitative evidence is the opportunity it offers to pose or initiate new lines of questioning (Greene et al., 1989)—a result not necessarily anticipated at the outset of this research.
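
To make the idea of data transformation concrete, here is a minimal sketch of “quantitizing” in the spirit of the strategy just described: coded interview themes are converted into binary indicator variables that could then be tested against a quantitative data set. The participant identifiers and theme labels are invented for illustration and bear no relation to Hammond’s actual variables.

```python
import pandas as pd

# Hypothetical coded interviews: one row per participant, with the list of
# qualitative themes assigned during coding (all labels are illustrative).
coded = pd.DataFrame({
    "participant": ["p01", "p02", "p03"],
    "themes": [["confidence", "new_friends"], ["confidence"], ["career_change"]],
})

# "Quantitize" the codes: explode the theme lists, then pivot them into
# binary indicator variables that a quantitative analysis could test.
indicators = (
    coded.explode("themes")
         .assign(present=1)
         .pivot_table(index="participant", columns="themes",
                      values="present", fill_value=0)
)
print(indicators)
```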

Typologies, Deviant, Negative, or Outlier Cases Are Subjected to Further Scrutiny Later or in Another Data Set

A longitudinal or multilayered design provides researchers with opportunities to examine the strength of the conclusions that can be drawn about the cases and the phenomena under study (Nilsen & Brannen, 2010). For an example of this strategy, we turn to the classic study carried out by Glueck and Glueck (1950, 1968). The study Five Hundred Criminal Careers was based on longitudinal research on delinquents and nondelinquents (1949–1965). The Gluecks studied the two groups at three age points: 14, 25, and 32. The study had a remarkably high (92%) response rate when adjusted for mortality at the third wave. The Gluecks collected very rich data on a variety of dimensions and embedded open-ended questions within a survey framework. Interviews with respondents and their families, as well as key informants (social workers, school teachers, employers, and neighbors), were carried out, together with home observations and the study of official records and criminal histories. Some decades later, Laub and Sampson (1993, 1998) reanalyzed these data longitudinally (the Gluecks’ original analyses were cross-sectional).

Laub and Sampson (1998) note that the Gluecks’ material “represents the comparison, reconciliation and integration of these multiple sources of data” (p. 217), although the Gluecks did not treat the qualitative data in their own right. The original study was firmly grounded in a quantitative logic in which the purpose was to arrive at causal explanations and the ability to predict criminal behavior. However, the Gluecks were carrying out their research in a pre-computer age, a fact that facilitated reanalysis of the material. When Laub and Sampson came to recode the raw data many years later, they rebuilt the Gluecks’ original data set and used the Gluecks’ coding schemes to validate the original analyses. Laub and Sampson then constructed the criminal histories of the sample, drawing on and integrating the different kinds of data available. This involved merging data.

Next they purposively selected a subset of cases for intensive qualitative analysis in order to explore consistencies and inconsistencies between the original findings and the original study’s predictions for the delinquents’ future criminal careers—what happened to them some decades later. They examined “off diagonal” and “negative cases” that did not fit the quantitative results and predictions. In particular, they selected individuals who, on the basis of their earlier careers, were expected to follow a life of crime but did not, and those who were expected to cease criminality but did not.

Citing Jick (1979), Laub and Sampson (1998) suggest how divergence can become an opportunity for enriching explanations (p. 223). By examining deviant cases identified in one data analysis and interrogating them in a second data analysis, they demonstrated the complex processes of individual pathways into and out of crime that take place over long time periods (Laub & Sampson, 1998, p. 222). They argued that “without qualitative data, discussions of continuity often mask complex and rich qualitative processes” (Sampson & Laub, 1997, quoted in Laub & Sampson, 1998, p. 229).

In addition they supported a biographical approach that enables the researcher to interpret data in historical context, in this case to understand criminality in relation to the type and level of crime prevalent at the time. Laub and Sampson (1998) selected a further subsample of the original sample of delinquents, having managed to trace them after 50 years (Laub & Sampson, 1993), and asked them to review their past lives. The researchers were particularly interested in identifying turning points to understand what had shaped the often unexpected discontinuities and continuities in the careers of these one-time delinquents.

This is an exemplar study of the analytic strategy of subjecting typologies to deeper scrutiny. It also afforded an opportunity to theorize about the conditions under which cases deviated from predicted trajectories.
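
A schematic sketch of the off-diagonal logic described here: cross-tabulating quantitative predictions against later observed outcomes, then pulling out the cases that deviate for intensive qualitative analysis. The data, labels, and column names below are hypothetical, not the Gluecks’ or Laub and Sampson’s actual codings.

```python
import pandas as pd

# Hypothetical cases: a prediction made from the early-wave quantitative
# analysis and the outcome observed decades later (all values are invented).
cases = pd.DataFrame({
    "case_id":   [1, 2, 3, 4, 5],
    "predicted": ["desist", "persist", "persist", "desist", "persist"],
    "observed":  ["desist", "desist", "persist", "persist", "persist"],
})

# The cross-tabulation's off-diagonal cells hold the deviant cases.
print(pd.crosstab(cases["predicted"], cases["observed"]))

# Select the off-diagonal cases for intensive qualitative re-analysis.
deviant = cases[cases["predicted"] != cases["observed"]]
print(deviant)
```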

Data Merging: The Same Set of Variables Is Created Across Quantitative and Qualitative Data Sets

Here assumptions are made that the phenomena under study are similar in both the qualitative and quantitative parts of an inquiry, a strategy exemplified in the following two studies, in which the treatment of the data in the two parts was seamless, so that one type of data was transformed into the other. In a longitudinal study, Blatchford (2005) examined the relationship between classroom size and pupils’ educational achievement. Blatchford justifies using a mixed method strategy in terms of the power of mixed methods to reconcile inconsistencies found in previous research. The rationale given for using qualitative methods was the need to assess the relationships between the same variables but in particular case studies. Blatchford notes that “priorities had to be set and some areas of investigation received more attention than others” (p. 204). The dominance of the quantitative analysis occurred despite the collection of “fine grained data on classroom processes” that could have lent themselves to other kinds of analysis, such as understanding how students learn in different classroom environments. The qualitative data were in fact put to limited use and were merged with the quantitative data.

Sammons et al. (2005) similarly employed a longitudinal quantitative design to explore the effects of preschool education on children’s attainment and development at entry to school. Using a purposive rationale, they selected a smaller number of early education centers from their original sample on the basis of their contrasting profiles. Sammons et al. coded the qualitative data in such a way that the “reduced data” (p. 219) were used to provide statistical explanations for the outcomes produced in the quantitative longitudinal study. Thus, again, the insights derived from the qualitative data analysis were merged with the quantitative variables, which were correlated with outcome variables on children’s attainment. The researchers in question could have drawn on both the qualitative and quantitative data for different insights, as is required in case study research (Yin, 2006) and as suggested in their purposive choice of preschool centers.

Using Quantitative and Qualitative Data: A Longitudinal Study of Working Families and Food

In this final part of the chapter we take an example from our own work in which we faced a number of methodological issues in integrating and meshing different types of data. In this section we discuss some of the challenges involved in the collection and analysis of such data.

The study we carried out is an example of designing quantitative and qualitative constituent parts to address differently framed questions. Its questions were, and remain, highly topical in the Western world; they concern the influences of health policy on healthy eating, including in childhood, and its implications for obesity. 1 Much of the health evidence is unable to explain why families appear to ignore advice and continue to eat in unhealthy ways. The project arose in the context of some existing research that suggests an association between parental (maternal) employment and children’s (poor) diet (Hawkins, Cole, & Law, 2009). We pursued these issues by framing the research phenomenon in different ways and through the analysis of different data sets.

The project was initiated in a policy context in which we tendered successfully for a project that enabled us to exploit a data set commissioned by government to examine the nation’s diet. The project is directly linked to the National Diet and Nutrition Survey (NDNS), somewhat of a landmark study in the UK, funded by the UK’s Food Standards Agency and Department of Health and largely designed by those from public health and nutritionist perspectives. These data, from the first wave of the new rolling survey, were unavailable to others at that time. We were also assisted in selecting a subsample of households with children from the NDNS, which we subjected to a range of qualitative methods. The research team worked closely with the UK government to gain access to the data collected and managed by an independent research agency, in the identification of a subsample to meet the research criteria, and in seeking the consent of the survey subsample participants.

Applying anthropological and sociological lenses, the ethnographically trained researchers in the team sought to explore inductively parents’ experiences of negotiating the demands of “work” and “home” and domestic food provisioning in families. We therefore sought to understand the contextual and embodied meanings of food practices and their situatedness in different social contexts (inside and outside the home). We also assumed that children are agents in their own lives, and therefore we included children in the study and examined the ways in which children reported food practices and attributed meaning to food. The main research questions (RQ) for the study were:

What is the relationship between parental employment and the diets of children (aged 1.5 to 10 years)?

How does food fit into working family life and how do parents experience the demands of “work” and “home” in managing food provisioning?

How do parents and children negotiate food practices?

What foods do children of working parents eat in different contexts—home, childcare, and school—and how do children negotiate food practices?

The study not only employed a MMMR strategy but was also longitudinal, a design that is rarely discussed in the MMMR literature. We conducted a follow-up study (Wave 2) approximately two years later, which repeated some questions and additionally asked about social change, the division of food work, and the social practice of family meals. The first research question was to be addressed through the survey data, while RQs 2, 3, and 4 were addressed through the qualitative study. In the qualitative study, a variety of ethnographic methods were deployed with both parents and children ages 2 to 10 years. The ethnographic methods included a range of interactive research tools, which were used flexibly with the children since their age span was wide: interviews, drawing methods, and, with some children, photo-elicitation interviews in which children photographed foods and meals consumed within and outside the home and discussed these with the researcher at a later visit. Semistructured interviews were carried out with parents who defined themselves as the main food providers and sometimes with an additional parent or care provider who was involved in food work and also wished to participate.

In the context of existing research suggesting an association of parental (maternal) employment and household income with children’s (poor) diet (Hawkins et al., 2009), carried out on a different UK data set and also supported by some US research (e.g., Crepinsek & Burstein, 2004; McIntosh et al., 2008), it was important to investigate whether this association was borne out elsewhere. In addition and in parallel, we therefore carried out secondary analysis on the NDNS Year 1 (2008/2009) data and on two other large-scale national surveys, the Health Survey for England (surveys 2007, 2008) and the Avon Longitudinal Study of Parents and Children (otherwise known as “Children of the Nineties”) (data sweeps 1995/1996, 1996/1997, and 1997/1999), to examine the first research question. This part of the work was not straightforward. First, we found that, contrary to a previous NDNS (1997) survey that had classified mothers’ working hours as full- or part-time, neither mothers’ hours of work nor full-/part-time status had been collected in the new rolling NDNS survey. Rather, this information was limited in most cases to whether a mother was or was not in paid employment. Thus it was not possible to disentangle the effects on children’s diets of mothers working full-time from those working part-time hours. This was unfortunate since the NDNS provided very detailed data on children’s nutrition based on food diaries, unlike the Millennium Cohort Study, which collected only mothers’ reports of children’s snacking between meals at home (Hawkins et al., 2009). While the Millennium Cohort Study analysis found a relationship between long hours of maternal employment and children’s dietary intake, no association between mothers’ employment and children’s dietary intake was found in the NDNS (O’Connell, Brannen, Mooney, Knight, & Simon, 2011; Simon et al., forthcoming). However, it is possible that a relationship might have been found had we been able to disaggregate women’s employment by hours.

In the following we describe three instances of data analysis in this longitudinal MMMR study in relation to some of the key analytic issues set out in the research questions described previously (see Table 15.2):

  • Studying children’s diets in a MMMR design
  • Examining the division of household food work in a MMMR design
  • Making sense of family meals in a MMMR design

Linking Data in a Longitudinal Multimethod and Mixed Methods Research Design: Studying Children’s Diets

The research problem.

Together with drawing a sample for the qualitative study from the national survey, we aimed to carry out secondary analysis on the NDNS data in order to generate patterns of “what” is eaten by children and parents and to explore associations with a range of independent variables, notably mothers’ employment. The NDNS diet data were based on four-day unweighed food diaries that recorded detailed information about the quantities of foods and drinks consumed, as well as where, when, and with whom foods were eaten (Bates, Lennox, Bates, & Swan, 2011). On behalf of the NDNS survey, the diaries were subjected by researchers at Human Nutrition Research, Cambridge University, to an analysis of nutrient intakes using specialist dietary recording and analysis software (Data In Nutrients Out [DINO]; Bates et al., 2011).


Methodological challenges

These nutritional data proved challenging for us as social scientists to use, and working with them involved discussion with nutrition experts from within and outside Human Nutrition Research, who created different dietary measures for the team’s use in the secondary analysis, generating some interesting cross-disciplinary exchange. Working with nutritionists, we developed a unique diet quality index that compared intakes for children in different age ranges to national guidelines, giving an overall diet “score”—a composite measure that could be used to sample children from the survey and also as an outcome measure in the regression analysis described earlier, which set out to answer the first research question on the relationship between maternal employment and children’s dietary intakes (Simon, O’Connell, & Stephen, 2012). While the usefulness of the diet data was constrained by the fact that no data had been collected about mothers’ working hours (nor indeed maternal education, an important confounder), an important impact of our study has been to have these added to the annual survey from 2015, increasing the study’s usefulness to social science and social policy.
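
The published index is considerably more elaborate, but a toy sketch conveys the logic of such a composite measure: each intake is scored against an age-banded guideline and the component scores are combined into a single 0–100 diet score. The guideline values, age bands, and nutrient components below are placeholders invented for illustration, not the study’s actual cut-offs.

```python
import pandas as pd

# Placeholder guidelines per age band; the study's actual cut-offs differ.
guidelines = {
    "1.5-3": {"fruit_veg_g": 200, "free_sugar_g_max": 30},
    "4-10":  {"fruit_veg_g": 300, "free_sugar_g_max": 40},
}

def diet_score(row):
    g = guidelines[row["age_band"]]
    # Fruit and vegetables: proportion of the guideline achieved, capped at 1.
    fv = min(row["fruit_veg_g"] / g["fruit_veg_g"], 1.0)
    # Free sugars: full marks only if intake stays within the guideline maximum.
    sugar = 1.0 if row["free_sugar_g"] <= g["free_sugar_g_max"] else 0.0
    return 100 * (fv + sugar) / 2  # rescale the composite to 0-100

children = pd.DataFrame({
    "age_band": ["1.5-3", "4-10"],
    "fruit_veg_g": [150, 320],
    "free_sugar_g": [25, 55],
})
children["diet_score"] = children.apply(diet_score, axis=1)
print(children)
```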

As noted, another aim of using the NDNS was to help us draw a purposive subsample of children (N = 48) in which cases of children with healthier and less healthy diets were equally represented (as well as to select the sample on demographic variables). However, we encountered a challenge because of the small number of children in the age range in which we were interested; we thus had to include a wider age range of children (1.5–10 years) than would have been ideal.
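
Purposive selection of this kind can be sketched as stratifying survey cases on the diet score and drawing equal numbers from each stratum. The sketch below uses a simple median split and invented data, not the study’s actual sampling frame or selection rules.

```python
import pandas as pd

# Invented survey extract carrying the composite diet score described above.
survey = pd.DataFrame({
    "child_id": range(1, 11),
    "diet_score": [35, 42, 50, 55, 58, 62, 70, 74, 80, 88],
})

# Median split into "less healthy" and "healthier" strata, then draw equal
# numbers from each stratum (two per stratum in this toy example).
survey["stratum"] = pd.qcut(survey["diet_score"], 2,
                            labels=["less healthy", "healthier"])
subsample = (
    survey.groupby("stratum", observed=True, group_keys=False)
          .apply(lambda s: s.sample(2, random_state=1))
)
print(subsample)
```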

We also sought to link the quantitative diary data from the NDNS with the ethnographic and interview data from parents and children concerning their reported food practices. However, while the NDNS dietary data and the diary method used are considered the “gold standard” in dietary surveys (Stephen, 2007), they were less useful for us at the level of the qualitative sample, and in practice such linkage proved not to be feasible. First, the scores were based on dietary data collected over a single brief period of time (four days; Bates et al., 2011). Also, the whole survey was conducted over an extended time period (one year), with a mixture of weekdays and weekend days surveyed, which was unproblematic at the aggregate level. At the individual level, however, it was clear that these four days were not generalizable to dietary intakes over a longer time period. One parent in the qualitative study, for example, said the diary had been completed over a weekend break in Scotland where the family had indulged in holiday eating, including plenty of chips. In addition, since the data we had were about nutrient intakes—we did not have the resources (time or expertise) to examine the raw diary data, which could potentially have been provided—we had no idea what children were actually eating. Furthermore, there was a time lag between when the diet data were collected and when we did the fieldwork for the qualitative study (around six months later). We could have waited for all the NDNS data to be cleaned and analyzed, which would have given us additional information about children’s food intakes (e.g., their consumption of fruit and vegetables), but this would have caused a far greater delay in starting the qualitative study. Given the rapidity with which children in the younger age range change their preferences and habits, the diet data would then have been “out of date” by the time we conducted the qualitative interviews. Our decision to construct a diet quality index was therefore a pragmatic one, largely determined in practice by the data available to us within a reasonable time from the diary data collection—those provided as feedback to the study participants after the NDNS visits. 2

As we were also interested in the foods children were eating, we asked parents and children in the qualitative interviews to describe what they ate on the last weekday and the last weekend day, and about the typicality of those days, rather than repeating the diet diaries, which would have been highly resource intensive. Mothers were also asked to assess their children’s diets. We could not compare mothers’ descriptions and assessments of their child’s diet with the diaries, since we did not have access to the latter. However, in comparing these assessments with the child’s NDNS diet score, there appeared in some cases to be corroboration, while other cases appeared to bear no relation. Indeed, some of the apparently “worst” cases, according to mothers’ assessments, did not necessarily have scores suggesting poor diets. Although hours of employment were asked about in the qualitative study, no patterns were found in these data between hours of employment, or other characteristics such as social class, and children’s diets. 3 In analyzing other research questions, about patterns of family meals and child versus adult control, for example, diet scores did not generally appear to be related to patterns found in the qualitative data. This may be explained by the small sample, by a lack of validity of the diet data at the individual level, or by changes in children’s diets between the survey and the qualitative study. In a follow-up qualitative study that we are conducting with families two years later, we will be able to compare analyses over two time points using the same questions put to parents and children about children’s food and eating.

In terms of linking dietary data in a MMMR design, the NDNS survey data suffered from a number of limitations. Even though we tried to set aside our particular epistemological assumptions, theoretical interests, and research objectives, which were different from those of the survey, these still affected the integration of the data. The usefulness of the NDNS data for addressing the research questions at the aggregate and individual levels was questionable, notably given the lack of data on the working hours of mothers and the difficulties of accessing detailed diary data collected at one time point as an individual measure of nutrition. The NDNS had rather small numbers of the groups in which we were interested, which complicated the selection of the qualitative subsample. There were also issues concerning the employment of different methods at different time points; this was especially challenging methodologically given the focus on younger children’s food tastes and diets, which can change dramatically within a short period of time. A further issue concerned conceptualization across different data sets, in particular relating to issues around food practices such as healthy eating. As noted, in the case of the survey data, composite measures of children’s nutrient intake at a particular moment in time were created using food diary data, and the measurements were then compared to national guidelines. In contrast, in the qualitative study latitude was given to parents to make their own judgments about their child’s diet from their perspectives, while we were able to compare what parents reported at two time points and so take a longer term view. Therefore, in integrating and interpreting both sets of data, we wrestled with the epistemological and ontological assumptions that underpinned the study’s main research questions concerning the meaning and significance of food, and with our own expectations about the kinds of “essential” sociodemographic data that we would expect any survey to collect.

Nonetheless, we had to overcome these tensions and demonstrate the societal impact of research that focused on an emotive and politically sensitive topic—maternal employment and the diets of young children. In practice, the study’s outputs remained divided by approach, with papers drawing mainly on qualitative (Brannen, O’Connell, & Mooney, 2013; Knight, O’Connell, & Brannen, 2014; O’Connell & Brannen, 2014) or quantitative (Simon et al., forthcoming) findings or describing methodological approaches (e.g., Brannen & Moss, 2013; O’Connell, 2013; Simon et al., 2012).

Similar Units of Analysis in a Multimethod and Mixed Methods Research Longitudinal Design: Examining the Division of Household Food Work

Mothers’ employment is only one small part of the picture of how food and work play out in households in which there are children. UK evidence suggests that men are more likely to cook regularly and share responsibility for feeding the family when women are absent, usually because of paid employment. However, this research was conducted some time ago (e.g., Warde & Hetherington, 1994).

The analysis

The more recent evidence that we analyzed is from the NDNS and from Understanding Society, the UK household panel study of some 40,000 households. The NDNS (Year 1: 2008/2009) survey findings suggest that mothers are the “main food providers” in 93% of families with a child 18 months to 10 years, with no significant differences according to work status or social class. Understanding Society (Wave 2: 2010/2011) provides data on parental hours of work (10,236 couples with a child aged zero to 14 years). Our secondary analysis of these data suggests that mothers working part-time are significantly less likely to share cooking with their partners, compared with mothers working full-time (but not those working 48 or more hours per week). 4 Complementing this, the secondary analysis also found that, in general, the longer the hours worked by a father, the less likely he was to share cooking with his spouse or partner.
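
A hedged sketch of the kind of model that could sit behind such a secondary analysis: a logistic regression of whether a couple shares cooking on the mother’s part-time status and the father’s weekly hours. The data and variable names are invented for illustration; the actual Understanding Society analysis would have used the survey’s own variables, far more cases, and its survey weights.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented couple-level records; column names are assumptions, not survey fields.
df = pd.DataFrame({
    "shares_cooking":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "mother_parttime": [0, 1, 1, 0, 0, 1, 0, 1, 0, 1],
    "father_hours":    [38, 45, 50, 35, 60, 55, 37, 48, 36, 52],
})

# Logistic regression: the odds that a couple shares cooking, given the
# mother's part-time status and the father's usual weekly working hours.
model = smf.logit("shares_cooking ~ mother_parttime + father_hours", data=df).fit()
print(model.summary())
```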

In the qualitative study we asked questions (mainly of mothers) about who took charge of the food work, including cooking and food shopping. At the follow-up (Wave 2) we asked whether this had stayed the same or changed, whether children were encouraged to do more as they got older, and how the participants felt about how food work was shared. In their responses, the participants, usually mothers, mentioned other aspects of food work such as planning for meals, washing up, and loading the dishwasher. In classifying the cases, we drew on DeVault’s (1991) concept of “domestic food provisioning” and did not limit food work to cooking but also included shopping, cleaning up, and less visible aspects such as meal planning and feeling “responsible” for children’s diets. (At Wave 2 we asked a question about which parent worried more about the target child’s diet, thus eliciting responses about “responsibility.”)

The methodological challenges

The treatment of the households as cases was intrinsic to the way we approached the qualitative analysis. We plotted the households according to the level of fathers’ involvement in food provisioning on a continuum. This resulted in a more refined analysis compared with the quantitative data analysis (in the UK panel study Understanding Society). It enabled us to identify features of family life, not only mothers’ and fathers’ working hours—albeit these were mentioned most often—which were important in explaining the domestic division of food work ( Metcalfe, Dryden, Johnson, Owen, & Shipton, 2009 , pp. 109–111). Moreover, because we investigated the division of food work over time (two years), we were also able to explore continuities and discontinuities at the household level. Parents accounted for changes in the division of food work in various ways: a mother becoming ill, moving house, the birth of an additional child, loss of energy, children being older and easier to cook for, the loss of other help, and health concerns. We found, therefore, that patterns within households do change, with some fathers doing more food work and some doing less in response to circumstances in their lives (within and beyond the household), albeit only a minority do equal amounts or more. The conceptual approach that was adopted included a focus on routine practices and on accounting for practices, to help shift the gaze away from a narrow behavioral “working hours perspective” toward understanding how family (food) practices are influenced by the interpenetration of public and private spheres (home and workplace) and how people make sense of (and thus reproduce or redefine) patterns of paid and unpaid work. Food practices, like other family practices, are shaped by gendered cultural expectations about motherhood and fatherhood—what it means to be “a good mother” and “a good father”—as well as by the material constraints of working hours.

In addition to providing a more refined analysis than the quantitative data, the qualitative data also provided a way of examining outliers or cases that did not fit the general pattern shown in the survey results (according to Caracelli & Greene’s [1993] categories described earlier). Although the general trend was for a man to share cooking more when his spouse worked longer hours and to do less sharing when he worked longer hours, the qualitative data provided examples of where this did not fit (as well as where it did). For example, in one case, a father worked fewer hours than his wife but did almost no food work as he was said by his wife to lack competence, while another father who worked longer hours took most responsibility for food work as this fitted with his and his wife’s shift patterns.

In addressing the research question of how parental employment influences the division of food work, the use of both the survey data and the qualitative material together proved relatively successful since the unit of analysis in both referred to behavior. To some extent the MMMR approach provided corroborating evidence while the qualitative material refined and elaborated on the quantitative analysis. Broadly, the results were comparable and complementary, albeit the research questions relating to each method were somewhat different; notably in the qualitative study, there was a concern to understand the respondents’ accounts for the division of food work and to examine food work in the context of the families more holistically and the meaning of mothering and fathering more generally. By contrast, in the survey, food work was conceptualized behaviorally and broken down into constituent “tasks” such as cooking (cf. DeVault, 1991 ).

Concepts and Meaning in a Multimethod and Mixed Methods Research Longitudinal Design: The Study of Family Meals

Studies have identified an association between the frequency of “family meals” and children’s and adolescents’ body mass index, nutritional status, social well-being, and physical and mental health. These studies suggest that children who eat fewer family meals have poorer health, nutrition, and behavioral outcomes than those who eat more meals with their families (e.g., Neumark-Sztainer, Hannan, Story, Croll, & Perry, 2003 ). Some longitudinal research implies causality rather than mere association, suggesting that family meals are “protective” against a range of less optimal nutritional and psychosocial outcomes, especially for girls (e.g., Neumark-Sztainer et al., 2003 ). There is widespread agreement about the reduced frequency of eating dinner as a family as children age (e.g., Gillman et al., 2000). Some studies also find an association with socioeconomic status and mothers’ paid employment (e.g., Neumark-Sztainer et al., 2003 ). In Wave 1 of the study we wanted to examine, via the qualitative data set, the relationship between children’s participation in family meals and their parents’ employment, to establish whether maternal employment seemed important in explaining the social dimension of children’s eating practices (in this case meals). We asked about eating patterns on the previous work and nonwork day and their typicality. We also asked about the presence of different family members on different days of the week.

These questions enabled us to develop a typology of eating patterns in the working week: eating together most days, the modified family meal in which children ate with one parent, and a third situation in which eating together never occurred. In addition we asked what participants understood by the term family meal and whether family meals were important to them. Most people suggested that family meals were important, but fewer managed to eat them on most working days. We drew on the concept of synchronicity to shed light on how meals and mealtimes were coordinated in family life and the facilitators and constraints on coordination ( Brannen et al., 2013 ). We found that whether families ate together during the week, on the weekend only, or more rarely was not only influenced by parents’ work-time schedules but also by children’s timetables relating to their age and bodily tempos, their childcare regimes, their extracurricular activities, and the problem of coordinating different food preferences and tastes. While we did not report it, as the numbers were small, there was very little difference between the average diet score of children in each group (meals, no meals, and modified meals), which, as explained previously, is perhaps to be expected given the variation within each group and the many other factors involved.

At Wave 2 we aimed to extend this analysis by examining quantitatively the relationship between children eating family meals and sociodemographic variables (e.g., child age, maternal employment, social class) and nutritional intake at the aggregate level. To do so we aimed to explore a unique aspect of the archived NDNS data set, which has currently only been analyzed in one other study ( Mak et al., 2012 ). These data are “contextual” in relation to the food and nutrition data in that participants were asked as part of the food diaries to record, in relation to each eating occasion, not only what was eaten but also the time of eating, where, with whom, and whether the television was on ( Bates et al., 2011 ).

The main advantage of using these data was that, in contrast to dietary surveys that include a measure of “family meal frequency” and take the meaning of family meal for granted, these data were not collected by retrospective self-reports. Given the normative status of “the family meal,” as set out in the sociological literature (e.g., Jackson, Olive, & Smith, 2009 ; Murcott, 1997 , 2010 ) and our own qualitative study (Wave 1), we thought that these data were advantageous. 5 In addition, since the NDNS contains information about overall dietary intake, we could link family meal frequency to diet quality using our score. However, there were also disadvantages, namely that the sociodemographic data (especially maternal education and hours of employment) had not been collected. There were also other methodological challenges for the team, specifically in designing an operationalizable definition of a family meal. In short, we had to define “what is a meal?” and “what is a family?” in relation to the data available. This was limited by the following factors, among others. First, in relation to the variable “who eaten with,” we found little within-group variation in that most children were not reported as eating alone. While we thought it feasible to create a dichotomous variable—family/not family—decisions about how to do this were tricky given the data had been coded into categories that were not mutually exclusive (“alone,” “family,” “friend’s,” “parent(s)/care-provider,” “siblings,” “parent(s)/care-provider & siblings,” “care-provider & other children,” “other”). Second, the number of possible eating occasions throughout the day was considerable, involving consideration of which “time slot” to examine for a possible “family meal.” We opted to look at the family evening meal, as this is implied if not explicitly spelled out in popular understanding and in research about family meals, and, furthermore, we had established that 5 pm to 8 pm was the most common time slot for children’s eating in the NDNS data. However, we were aware that this might exclude some younger children who might eat earlier.

Other problems, not limited to this data set, were that we could not know from these data whether those present were actually eating with the child or whether they were eating the same foods, both of which are thought to be potentially important in explaining any association between family meal frequency and children’s and adolescents’ overall dietary intakes (e.g., Centre for Research on Families and Relationships, 2012 ; Skafida, 2013 ).

In operationalizing family meals in the NDNS data set, we therefore recognized that we had moved away somewhat from the idea of a family meal as held by most people. While we were avoiding the problem of asking participants to define this themselves, we were creating new problems in that any conclusions would not relate to family meals as they are popularly understood but would rather relate to very particular practices—a child eating between 5 pm and 8 pm with an adult member of his or her family present (or not).

Thus in examining the topic of family meals via a MMMR design, we sought through the qualitative data to explore similar issues to those examined in the quantitative data, namely to determine the frequency of parental presence and child presence at family mealtimes. However, we were also able to tease out whether children and parents were both eating and whether they were eating the same foods and the conditions under which they did and did not do so ( Brannen et al., 2013 ). The qualitative study also provided insight into the symbolic and moral aspects surrounding the concept of family meals as well as practices of eating together, while in the analysis of the quantitative data set the onus was on us to define what constituted eating together.

Given the risk inherent in the experimental nature of our analysis of NDNS data and the political appetite for statistical findings, we also sought to analyze family meal frequency and sociodemographic factors in two other UK large-scale surveys that have asked about children and adults eating together: the Millennium Cohort Study and Understanding Society. Since these were not dietary surveys, we could not examine associations of self-reported family meal frequency with diet outcomes, but we could examine the relationship with factors such as hours of maternal employment. Although these data were limited by their reliance on self-report, which assumes a shared understanding of the concept of the family meal as outlined earlier, by combining the results with findings from our complementary analyses of the qualitative data and the NDNS “contextual data” we hoped to foreground the importance of methodology and to highlight the complexities of measuring children’s participation in family meals and any association with sociodemographic factors or health and behavioral outcomes.

In disrupting common sense and taken for granted assumptions about the association between family meals and other factors such as mothers’ work and children’s overall diets, our findings based on applying a MMMR approach—while unsettling in the sense of raising methodological uncertainties—speak directly to political and policy concerns in that they caution against the idea that family meals are some sort of “magic bullet,” although they are a convenient way for politicians to (dis)place responsibility for children’s food intake onto parents ( O’Connell & Simon, 2013 ; Owen, 2013 ).

We hope we have demonstrated some of the benefits as well as the methodological issues of multimethod research. In particular, it is important to take into account that quantitative and qualitative methods each suffer from their own biases and limitations. Survey diary data, while advantageous in measuring behaviors (e.g., what children ate), have the disadvantage that they do not address issues of meaning (as in the example of family meals discussed earlier). Qualitative methods, while providing contextual and symbolic meanings (about food and meals, for example), may not provide detailed information about what is eaten and how much.

However, the combination of these methods may not provide a total solution either, as we have demonstrated with particular reference to our own study of food and families. Qualitative and quantitative methods may not together succeed in generating the knowledge that the researcher is seeking. In researching everyday, taken-for-granted practices, both methods can suffer from similar disadvantages. In surveys, practices may not easily be open to accurate recall or reflection. Likewise, qualitative methods, even when a narrative approach is adopted, may not produce recall but instead provoke normative accounts or justifications. While survey data can provide a “captive” sample for a qualitative study and the opportunity to analyze extensive contextual data about that sample, the two data samples may not be sufficiently comparable. Survey data collected at one moment in time may not connect with qualitative data collected at another time point (this is critical, for example, in studying children’s diets). Thus it may be necessary to build resources into the qualitative phase of the inquiry for reassessing children’s diets using a method based on that adopted in the survey. This may prove costly and require bringing in the help of nutritionists.

Many social practices are clothed in moral discourses and are thereby difficult to study by any method. Surveys are renowned for generating socially acceptable answers, and in interviews, too, respondents may not want to admit that they do not measure up to normative ideals. In discussing methodological issues in MMMR, it is all too easy to segment qualitative and quantitative approaches artificially ( Schwandt, 2005 ). As already stressed, MMMR can provide an articulation between different theoretical levels, as in macro, meso, and micro contexts. However, these theoretical levels typically draw on different logics of interpretation and explanation, making it necessary to show how different logics can be integrated ( Kelle, 2001 ). Moreover, we should not adopt a relativist approach but continue to subject our findings to scrutiny in order to draw fewer false conclusions.

Translating research questions across different methods of data collection may involve moving between different epistemologies and logics of inquiry and is likely to affect the data and create problems of interpretation, as has been discussed in the example of families and food. Quantitative and qualitative analyses do not necessarily map onto each other readily; they may be based on different forms of explanation. While sometimes they may complement one another, in other instances analyses are dissonant. We should also expect that findings generated by different methods (or collected at different points in time) may not match up. It is, for example, one thing to respond to an open-ended question in a face-to-face interview context and quite another to tick an item from a limited set of alternatives in a self-completion questionnaire.

Given the complexities of linking quantitative and qualitative data sets, we suggest that a narrative approach be adopted in reporting the data analyses. By this we mean that when writing up their results, researchers should give attention to the ways in which the data have been integrated, the issues that arise in interpreting the different data both separately and in combination, and how the use of different methods has benefited or complicated the process. This is particularly important where MMMR is carried out in a policy context, so that policymakers and other stakeholder groups may be enlightened about the caveats associated with different types of data and, in particular, the advantages and issues of employing more than one type of research methodology (typically quantitative data are preferred by policymakers; Brannen & Moss, 2013 ).

In addition, the researcher should be attentive to the ways in which the processes of “translation” involved in interpreting data are likely, often unwittingly, to reflect rather than reveal the contexts in which the research is carried out. Data have to be understood and interpreted in relation to the contexts in which the research is funded (by whom and for whom), the research questions posed, the theoretical frameworks that are fashionable at the time of study, and the methods by which the data are produced ( Brannen, 2005a , 2005b ).

Just as when we use or reuse archived data, it is important in primary research to take into account the broad historical and social contexts of the data and the research inquiry. All data analyses require contextualization, and whether this is part of a mixed method or multimethod research strategy, it is necessary to have recourse to diverse data sources and data collected in different ways. MMMR is not only a matter of strategy. It is not a tool-kit or a technical fix, nor is it a belt-and-braces approach. MMMR requires as much if not more reflexivity than other types of research. This means that researchers need to examine their own presumptions and preferences about different methods and the results that derive from each method and be open to shifting away from entrenched positions to which they cling—theoretical, epistemological, and methodological. At a practical level, the multimethod researcher should be prepared to learn new skills and engage with new communities of practice.

Future Directions

In the future, social science research is likely to become increasingly expensive. Primary research may also prove more difficult to do for other reasons, for example the restrictions imposed by ethics committees. Many researchers will have recourse to secondary analysis of existing contemporary data or turn to older archived data. MMMR research will therefore increasingly involve the analysis of different types of data rather than the application of different methods. For example, in our own work we have become increasingly engaged not only in examining data from different surveys but also in interrogating the assumptions that underlie the variables created in those surveys and reconciling (or not) these variables with new qualitative data we have collected.

Another possible trend that is likely to promote the use of MMMR is the growth in demand for interdisciplinary research that embraces disciplines beyond the social sciences. This trend will require even more emphasis on researchers working outside their traditional comfort zones and intellectual and methodological silos. For example, in our own research on families and children’s food practices we have been required to engage with nutritionists.

A third trend concerns the growing external pressure on social scientists to justify the importance of their work to society at large, while from within the social sciences there is pressure on them to reassert the role of social science as publicly and politically engaged. In the UK, we are increasingly required by funding bodies—research councils and government—and by universities to demonstrate the societal “impact” of our research. Part and parcel of this mission is the need to demonstrate the credibility of our research findings and to educate the wider world about the rigor and robustness of our methods. In order to understand the conditions of society in a globalized world, it will indeed be necessary to deepen, develop, and extend our methodological repertoire. MMMR is likely to continue to develop an increasingly high profile in this endeavor.

Discussion Questions

Discuss the benefits and challenges of linking a qualitative sample to a survey.

Identify within a MMMR study a research question that can be addressed both by qualitative and quantitative methods and a research question that can be addressed via one method only; discuss some different ways of integrating the data from these methods.

Discuss two or three methodological strategies for integrating dissonant research results based on quantitative and qualitative data.

Create a research design for a longitudinal research project that employs MMMR as part of that design.

Suggested Websites

http://eprints.ncrm.ac.uk

The UK’s National Centre for Research Methods website, where its “e-print” series of working and commissioned papers can be found.

http://www.qualitative-research.net/fqs

Website of a European online qualitative journal, FQS.

The study titled Food Practices and Employed Families with Younger Children was funded as part of a special initiative of the UK’s Economic and Social Research Council and the UK’s Food Standards Agency (RES-190-25-0010) and subsequently with the Department of Health (ES/J012556/1). The current research team includes Charlie Owen and Antonia Simon (statisticians and secondary data analysts), the principal investigator Dr. Rebecca O’Connell (anthropologist), and Professor Julia Brannen (sociologist). Katie Hollinghurst and Ann Mooney (psychologists) were formerly part of the team.

A “Dietary Feedback” report was provided to each individual participant about his or her own intake within 3 months of the diet diary being completed. This provided information about the individual intakes of fat, saturated fat, non-milk extrinsic sugars, dietary fiber (as non-starch polysaccharide), Vitamin C, folate, calcium, iron and energy (Table 15.1 ) relative to average intakes for each of these items for children in the UK, these being based on the results for children of this age from the NDNS conducted in the 1990s ( Gregory & Hinds, 1995 ; Gregory et al., 2000 ).

However, at the aggregate level, secondary analysis of the NDNS for 2008–2010 ( National Centre for Social Research, Medical Research Council Resource Centre for Human Nutrition Research, & University College London Medical School, 2012 ), using a combined data set of respondents from Year 1 (2008–2009) and Year 2 (2009–2010), found that social class and household income were related to children’s fruit and vegetable consumption and overall diet quality (as measured by our nutritional score; Simon et al., forthcoming ).

A feasible explanation for this finding is that mothers working these long hours are more likely to use paid or unpaid childcare in addition to sharing with a partner or spouse.

They avoid two key problems associated with self-reports of family meals in the extant literature, in which “family meals” are usually defined not by interviewers but by interviewees themselves (cf. Hammons & Fiese, 2011 ): that people may give the same response but mean different things, and that, because of the normativity surrounding family meals, some participants may overreport their participation in them. In short, such data seemed advantageous compared to poorly designed survey questions that are associated with known problems of reliability, validity, and bias (cf. Hammons & Fiese, 2011 ).

Bates, B. , Lennox, A. , Bates, C. , & Swan, G. (Eds.). ( 2011 ). National Diet & Nutrition Survey: Headline results from Years 1 & 2 (combined) of the rolling programme 2008/9–2009/10. Appendix A. Dietary data collection and editing. London, England: Food Standards Agency and Department of Health.

Blatchford, P. ( 2005 ). A multi-method approach to the study of school class size differences.   International Journal of Social Research Methodology , 8 (3), 195–205.

Brannen, J. ( 1992 ). Mixing methods: Qualitative and quantitative research . Aldershot, UK: Ashgate.

Brannen, J. ( 2005 a). Mixed methods research: A discussion paper. NCRM Methods Review Papers NCRM/005. Southampton, UK: National Center for Research Methods. Retrieved from http://eprints.ncrm.ac.uk/89/

Brannen, J. ( 2005 b). Mixing methods: The entry of qualitative and quantitative approaches into the research process.   International Journal of Social Research Methodology [Special Issue], 8 (3), 173–185.

Brannen, J., & Moss, G. ( 2013 ). Critical issues in designing mixed methods policy research.   American Behavioral Scientist , 57 , 152–172.

Brannen, J. , O’Connell, R. , & Mooney, A. ( 2013 ). Families, meals and synchronicity: Eating together in British dual earner families.   Community, Work and Family , 16 (4), 417–434. doi:10.1080/13668803.2013.776514

Bryman, A. ( 2008 ). Why do researchers integrate/combine/mesh/blend/mix/merge/fuse quantitative and qualitative research? In M. Bergman (Ed.), Advances in Mixed methods research: Theories and applications (pp. 87–100). London, England: Sage.

Caracelli, V.J. , & Greene, J. ( 1993 ). Data analysis strategies for mixed-method evaluation designs.   Educational Evaluation and Policy Analysis , 15 (2), 195–207.

Centre for Research on Families and Relationships. ( 2012 ). Is there something special about family meals? Exploring how family meal habits relate to young children’s diets. Briefing 62. Edinburgh, UK: Author. Retrieved from http://www.era.lib.ed.ac.uk/bitstream/1842/6554/1/briefing%2062.pdf

Crepinsek, M. , & Burstein, N. ( 2004 ). Maternal employment and children’s nutrition: Vol. 2. Other nutrition-related outcomes . Washington, DC: Economic Research Service, US Department of Agriculture.

Denscombe, M. ( 2008 ). Communities of practice: A research paradigm for the mixed method approach.   Journal of Mixed Methods Research , 2 , 270–284.

Denzin, N. , & Lincoln, Y. (Eds.). ( 2005 ). The SAGE handbook of qualitative research . Thousand Oaks, CA: Sage.

DeVault, M. L. ( 1991 ). Feeding the family: The social organization of caring as gendered work . Chicago, IL: University of Chicago Press.

Dewey, J. ( 1991 ). The pattern of inquiry. In J. Dewey (Ed.), The later works (pp. 105–123). Carbondale: Southern Illinois University Press.

Ettlinger, N. ( 2007 ). Precarity unbound.   Alternatives , 32 , 319–340.

Gillman, M. W. , Rifas-Shiman, S. L. , Frazier, A. L. , Rockett, H. R. H. , Camargo, C. A., Jr. , Field, A. E. , . . . Colditz, G. A. ( 2000 ). Family dinner and diet quality among older children and adolescents.   Archives of Family Medicine , 9 , 235–240.

Glueck, S. , & Glueck, E. ( 1950 ). Unravelling juvenile delinquency . New York, NY: Commonwealth Fund.

Glueck, S. , & Glueck, E. ( 1968 ). Delinquents and nondelinquents in perspective . Cambridge, MA: Harvard University Press.

Greene, J. , Caracelli, V. J. , & Graham, W. F. ( 1989 ). Toward a conceptual framework for mixed-method evaluation designs.   Educational Evaluation and Policy Analysis , 11 (3), 255–274.

Gregory J. R. , & Hinds K. ( 1995 ). National Diet and Nutrition Survey: Children aged 1 1⁄2 to 4 1⁄2 Years . London, England: Stationery Office.

Gregory, J. R. , Lowe S. , Bates, C. J. , Prentice, A. , Jackson, L. V. , Smithers, G. , . . . Farron, M. ( 2000 ). National Diet and Nutrition Survey: Young people aged 4 to 18 years: Vol. 1. Report of the diet and nutrition survey . London, England: Stationery Office.

Hammersley, M. ( 2000 ). Varieties of social research: A typology.   International Journal of Social Research Methodology , 3 (3), 221–231.

Hammersley, M. ( 2005 ). The myth of research-based practice: The critical case of educational inquiry.   International Journal of Social Research Methodology , 8 (4), 317–331.

Hammersley, M. ( 2008 ). Troubles with triangulation. In M. Bergman (Ed.), Advances in mixed methods research: Theories and applications (pp. 22–37). London, England: Sage.

Hammond, C. ( 2005 ). The wider benefits of adult learning: An illustration of the advantages of multi-method research.   International Journal of Social Research Methodology , 8 (3), 239–257.

Hammons, A. , & Fiese, B. ( 2011 ). Is frequency of shared family meals related to the nutritional health of children and adolescents?   Pediatrics , 127 (6), e1565–e1574.

Harrits, G. ( 2011 ). More than method? A discussion of paradigm differences within mixed methods research.   Journal of Mixed Methods Research , 5 (2), 150–167.

Hawkins, S. , Cole, T. , & Law, C. ( 2009 ). Examining the relationship between maternal employment and health behaviours in 5-year-old British children.   Journal of Epidemiology and Community Health , 63 (12), 999–1004.

Jackson, P. , Olive, S. , & Smith, G. ( 2009 ). Myths of the family meal: Re-reading Edwardian life histories. In P. Jackson (Ed.), Changing families, changing food (pp. 131–145). Basingstoke, UK: Palgrave Macmillan.

Jick, T. ( 1979 ). Mixing qualitative and quantitative methods: Triangulation in action.   Administrative Science Quarterly , 24 , 602–611.

Kelle, U. ( 2001 ). Sociological explanations between micro and macro and the integration of qualitative and quantitative methods.   FQS , 2 (1). Retrieved from http://www.qualitative-research.net/fqs-eng.htm

Knight, A. , O’Connell, R. , & Brannen, J. ( 2014 ). The temporality of food practices: Intergenerational relations, childhood memories and mothers’ food practices in working families with young children.   Families, Relationships and Societies , 3 (2), 303–318.

Laub, J. , & Sampson, R. ( 1993 ). Turning points in the life course: Why change matters to the study of crime.   Criminology , 31 , 301–325.

Laub, J. , & Sampson, R. ( 1998 ). Integrating qualitative and quantitative data. In J. Giele & G. Elder (Eds.), Methods of life course research: Qualitative and quantitative approaches (pp. 213–231). London, England: Sage.

Law, J. ( 2004 ). After method: Mess in social science research . New York, NY: Routledge.

Mak, T. , Prynne, C. , Cole, D. , Fitt, E. , Roberts, C. , Bates, B. , & Stephen, A. ( 2012 ). Assessing eating context and fruit and vegetable consumption in children: New methods using food diaries in the UK National Diet and Nutrition Survey Rolling Programme.   International Journal of Behavioral Nutrition and Physical Activity , 9 , 126.

Mason, J. ( 2006 ). Mixing methods in a qualitatively driven way.   Qualitative Research , 6 (1), 9–26.

McIntosh, A. , Davis, G. , Nayga, R. , Anding, J. , Torres, C. , Kubena, K. , . . . You, W. ( 2008 ). Parental time, role strain, and children’s fat intake and obesity-related outcomes . Washington, DC: US Department of Agriculture, Economic Research Service.

Metcalfe, A. , Dryden, C. , Johnson, M. , Owen, J. , & Shipton, G. ( 2009 ). Fathers, food and family life. In P. Jackson (Ed.), Changing families, changing food . Basingstoke, UK: Palgrave Macmillan.

Morgan, D. ( 2007 ). Paradigms lost and pragmatism regained: Methodological implications of combining qualitative and quantitative methods.   Journal of Mixed Methods Research , 1 (1), 48–76.

Murcott, A. ( 1997 ). Family meals—A thing of the past? In P. Caplan (Ed.), Food, identity and health (pp. 32–49). London, England: Routledge.

Murcott, A. ( 2010 , March 9). Family meals: Myth, reality and the reality of myth . Myths and Realities: A New Series of Public Debates. London, England: British Library.

National Centre for Social Research, Medical Research Council Resource Centre for Human Nutrition Research, & University College London Medical School. ( 2012 ). National Diet and Nutrition Survey, 2008–2010 [Computer file]. 3rd ed. Essex, UK: UK Data Archive [distributor]. Retrieved from http://dx.doi.org/10.5255/UKDA-SN-6533-1

Neumark-Sztainer, D. , Hannan, P. J. , Story, M. , Croll, J. , & Perry, C. ( 2003 ). Family meal patterns: Associations with socio-demographic characteristics and improved dietary intake among adolescents.   Journal of the American Dietetic Association , 103 , 317–322.

Nilsen, A. , & Brannen, J. ( 2010 ). The use of mixed methods in biographical research. In A. Tashakkori & C. Teddlie (Eds.), SAGE handbook of mixed methods in social & behavioral research (2nd ed., pp. 677–696). London, England: Sage.

O’Cathain, A. ( 2009 ). Reporting results. In S. Andrew & E. Halcomb (Eds.), Mixed methods research for nursing and the health sciences (pp. 135–158). Oxford, UK: Blackwell.

O’Connell, R. ( 2013 ). The use of visual methods with children in a mixed methods study of family food practices.   International Journal of Social Research Methodology , 16 (1), 31–46.

O’Connell, R. , & Brannen, J. ( 2014 ). Children’s food, power and control: Negotiations in families with younger children in England.   Childhood , 21 (1), 87–102.

O’Connell, R. , & Brannen, J. ( 2015 ). Food, Families and work . London, England: Bloomsbury.

O’Connell, R. , Brannen, J. , Mooney, A. , Knight, A. , & Simon, A. (2011). Food and families who work: A summary. Available at http://www.esrc.ac.uk/my-esrc/grants/RES-190-25-0010/outputs/Read/e7d99b9f-eafd-4650-9fd2-0efcf72b4555

O’Connell, R. , & Simon, A. (2013, April). A mixed methods approach to meals in working families: Addressing lazy assumptions and methodologies difficulties . Paper presented at the British Sociological Association Annual Conference: Engaging Sociology. London, England.

Onwuegbuzie, A. , & Leech, N. ( 2005 ). On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies.   International Journal of Social Research Methodology , 8 (5), 375–389.

Owen, C. (2013, July). Do the children of employed mothers eat fewer “family meals”? Paper presented at the Understanding Society Conference, University of Essex.

Reiss, A. J. ( 1968 ). Stuff and nonsense about social surveys and participant observation. In H. S. Becker , B. Geer , D. Riesman , & R. S. Weiss (Eds.), Institutions and the person: Papers in memory of Everett C. Hughes (pp. 351–367). Chicago, IL: Aldine.

Sammons, P. , Siraj-Blatchford, I. , Sylva, K. , Melhuish, E. , Taggart, B. , & Elliot, K. ( 2005 ). Investigating the effects of pre-school provision: Using mixed methods in the EPPE research.   International Journal of Social Research Methodology , 8 (3), 207–224.

Schwandt, T. ( 2005 ). A diagnostic reading of scientifically based research for education.   Educational Theory , 55 , 285–305.

Seltzer-Kelly, D. , Westwood, D. , & Pena-Guzman, M. ( 2012 ). A methodological self-study of quantitizing: Negotiated meaning and revealing multiplicity.   Journal of Mixed Methods Research , 6 (4), 258–275.

Simon, A. , O’Connell, R. , & Stephen, A. M. ( 2012 ). Designing a nutritional scoring system for assessing diet quality for children aged 10 years and under in the UK.   Methodological Innovations Online , 7 (2), 27–47.

Simon, A. , Owen, C. , O’Connell, R. , & Stephen, A. ( forthcoming ). Exploring associations between children’s diets and maternal employment through an analysis of the UK National Diet and Nutrition Survey.

Skafida, V. ( 2013 ). The family meal panacea: Exploring how different aspects of family meal occurrence, meal habits and meal enjoyment relate to young children’s diets.   Sociology of Health and Illness , 35 (6), 906–923. doi:10.1111/1467-9566.12007

Stephen, A. ( 2007 ). The case for diet diaries in longitudinal studies.   International Journal of Social Research Methodology , 10 (5), 365–377.

Tashakkori, A. , & Creswell, J. ( 2007 ). Exploring the nature of research questions in mixed methods research.   Journal of Mixed Methods Research , 1 (3), 207–211.

Warde, A. , & Hetherington, K. ( 1994 ). English households and routine food practices: A research note.   Sociological Review , 42 (4), 758–778.

Woolley, C. ( 2009 ). Meeting the methods challenge of integration in a sociological study of structure and agency.   Journal of Mixed Methods Research , 3 (1), 7–26.

Yin, R. ( 2006 ). Mixed methods research: Are the methods genuinely integrated or merely parallel?   Research in the Schools , 13 (1), 41–47.


The ultimate guide to quantitative data analysis

Numbers help us make sense of the world. We collect quantitative data on our speed and distance as we drive, the number of hours we spend on our cell phones, and how much we save at the grocery store.

Our businesses run on numbers, too. We spend hours poring over key performance indicators (KPIs) like lead-to-client conversions, net profit margins, and bounce and churn rates.

But all of this quantitative data can feel overwhelming and confusing. Lists and spreadsheets of numbers don’t tell you much on their own—you have to conduct quantitative data analysis to understand them and make informed decisions.


This guide explains what quantitative data analysis is and why it’s important, and gives you a four-step process to conduct a quantitative data analysis, so you know exactly what’s happening in your business and what your users need .

What is quantitative data analysis? 

Quantitative data analysis is the process of analyzing and interpreting numerical data. It helps you make sense of information by identifying patterns, trends, and relationships between variables through mathematical calculations and statistical tests. 

With quantitative data analysis, you turn spreadsheets of individual data points into meaningful insights to drive informed decisions. Columns of numbers from an experiment or survey transform into useful insights—like which marketing campaign asset your average customer prefers or which website factors are most closely connected to your bounce rate. 

Without analytics, data is just noise. Analyzing data helps you make decisions that are informed and free from bias.

What quantitative data analysis is not

But as powerful as quantitative data analysis is, it’s not without its limitations. It only gives you the what, not the why . For example, it can tell you how many website visitors or conversions you have on an average day, but it can’t tell you why users visited your site or made a purchase.

For the why behind user behavior, you need qualitative data analysis , a process for making sense of qualitative research like open-ended survey responses, interview clips, or behavioral observations. By analyzing non-numerical data, you gain useful contextual insights to shape your strategy, product, and messaging. 

Quantitative data analysis vs. qualitative data analysis 

Let’s take an even deeper dive into the differences between quantitative data analysis and qualitative data analysis to explore what they do and when you need them.

The bottom line: quantitative data analysis and qualitative data analysis are complementary processes. They work hand-in-hand to tell you what’s happening in your business and why.  

💡 Pro tip: easily toggle between quantitative and qualitative data analysis with Hotjar Funnels . 

The Funnels tool helps you visualize quantitative metrics like drop-off and conversion rates in your sales or conversion funnel to understand when and where users leave your website. You can break down your data even further to compare conversion performance by user segment.

Spot a potential issue? A single click takes you to relevant session recordings , where you see user behaviors like mouse movements, scrolls, and clicks. With this qualitative data to provide context, you'll better understand what you need to optimize to streamline the user experience (UX) and increase conversions .

Hotjar Funnels lets you quickly explore the story behind the quantitative data

4 benefits of quantitative data analysis

There’s a reason product, web design, and marketing teams take time to analyze metrics: the process pays off big time. 

Four major benefits of quantitative data analysis include:

1. Make confident decisions 

With quantitative data analysis, you know you’ve got data-driven insights to back up your decisions . For example, if you launch a concept testing survey to gauge user reactions to a new logo design, and 92% of users rate it ‘very good’—you'll feel certain when you give the designer the green light. 

Since you’re relying less on intuition and more on facts, you reduce the risks of making the wrong decision. (You’ll also find it way easier to get buy-in from team members and stakeholders for your next proposed project. 🙌)

2. Reduce costs

By crunching the numbers, you can spot opportunities to reduce spend . For example, if an ad campaign has lower-than-average click-through rates , you might decide to cut your losses and invest your budget elsewhere. 

Or, by analyzing ecommerce metrics , like website traffic by source, you may find you’re getting very little return on investment from a certain social media channel—and scale back spending in that area.

3. Personalize the user experience

Quantitative data analysis helps you map the customer journey , so you get a better sense of customers’ demographics, what page elements they interact with on your site, and where they drop off or convert . 

These insights let you better personalize your website, product, or communication, so you can segment ads, emails, and website content for specific user personas or target groups.

4. Improve user satisfaction and delight

Quantitative data analysis lets you see where your website or product is doing well—and where it falls short for your users . For example, you might see stellar results from KPIs like time on page, but conversion rates for that page are low. 

These quantitative insights encourage you to dive deeper into qualitative data to see why that’s happening—looking for moments of confusion or frustration on session recordings, for example—so you can make adjustments and optimize your conversions by improving customer satisfaction and delight.

💡 Pro tip: use Net Promoter Score® (NPS) surveys to capture quantifiable customer satisfaction data that’s easy for you to analyze and interpret. 

With an NPS tool like Hotjar, you can create an on-page survey to ask users how likely they are to recommend you to others on a scale from 0 to 10. (And for added context, you can ask follow-up questions about why customers selected the rating they did—rich qualitative data is always a bonus!)
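If you ever want to crunch the numbers yourself, the NPS arithmetic is simple: group ratings into promoters (9–10), passives (7–8), and detractors (0–6), then subtract the percentage of detractors from the percentage of promoters. Here’s a minimal Python sketch with made-up ratings (just the formula, not Hotjar’s API):

```python
# Minimal NPS calculation sketch (hypothetical ratings)
ratings = [10, 9, 8, 6, 10, 7, 3, 9, 10, 5]  # survey responses on a 0-10 scale

promoters = sum(1 for r in ratings if r >= 9)   # scores of 9-10
detractors = sum(1 for r in ratings if r <= 6)  # scores of 0-6

# NPS = % promoters - % detractors; ranges from -100 to +100
nps = 100 * (promoters - detractors) / len(ratings)
print(f"NPS: {nps:.0f}")  # -> NPS: 20
```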

Hotjar graphs your quantitative NPS data to show changes over time

4 steps to effective quantitative data analysis 

Quantitative data analysis sounds way more intimidating than it actually is. Here’s how to make sense of your company’s numbers in just four steps:

1. Collect data

Before you can actually start the analysis process, you need data to analyze. This involves conducting quantitative research and collecting numerical data from various sources, including: 

Interviews or focus groups 

Website analytics

Observations, from tools like heatmaps or session recordings

Questionnaires, like surveys or on-page feedback widgets

Just ensure the questions you ask in your surveys are close-ended—giving respondents a fixed set of choices to pick from, rather than open-ended questions that allow free-text responses.

Hotjar’s pricing plans survey template provides close-ended questions

2. Clean data

Once you’ve collected your data, it’s time to clean it up. Look through your results to find errors, duplicates, and omissions. Keep an eye out for outliers, too. Outliers are data points that differ significantly from the rest of the set—and they can skew your results if you don’t remove them.

By taking the time to clean your data set, you ensure your data is accurate, consistent, and relevant before it’s time to analyze. 
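To make these steps concrete, here’s a small pandas sketch of the cleaning process described above—the data set, column names, and values are all hypothetical:

```python
import pandas as pd

# Hypothetical raw survey export (note the duplicated row and the missing value)
df = pd.DataFrame({
    "respondent": [1, 2, 2, 3, 4, 5],
    "rating": [4, 5, 5, None, 3, 5],
    "time_on_page_s": [42, 51, 51, 38, 47, 900],  # 900 looks like an outlier
})

df = df.drop_duplicates()  # remove the duplicated submission
df = df.dropna()           # drop rows with missing answers

# Filter outliers using the interquartile range (IQR) rule
col = df["time_on_page_s"]
q1, q3 = col.quantile(0.25), col.quantile(0.75)
iqr = q3 - q1
df = df[(col >= q1 - 1.5 * iqr) & (col <= q3 + 1.5 * iqr)]
print(df)  # the 900-second row is gone
```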

3. Analyze and interpret data

At this point, your data’s all cleaned up and ready for the main event. This step involves crunching the numbers to find patterns and trends via mathematical and statistical methods. 

Two main branches of quantitative data analysis exist: 

Descriptive analysis : methods to summarize or describe attributes of your data set. For example, you may calculate key stats like distribution and frequency, or mean, median, and mode.

Inferential analysis : methods that let you draw conclusions from statistics—like analyzing the relationship between variables or making predictions. These methods include t-tests, cross-tabulation, and factor analysis. (For more detailed explanations and how-tos, head to our guide on quantitative data analysis methods.)
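To give you a feel for how the two branches work in practice, here’s a short Python sketch using made-up conversion data for two landing page variants. Descriptive statistics summarize each sample; an independent t-test (via SciPy) helps you infer whether the difference between them is likely real:

```python
import statistics
from scipy import stats

# Hypothetical daily conversion rates (%) for two landing page variants
variant_a = [2.1, 2.4, 2.2, 2.8, 2.5, 2.3, 2.6]
variant_b = [2.9, 3.1, 2.7, 3.4, 3.0, 3.2, 2.8]

# Descriptive: summarize each data set
print("A:", statistics.mean(variant_a), statistics.median(variant_a))
print("B:", statistics.mean(variant_b), statistics.median(variant_b))

# Inferential: is the difference likely real, or just noise?
t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a real difference
```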

Then, interpret your data to determine the best course of action. What does the data suggest you do ? For example, if your analysis shows a strong correlation between email open rate and time sent, you may explore optimal send times for each user segment.
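For the email example above, the correlation check itself is a one-liner once your data is in a pandas DataFrame—the numbers below are invented for illustration:

```python
import pandas as pd

# Hypothetical campaign data: hour each email was sent and its open rate (%)
emails = pd.DataFrame({
    "hour_sent": [7, 9, 11, 13, 15, 17, 19],
    "open_rate": [18.0, 24.5, 26.0, 22.0, 20.5, 27.5, 30.0],
})

# Pearson correlation: +1 = strong positive, -1 = strong negative, 0 = none
print(emails["hour_sent"].corr(emails["open_rate"]))
```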

4. Visualize and share data

Once you’ve analyzed and interpreted your data, create easy-to-read, engaging data visualizations—like charts, graphs, and tables—to present your results to team members and stakeholders. Data visualizations highlight similarities and differences between data sets and show the relationships between variables.
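If you’d rather script a quick chart yourself, a few lines of matplotlib will do the job—here’s a sketch with hypothetical conversion rates by traffic source:

```python
import matplotlib.pyplot as plt

# Hypothetical conversion rates by traffic source
sources = ["Organic", "Paid ads", "Email", "Social"]
conversion = [3.2, 2.1, 4.5, 1.8]

plt.bar(sources, conversion)
plt.ylabel("Conversion rate (%)")
plt.title("Conversion rate by traffic source")
plt.show()
```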

Software can do this part for you. For example, the Hotjar Dashboard shows all of your key metrics in one place—and automatically creates bar graphs to show how your top pages’ performance compares. And with just one click, you can navigate to the Trends tool to analyze product metrics for different segments on a single chart. 

Hotjar Trends lets you compare metrics across segments

Discover rich user insights with quantitative data analysis

Conducting quantitative data analysis takes a little bit of time and know-how, but it’s much more manageable than you might think. 

By choosing the right methods and following clear steps, you gain insights into product performance and customer experience —and you’ll be well on your way to making better decisions and creating more customer satisfaction and loyalty.

FAQs about quantitative data analysis

What is quantitative data analysis?

Quantitative data analysis is the process of making sense of numerical data through mathematical calculations and statistical tests. It helps you identify patterns, relationships, and trends to make better decisions.

How is quantitative data analysis different from qualitative data analysis?

Quantitative and qualitative data analysis are both essential processes for making sense of quantitative and qualitative research .

Quantitative data analysis helps you summarize and interpret numerical results from close-ended questions to understand what is happening. Qualitative data analysis helps you summarize and interpret non-numerical results, like opinions or behavior, to understand why the numbers look like they do.

If you want to make strong data-driven decisions, you need both.

What are some benefits of quantitative data analysis?

Quantitative data analysis turns numbers into rich insights. Some benefits of this process include: 

Making more confident decisions

Identifying ways to cut costs

Personalizing the user experience

Improving customer satisfaction

What methods can I use to analyze quantitative data?

Quantitative data analysis has two branches: descriptive statistics and inferential statistics. 

Descriptive statistics provide a snapshot of the data’s features by calculating measures like mean, median, and mode. 

Inferential statistics , as the name implies, involves making inferences about what the data means. Dozens of methods exist for this branch of quantitative data analysis, but three commonly used techniques are: 

Cross tabulation

Factor analysis

T-tests
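As a quick illustration of one of these techniques, here’s a minimal pandas cross-tabulation sketch using made-up survey responses—it counts how satisfaction breaks down by plan type:

```python
import pandas as pd

# Hypothetical survey responses
df = pd.DataFrame({
    "plan": ["Free", "Pro", "Free", "Pro", "Pro", "Free"],
    "satisfied": ["Yes", "Yes", "No", "Yes", "No", "No"],
})

# Cross-tabulation: counts for each plan/satisfaction combination
print(pd.crosstab(df["plan"], df["satisfied"]))
```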

The 7 Most Useful Data Analysis Methods and Techniques

Data analytics is the process of analyzing raw data to draw out meaningful insights. These insights are then used to determine the best course of action.

When is the best time to roll out that marketing campaign? Is the current team structure as effective as it could be? Which customer segments are most likely to purchase your new product?

Ultimately, data analytics is a crucial driver of any successful business strategy. But how do data analysts actually turn raw data into something useful? There are a range of methods and techniques that data analysts use depending on the type of data in question and the kinds of insights they want to uncover.

You can get a hands-on introduction to data analytics in this free short course .

In this post, we’ll explore some of the most useful data analysis techniques. By the end, you’ll have a much clearer idea of how you can transform meaningless data into business intelligence. We’ll cover:

  • What is data analysis and why is it important?
  • What is the difference between qualitative and quantitative data?
  • Regression analysis
  • Monte Carlo simulation
  • Factor analysis
  • Cohort analysis
  • Cluster analysis
  • Time series analysis
  • Sentiment analysis
  • The data analysis process
  • The best tools for data analysis
  •  Key takeaways

The first six methods listed are used for quantitative data , while the last technique applies to qualitative data. We briefly explain the difference between quantitative and qualitative data in section two, but if you want to skip straight to a particular analysis technique, just use the clickable menu.

1. What is data analysis and why is it important?

Data analysis is, put simply, the process of discovering useful information by evaluating data. This is done through a process of inspecting, cleaning, transforming, and modeling data using analytical and statistical tools, which we will explore in detail further along in this article.

Why is data analysis important? Analyzing data effectively helps organizations make business decisions. Nowadays, data is collected by businesses constantly: through surveys, online tracking, online marketing analytics, collected subscription and registration data (think newsletters), social media monitoring, among other methods.

These data will appear as different structures, including—but not limited to—the following:

Big data

The concept of big data —data that is so large, fast, or complex that it is difficult or impossible to process using traditional methods—gained momentum in the early 2000s. Then, Doug Laney, an industry analyst, articulated what is now known as the mainstream definition of big data as the three Vs: volume, velocity, and variety. 

  • Volume: As mentioned earlier, organizations are collecting data constantly. In the not-too-distant past it would have been a real issue to store, but nowadays storage is cheap and takes up little space.
  • Velocity: Received data needs to be handled in a timely manner. With the growth of the Internet of Things, this can mean these data are coming in constantly, and at an unprecedented speed.
  • Variety: The data being collected and stored by organizations comes in many forms, ranging from structured data—that is, more traditional, numerical data—to unstructured data—think emails, videos, audio, and so on. We’ll cover structured and unstructured data a little further on.

Metadata

This is a form of data that provides information about other data, such as an image. In everyday life you’ll find this by, for example, right-clicking on a file in a folder and selecting “Get Info”, which will show you information such as file size and kind, date of creation, and so on.

Real-time data

This is data that is presented as soon as it is acquired. A good example of this is a stock market ticker, which provides information on the most-active stocks in real time.

Machine data

This is data that is produced wholly by machines, without human instruction. An example of this could be call logs automatically generated by your smartphone.

Quantitative and qualitative data

Quantitative data—otherwise known as structured data—may appear as a “traditional” database—that is, with rows and columns. Qualitative data—otherwise known as unstructured data—are the other types of data that don’t fit into rows and columns, which can include text, images, videos and more. We’ll discuss this further in the next section.

2. What is the difference between quantitative and qualitative data?

How you analyze your data depends on the type of data you’re dealing with— quantitative or qualitative . So what’s the difference?

Quantitative data is anything measurable , comprising specific quantities and numbers. Some examples of quantitative data include sales figures, email click-through rates, number of website visitors, and percentage revenue increase. Quantitative data analysis techniques focus on the statistical, mathematical, or numerical analysis of (usually large) datasets. This includes the manipulation of statistical data using computational techniques and algorithms. Quantitative analysis techniques are often used to explain certain phenomena or to make predictions.

Qualitative data cannot be measured objectively , and is therefore open to more subjective interpretation. Some examples of qualitative data include comments left in response to a survey question, things people have said during interviews, tweets and other social media posts, and the text included in product reviews. With qualitative data analysis, the focus is on making sense of unstructured data (such as written text, or transcripts of spoken conversations). Often, qualitative analysis will organize the data into themes—a process which, fortunately, can be automated.

Data analysts work with both quantitative and qualitative data , so it’s important to be familiar with a variety of analysis methods. Let’s take a look at some of the most useful techniques now.

3. Data analysis techniques

Now that we’re familiar with some of the different types of data, let’s focus on the topic at hand: different methods for analyzing data.

a. Regression analysis

Regression analysis is used to estimate the relationship between a set of variables. When conducting any type of regression analysis, you’re looking to see if there’s a correlation between a dependent variable (that’s the variable or outcome you want to measure or predict) and any number of independent variables (factors which may have an impact on the dependent variable). The aim of regression analysis is to estimate how one or more variables might impact the dependent variable, in order to identify trends and patterns. This is especially useful for making predictions and forecasting future trends.

Let’s imagine you work for an ecommerce company and you want to examine the relationship between: (a) how much money is spent on social media marketing, and (b) sales revenue. In this case, sales revenue is your dependent variable—it’s the factor you’re most interested in predicting and boosting. Social media spend is your independent variable; you want to determine whether or not it has an impact on sales and, ultimately, whether it’s worth increasing, decreasing, or keeping the same. Using regression analysis, you’d be able to see if there’s a relationship between the two variables. A positive correlation would imply that the more you spend on social media marketing, the more sales revenue you make. No correlation at all might suggest that social media marketing has no bearing on your sales. Understanding the relationship between these two variables would help you to make informed decisions about the social media budget going forward.

However, it’s important to note that, on their own, regressions can only be used to determine whether or not there is a relationship between a set of variables—they don’t tell you anything about cause and effect. So, while a positive correlation between social media spend and sales revenue may suggest that one impacts the other, it’s impossible to draw definitive conclusions based on this analysis alone.

There are many different types of regression analysis, and the model you use depends on the type of data you have for the dependent variable. For example, your dependent variable might be continuous (i.e. something that can be measured on a continuous scale, such as sales revenue in USD), in which case you’d use a different type of regression analysis than if your dependent variable was categorical in nature (i.e. comprising values that can be categorized into a number of distinct groups based on a certain characteristic, such as customer location by continent). You can learn more about different types of dependent variables and how to choose the right regression analysis in this guide.
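
To make this tangible, here’s a minimal sketch of a simple linear regression in Python using scikit-learn. All of the figures (monthly social media spend and revenue) are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly figures: social media spend (USD) vs. sales revenue (USD)
spend = np.array([1000, 2000, 3000, 4000, 5000]).reshape(-1, 1)  # independent variable
revenue = np.array([11000, 13500, 17000, 19500, 23000])          # dependent variable

model = LinearRegression().fit(spend, revenue)

print(f"Slope: {model.coef_[0]:.2f}")             # estimated revenue change per extra dollar spent
print(f"Intercept: {model.intercept_:.2f}")       # estimated baseline revenue at zero spend
print(f"R^2: {model.score(spend, revenue):.3f}")  # proportion of variance explained by the model
print(f"Predicted revenue at $6,000 spend: {model.predict([[6000]])[0]:,.0f}")
```

A positive slope and a high R² here would point to a positive correlation between spend and revenue, but, as noted above, correlation alone says nothing about cause and effect.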

Regression analysis in action: Investigating the relationship between clothing brand Benetton’s advertising expenditure and sales

b. Monte Carlo simulation

When making decisions or taking certain actions, there are a range of different possible outcomes. If you take the bus, you might get stuck in traffic. If you walk, you might get caught in the rain or bump into your chatty neighbor, potentially delaying your journey. In everyday life, we tend to briefly weigh up the pros and cons before deciding which action to take; however, when the stakes are high, it’s essential to calculate, as thoroughly and accurately as possible, all the potential risks and rewards.

Monte Carlo simulation, otherwise known as the Monte Carlo method, is a computerized technique used to generate models of possible outcomes and their probability distributions. It essentially considers a range of possible outcomes and then calculates how likely it is that each particular outcome will be realized. The Monte Carlo method is used by data analysts to conduct advanced risk analysis, allowing them to better forecast what might happen in the future and make decisions accordingly.

So how does Monte Carlo simulation work, and what can it tell us? To run a Monte Carlo simulation, you’ll start with a mathematical model of your data—such as a spreadsheet. Within your spreadsheet, you’ll have one or several outputs that you’re interested in; profit, for example, or number of sales. You’ll also have a number of inputs; these are variables that may impact your output variable. If you’re looking at profit, relevant inputs might include the number of sales, total marketing spend, and employee salaries. If you knew the exact, definitive values of all your input variables, you’d quite easily be able to calculate what profit you’d be left with at the end. However, when these values are uncertain, a Monte Carlo simulation enables you to calculate all the possible options and their probabilities. What will your profit be if you make 100,000 sales and hire five new employees on a salary of $50,000 each? What is the likelihood of this outcome? What will your profit be if you only make 12,000 sales and hire five new employees? And so on. It does this by replacing all uncertain values with functions which generate random samples from distributions determined by you, and then running a series of calculations and recalculations to produce models of all the possible outcomes and their probability distributions. The Monte Carlo method is one of the most popular techniques for calculating the effect of unpredictable variables on a specific output variable, making it ideal for risk analysis.
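
As an illustration, here’s a minimal sketch of the method in Python with NumPy. The input distributions (units sold, unit price, unit cost) are invented for the example; in practice you’d choose distributions that reflect your own historical data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_simulations = 100_000

# Hypothetical uncertain inputs, each modeled as a probability distribution
units_sold = rng.normal(loc=50_000, scale=10_000, size=n_simulations)
unit_price = rng.triangular(left=8, mode=10, right=13, size=n_simulations)
unit_cost = rng.uniform(low=4, high=6, size=n_simulations)
fixed_costs = 250_000  # assumed known and fixed

# One profit figure per simulated scenario
profit = units_sold * (unit_price - unit_cost) - fixed_costs

print(f"Mean profit: ${profit.mean():,.0f}")
print(f"5th percentile: ${np.percentile(profit, 5):,.0f}")
print(f"95th percentile: ${np.percentile(profit, 95):,.0f}")
print(f"Probability of a loss: {(profit < 0).mean():.1%}")
```

Instead of a single best-guess profit figure, you get a full distribution of outcomes, which is exactly what makes the method useful for risk analysis.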

Monte Carlo simulation in action: A case study using Monte Carlo simulation for risk analysis

c. Factor analysis

Factor analysis is a technique used to reduce a large number of variables to a smaller number of factors. It works on the basis that multiple separate, observable variables correlate with each other because they are all associated with an underlying construct. This is useful not only because it condenses large datasets into smaller, more manageable samples, but also because it helps to uncover hidden patterns. This allows you to explore concepts that cannot be easily measured or observed—such as wealth, happiness, fitness, or, for a more business-relevant example, customer loyalty and satisfaction.

Let’s imagine you want to get to know your customers better, so you send out a rather long survey comprising one hundred questions. Some of the questions relate to how they feel about your company and product; for example, “Would you recommend us to a friend?” and “How would you rate the overall customer experience?” Other questions ask things like “What is your yearly household income?” and “How much are you willing to spend on skincare each month?”

Once your survey has been sent out and completed by lots of customers, you end up with a large dataset that essentially tells you one hundred different things about each customer (assuming each customer gives one hundred responses). Instead of looking at each of these responses (or variables) individually, you can use factor analysis to group them into factors that belong together—in other words, to relate them to a single underlying construct. In this example, factor analysis works by finding survey items that are strongly correlated. This is known as covariance. So, if there’s a strong positive correlation between a customer’s household income and how much they’re willing to spend on skincare each month (i.e. as one increases, so does the other), these items may be grouped together. Together with other variables (survey responses), you may find that they can be reduced to a single factor such as “consumer purchasing power”. Likewise, if a customer experience rating of 10/10 correlates strongly with “yes” responses regarding how likely they are to recommend your product to a friend, these items may be reduced to a single factor such as “customer satisfaction”.

In the end, you have a smaller number of factors rather than hundreds of individual variables. These factors are then taken forward for further analysis, allowing you to learn more about your customers (or any other area you’re interested in exploring).
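
Here’s a minimal sketch of the idea in Python using scikit-learn’s FactorAnalysis. The survey responses are simulated so that six observed items are driven by two hidden constructs; with real data you would simply pass in your response matrix:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200  # number of simulated survey respondents

# Two hidden constructs that the survey items indirectly measure
purchasing_power = rng.normal(5, 2, n)
satisfaction = rng.normal(7, 1.5, n)

# Six observed survey items, each a noisy reflection of one construct
survey = pd.DataFrame({
    "household_income":       purchasing_power + rng.normal(0, 0.5, n),
    "skincare_budget":        purchasing_power + rng.normal(0, 0.5, n),
    "discretionary_spend":    purchasing_power + rng.normal(0, 0.5, n),
    "recommend_to_friend":    satisfaction + rng.normal(0, 0.5, n),
    "experience_rating":      satisfaction + rng.normal(0, 0.5, n),
    "repeat_purchase_intent": satisfaction + rng.normal(0, 0.5, n),
})

# Reduce the six observed variables to two underlying factors
fa = FactorAnalysis(n_components=2, random_state=0).fit(survey)

# The loadings show how strongly each item relates to each factor;
# items driven by the same construct should load on the same factor
loadings = pd.DataFrame(fa.components_.T, index=survey.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```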

Factor analysis in action: Using factor analysis to explore customer behavior patterns in Tehran

d. Cohort analysis

Cohort analysis is a data analytics technique that groups users based on a shared characteristic, such as the date they signed up for a service or the product they purchased. Once users are grouped into cohorts, analysts can track their behavior over time to identify trends and patterns.

So what does this mean and why is it useful? Let’s break down the above definition further. A cohort is a group of people who share a common characteristic (or action) during a given time period. Students who enrolled at university in 2020 may be referred to as the 2020 cohort. Customers who purchased something from your online store via the app in the month of December may also be considered a cohort.

With cohort analysis, you’re dividing your customers or users into groups and looking at how these groups behave over time. So, rather than looking at a single, isolated snapshot of all your customers at a given moment in time (with each customer at a different point in their journey), you’re examining your customers’ behavior in the context of the customer lifecycle. As a result, you can start to identify patterns of behavior at various points in the customer journey—say, from their first ever visit to your website, through to email newsletter sign-up, to their first purchase, and so on. As such, cohort analysis is dynamic, allowing you to uncover valuable insights about the customer lifecycle.

This is useful because it allows companies to tailor their service to specific customer segments (or cohorts). Let’s imagine you run a 50% discount campaign in order to attract potential new customers to your website. Once you’ve attracted a group of new customers (a cohort), you’ll want to track whether they actually buy anything and, if they do, whether or not (and how frequently) they make a repeat purchase. With these insights, you’ll start to gain a much better understanding of when this particular cohort might benefit from another discount offer or retargeting ads on social media, for example. Ultimately, cohort analysis allows companies to optimize their service offerings (and marketing) to provide a more targeted, personalized experience. You can learn more about how to run cohort analysis using Google Analytics .
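
As a minimal sketch, here’s how you might assign customers to monthly cohorts and count their activity over time in Python with pandas. The order log is invented for illustration:

```python
import pandas as pd

# Hypothetical order log: one row per purchase
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3, 4, 4],
    "order_date": pd.to_datetime([
        "2024-01-05", "2024-02-10", "2024-01-20", "2024-02-02",
        "2024-03-15", "2024-02-07", "2024-03-01", "2024-03-20",
    ]),
})

# Each customer's cohort is the month of their first-ever purchase
orders["order_month"] = orders["order_date"].dt.to_period("M")
first_purchase = orders.groupby("customer_id")["order_date"].transform("min")
orders["cohort"] = first_purchase.dt.to_period("M")

# Count distinct active customers per cohort per month
cohort_counts = (
    orders.groupby(["cohort", "order_month"])["customer_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(cohort_counts)
```

Reading across each row shows how a given cohort’s activity holds up (or drops off) month by month, which is the retention pattern cohort analysis is designed to expose.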

Cohort analysis in action: How Ticketmaster used cohort analysis to boost revenue

e. Cluster analysis

Cluster analysis is an exploratory technique that seeks to identify structures within a dataset. The goal of cluster analysis is to sort different data points into groups (or clusters) that are internally homogeneous and externally heterogeneous. This means that data points within a cluster are similar to each other, and dissimilar to data points in another cluster. Clustering is used to gain insight into how data is distributed in a given dataset, or as a preprocessing step for other algorithms.

There are many real-world applications of cluster analysis. In marketing, cluster analysis is commonly used to group a large customer base into distinct segments, allowing for a more targeted approach to advertising and communication. Insurance firms might use cluster analysis to investigate why certain locations are associated with a high number of insurance claims. Another common application is in geology, where experts will use cluster analysis to evaluate which cities are at greatest risk of earthquakes (and thus try to mitigate the risk with protective measures).

It’s important to note that, while cluster analysis may reveal structures within your data, it won’t explain why those structures exist. With that in mind, cluster analysis is a useful starting point for understanding your data and informing further analysis. Clustering algorithms are also used in machine learning—you can learn more about clustering in machine learning in our guide .
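
Here’s a minimal sketch of clustering in Python using scikit-learn’s k-means implementation, one of the most common clustering algorithms. The customer data is invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customers: [annual spend in USD, store visits per month]
customers = np.array([
    [200, 1], [250, 2], [300, 1],     # low spend, infrequent visits
    [1200, 8], [1100, 9], [1300, 7],  # high spend, frequent visits
    [600, 4], [650, 5], [700, 4],     # mid-range
])

# Ask k-means for three internally similar, mutually dissimilar groups
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)

print("Cluster assignments:", kmeans.labels_)
print("Cluster centers:\n", kmeans.cluster_centers_.round(1))
```

In practice you would usually standardize the features first (k-means is sensitive to scale) and experiment with the number of clusters; the resulting segments still need human interpretation, since, as noted above, clustering reveals structure but doesn’t explain it.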

Cluster analysis in action: Using cluster analysis for customer segmentation—a telecoms case study example

f. Time series analysis

Time series analysis is a statistical technique used to identify trends and cycles over time. Time series data is a sequence of data points which measure the same variable at different points in time (for example, weekly sales figures or monthly email sign-ups). By looking at time-related trends, analysts are able to forecast how the variable of interest may fluctuate in the future.

When conducting time series analysis, the main patterns you’ll be looking out for in your data are:

  • Trends: Stable, linear increases or decreases over an extended time period.
  • Seasonality: Predictable fluctuations in the data due to seasonal factors over a short period of time. For example, you might see a peak in swimwear sales in summer around the same time every year.
  • Cyclic patterns: Unpredictable cycles where the data fluctuates. Cyclical trends are not due to seasonality, but rather, may occur as a result of economic or industry-related conditions.

As you can imagine, the ability to make informed predictions about the future has immense value for business. Time series analysis and forecasting is used across a variety of industries, most commonly for stock market analysis, economic forecasting, and sales forecasting. There are different types of time series models depending on the data you’re using and the outcomes you want to predict. These models are typically classified into three broad types: the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. For an in-depth look at time series analysis, refer to our guide .
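
To illustrate the idea, here’s a minimal sketch in Python that generates a synthetic monthly sales series containing a trend and yearly seasonality, then uses a 12-month centered moving average to separate the two:

```python
import numpy as np
import pandas as pd

# Synthetic monthly sales: upward trend + yearly seasonal swing + noise
months = pd.date_range("2021-01-01", periods=36, freq="MS")
trend = np.linspace(100, 160, 36)
seasonality = 20 * np.sin(2 * np.pi * months.month / 12)
rng = np.random.default_rng(1)
sales = pd.Series(trend + seasonality + rng.normal(0, 5, 36), index=months)

# A 12-month centered moving average smooths out the seasonality,
# exposing the underlying trend
estimated_trend = sales.rolling(window=12, center=True).mean()

# What's left after removing the trend is the seasonal (plus random) component
detrended = sales - estimated_trend
seasonal_effect = detrended.groupby(detrended.index.month).mean()
print(seasonal_effect.round(1))  # average seasonal effect per calendar month
```

Dedicated libraries (such as statsmodels) provide full decomposition and forecasting models along these lines, including the AR, I, and MA families mentioned above.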

Time series analysis in action: Developing a time series model to predict jute yarn demand in Bangladesh

g. Sentiment analysis

When you think of data, your mind probably automatically goes to numbers and spreadsheets.

Many companies overlook the value of qualitative data, but in reality, there are untold insights to be gained from what people (especially customers) write and say about you. So how do you go about analyzing textual data?

One highly useful qualitative technique is sentiment analysis, which belongs to the broader category of text analysis—the (usually automated) process of sorting and understanding textual data.

With sentiment analysis, the goal is to interpret and classify the emotions conveyed within textual data. From a business perspective, this allows you to ascertain how your customers feel about various aspects of your brand, product, or service.

There are several different types of sentiment analysis models, each with a slightly different focus. The three main types include:

Fine-grained sentiment analysis

If you want to focus on opinion polarity (i.e. positive, neutral, or negative) in depth, fine-grained sentiment analysis will allow you to do so.

For example, if you wanted to interpret star ratings given by customers, you might use fine-grained sentiment analysis to categorize the various ratings along a scale ranging from very positive to very negative.

Emotion detection

This model often uses complex machine learning algorithms to pick out various emotions from your textual data.

You might use an emotion detection model to identify words associated with happiness, anger, frustration, and excitement, giving you insight into how your customers feel when writing about you or your product on, say, a product review site.

Aspect-based sentiment analysis

This type of analysis allows you to identify what specific aspects the emotions or opinions relate to, such as a certain product feature or a new ad campaign.

If a customer writes that they “find the new Instagram advert so annoying”, your model should detect not only a negative sentiment, but also the object towards which it’s directed.

In a nutshell, sentiment analysis uses various Natural Language Processing (NLP) algorithms and systems which are trained to associate certain inputs (for example, certain words) with certain outputs.

For example, the input “annoying” would be recognized and tagged as “negative”. Sentiment analysis is crucial to understanding how your customers feel about you and your products, for identifying areas for improvement, and even for averting PR disasters in real-time!
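
Production sentiment analysis relies on trained NLP models, but a toy lexicon-based scorer in plain Python illustrates the same input-to-output principle. The word scores below are invented for the example:

```python
# Invented toy lexicon: word -> sentiment score
LEXICON = {
    "love": 2, "great": 2, "good": 1, "helpful": 1,
    "annoying": -2, "bad": -1, "terrible": -2, "broken": -2,
}

def sentiment(text: str) -> str:
    """Classify text as positive, negative, or neutral by summing word scores."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(LEXICON.get(word, 0) for word in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

reviews = [
    "I love the new design, great work",
    "The checkout flow is broken and the ads are annoying",
    "It arrived on time",
]
for review in reviews:
    print(f"{sentiment(review):>8}: {review}")
```

A real model learns these associations from labeled training data rather than a hand-written dictionary, and can handle negation, context, and aspect (as in the Instagram example above), but the core mapping from inputs to sentiment labels is the same.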

Sentiment analysis in action: 5 Real-world sentiment analysis case studies

4. The data analysis process

In order to gain meaningful insights from data, data analysts will perform a rigorous step-by-step process. We go over this in detail in our step-by-step guide to the data analysis process—but, to briefly summarize, the data analysis process generally consists of the following phases:

Defining the question

The first step for any data analyst will be to define the objective of the analysis, sometimes called a ‘problem statement’. Essentially, you’re asking a question with regard to a business problem you’re trying to solve. Once you’ve defined this, you’ll then need to determine which data sources will help you answer this question.

Collecting the data

Now that you’ve defined your objective, the next step will be to set up a strategy for collecting and aggregating the appropriate data. Will you be using quantitative (numeric) or qualitative (descriptive) data? Will these be first-party, second-party, or third-party data?

Learn more: Quantitative vs. Qualitative Data: What’s the Difference? 

Cleaning the data

Unfortunately, your collected data isn’t automatically ready for analysis—you’ll have to clean it first. As a data analyst, this phase of the process will take up the most time. During the data cleaning process, you will likely be doing the following (a brief sketch follows this list):

  • Removing major errors, duplicates, and outliers
  • Removing unwanted data points
  • Structuring the data—that is, fixing typos, layout issues, etc.
  • Filling in major gaps in data
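
To make this concrete, here’s a minimal pandas sketch that touches each of these cleaning steps on an invented raw dataset:

```python
import pandas as pd

# Hypothetical raw survey export with typical problems
raw = pd.DataFrame({
    "name": ["Ana", "Ana", "Ben", "Cara", None],
    "age":  [34, 34, -1, 29, 41],  # -1 is an obvious entry error
    "city": ["new york", "new york", "Boston ", "boston", "Chicago"],
})

clean = (
    raw
    .drop_duplicates()                # remove duplicate rows
    .dropna(subset=["name"])          # drop rows missing key fields
    .assign(city=lambda df: df["city"].str.strip().str.title())  # fix layout/typos
)
clean = clean[clean["age"] > 0]       # remove impossible values (errors/outliers)

print(clean)
```

Real-world cleaning is rarely this tidy, of course; the point is that each step is explicit and repeatable, so the same cleaning can be re-run whenever new data arrives.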

Analyzing the data

Now that we’ve finished cleaning the data, it’s time to analyze it! Many analysis methods have already been described in this article, and it’s up to you to decide which one will best suit the assigned objective. It may fall under one of the following categories:

  • Descriptive analysis , which identifies what has already happened
  • Diagnostic analysis , which focuses on understanding why something has happened
  • Predictive analysis , which identifies future trends based on historical data
  • Prescriptive analysis , which allows you to make recommendations for the future

Visualizing and sharing your findings

We’re almost at the end of the road! Analyses have been made, insights have been gleaned—all that remains to be done is to share this information with others. This is usually done with a data visualization tool, such as Google Charts or Tableau.

Learn more: 13 of the Most Common Types of Data Visualization

5. The best tools for data analysis

As you can imagine, every phase of the data analysis process requires the data analyst to have a variety of tools under their belt that assist in gaining valuable insights from data. We cover these tools in greater detail in this article, but, in summary, here’s our best-of-the-best list, with links to each product:

Top tools for data analysts

  • Microsoft Excel
  • Jupyter Notebook
  • Apache Spark
  • Microsoft Power BI

6. Key takeaways and further reading

As you can see, there are many different data analysis techniques at your disposal. In order to turn your raw data into actionable insights, it’s important to consider what kind of data you have (is it qualitative or quantitative?) as well as the kinds of insights that will be useful within the given context. In this post, we’ve introduced seven of the most useful data analysis techniques—but there are many more out there to be discovered!

So what now? If you haven’t already, we recommend reading the case studies for each analysis technique discussed in this post (you’ll find a link at the end of each section). For a more hands-on introduction to the kinds of methods and techniques that data analysts use, try out this free introductory data analytics short course. In the meantime, you might also want to read the following:

  • The Best Online Data Analytics Courses for 2024
  • What Is Time Series Data and How Is It Analyzed?
  • What is Spatial Analysis?

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, they are framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) have evidence-based logical reasoning 10 ; and 6) can be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses on, inductive reasoning based on specific observations or findings forms more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state the absence of a relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3 .

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. A central question and associated subquestions are stated rather than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research questions ); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if they meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and show how to transform these ambiguous research questions and hypotheses into clear and good statements.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

[Fig. 1: General flow for constructing effective research questions and hypotheses prior to conducting research.]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, research questions are used more frequently in survey projects, while hypotheses are used more frequently in experiments, to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition to be deduced from theory, with independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2: Algorithm for building research questions and hypotheses in quantitative research.]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics ( gender differences in sociodemographic and clinical characteristics of adults with ADHD ). Validity is tested by statistical experiment or analysis ( chi-square test, Student’s t-test, and logistic regression analysis )
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought of and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.

Part II: Data Analysis Methods in Quantitative Research

We started this module with levels of measurement as a way to categorize our data. Data analysis is directed toward answering the original research question and achieving the study purpose (or aim). Now, we are going to delve into two main statistical analyses to describe our data and make inferences about our data:

Descriptive Statistics and Inferential Statistics.

Descriptive Statistics:

Before you panic, we will not be going into statistical analyses very deeply. We want to simply get a good overview of some of the types of general statistical analyses so that it makes some sense to us when we read results in published research articles.

Descriptive statistics summarize or describe the characteristics of a data set. This is a method of simply organizing and describing our data. Why? Because data that are not organized in some fashion are super difficult to interpret.

Let’s say our sample is golden retrievers (population “canines”). Our descriptive statistics tell us more about that sample:

  • 37% of our sample is male, 43% female
  • The mean age is 4 years
  • Mode is 6 years
  • Median age is 5.5 years

Let’s explore some of the types of descriptive statistics.

Frequency Distributions : A frequency distribution describes the number of observations for each possible value of a measured variable. The numbers are arranged from lowest to highest and features a count of how many times each value occurred.

For example, if 18 students have pet dogs, dog ownership has a frequency of 18.

We might also look at what other types of pets students have. Maybe cats, fish, and hamsters. We find that 2 students have hamsters, 9 have fish, and 1 has a cat.

You can see that it is very difficult to turn these scattered pet counts into any meaningful interpretation, yes?

Now, let’s take those same pets and place them in a frequency distribution table.

Pet        Frequency
Dog        18
Fish       9
Hamster    2
Cat        1

As we can now see, this is much easier to interpret.

Let’s say that we want to know how many books our sample population of students has read in the last year. We collect our data and find this:

We can then take that table and plot it out on a frequency distribution graph. This makes it much easier to see how the numbers are disbursed. Easier on the eyes, yes?

[Histogram: frequency distribution of the number of books read]

Here’s another example of symmetrical, positive skew, and negative skew:

[Image: examples of symmetrical, positively skewed, and negatively skewed distributions (Sarang Narkhede, Towards Data Science)]

Correlation : Relationships between two research variables are called correlations . Remember, correlation is not cause-and-effect. Correlations simply measure the extent of the relationship between two variables. To measure correlation in descriptive statistics, the statistical analysis called Pearson’s correlation coefficient ( r ) is often used. You do not need to know how to calculate it for this course, but do remember the name, because you will often see it in published research articles. There really are no set guidelines on what measurement constitutes a “strong” or “weak” correlation, as it really depends on the variables being measured.

However, possible values for correlation coefficients range from -1.00 through .00 to +1.00. A value of +1 means that the two variables are perfectly positively correlated: as one variable goes up, the other goes up. A value of -1 means they are perfectly negatively correlated: as one goes up, the other goes down. A value of r = 0 means that the two variables are not linearly related.

Often, the data will be presented on a scatter plot. Here, we can view the data and there appears to be a straight line (linear) trend between height and weight. The association (or correlation) is positive. That means, that there is a weight increase with height. The Pearson correlation coefficient in this case was r = 0.56.
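
As a quick illustration, here’s how you might compute Pearson’s r in Python with NumPy, using made-up height and weight measurements:

```python
import numpy as np

# Hypothetical height (cm) and weight (kg) measurements
height = np.array([150, 160, 165, 170, 175, 180, 185])
weight = np.array([52, 60, 63, 68, 70, 80, 82])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is r
r = np.corrcoef(height, weight)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to +1 indicates a strong positive linear relationship
```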

A type I error is made by rejecting a null hypothesis that is true. In other words, there was actually no difference, but the researcher concluded that there was one (a false positive).

A type II error is made by accepting the null hypothesis as true when, in fact, it was false. In other words, there actually was a difference, but the researcher failed to detect it (a false negative).

Hypothesis Testing Procedures: In a general sense, the overall testing of a hypothesis follows a systematic methodology. Remember, a hypothesis is an educated guess about the outcome. If we guess wrong, we might set up the tests incorrectly and get invalid results. Sometimes, this is super difficult to get right. A main purpose of inferential statistics is to test hypotheses, and the general steps are below (a worked code sketch follows the list):

  • Selecting a statistical test. Lots of factors go into this, including levels of measurement of the variables.
  • Specifying the level of significance. Usually 0.05 is chosen.
  • Computing a test statistic. Lots of software programs to help with this.
  • Determining degrees of freedom ( df ). This refers to the number of observations free to vary about a parameter. Computing this is easy (but you don’t need to know how for this course).
  • Comparing the test statistic to a theoretical value. Theoretical values exist for all test statistics, and the study’s statistic is compared against them to help establish significance.
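
To make the procedure concrete, here is a minimal sketch of those five steps for an independent t-test in Python with scipy. The exam scores are invented:

```python
from scipy import stats

# Step 1: an independent t-test suits two unrelated groups
school_a = [72, 85, 78, 90, 66, 81, 75, 88]  # invented exam scores
school_b = [68, 74, 70, 79, 65, 72, 77, 71]

alpha = 0.05  # Step 2: the level of significance

# Step 3: compute the test statistic (the software does the math)
t_stat, p_value = stats.ttest_ind(school_a, school_b)

# Step 4: degrees of freedom for a two-sample t-test
df = len(school_a) + len(school_b) - 2

# Step 5: compare against the threshold (the p-value does this for us)
print(f"t = {t_stat:.2f}, df = {df}, p = {p_value:.3f}")
print("Significant" if p_value <= alpha else "Not significant")
```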

Some of the common inferential statistics you will see include:

Comparison tests: Comparison tests look for differences among group means. They can be used to test the effect of a categorical variable on the mean value of some other characteristic.

T-tests are used when comparing the means of precisely two groups (e.g., the average heights of men and women). ANOVA and MANOVA tests are used when comparing the means of more than two groups (e.g., the average heights of children, teenagers, and adults).

  • t -tests (compares differences in two groups) – either paired t-test (example: What is the effect of two different test prep programs on the average exam scores for students from the same class?) or independent t-test (example: What is the difference in average exam scores for students from two different schools?)
  • analysis of variance (ANOVA, which compares differences in three or more groups; example: What is the difference in average pain levels among post-surgical patients given three different painkillers?) or MANOVA (which compares differences in three or more groups on two or more outcomes; example: What is the effect of flower species on petal length, petal width, and stem length?). A sketch of the ANOVA example appears after this list.
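
As a quick illustration of the ANOVA example above, here is a minimal sketch with scipy. The pain scores for the three painkillers are invented:

```python
from scipy import stats

# Invented post-surgical pain ratings (0-10) for three painkillers
drug_a = [4, 5, 3, 6, 5, 4]
drug_b = [6, 7, 5, 8, 6, 7]
drug_c = [3, 2, 4, 3, 5, 2]

f_stat, p_value = stats.f_oneway(drug_a, drug_b, drug_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```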

Correlation tests: Correlation tests check whether variables are related without hypothesizing a cause-and-effect relationship.

  • Pearson r (measures the strength and direction of the relationship between two variables) (example: How are latitude and temperature related?)

Nonparametric tests: Nonparametric tests don’t make as many assumptions about the data and are useful when one or more of the common statistical assumptions are violated. However, the inferences they make aren’t as strong as with parametric tests.

  • chi-squared (χ²) test (measures differences in proportions). Chi-square tests are often used to test hypotheses about categorical data. The chi-square statistic compares the size of any discrepancies between the expected results and the observed results, given the size of the sample and the number of variables in the relationship. For example, the results of tossing a fair coin have well-defined expected proportions, so they meet these criteria. We can apply a chi-square test to determine which type of candy is most popular and make sure that our shelves are well stocked, or to study the offspring of cats to determine the likelihood of certain genetic traits being passed to a litter of kittens (see the sketch after this list).
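
Here is a minimal sketch of the candy example with scipy, assuming (for illustration) that we expect four candy types to be equally popular. The observed counts are invented:

```python
from scipy import stats

# Invented counts of candies chosen by 200 shoppers (four candy types)
observed = [62, 48, 55, 35]
expected = [50, 50, 50, 50]  # equal popularity under the null hypothesis

chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```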

Inferential Versus Descriptive Statistics Summary Table

Branch                   Purpose                                                   Common examples
Descriptive statistics   Organize, summarize, and describe sample data             Frequencies, mean, median, mode, Pearson’s r
Inferential statistics   Test hypotheses and draw conclusions about a population   t-tests, ANOVA/MANOVA, chi-square

Statistical Significance Versus Clinical Significance

Finally, when it comes to statistical significance in hypothesis testing, the conventional probability threshold in nursing research is p ≤ 0.05. A p-value (probability) is a statistical measurement used to validate a hypothesis against the measured data in the study. That is, it measures the likelihood that the results were actually observed due to the intervention rather than just due to chance. In measuring the probability of obtaining the observed results, the p-value assumes the null hypothesis is true.

The lower the p-value, the greater the statistical significance of the observed difference.

In the example earlier about our diabetic patients receiving online diet education, let’s say we had p = 0.05. Would that be a statistically significant result?

If you answered yes, you are correct!

What if our result was p = 0.8?

Not significant. Good job!

That’s pretty straightforward, right? At or below 0.05, significant. Above 0.05, not significant.

Could we have significance clinically even if we do not have statistically significant results? Yes. Let’s explore this a bit.

Statistical hypothesis testing provides little information for interpretation purposes. It’s pretty mathematical, and we can still get it wrong. Additionally, attaining statistical significance does not really state whether a finding is clinically meaningful. With a large enough sample, even a very small relationship may be statistically significant. Clinical significance, by contrast, is the practical importance of research. Meaning, we need to ask what the palpable effects may be on the lives of patients or on healthcare decisions.

Remember, hypothesis testing cannot prove a hypothesis. It also cannot tell us much other than “yeah, it’s probably likely that there would be some change with this intervention”. Hypothesis testing tells us the likelihood that the outcome was due to an intervention or influence and not just chance. Also, as nurses and clinicians, we are not concerned only with a group of people; we are concerned with the individual, holistic level. The goal of evidence-based practice is to use the best evidence for decisions about specific individual needs.


Additionally, begin your Discussion section. What are the implications for practice? Is there little evidence or a lot? Would you recommend additional studies? If so, what type of study would you recommend, and why?


  • Were all the important results discussed?
  • Did the researchers discuss any study limitations and their possible effects on the credibility of the findings? In discussing limitations, were key threats to the study’s validity and possible biases reviewed? Did the interpretations take limitations into account?
  • What types of evidence were offered in support of the interpretation, and was that evidence persuasive? Were results interpreted in light of findings from other studies?
  • Did the researchers make any unjustifiable causal inferences? Were alternative explanations for the findings considered? Were the rationales for rejecting these alternatives convincing?
  • Did the interpretation consider the precision of the results and/or the magnitude of effects?
  • Did the researchers draw any unwarranted conclusions about the generalizability of the results?
  • Did the researchers discuss the study’s implications for clinical practice or future nursing research? Did they make specific recommendations?
  • If yes, are the stated implications appropriate, given the study’s limitations and the magnitude of the effects as well as evidence from other studies? Are there important implications that the report neglected to include?
  • Did the researchers mention or assess clinical significance? Did they make a distinction between statistical and clinical significance?
  • If clinical significance was examined, was it assessed in terms of group-level information (e.g., effect sizes) or individual-level results? How was clinical significance operationalized?

References & Attribution


Evidence-Based Practice & Research Methodologies Copyright © by Tracy Fawns is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Quantitative Research – Methods, Types and Analysis

What is Quantitative Research

Quantitative research is a type of research that collects and analyzes numerical data to test hypotheses and answer research questions . This research typically involves a large sample size and uses statistical analysis to make inferences about a population based on the data collected. It often involves the use of surveys, experiments, or other structured data collection methods to gather quantitative data.

Quantitative Research Methods

Quantitative research methods are as follows:

Descriptive Research Design

Descriptive research design is used to describe the characteristics of a population or phenomenon being studied. This research method is used to answer the questions of what, where, when, and how. Descriptive research designs use a variety of methods such as observation, case studies, and surveys to collect data. The data is then analyzed using statistical tools to identify patterns and relationships.

Correlational Research Design

Correlational research design is used to investigate the relationship between two or more variables. Researchers use correlational research to determine whether a relationship exists between variables and to what extent they are related. This research method involves collecting data from a sample and analyzing it using statistical tools such as correlation coefficients.

Quasi-experimental Research Design

Quasi-experimental research design is used to investigate cause-and-effect relationships between variables. This research method is similar to experimental research design, but it lacks full control over the independent variable. Researchers use quasi-experimental research designs when it is not feasible or ethical to manipulate the independent variable.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This research method involves manipulating the independent variable and observing the effects on the dependent variable. Researchers use experimental research designs to test hypotheses and establish cause-and-effect relationships.

Survey Research

Survey research involves collecting data from a sample of individuals using a standardized questionnaire. This research method is used to gather information on attitudes, beliefs, and behaviors of individuals. Researchers use survey research to collect data quickly and efficiently from a large sample size. Survey research can be conducted through various methods such as online, phone, mail, or in-person interviews.

Quantitative Research Analysis Methods

Here are some commonly used quantitative research analysis methods:

Statistical Analysis

Statistical analysis is the most common quantitative research analysis method. It involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis can be used to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.

Regression Analysis

Regression analysis is a statistical technique used to analyze the relationship between one dependent variable and one or more independent variables. Researchers use regression analysis to identify and quantify the impact of independent variables on the dependent variable.
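
As a minimal sketch, here is a simple linear regression in Python with scikit-learn. The advertising-spend and sales figures are invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data: monthly ad spend (independent) vs. sales (dependent)
ad_spend = np.array([[10], [20], [30], [40], [50]])  # thousands of dollars
sales = np.array([25, 41, 58, 74, 92])               # thousands of units

model = LinearRegression().fit(ad_spend, sales)
print(f"slope = {model.coef_[0]:.2f}, intercept = {model.intercept_:.2f}")
print("predicted sales at 60k spend:", model.predict([[60]])[0].round(1))
```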

Factor Analysis

Factor analysis is a statistical technique used to identify underlying factors that explain the correlations among a set of variables. Researchers use factor analysis to reduce a large number of variables to a smaller set of factors that capture the most important information.
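
A minimal sketch with scikit-learn, reducing six invented survey items to two underlying factors:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Invented responses: 100 people x 6 survey items, driven by 2 hidden factors
hidden = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 6))
responses = hidden @ loadings + rng.normal(scale=0.3, size=(100, 6))

fa = FactorAnalysis(n_components=2).fit(responses)
print("Estimated loadings (2 factors x 6 items):")
print(fa.components_.round(2))
```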

Structural Equation Modeling

Structural equation modeling is a statistical technique used to test complex relationships between variables. It involves specifying a model that includes both observed and unobserved variables, and then using statistical methods to test the fit of the model to the data.

Time Series Analysis

Time series analysis is a statistical technique used to analyze data that is collected over time. It involves identifying patterns and trends in the data, as well as any seasonal or cyclical variations.
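
A minimal sketch with pandas, smoothing invented monthly sales to expose the underlying trend:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Invented monthly sales over three years: a rising trend plus noise
dates = pd.date_range("2020-01-01", periods=36, freq="MS")
sales = pd.Series(100 + 2 * np.arange(36) + rng.normal(0, 5, 36), index=dates)

# A 12-month rolling mean smooths out seasonal and random variation
trend = sales.rolling(window=12).mean()
print(trend.tail())
```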

Multilevel Modeling

Multilevel modeling is a statistical technique used to analyze data that is nested within multiple levels. For example, researchers might use multilevel modeling to analyze data that is collected from individuals who are nested within groups, such as students nested within schools.
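
A minimal sketch of the students-within-schools example using the mixed-effects model in statsmodels. All data is invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Invented data: 200 students nested within 10 schools
df = pd.DataFrame({
    "school": np.repeat(np.arange(10), 20),
    "hours_studied": rng.uniform(0, 10, 200),
})
school_effect = rng.normal(0, 5, 10)[df["school"]]  # each school shifts scores
df["score"] = 60 + 2 * df["hours_studied"] + school_effect + rng.normal(0, 3, 200)

# Random intercept per school, fixed effect for hours studied
result = smf.mixedlm("score ~ hours_studied", df, groups=df["school"]).fit()
print(result.summary())
```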

Applications of Quantitative Research

Quantitative research has many applications across a wide range of fields. Here are some common examples:

  • Market Research : Quantitative research is used extensively in market research to understand consumer behavior, preferences, and trends. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform marketing strategies, product development, and pricing decisions.
  • Health Research: Quantitative research is used in health research to study the effectiveness of medical treatments, identify risk factors for diseases, and track health outcomes over time. Researchers use statistical methods to analyze data from clinical trials, surveys, and other sources to inform medical practice and policy.
  • Social Science Research: Quantitative research is used in social science research to study human behavior, attitudes, and social structures. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform social policies, educational programs, and community interventions.
  • Education Research: Quantitative research is used in education research to study the effectiveness of teaching methods, assess student learning outcomes, and identify factors that influence student success. Researchers use experimental and quasi-experimental designs, as well as surveys and other quantitative methods, to collect and analyze data.
  • Environmental Research: Quantitative research is used in environmental research to study the impact of human activities on the environment, assess the effectiveness of conservation strategies, and identify ways to reduce environmental risks. Researchers use statistical methods to analyze data from field studies, experiments, and other sources.

Characteristics of Quantitative Research

Here are some key characteristics of quantitative research:

  • Numerical data : Quantitative research involves collecting numerical data through standardized methods such as surveys, experiments, and observational studies. This data is analyzed using statistical methods to identify patterns and relationships.
  • Large sample size: Quantitative research often involves collecting data from a large sample of individuals or groups in order to increase the reliability and generalizability of the findings.
  • Objective approach: Quantitative research aims to be objective and impartial in its approach, focusing on the collection and analysis of data rather than personal beliefs, opinions, or experiences.
  • Control over variables: Quantitative research often involves manipulating variables to test hypotheses and establish cause-and-effect relationships. Researchers aim to control for extraneous variables that may impact the results.
  • Replicable : Quantitative research aims to be replicable, meaning that other researchers should be able to conduct similar studies and obtain similar results using the same methods.
  • Statistical analysis: Quantitative research involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis allows researchers to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.
  • Generalizability: Quantitative research aims to produce findings that can be generalized to larger populations beyond the specific sample studied. This is achieved through the use of random sampling methods and statistical inference.

Examples of Quantitative Research

Here are some examples of quantitative research in different fields:

  • Market Research: A company conducts a survey of 1000 consumers to determine their brand awareness and preferences. The data is analyzed using statistical methods to identify trends and patterns that can inform marketing strategies.
  • Health Research : A researcher conducts a randomized controlled trial to test the effectiveness of a new drug for treating a particular medical condition. The study involves collecting data from a large sample of patients and analyzing the results using statistical methods.
  • Social Science Research : A sociologist conducts a survey of 500 people to study attitudes toward immigration in a particular country. The data is analyzed using statistical methods to identify factors that influence these attitudes.
  • Education Research: A researcher conducts an experiment to compare the effectiveness of two different teaching methods for improving student learning outcomes. The study involves randomly assigning students to different groups and collecting data on their performance on standardized tests.
  • Environmental Research : A team of researchers conducts a study to investigate the impact of climate change on the distribution and abundance of a particular species of plant or animal. The study involves collecting data on environmental factors and population sizes over time and analyzing the results using statistical methods.
  • Psychology : A researcher conducts a survey of 500 college students to investigate the relationship between social media use and mental health. The data is analyzed using statistical methods to identify correlations and potential causal relationships.
  • Political Science: A team of researchers conducts a study to investigate voter behavior during an election. They use survey methods to collect data on voting patterns, demographics, and political attitudes, and analyze the results using statistical methods.

How to Conduct Quantitative Research

Here is a general overview of how to conduct quantitative research:

  • Develop a research question: The first step in conducting quantitative research is to develop a clear and specific research question. This question should be based on a gap in existing knowledge, and should be answerable using quantitative methods.
  • Develop a research design: Once you have a research question, you will need to develop a research design. This involves deciding on the appropriate methods to collect data, such as surveys, experiments, or observational studies. You will also need to determine the appropriate sample size, data collection instruments, and data analysis techniques.
  • Collect data: The next step is to collect data. This may involve administering surveys or questionnaires, conducting experiments, or gathering data from existing sources. It is important to use standardized methods to ensure that the data is reliable and valid.
  • Analyze data : Once the data has been collected, it is time to analyze it. This involves using statistical methods to identify patterns, trends, and relationships between variables. Common statistical techniques include correlation analysis, regression analysis, and hypothesis testing.
  • Interpret results: After analyzing the data, you will need to interpret the results. This involves identifying the key findings, determining their significance, and drawing conclusions based on the data.
  • Communicate findings: Finally, you will need to communicate your findings. This may involve writing a research report, presenting at a conference, or publishing in a peer-reviewed journal. It is important to clearly communicate the research question, methods, results, and conclusions to ensure that others can understand and replicate your research.

When to use Quantitative Research

Here are some situations when quantitative research can be appropriate:

  • To test a hypothesis: Quantitative research is often used to test a hypothesis or a theory. It involves collecting numerical data and using statistical analysis to determine if the data supports or refutes the hypothesis.
  • To generalize findings: If you want to generalize the findings of your study to a larger population, quantitative research can be useful. This is because it allows you to collect numerical data from a representative sample of the population and use statistical analysis to make inferences about the population as a whole.
  • To measure relationships between variables: If you want to measure the relationship between two or more variables, such as the relationship between age and income, or between education level and job satisfaction, quantitative research can be useful. It allows you to collect numerical data on both variables and use statistical analysis to determine the strength and direction of the relationship.
  • To identify patterns or trends: Quantitative research can be useful for identifying patterns or trends in data. For example, you can use quantitative research to identify trends in consumer behavior or to identify patterns in stock market data.
  • To quantify attitudes or opinions : If you want to measure attitudes or opinions on a particular topic, quantitative research can be useful. It allows you to collect numerical data using surveys or questionnaires and analyze the data using statistical methods to determine the prevalence of certain attitudes or opinions.

Purpose of Quantitative Research

The purpose of quantitative research is to systematically investigate and measure the relationships between variables or phenomena using numerical data and statistical analysis. The main objectives of quantitative research include:

  • Description : To provide a detailed and accurate description of a particular phenomenon or population.
  • Explanation : To explain the reasons for the occurrence of a particular phenomenon, such as identifying the factors that influence a behavior or attitude.
  • Prediction : To predict future trends or behaviors based on past patterns and relationships between variables.
  • Control : To identify the best strategies for controlling or influencing a particular outcome or behavior.

Quantitative research is used in many different fields, including social sciences, business, engineering, and health sciences. It can be used to investigate a wide range of phenomena, from human behavior and attitudes to physical and biological processes. The purpose of quantitative research is to provide reliable and valid data that can be used to inform decision-making and improve understanding of the world around us.

Advantages of Quantitative Research

There are several advantages of quantitative research, including:

  • Objectivity : Quantitative research is based on objective data and statistical analysis, which reduces the potential for bias or subjectivity in the research process.
  • Reproducibility : Because quantitative research involves standardized methods and measurements, it is more likely to be reproducible and reliable.
  • Generalizability : Quantitative research allows for generalizations to be made about a population based on a representative sample, which can inform decision-making and policy development.
  • Precision : Quantitative research allows for precise measurement and analysis of data, which can provide a more accurate understanding of phenomena and relationships between variables.
  • Efficiency : Quantitative research can be conducted relatively quickly and efficiently, especially when compared to qualitative research, which may involve lengthy data collection and analysis.
  • Large sample sizes : Quantitative research can accommodate large sample sizes, which can increase the representativeness and generalizability of the results.

Limitations of Quantitative Research

There are several limitations of quantitative research, including:

  • Limited understanding of context: Quantitative research typically focuses on numerical data and statistical analysis, which may not provide a comprehensive understanding of the context or underlying factors that influence a phenomenon.
  • Simplification of complex phenomena: Quantitative research often involves simplifying complex phenomena into measurable variables, which may not capture the full complexity of the phenomenon being studied.
  • Potential for researcher bias: Although quantitative research aims to be objective, there is still the potential for researcher bias in areas such as sampling, data collection, and data analysis.
  • Limited ability to explore new ideas: Quantitative research is often based on pre-determined research questions and hypotheses, which may limit the ability to explore new ideas or unexpected findings.
  • Limited ability to capture subjective experiences : Quantitative research is typically focused on objective data and may not capture the subjective experiences of individuals or groups being studied.
  • Ethical concerns : Quantitative research may raise ethical concerns, such as invasion of privacy or the potential for harm to participants.


101 Guide to Quantitative Data Analysis [Methods + Techniques]

Quantitative data analysis comes with the challenge of analyzing large datasets consisting of numeric variables and statistics. Researchers often get overwhelmed by various techniques, methods, and data sources. 

At the same time, the importance of data collection and analysis drastically increases. It helps improve current products/services, identify the potential for a new product, understand target market psychology, and plan upcoming campaigns. 

We have compiled this in-depth guide to ensure you get over the complexities of quantitative data analysis. Let’s begin with the basic meaning and importance of quantitative data analysis. 

Quantitative data analysis meaning 

Quantitative data analysis evaluates quantifiable, structured data to obtain simplified results. Analysts aim to interpret and draw conclusions from numeric variables and statistics. The analytical process relies on algorithms and data analytics software, helping to surface valuable insights, and continuous values are broken into parts for easier understanding. Such data is typically extracted through surveys and questionnaires.

However, data analytics software also helps extract quantitative data through email campaigns, websites, and social media. 

Qualitative Vs. Quantitative research: major differences 

Qualitative research aims at extracting valuable insights from non-numerical data, like the psychology of customers. The research aims at obtaining solid results and exploring assumptions about general ideas. Furthermore, the collected data is presented descriptively rather than numerically.

On the other hand, quantitative research focuses on numbers and statistics to identify gaps in current marketing and operational methods. It successfully answers questions like how many leads are converted in a specific email campaign. 

Collecting quantitative data includes surveys, polls, questionnaires, etc. It remains efficient in identifying trends and patterns in the collected data. However, the obtained results aren’t always accurate as there are chances of numerical errors. 

Note: both quantitative and qualitative data can be obtained with surveys. However, qualitative data collection focuses on asking open-ended questions, while quantitative research focuses on close-ended ones.

4-Step Process of Quantitative data analysis 

Now that we understand the meaning of quantitative data analysis, let’s proceed with four simple steps for conducting it. 

Step 1: Identifying your goals and objectives.

Start by analyzing current business problems and the ones you plan to address with your analysis. 

For example , your customer churn rate significantly increased in the last month.

Do you want to identify the reasons behind it? Being clear with the objective helps in collecting and analyzing relevant data. 

Step 2: Data collection 

Now you are clear with the issue you plan to address. Let’s identify and collect data from all relevant sources. 

For example , conducting a survey of MCQs targeting all possible reasons behind the increase in the churn rate. Identify all relevant data sources and collect data for further analysis. 

Step 3: Data cleaning 

As discussed earlier, quantitative data doesn’t remain highly accurate, as there are always chances of errors. Due to this, quantitative data analysis goes through many stages of cleaning.

Firstly, analysts start with data validation to identify if the data was collected based on defined procedures. Secondly, large datasets require a lot of editing to identify errors like empty fields or wrongly inserted digits. 

Remember that the collected data consists of many duplications, unwanted data points, a lack of structure, and major gaps you must eliminate. Lastly, the collected data is presented in structured formats like tables for easy analysis. 
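
As a minimal sketch, here is what a cleaning pass might look like in pandas. The column names and values are invented:

```python
import pandas as pd

# Invented raw survey export with typical problems
raw = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "churn_reason": [" price ", "support", "support", None, "features"],
    "monthly_spend": [29.0, 49.0, 49.0, 19.0, 35.0],
})

clean = (
    raw.drop_duplicates()                # remove duplicate rows
       .dropna(subset=["churn_reason"])  # drop rows missing a key field
       .assign(churn_reason=lambda d: d["churn_reason"].str.strip())  # trim spaces
)
print(clean)
```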

Step 4: Data Analysis and interpretation 

Now, you are equipped with fairly accurate data sets required for analysis. Using tools and software, data analysts interpret the collected data to draw valuable conclusions. Many techniques are used for quantitative data analysis, including time-series analysis, regression analysis, etc. However, applying the techniques correctly plays a greater role than the choice of technique itself.

What are the methods for Quantitative research data collection? 

Now that we know the meaning of quantitative data collection, let’s look at some methods of collecting it:

Surveys: close-ended questions

Surveys remain one of the most common methods of quantitative research data collection. These surveys include super-specific questions where respondents answer yes/no or multiple-choice questions. Most companies are going with rating questions or checklist-type survey questions. 

Conducting interviews 

Interviews remain another commonly used method for quantitative data collection. The interviews remain structured with specific questions. Telephone interviews were generally preferred until the introduction of video interviews using Skype, Zoom, etc. Some researchers also go with face-to-face interviews to collect quality data. 

Analytical tools 

Manually collecting and analyzing large datasets remains inconvenient. Many different analytical tools are available to collect, analyze, interpret, and visualize large amounts of data. For instance, tools like GrowthNirvana remain effective for efficient marketing analytics and provide relevant data without delays. All valuable insights required for business growth are easily extracted to make quicker decisions.

Document review: analyzing primary data

Researchers often analyze the available primary data like public records. The findings support the results generated from other quantitative data collection methods. 

Methods and techniques of quantitative data analysis 

The two common methods used for quantitative data analysis are descriptive and inferential statistics. Analysts use both methods to generate valuable insights. Here’s how: 

Descriptive statistics 

This method describes a dataset and provides an initial idea to help researchers identify potential trends or patterns. It generally focuses on analyzing single variables and explains the details of specific datasets. There are two ways of analyzing data with descriptive statistics: numerical and graphical representation (both are sketched in code after this section). Let’s start with the numerical method of quantitative data analysis.

  • Numerical Method 

The numerical method organizes and evaluates data arithmetically to obtain simpler answers to complex problems. The easiest way to describe data is with measures of central tendency and dispersion.

The measures of central tendency like mean, median, and mode help identify the collected data’s central position. On the flip side, the measures of dispersion, like range, standard deviation, variance, etc., help understand the extent of data distribution concerning the central point. 

  • Graphical method 

The graphical method provides a better understanding of data through visual representation. Evaluating large data sets becomes easier if presented using a bar chart, pie chart, histogram, boxplot, etc. 
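
Here is a minimal sketch of both approaches in Python, using invented values:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.array([23, 25, 25, 28, 30, 31, 31, 31, 35, 40])  # invented values

# Numerical: central tendency and dispersion
print("mean:", data.mean(), "median:", np.median(data))
print("range:", data.max() - data.min())
print("std dev:", data.std(ddof=1).round(2), "variance:", data.var(ddof=1).round(2))

# Graphical: a histogram of the same values
plt.hist(data, bins=5, edgecolor="black")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.title("Distribution of invented values")
plt.show()
```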

Inferential statistics 

Conducting only descriptive analytics isn’t enough to draw valuable conclusions from the collected data. It only provides limited information on the datasets, emphasizing inferential statistics’ importance. 

Inferential statistics make predictions using the data generated through descriptive statistics. It helps establish relationships between different variables to make relevant predictions. The technique remains suitable for large datasets. 

Certain samples of the data are taken to represent the entire set, as evaluating all of a large dataset remains impractical. Therefore, the summarized samples generated with descriptive statistics are used to draw valuable conclusions.

Let’s now focus on some commonly used methods in inferential statistics: 

  • Regression analysis 

The method establishes a relationship between a dependent and independent variable(s). It assesses their current strength and predicts future possibilities to devise enhanced strategies. The most commonly used regression models are simple linear and multiple linear. 

  • Cross tabulations 

Cross tabulation, or the contingency table method, is one of the most used methods for market research. It assists in the easy analysis of two or more variables through systematic rows and columns. The major goal of cross tabs is to show how a dependent variable changes across different subgroups (a pandas sketch appears after this list).

  • Monte Carlo method 

The Monte Carlo method uses repeated random sampling to weigh the possible outcomes of specific scenarios. Simulating many runs of a scenario helps in predicting risks before taking action. Therefore, forecasting future risks based on changing scenarios improves decision-making (see the simulation sketch after this list).

  • SWOT analysis 

A SWOT analysis identifies an organization’s strengths, weaknesses, opportunities, and threats. It takes into consideration internal and external factors to make better business plans. Companies often conduct a SWOT analysis to improve their products and services or while initiating a new project. 

  • Time Series analysis 

Time series or trend analysis evaluates data sets recorded within specific intervals. Instead of taking random samples, the data is recorded in given time frames. Companies often use time series analysis for forecasting demand and supply. 
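
To make two of these techniques concrete, here is a minimal sketch of a cross tabulation with pandas and a tiny Monte Carlo simulation with numpy, all on invented data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Cross tabulation: churned vs. retained customers by subscription plan
df = pd.DataFrame({
    "plan": rng.choice(["basic", "pro"], size=200),
    "churned": rng.choice(["yes", "no"], size=200, p=[0.3, 0.7]),
})
print(pd.crosstab(df["plan"], df["churned"]))

# Monte Carlo: simulate 10,000 months of revenue under uncertain demand
unit_price = 50
demand = rng.normal(loc=1000, scale=200, size=10_000)
revenue = unit_price * demand
print("expected revenue:", revenue.mean().round())
print("5th-95th percentile:", np.percentile(revenue, [5, 95]).round())
```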

What are examples of Quantitative data?

Let’s now look at some examples of quantitative data: 

  • Total number of app downloads in a month 
  • The total number of people who loved a newly introduced product feature 
  • Number of users converted with a marketing campaign 
  • Total number of website conversions in six months 
  • Number of customers residing in a specific location 

What does a data analyst do?

A data analyst collects and interprets data to answer key questions like the potential of new product development, changes in customer purchase behavior, gaps in current marketing campaigns, etc. Many data analysts conduct exploratory analysis to identify trends and patterns during the data cleaning process. 

The job also includes communicating the findings to team members to create better strategies. One can thrive in this position with the right technical and leadership skills. Therefore, a data analyst’s two key roles include knowing how to collect and use the collected data for business growth. 

What is the typical process that a data analyst follows?

The process followed by a data analyst includes the following steps: 

  • Setting objectives 

Understanding the goals behind collecting data helps target the right areas for data collection. Next, identify the type of data you need to collect to conduct a specific analysis.

  • Collecting data 

Now you are clear with the goals and data requirements. Focus on collecting data from identified sources using different methods of data collection. Some of the top sources include surveys, polls, interviews, etc. 

  • Data cleaning 

The data collection process includes collecting a large amount of data which requires further cleaning for analysis. This step removes duplicate records and identifies omissions and other numerical errors. 

  • Data analytics and interpretation 

Now, data analysts focus on analyzing the collected data using various tools and software. Based on the analysis, they draw relevant conclusions to make the most of their findings. 

  • Data visualization 

Data analysts must also convey the acquired information to relevant team members. Data visualization refers to the graphical interpretation of collected data for easy understanding. The process also helps analysts identify hidden insights for detailed reporting. 

What techniques and tools do data analytics use?

Data analysts use various techniques, including regression analysis, the Monte Carlo method, factor analysis, cohort analysis, etc. The right blend of techniques to suit specific situations helps achieve the required results. 

The tools used for data analysis reduce the manual burden of analysts and improve overall decision-making. There are varied categories of tools in data analysis, including business intelligence, ETL tools, automation tools, data mining, data visualization tools, etc. 

Some popular choices for data analysis include Google Analytics, Growth Nirvana (marketing analytics), Improvado , Datapine , etc. 

What are the skills required to become a data analyst?

A data analyst requires the following skills to thrive in the field: 

  • Complete knowledge of Python programming
  • Mathematical and statistical understanding 
  • Data decluttering, organizing, and analyzing 
  • SQL knowledge
  • Problem-solving 
  • Logical reasoning and critical thinking 
  • Sharp communication skills
  • Collaboration 

What are some of the best data analytics courses?

Let’s now look at some of the best data analytics courses in 2022 to help you gain all relevant skills: 

  • Detailed learning: Data Analyst Nanodegree program by Udacity  

A 4-month program helping people develop advanced programming skills to handle complex data-related issues. It covers everything from data analysis and visualization to exploration.


  • Best data analytics course for beginners: Become A Data Analyst by LinkedIn. 

The course consists of beginner-friendly lessons suitable for people with no prior understanding of data analysis. All sessions are taught by industry experts. Furthermore, you can easily complete the course within the free 30-day LinkedIn Learning trial period.


  • Bite-sized learning: Data Analyst with R by Datacamp

The learning experience is broken down into multiple courses to help you keep up the pace. Industry experts curate about 19 different courses of roughly 4 hours each. Furthermore, it also helps students gain practical exposure by working with real-life datasets.


What does the future hold for data analytics?

Data analytics remains a constantly evolving area that will become increasingly important for businesses in the coming future. Extracting real-time insights will help enhance business operations for continuous growth. Furthermore, the increasing growth of business analytics tools makes it easier for businesses to analyze data and draw conclusions without complex coding knowledge. 

Key Takeaways 

  • Analysts must use quantitative (number-oriented) and qualitative (non-numeric) data to devise and modify business strategies. 
  • Surveys are used for obtaining both quantitative and qualitative research data. However, quantitative surveys include only close-ended questions. 
  • Data cleaning remains one of the most crucial steps of data analysis. It ensures the collected data doesn’t contain duplications, omissions, unwanted data points, etc. 
  • The top skills possessed by data analysts include Python programming, statistical knowledge, data decluttering, SQL knowledge, collaboration, and communication skills. 

  • The top two data analytics techniques, descriptive and inferential statistics, are complementary.



Your Modern Business Guide To Data Analysis Methods And Techniques


Table of Contents

1) What Is Data Analysis?

2) Why Is Data Analysis Important?

3) What Is The Data Analysis Process?

4) Types Of Data Analysis Methods

5) Top Data Analysis Techniques To Apply

6) Quality Criteria For Data Analysis

7) Data Analysis Limitations & Barriers

8) Data Analysis Skills

9) Data Analysis In The Big Data Environment

In our data-rich age, understanding how to analyze and extract true meaning from our business’s digital insights is one of the primary drivers of success.

Despite the colossal volume of data we create every day, a mere 0.5% is actually analyzed and used for data discovery , improvement, and intelligence. While that may not seem like much, considering the amount of digital information we have at our fingertips, half a percent still accounts for a vast amount of data.

With so much data and so little time, knowing how to collect, curate, organize, and make sense of all of this potentially business-boosting information can be a minefield – but online data analysis is the solution.

In science, data analysis uses a more complex approach with advanced techniques to explore and experiment with data. On the other hand, in a business context, data is used to make data-driven decisions that will enable the company to improve its overall performance. In this post, we will cover the analysis of data from an organizational point of view while still going through the scientific and statistical foundations that are fundamental to understanding the basics of data analysis. 

To put all of that into perspective, we will answer a host of important analytical questions, explore analytical methods and techniques, while demonstrating how to perform analysis in the real world with a 17-step blueprint for success.

What Is Data Analysis?

Data analysis is the process of collecting, modeling, and analyzing data using various statistical and logical methods and techniques. Businesses rely on analytics processes and tools to extract insights that support strategic and operational decision-making.

All these various methods are largely based on two core areas: quantitative and qualitative research.


Gaining a better understanding of different techniques and methods in quantitative research as well as qualitative insights will give your analyzing efforts a more clearly defined direction, so it’s worth taking the time to allow this particular knowledge to sink in. Additionally, you will be able to create a comprehensive analytical report that will skyrocket your analysis.

Apart from qualitative and quantitative categories, there are also other types of data that you should be aware of before diving into complex data analysis processes. These categories include: 

  • Big data: Refers to massive data sets that need to be analyzed using advanced software to reveal patterns and trends. It is considered to be one of the best analytical assets as it provides larger volumes of data at a faster rate. 
  • Metadata: Putting it simply, metadata is data that provides insights about other data. It summarizes key information about specific data that makes it easier to find and reuse for later purposes. 
  • Real time data: As its name suggests, real time data is presented as soon as it is acquired. From an organizational perspective, this is the most valuable data as it can help you make important decisions based on the latest developments. Our guide on real time analytics will tell you more about the topic. 
  • Machine data: This is more complex data that is generated solely by a machine such as phones, computers, or even websites and embedded systems, without previous human interaction.

Why Is Data Analysis Important?

Before we go into detail about the categories of analysis along with its methods and techniques, you must understand the potential that analyzing data can bring to your organization.

  • Informed decision-making : From a management perspective, you can benefit from analyzing your data as it helps you make decisions based on facts and not simple intuition. For instance, you can understand where to invest your capital, detect growth opportunities, predict your income, or tackle uncommon situations before they become problems. Through this, you can extract relevant insights from all areas in your organization, and with the help of dashboard software , present the data in a professional and interactive way to different stakeholders.
  • Reduce costs : Another great benefit is to reduce costs. With the help of advanced technologies such as predictive analytics, businesses can spot improvement opportunities, trends, and patterns in their data and plan their strategies accordingly. In time, this will help you save money and resources on implementing the wrong strategies. And not just that, by predicting different scenarios such as sales and demand you can also anticipate production and supply. 
  • Target customers better : Customers are arguably the most crucial element in any business. By using analytics to get a 360° vision of all aspects related to your customers, you can understand which channels they use to communicate with you, their demographics, interests, habits, purchasing behaviors, and more. In the long run, it will drive success to your marketing strategies, allow you to identify new potential customers, and avoid wasting resources on targeting the wrong people or sending the wrong message. You can also track customer satisfaction by analyzing your client’s reviews or your customer service department’s performance.

What Is The Data Analysis Process?

[Figure: the five-stage data analysis process]

When we talk about analyzing data there is an order to follow in order to extract the needed conclusions. The analysis process consists of 5 key stages. We will cover each of them more in detail later in the post, but to start providing the needed context to understand what is coming next, here is a rundown of the 5 essential steps of data analysis. 

  • Identify: Before you get your hands dirty with data, you first need to identify why you need it in the first place. The identification is the stage in which you establish the questions you will need to answer. For example, what is the customer's perception of our brand? Or what type of packaging is more engaging to our potential customers? Once the questions are outlined you are ready for the next step. 
  • Collect: As its name suggests, this is the stage where you start collecting the needed data. Here, you define which sources of data you will use and how you will use them. The collection of data can come in different forms such as internal or external sources, surveys, interviews, questionnaires, and focus groups, among others.  An important note here is that the way you collect the data will be different in a quantitative and qualitative scenario. 
  • Clean: Once you have the necessary data, it is time to clean it and leave it ready for analysis. Not all the data you collect will be useful; when collecting big amounts of data in different formats, it is very likely that you will find yourself with duplicate or badly formatted data. To avoid this, before you start working with your data, you need to make sure to erase any white spaces, duplicate records, or formatting errors. This way you avoid hurting your analysis with bad-quality data. 
  • Analyze : With the help of various techniques such as statistical analysis, regressions, neural networks, text analysis, and more, you can start analyzing and manipulating your data to extract relevant conclusions. At this stage, you find trends, correlations, variations, and patterns that can help you answer the questions you first thought of in the identify stage. Various technologies in the market assist researchers and average users with the management of their data. Some of them include business intelligence and visualization software, predictive analytics, and data mining, among others. 
  • Interpret: Last but not least you have one of the most important steps: it is time to interpret your results. This stage is where the researcher comes up with courses of action based on the findings. For example, here you would understand if your clients prefer packaging that is red or green, plastic or paper, etc. Additionally, at this stage, you can also find some limitations and work on them. 

Now that you have a basic understanding of the key data analysis steps, let’s look at the top 17 essential methods.

17 Essential Types Of Data Analysis Methods

Before diving into the 17 essential types of methods, it is important that we quickly go over the main analysis categories. Starting with descriptive and moving up to prescriptive analysis, the complexity and effort of data evaluation increase, but so does the added value for the company.

a) Descriptive analysis - What happened.

The descriptive analysis method is the starting point for any analytic reflection, and it aims to answer the question of what happened? It does this by ordering, manipulating, and interpreting raw data from various sources to turn it into valuable insights for your organization.

Performing descriptive analysis is essential, as it enables us to present our insights in a meaningful way. Although it is relevant to mention that this analysis on its own will not allow you to predict future outcomes or tell you the answer to questions like why something happened, it will leave your data organized and ready to conduct further investigations.

b) Exploratory analysis - How to explore data relationships.

As its name suggests, the main aim of the exploratory analysis is to explore. Prior to it, there is still no notion of the relationship between the data and the variables. Once the data is investigated, exploratory analysis helps you to find connections and generate hypotheses and solutions for specific problems. A typical area of ​​application for it is data mining.

c) Diagnostic analysis - Why it happened.

Diagnostic data analytics empowers analysts and executives by helping them gain a firm contextual understanding of why something happened. If you know why something happened as well as how it happened, you will be able to pinpoint the exact ways of tackling the issue or challenge.

Designed to provide direct and actionable answers to specific questions, this is one of the world’s most important methods in research, alongside its other key organizational functions such as retail analytics.

d) Predictive analysis - What will happen.

The predictive method allows you to look into the future to answer the question: what will happen? In order to do this, it uses the results of the previously mentioned descriptive, exploratory, and diagnostic analyses, in addition to machine learning (ML) and artificial intelligence (AI). Through this, you can uncover future trends, potential problems or inefficiencies, connections, and causalities in your data.

With predictive analysis, you can unfold and develop initiatives that will not only enhance your various operational processes but also help you gain an all-important edge over the competition. If you understand why a trend, pattern, or event happened through data, you will be able to develop an informed projection of how things may unfold in particular areas of the business.

e) Prescriptive analysis - How will it happen.

Another of the most effective types of analysis methods in research, prescriptive data techniques cross over from predictive analysis in that they revolve around using patterns or trends to develop responsive, practical business strategies.

By drilling down into prescriptive analysis, you will play an active role in the data consumption process by taking well-arranged sets of visual data and using it as a powerful fix to emerging issues in a number of key areas, including marketing, sales, customer experience, HR, fulfillment, finance, logistics analytics , and others.

Top 17 data analysis methods

As mentioned at the beginning of the post, data analysis methods can be divided into two big categories: quantitative and qualitative. Each of these categories holds a powerful analytical value that changes depending on the scenario and type of data you are working with. Below, we will discuss 17 methods that are divided into qualitative and quantitative approaches. 

Without further ado, here are the 17 essential types of data analysis methods with some use cases in the business world: 

A. Quantitative Methods 

To put it simply, quantitative analysis refers to all methods that use numerical data or data that can be turned into numbers (e.g. category variables like gender, age, etc.) to extract valuable insights. It is used to extract valuable conclusions about relationships, differences, and test hypotheses. Below we discuss some of the key quantitative methods. 

1. Cluster analysis

The action of grouping a set of data elements in a way that said elements are more similar (in a particular sense) to each other than to those in other groups – hence the term ‘cluster.’ Since there is no target variable when clustering, the method is often used to find hidden patterns in the data. The approach is also used to provide additional context to a trend or dataset.

Let's look at it from an organizational perspective. In a perfect world, marketers would be able to analyze each customer separately and give them the best-personalized service, but let's face it, with a large customer base, it is practically impossible to do that. That's where clustering comes in. By grouping customers into clusters based on demographics, purchasing behaviors, monetary value, or any other factor that might be relevant for your company, you will be able to immediately optimize your efforts and give your customers the best experience based on their needs.
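
Here is a minimal sketch of customer clustering with scikit-learn. The spend and visit figures are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Invented customers: annual spend (dollars) and visits per year
customers = np.column_stack([
    rng.normal(500, 150, 300),  # spend
    rng.normal(12, 4, 300),     # visits
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)

print("cluster sizes:", np.bincount(labels))
print("cluster centers (spend, visits):")
print(kmeans.cluster_centers_.round(1))
```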

2. Cohort analysis

This type of data analysis approach uses historical data to examine and compare the behavior of a determined segment of users, who can then be grouped with others sharing similar characteristics. By using this methodology, it's possible to gain a wealth of insight into consumer needs or a firm understanding of a broader target group.

Cohort analysis can be really useful for performing analysis in marketing as it will allow you to understand the impact of your campaigns on specific groups of customers. To exemplify, imagine you send an email campaign encouraging customers to sign up for your site. For this, you create two versions of the campaign with different designs, CTAs, and ad content. Later on, you can use cohort analysis to track the performance of the campaign for a longer period of time and understand which type of content is driving your customers to sign up, repurchase, or engage in other ways.  

A useful tool for getting started with cohort analysis is Google Analytics. You can learn more about the benefits and limitations of using cohorts in GA in this useful guide. In the image below, you can see an example of how a cohort is visualized in this tool. The segments (device traffic) are divided into date cohorts (usage of devices) and then analyzed week by week to extract insights into performance.

Cohort analysis chart example from Google Analytics
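If you want to build a basic cohort table yourself, here is a hedged sketch in pandas. The column names (user_id, signup_date, activity_date) and the sample events are assumptions made for illustration only.

```python
# A minimal cohort sketch with pandas: group users by signup month and
# count how many are still active in each later month. All data is invented.
import pandas as pd

events = pd.DataFrame({
    "user_id":       [1, 1, 2, 2, 3, 3, 3],
    "signup_date":   pd.to_datetime(["2023-01-05"] * 2 + ["2023-01-20"] * 2
                                    + ["2023-02-03"] * 3),
    "activity_date": pd.to_datetime(["2023-01-05", "2023-02-10", "2023-01-20",
                                     "2023-01-25", "2023-02-03", "2023-03-01",
                                     "2023-04-02"]),
})

events["cohort"] = events["signup_date"].dt.to_period("M")
# Months elapsed between signup and each activity
events["period"] = (events["activity_date"].dt.to_period("M")
                    - events["cohort"]).apply(lambda d: d.n)

# Rows = signup cohort, columns = months since signup, values = active users
retention = events.pivot_table(index="cohort", columns="period",
                               values="user_id", aggfunc="nunique")
print(retention)
```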

3. Regression analysis

Regression uses historical data to understand how a dependent variable's value is affected when one independent variable (simple linear regression) or several independent variables (multiple regression) change or stay the same. By understanding each variable's relationship and how it developed in the past, you can anticipate possible outcomes and make better decisions in the future.

Let's break it down with an example. Imagine you did a regression analysis of your sales in 2019 and discovered that variables like product quality, store design, customer service, marketing campaigns, and sales channels affected the overall result. Now you want to use regression to analyze which of these variables changed, or whether any new ones appeared, during 2020. For example, you couldn’t sell as much in your physical store due to COVID lockdowns, so your sales could’ve either dropped in general or increased in your online channels. Through this, you can understand which independent variables affected the overall performance of your dependent variable, annual sales.
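As a rough sketch of the mechanics, here is a multiple linear regression in scikit-learn. The variables (marketing spend and store traffic predicting annual sales) and every number are invented, not taken from the 2019/2020 example above.

```python
# A hedged sketch of multiple linear regression with scikit-learn.
# The feature names and values are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [marketing_spend, store_traffic]; target: annual_sales
X = np.array([[10, 200], [15, 220], [20, 260], [25, 300], [30, 310]])
y = np.array([100, 120, 150, 180, 190])

model = LinearRegression().fit(X, y)
print(model.coef_)       # estimated effect of each independent variable
print(model.intercept_)  # baseline sales when both inputs are zero
print(model.predict([[22, 280]]))  # forecast for a hypothetical scenario
```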

If you want to go deeper into this type of analysis, check out this article and learn more about how you can benefit from regression.

4. Neural networks

The neural network forms the basis for the intelligent algorithms of machine learning. It is a form of analytics that attempts, with minimal intervention, to mimic how the human brain would generate insights and predict values. Neural networks learn from each and every data transaction, meaning that they evolve and advance over time.

A typical area of application for neural networks is predictive analytics. There are BI reporting tools that have this feature implemented within them, such as the Predictive Analytics Tool from datapine. This tool enables users to quickly and easily generate all kinds of predictions. All you have to do is select the data to be processed based on your KPIs, and the software automatically calculates forecasts based on historical and current data. Thanks to its user-friendly interface, anyone in your organization can manage it; there’s no need to be an advanced data scientist.
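To show the underlying idea rather than datapine's product, here is a generic sketch of a small neural network regressor using scikit-learn's MLPRegressor. The monthly revenue figures are fabricated for illustration.

```python
# NOT datapine's tool - a generic sketch of a tiny neural network regressor
# fitted on made-up monthly revenue data.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy history: input = month index, target = revenue for that month
X = np.arange(1, 25).reshape(-1, 1).astype(float)
y = 100 + 5 * X.ravel() + np.sin(X.ravel()) * 10  # trend plus a wiggle

# One hidden layer of 16 units; max_iter raised so the small net converges
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, y)
print(net.predict([[25.0], [26.0]]))  # rough forecasts for the next two months
```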

Here is an example of how you can use the predictive analysis tool from datapine:

Example on how to use predictive analytics tool from datapine


5. Factor analysis

Factor analysis, also called “dimension reduction”, is a type of data analysis used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. The aim here is to uncover independent latent variables, making it an ideal method for streamlining specific segments.

A good way to understand this data analysis method is a customer evaluation of a product. The initial assessment is based on different variables like color, shape, wearability, current trends, materials, comfort, the place where they bought the product, and frequency of usage. The list can be endless, depending on what you want to track. In this case, factor analysis comes into the picture by summarizing all of these variables into homogenous groups, for example, by grouping the variables color, materials, quality, and trends into a broader latent variable of design.
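Here is a minimal sketch of the API side of factor analysis with scikit-learn. The "survey data" below is random noise, so the factors themselves are meaningless; the point is only to show how observed variables are mapped onto a smaller set of latent factors.

```python
# A minimal factor-analysis sketch: 6 observed rating variables reduced to
# 2 latent factors. The data is random and purely illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
# 100 respondents x 6 observed variables (hypothetical survey items such as
# color, materials, quality, trends, comfort, wearability)
ratings = rng.normal(size=(100, 6))

fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
print(fa.components_.shape)     # (2, 6): loading of each item on each factor
scores = fa.transform(ratings)  # each respondent expressed as 2 factor scores
print(scores.shape)             # (100, 2)
```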

If you want to start analyzing data using factor analysis we recommend you take a look at this practical guide from UCLA.

6. Data mining

Data mining is an umbrella term for methods that engineer metrics and insights for additional value, direction, and context. By using exploratory statistical evaluation, data mining aims to identify dependencies, relations, patterns, and trends to generate advanced knowledge. When considering how to analyze data, adopting a data mining mindset is essential to success - as such, it’s an area that is worth exploring in greater detail.

An excellent use case of data mining is datapine intelligent data alerts . With the help of artificial intelligence and machine learning, they provide automated signals based on particular commands or occurrences within a dataset. For example, if you’re monitoring supply chain KPIs , you could set an intelligent alarm to trigger when invalid or low-quality data appears. By doing so, you will be able to drill down deep into the issue and fix it swiftly and effectively.

In the following picture, you can see how the intelligent alarms from datapine work. By setting up ranges on daily orders, sessions, and revenues, the alarms will notify you if the goal was not completed or if it exceeded expectations.

Example on how to use intelligent alerts from datapine
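datapine's intelligent alerts are a product feature, but the underlying idea can be sketched in a few lines of plain Python: check each metric against a configured range and flag anything that falls outside it. The metric names and thresholds below are invented.

```python
# A rough stand-in for an intelligent-alert rule: flag metrics outside a
# configured (min, max) range. All names and numbers are hypothetical.
daily_metrics = {"orders": 42, "sessions": 180, "revenue": 950.0}

alert_rules = {
    "orders":   (50, 500),        # (min, max) expected range
    "sessions": (100, 1000),
    "revenue":  (1000.0, 20000.0),
}

for metric, value in daily_metrics.items():
    low, high = alert_rules[metric]
    if not low <= value <= high:
        print(f"ALERT: {metric}={value} outside expected range [{low}, {high}]")
```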

7. Time series analysis

As its name suggests, time series analysis is used to analyze a set of data points collected over a specified period of time. Analysts use this method to monitor data points over a specific interval rather than intermittently, but time series analysis is about more than simply collecting data over time. It allows researchers to understand whether variables changed during the study, how the different variables depend on one another, and how the data arrived at its end result.

In a business context, this method is used to understand the causes of different trends and patterns to extract valuable insights. Another way of using this method is with the help of time series forecasting. Powered by predictive technologies, businesses can analyze various data sets over a period of time and forecast different future events. 

A great use case to put time series analysis into perspective is seasonality effects on sales. By using time series forecasting to analyze sales data of a specific product over time, you can understand if sales rise over a specific period of time (e.g. swimwear during summertime, or candy during Halloween). These insights allow you to predict demand and prepare production accordingly.  
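The seasonality example can be sketched with statsmodels' seasonal decomposition. The monthly "sales" series below is fabricated to contain a summer bump plus a gentle upward trend.

```python
# A minimal seasonality sketch with statsmodels: split invented monthly sales
# into a recurring seasonal pattern and an underlying trend.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Three years of made-up monthly sales with a summer bump (e.g. swimwear)
months = pd.date_range("2020-01-01", periods=36, freq="MS")
base = [100, 100, 110, 120, 140, 180, 200, 190, 150, 120, 110, 105] * 3
sales = pd.Series([b + i for i, b in enumerate(base)], index=months)

result = seasonal_decompose(sales, model="additive", period=12)
print(result.seasonal.head(12))      # the recurring monthly pattern
print(result.trend.dropna().head())  # the underlying growth trend
```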

8. Decision Trees 

The decision tree analysis aims to act as a support tool for making smart and strategic decisions. By visually displaying potential outcomes, consequences, and costs in a tree-like model, researchers and company users can easily evaluate all factors involved and choose the best course of action. Decision trees are helpful for analyzing quantitative data, and they improve decision-making by helping you spot improvement opportunities, reduce costs, and enhance operational efficiency and production.

But how does a decision tree actually work? This method works like a flowchart that starts with the main decision you need to make and branches out based on the different outcomes and consequences of each choice. Each outcome will outline its own consequences, costs, and gains, and at the end of the analysis, you can compare each of them and make the smartest decision.

Businesses can use them to understand which project is more cost-effective and will bring more earnings in the long run. For example, imagine you need to decide whether to update your software app or build a new app entirely. Here you would compare the total costs, the time that needs to be invested, potential revenue, and any other factor that might affect your decision. In the end, you would be able to see which of these two options is more realistic and attainable for your company or research.
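Decision trees can also be learned from data. Here is a hedged sketch with scikit-learn that fits a tiny tree on invented project data and prints the learned rules as flowchart-like text.

```python
# A hedged decision-tree sketch: predict whether a project is "worth it"
# from [estimated_cost, estimated_months, expected_revenue]. All invented.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[50, 6, 120], [200, 18, 210], [30, 3, 90], [150, 12, 400], [80, 9, 100]]
y = [1, 0, 1, 1, 0]  # 1 = pursue the project, 0 = don't

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned rules as readable, flowchart-like text
print(export_text(tree, feature_names=["cost", "months", "revenue"]))
print(tree.predict([[100, 10, 300]]))  # decision for a new candidate project
```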

9. Conjoint analysis 

Last but not least, we have the conjoint analysis. This approach is usually used in surveys to understand how individuals value different attributes of a product or service, and it is one of the most effective methods for extracting consumer preferences. When it comes to purchasing, some clients might be more price-focused, others more features-focused, and others might have a sustainability focus. Whatever your customers' preferences are, you can find them with conjoint analysis. Through this, companies can define pricing strategies, packaging options, subscription packages, and more.

A great example of conjoint analysis is in marketing and sales. For instance, a cupcake brand might use conjoint analysis and find that its clients prefer gluten-free options and cupcakes with healthier toppings over super sugary ones. Thus, the cupcake brand can turn these insights into advertisements and promotions to increase sales of this particular type of product. And not just that, conjoint analysis can also help businesses segment their customers based on their interests. This allows them to send different messaging that will bring value to each of the segments. 
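Real conjoint studies rely on carefully designed choice experiments, but a common simplified step is estimating attribute "part-worths" with a linear regression on dummy-coded product profiles. The cupcake attributes and ratings below are invented to mirror the example above.

```python
# A simplified part-worth estimation sketch (not a full conjoint study).
# Profiles are dummy-coded product variants; ratings are invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

profiles = pd.DataFrame({
    "gluten_free":     [1, 1, 0, 0, 1, 0],
    "healthy_topping": [1, 0, 1, 0, 0, 1],
    "rating":          [9, 7, 6, 3, 8, 5],  # respondent preference scores
})

X = profiles[["gluten_free", "healthy_topping"]]
model = LinearRegression().fit(X, profiles["rating"])

# Part-worths: how much each attribute adds to preference, all else equal
print(dict(zip(X.columns, model.coef_)))
```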

10. Correspondence Analysis

Also known as reciprocal averaging, correspondence analysis is a method used to analyze the relationship between categorical variables presented within a contingency table. A contingency table is a table that displays two (simple correspondence analysis) or more (multiple correspondence analysis) categorical variables across rows and columns that show the distribution of the data, which is usually answers to a survey or questionnaire on a specific topic. 

This method starts by calculating an “expected value” for each cell, which is done by multiplying the cell's row total by its column total and dividing by the grand total of the table. The “expected value” is then subtracted from the original value, resulting in a “residual”, which is what allows you to extract conclusions about relationships and distribution. The results of this analysis are later displayed using a map that represents the relationship between the different values. The closer two values are on the map, the stronger the relationship. Let’s put it into perspective with an example.

Imagine you are carrying out a market research analysis about outdoor clothing brands and how they are perceived by the public. For this analysis, you ask a group of people to match each brand with a certain attribute which can be durability, innovation, quality materials, etc. When calculating the residual numbers, you can see that brand A has a positive residual for innovation but a negative one for durability. This means that brand A is not positioned as a durable brand in the market, something that competitors could take advantage of. 
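Here is a sketch of that first computational step in NumPy: expected counts and standardized residuals for a small brand-by-attribute contingency table. (A full correspondence analysis would continue with a singular value decomposition of these residuals.) All counts are invented.

```python
# Expected counts and standardized residuals for a toy contingency table.
import numpy as np

# Rows: brands A, B; columns: attributes [durability, innovation, quality]
observed = np.array([[20, 45, 35],
                     [50, 15, 35]], dtype=float)

grand_total = observed.sum()
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)

# Expected count per cell = row total * column total / grand total
expected = row_totals @ col_totals / grand_total

# Standardized residuals: positive = over-represented, negative = under
residuals = (observed - expected) / np.sqrt(expected)
print(residuals.round(2))  # brand A: high on innovation, low on durability
```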

11. Multidimensional Scaling (MDS)

MDS is a method used to observe the similarities or disparities between objects, which can be colors, brands, people, geographical coordinates, and more. The objects are plotted using an “MDS map” that positions similar objects together and disparate ones far apart. The (dis)similarities between objects are represented using one or more dimensions that can be observed using a numerical scale. For example, if you want to know how people feel about the COVID-19 vaccine, you can use 1 for “don’t believe in the vaccine at all” and 10 for “firmly believe in the vaccine”, with 2 to 9 for responses in between. When analyzing an MDS map, the only thing that matters is the distance between the objects; the orientation of the dimensions is arbitrary and has no meaning at all.

Multidimensional scaling is a valuable technique for market research, especially when it comes to evaluating product or brand positioning. For instance, if a cupcake brand wants to know how it is positioned compared to competitors, it can define 2-3 dimensions such as taste, ingredients, or shopping experience, and do a multidimensional scaling analysis to find improvement opportunities as well as areas in which competitors are currently leading.

Another business example is in procurement when deciding on different suppliers. Decision makers can generate an MDS map to see how the different prices, delivery times, technical services, and more of the different suppliers differ and pick the one that suits their needs the best. 

A final example comes from a research paper, "An Improved Study of Multilevel Semantic Network Visualization for Analyzing Sentiment Word of Movie Review Data". The researchers picked a two-dimensional MDS map to display the distances and relationships between different sentiments in movie reviews. They used 36 sentiment words and distributed them based on their emotional distance, as we can see in the image below, where the words "outraged" and "sweet" sit on opposite sides of the map, marking the distance between the two emotions very clearly.

Example of multidimensional scaling analysis

Aside from being a valuable technique to analyze dissimilarities, MDS also serves as a dimension-reduction technique for large dimensional data. 
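As a minimal sketch of the mechanics, here is metric MDS in scikit-learn applied to a precomputed dissimilarity matrix for four hypothetical brands; the distances are invented.

```python
# A minimal MDS sketch: place four hypothetical brands on a 2-D map from a
# precomputed dissimilarity matrix. Distances are made up for illustration.
import numpy as np
from sklearn.manifold import MDS

# Symmetric pairwise dissimilarities between 4 brands
dissimilarity = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)  # similar brands land close together; orientation is arbitrary
```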

B. Qualitative Methods

Qualitative data analysis methods work with non-numerical data gathered through techniques such as interviews, focus groups, questionnaires, and observation. As opposed to quantitative methods, qualitative data is more subjective, and it is highly valuable for analyzing customer retention and product development.

12. Text analysis

Text analysis, also known in the industry as text mining, works by taking large sets of textual data and arranging them in a way that makes it easier to manage. By working through this cleansing process in stringent detail, you will be able to extract the data that is truly relevant to your organization and use it to develop actionable insights that will propel you forward.

Modern software accelerates the application of text analytics. Thanks to the combination of machine learning and intelligent algorithms, you can perform advanced analytical processes such as sentiment analysis. This technique allows you to understand the intentions and emotions of a text, for example, whether it's positive, negative, or neutral, and then give it a score depending on certain factors and categories that are relevant to your brand. Sentiment analysis is often used to monitor brand and product reputation and to understand how successful your customer experience is. To learn more about the topic check out this insightful article.
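As one concrete, open-source way to score sentiment, here is a minimal sketch using NLTK's VADER analyzer; commercial tools work on the same principle. The review texts are made up.

```python
# A minimal sentiment-analysis sketch with NLTK's VADER lexicon.
# Requires a one-time download of the lexicon; reviews are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

reviews = [
    "I absolutely love this product, shipping was fast!",
    "Terrible experience, the app keeps crashing.",
]
for text in reviews:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score
    print(scores["compound"], text)     # > 0 positive, < 0 negative
```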

By analyzing data from various word-based sources, including product reviews, articles, social media communications, and survey responses, you will gain invaluable insights into your audience, as well as their needs, preferences, and pain points. This will allow you to create campaigns, services, and communications that meet your prospects’ needs on a personal level, growing your audience while boosting customer retention. There are various other “sub-methods” that are an extension of text analysis. Each of them serves a more specific purpose and we will look at them in detail next. 

13. Content Analysis

This is a straightforward and very popular method that examines the presence and frequency of certain words, concepts, and subjects in different content formats such as text, image, audio, or video. For example, the number of times the name of a celebrity is mentioned on social media or online tabloids. It does this by coding text data that is later categorized and tabulated in a way that can provide valuable insights, making it the perfect mix of quantitative and qualitative analysis.

There are two types of content analysis. The first one is the conceptual analysis which focuses on explicit data, for instance, the number of times a concept or word is mentioned in a piece of content. The second one is relational analysis, which focuses on the relationship between different concepts or words and how they are connected within a specific context. 
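Conceptual analysis in particular is easy to sketch in code: count how often each concept of interest appears across a body of text. The posts and concepts below are invented, and the sketch uses only Python's standard library.

```python
# A tiny conceptual-analysis sketch: count concept mentions across texts.
from collections import Counter
import re

posts = [
    "Celebrity X stuns at the gala, Celebrity X everywhere!",
    "New album review: Celebrity X disappoints some fans.",
]
concepts = ["celebrity x", "gala", "album"]

counts = Counter()
for post in posts:
    text = post.lower()
    for concept in concepts:
        counts[concept] += len(re.findall(re.escape(concept), text))

print(counts)  # e.g. Counter({'celebrity x': 3, 'gala': 1, 'album': 1})
```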

Content analysis is often used by marketers to measure brand reputation and customer behavior, for example, by analyzing customer reviews. It can also be used to analyze customer interviews and find directions for new product development. It is also important to note that, in order to extract the maximum potential out of this analysis method, you need a clearly defined research question.

14. Thematic Analysis

Very similar to content analysis, thematic analysis also helps in identifying and interpreting patterns in qualitative data, with the main difference being that the former can also be applied to quantitative analysis. The thematic method analyzes large pieces of text data, such as focus group transcripts or interviews, and groups them into themes or categories that come up frequently within the text. It is a great method when trying to figure out people's views and opinions about a certain topic. For example, if you are a brand that cares about sustainability, you can survey your customers to analyze their views and opinions about sustainability and how they apply it to their lives. You can also analyze customer service call transcripts to find common issues and improve your service.

Thematic analysis is a very subjective technique that relies on the researcher’s judgment. Therefore, to avoid bias, it follows six steps: familiarization, coding, generating themes, reviewing themes, defining and naming themes, and writing up. It is also important to note that, because it is a flexible approach, the data can be interpreted in multiple ways, and it can be hard to decide which data to emphasize.

15. Narrative Analysis 

A bit more complex in nature than the two previous ones, narrative analysis is used to explore the meaning behind the stories that people tell and most importantly, how they tell them. By looking into the words that people use to describe a situation you can extract valuable conclusions about their perspective on a specific topic. Common sources for narrative data include autobiographies, family stories, opinion pieces, and testimonials, among others. 

From a business perspective, narrative analysis can be useful to analyze customer behaviors and feelings towards a specific product, service, feature, or others. It provides unique and deep insights that can be extremely valuable. However, it has some drawbacks.  

The biggest weakness of this method is that the sample sizes are usually very small due to the complexity and time-consuming nature of the collection of narrative data. Plus, the way a subject tells a story will be significantly influenced by his or her specific experiences, making it very hard to replicate in a subsequent study. 

16. Discourse Analysis

Discourse analysis is used to understand the meaning behind any type of written, verbal, or symbolic discourse based on its political, social, or cultural context. It mixes the analysis of languages and situations together. This means that the way the content is constructed and the meaning behind it is significantly influenced by the culture and society it takes place in. For example, if you are analyzing political speeches you need to consider different context elements such as the politician's background, the current political context of the country, the audience to which the speech is directed, and so on. 

From a business point of view, discourse analysis is a great market research tool. It allows marketers to understand how the norms and ideas of the specific market work and how their customers relate to those ideas. It can be very useful to build a brand mission or develop a unique tone of voice. 

17. Grounded Theory Analysis

Traditionally, researchers decide on a method and hypothesis and start to collect the data to test that hypothesis. Grounded theory is the only method discussed here that doesn’t require an initial research question or hypothesis, as its value lies in the generation of new theories. With the grounded theory method, you can go into the analysis process with an open mind and explore the data to generate new theories through tests and revisions. In fact, you don't need to finish collecting the data before starting to analyze it; researchers usually begin finding valuable insights while they are still gathering the data.

All of these elements make grounded theory a very valuable method as theories are fully backed by data instead of initial assumptions. It is a great technique to analyze poorly researched topics or find the causes behind specific company outcomes. For example, product managers and marketers might use the grounded theory to find the causes of high levels of customer churn and look into customer surveys and reviews to develop new theories about the causes. 

How To Analyze Data? Top 17 Data Analysis Techniques To Apply

17 top data analysis techniques by datapine

Now that we’ve answered the questions “what is data analysis?” and “why is it important?”, and covered the different data analysis types, it’s time to dig deeper into how to perform your analysis by working through these 17 essential techniques.

1. Collaborate on your needs

Before you begin analyzing or drilling down into any techniques, it’s crucial to sit down with all key stakeholders within your organization, decide on your primary campaign or strategic goals, and gain a fundamental understanding of the types of insights that will best benefit your progress or give you the level of vision you need to evolve your organization.

2. Establish your questions

Once you’ve outlined your core objectives, you should consider which questions will need answering to help you achieve your mission. This is one of the most important techniques as it will shape the very foundations of your success.

To make sure your data works for you, you have to ask the right data analysis questions.

3. Data democratization

After giving your data analytics methodology some real direction, and knowing which questions need answering to extract optimum value from the information available to your organization, you should continue with democratization.

Data democratization is an action that aims to connect data from various sources efficiently and quickly so that anyone in your organization can access it at any given moment. You can extract data in text, images, videos, numbers, or any other format, and then perform cross-database analysis to achieve more advanced insights to share with the rest of the company interactively.

Once you have decided on your most valuable sources, you need to take all of this into a structured format to start collecting your insights. For this purpose, datapine offers an easy all-in-one data connectors feature to integrate all your internal and external sources and manage them at your will. Additionally, datapine’s end-to-end solution automatically updates your data, allowing you to save time and focus on performing the right analysis to grow your company.

data connectors from datapine

4. Think of data governance

When collecting data in a business or research context, you always need to think about security and privacy. With data breaches becoming a topic of concern for businesses, the need to protect your clients' or subjects’ sensitive information becomes critical.

To ensure that all this is taken care of, you need to think of a data governance strategy. According to Gartner, this concept refers to “the specification of decision rights and an accountability framework to ensure the appropriate behavior in the valuation, creation, consumption, and control of data and analytics.” In simpler words, data governance is a collection of processes, roles, and policies that ensure the efficient use of data while still achieving the main company goals. It ensures that clear roles are in place for who can access the information and how they can access it. In time, this not only ensures that sensitive information is protected but also allows for a more efficient analysis as a whole.

5. Clean your data

After harvesting data from so many sources, you will be left with a vast amount of information that can be overwhelming to deal with. At the same time, you may be faced with incorrect data that can be misleading to your analysis. The smartest thing you can do to avoid dealing with this later is to clean the data. This is fundamental before visualizing it, as it will ensure that the insights you extract from it are correct.

There are many things that you need to look for in the cleaning process. The most important one is to eliminate any duplicate observations; this usually appears when using multiple internal and external sources of information. You can also add any missing codes, fix empty fields, and eliminate incorrectly formatted data.

Another usual form of cleaning is done with text data. As we mentioned earlier, most companies today analyze customer reviews, social media comments, questionnaires, and several other text inputs. In order for algorithms to detect patterns, text data needs to be revised to avoid invalid characters or any syntax or spelling errors. 
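Here is a hedged sketch of these cleaning steps in pandas, covering duplicates, empty fields, and rough text normalization. The column names and values are invented.

```python
# A minimal pandas cleaning sketch: drop duplicates, fill empty fields,
# and normalize a text column. All column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "customer": ["Ana", "Ana", "Ben", None],
    "review":   ["Great!!", "Great!!", "  good ", "ok"],
    "score":    [5, 5, None, 3],
})

df = df.drop_duplicates()                               # remove duplicate rows
df["score"] = df["score"].fillna(df["score"].median())  # fill empty fields
df = df.dropna(subset=["customer"])                     # drop rows missing a key field
df["review"] = (df["review"].str.strip()                # trim stray whitespace
                            .str.lower()
                            .str.replace(r"[^\w\s]", "", regex=True))  # strip punctuation
print(df)
```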

Most importantly, the aim of cleaning is to prevent you from arriving at false conclusions that can damage your company in the long run. By using clean data, you will also help BI solutions to interact better with your information and create better reports for your organization.

6. Set your KPIs

Once you’ve set your sources, cleaned your data, and established clear-cut questions you want your insights to answer, you need to set a host of key performance indicators (KPIs) that will help you track, measure, and shape your progress in a number of key areas.

KPIs are critical to both qualitative and quantitative analysis research. This is one of the primary methods of data analysis you certainly shouldn’t overlook.

To help you set the best possible KPIs for your initiatives and activities, here is an example of a relevant logistics KPI: transportation-related costs. If you want to see more, explore our collection of key performance indicator examples.

Transportation costs logistics KPIs

7. Omit useless data

Having bestowed your data analysis tools and techniques with true purpose and defined your mission, you should explore the raw data you’ve collected from all sources and use your KPIs as a reference for chopping out any information you deem to be useless.

Trimming the informational fat is one of the most crucial methods of analysis as it will allow you to focus your analytical efforts and squeeze every drop of value from the remaining ‘lean’ information.

Any stats, facts, figures, or metrics that don’t align with your business goals or fit with your KPI management strategies should be eliminated from the equation.

8. Build a data management roadmap

While, at this point, this particular step is optional (you will have already gained a wealth of insight and formed a fairly sound strategy by now), creating a data governance roadmap will help your data analysis methods and techniques become successful on a more sustainable basis. These roadmaps, if developed properly, are also built so they can be tweaked and scaled over time.

Invest ample time in developing a roadmap that will help you store, manage, and handle your data internally, and you will make your analysis techniques all the more fluid and functional – one of the most powerful types of data analysis methods available today.

9. Integrate technology

There are many ways to analyze data, but one of the most vital aspects of analytical success in a business context is integrating the right decision support software and technology.

Robust analysis platforms will not only allow you to pull critical data from your most valuable sources while working with dynamic KPIs that offer you actionable insights; they will also present that data in a digestible, visual, interactive format from one central, live dashboard. A data methodology you can count on.

By integrating the right technology within your data analysis methodology, you’ll avoid fragmenting your insights, saving you time and effort while allowing you to enjoy the maximum value from your business’s most valuable insights.

For a look at the power of software for the purpose of analysis and to enhance your methods of analyzing, glance over our selection of dashboard examples .

10. Answer your questions

By considering each of the above efforts, working with the right technology, and fostering a cohesive internal culture where everyone buys into the different ways to analyze data as well as the power of digital intelligence, you will swiftly start to answer your most burning business questions. Arguably, the best way to make your data concepts accessible across the organization is through data visualization.

11. Visualize your data

Online data visualization is a powerful tool as it lets you tell a story with your metrics, allowing users across the organization to extract meaningful insights that aid business evolution – and it covers all the different ways to analyze data.

The purpose of analyzing is to make your entire organization more informed and intelligent, and with the right platform or dashboard, this is simpler than you think, as demonstrated by our marketing dashboard .

An executive dashboard example showcasing high-level marketing KPIs such as cost per lead, MQL, SQL, and cost per customer.

This visual, dynamic, and interactive online dashboard is a data analysis example designed to give Chief Marketing Officers (CMO) an overview of relevant metrics to help them understand if they achieved their monthly goals.

In detail, this example generated with a modern dashboard creator displays interactive charts for monthly revenues, costs, net income, and net income per customer; all of them are compared with the previous month so that you can understand how the data fluctuated. In addition, it shows a detailed summary of the number of users, customers, SQLs, and MQLs per month to visualize the whole picture and extract relevant insights or trends for your marketing reports .

The CMO dashboard is perfect for c-level management as it can help them monitor the strategic outcome of their marketing efforts and make data-driven decisions that can benefit the company exponentially.

12. Be careful with the interpretation

We already dedicated an entire post to data interpretation, as it is a fundamental part of the process of data analysis. It gives meaning to the analytical information and aims to draw a concise conclusion from the analysis results. Since companies are usually dealing with data from many different sources, the interpretation stage needs to be done carefully and properly in order to avoid misinterpretations.

To help you through the process, here we list three common practices that you need to avoid at all costs when looking at your data:

  • Correlation vs. causation: The human brain is wired to find patterns. This behavior leads to one of the most common mistakes when performing interpretation: confusing correlation with causation. Although these two aspects can exist simultaneously, it is not correct to assume that because two things happened together, one provoked the other. A piece of advice to avoid falling into this mistake is never to trust intuition alone; trust the data. If there is no objective evidence of causation, then always stick to correlation.
  • Confirmation bias: This phenomenon describes the tendency to select and interpret only the data necessary to prove one hypothesis, often ignoring the elements that might disprove it. Even if it's not done on purpose, confirmation bias can represent a real problem, as excluding relevant information can lead to false conclusions and, therefore, bad business decisions. To avoid it, always try to disprove your hypothesis instead of proving it, share your analysis with other team members, and avoid drawing any conclusions before the entire analytical project is finalized.
  • Statistical significance: In short, statistical significance helps analysts understand whether a result is actually reliable or whether it happened because of a sampling error or pure chance. The level of statistical significance needed might depend on the sample size and the industry being analyzed. In any case, ignoring the significance of a result when it might influence decision-making can be a huge mistake. A minimal sketch of a significance check follows this list.
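As promised above, here is a hedged sketch of a basic significance check with SciPy: an independent two-sample t-test comparing invented conversion values from two campaign variants.

```python
# A minimal significance-check sketch: two-sample t-test on invented data.
from scipy import stats

variant_a = [12.1, 11.8, 12.5, 13.0, 12.2, 11.9, 12.7]
variant_b = [13.2, 13.5, 12.9, 13.8, 13.1, 13.6, 13.3]

t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Conventionally, p < 0.05 suggests the difference is unlikely to be chance,
# but the right threshold depends on your field and the cost of being wrong.
alpha = 0.05
print("significant" if p_value < alpha else "not significant")
```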

13. Build a narrative

Now, we’re going to look at how you can bring all of these elements together in a way that will benefit your business - starting with a little something called data storytelling.

The human brain responds incredibly well to strong stories or narratives. Once you’ve cleansed, shaped, and visualized your most invaluable data using various BI dashboard tools , you should strive to tell a story - one with a clear-cut beginning, middle, and end.

By doing so, you will make your analytical efforts more accessible, digestible, and universal, empowering more people within your organization to use your discoveries to their actionable advantage.

14. Consider autonomous technology

Autonomous technologies, such as artificial intelligence (AI) and machine learning (ML), play a significant role in the advancement of understanding how to analyze data more effectively.

Gartner predicts that by the end of this year, 80% of emerging technologies will be developed with AI foundations. This is a testament to the ever-growing power and value of autonomous technologies.

At the moment, these technologies are revolutionizing the analysis industry. Some examples that we mentioned earlier are neural networks, intelligent alarms, and sentiment analysis.

15. Share the load

If you work with the right tools and dashboards, you will be able to present your metrics in a digestible, value-driven format, allowing almost everyone in the organization to connect with and use relevant data to their advantage.

Modern dashboards consolidate data from various sources, providing access to a wealth of insights in one centralized location, no matter if you need to monitor recruitment metrics or generate reports that need to be sent across numerous departments. Moreover, these cutting-edge tools offer access to dashboards from a multitude of devices, meaning that everyone within the business can connect with practical insights remotely - and share the load.

Once everyone is able to work with a data-driven mindset, you will catalyze the success of your business in ways you never thought possible. And when it comes to knowing how to analyze data, this kind of collaborative approach is essential.

16. Data analysis tools

In order to perform high-quality analysis of data, it is fundamental to use tools and software that will ensure the best results. Here we leave you a small summary of four fundamental categories of data analysis tools for your organization.

  • Business Intelligence: BI tools allow you to process significant amounts of data from several sources in any format. Through this, you can not only analyze and monitor your data to extract relevant insights but also create interactive reports and dashboards to visualize your KPIs and use them for your company's good. datapine is an amazing online BI software that is focused on delivering powerful online analysis features that are accessible to beginner and advanced users. Like this, it offers a full-service solution that includes cutting-edge analysis of data, KPIs visualization, live dashboards, reporting, and artificial intelligence technologies to predict trends and minimize risk.
  • Statistical analysis: These tools are usually designed for scientists, statisticians, market researchers, and mathematicians, as they allow them to perform complex statistical analyses with methods like regression analysis, predictive analysis, and statistical modeling. A good tool to perform this type of analysis is R-Studio as it offers a powerful data modeling and hypothesis testing feature that can cover both academic and general data analysis. This tool is one of the favorite ones in the industry, due to its capability for data cleaning, data reduction, and performing advanced analysis with several statistical methods. Another relevant tool to mention is SPSS from IBM. The software offers advanced statistical analysis for users of all skill levels. Thanks to a vast library of machine learning algorithms, text analysis, and a hypothesis testing approach it can help your company find relevant insights to drive better decisions. SPSS also works as a cloud service that enables you to run it anywhere.
  • SQL Consoles: SQL is a programming language often used to handle structured data in relational databases. Tools like these are popular among data scientists as they are extremely effective in unlocking these databases' value. Undoubtedly, one of the most used SQL software in the market is MySQL Workbench . This tool offers several features such as a visual tool for database modeling and monitoring, complete SQL optimization, administration tools, and visual performance dashboards to keep track of KPIs.
  • Data Visualization: These tools are used to represent your data through charts, graphs, and maps that allow you to find patterns and trends in the data. datapine's already mentioned BI platform also offers a wealth of powerful online data visualization tools with several benefits. Some of them include: delivering compelling data-driven presentations to share with your entire company, the ability to see your data online with any device wherever you are, an interactive dashboard design feature that enables you to showcase your results in an interactive and understandable way, and to perform online self-service reports that can be used simultaneously with several other people to enhance team productivity.

17. Refine your process constantly 

Last is a step that might seem obvious to some people, but it can be easily ignored if you think you are done. Once you have extracted the needed results, you should always take a retrospective look at your project and think about what you can improve. As you saw throughout this long list of techniques, data analysis is a complex process that requires constant refinement. For this reason, you should always go one step further and keep improving. 

Quality Criteria For Data Analysis

So far we’ve covered a list of methods and techniques that should help you perform efficient data analysis. But how do you measure the quality and validity of your results? This is done with the help of scientific quality criteria. Here we will go into a more theoretical area that is critical to understanding the fundamentals of statistical analysis in science. However, you should also be aware of these criteria in a business context, as they will allow you to assess the quality of your results in the correct way. Let’s dig in.

  • Internal validity: The results of a survey are internally valid if they measure what they are supposed to measure and thus provide credible results. In other words, internal validity measures the trustworthiness of the results and how they can be affected by factors such as the research design, operational definitions, how the variables are measured, and more. For instance, imagine you are conducting an interview to ask people if they brush their teeth twice a day. While most of them will answer yes, you may notice that their answers correspond to what is socially acceptable, which is to brush your teeth at least twice a day. In this case, you can’t be 100% sure whether respondents actually brush their teeth twice a day or whether they just say that they do; therefore, the internal validity of this interview is very low.
  • External validity: Essentially, external validity refers to the extent to which the results of your research can be applied to a broader context. It basically aims to prove that the findings of a study can be applied in the real world. If the research can be applied to other settings, individuals, and times, then the external validity is high. 
  • Reliability : If your research is reliable, it means that it can be reproduced. If your measurement were repeated under the same conditions, it would produce similar results. This means that your measuring instrument consistently produces reliable results. For example, imagine a doctor building a symptoms questionnaire to detect a specific disease in a patient. Then, various other doctors use this questionnaire but end up diagnosing the same patient with a different condition. This means the questionnaire is not reliable in detecting the initial disease. Another important note here is that in order for your research to be reliable, it also needs to be objective. If the results of a study are the same, independent of who assesses them or interprets them, the study can be considered reliable. Let’s see the objectivity criteria in more detail now. 
  • Objectivity: In data science, objectivity means that the researcher needs to stay fully objective during the analysis. The results of a study need to be determined by objective criteria and not by the beliefs, personality, or values of the researcher. Objectivity needs to be ensured when you are gathering the data; for example, when interviewing individuals, the questions need to be asked in a way that doesn't influence the results. Paired with this, objectivity also needs to be considered when interpreting the data. If different researchers reach the same conclusions, then the study is objective. For this last point, you can set predefined criteria for interpreting the results to ensure all researchers follow the same steps.

The discussed quality criteria cover mostly potential influences in a quantitative context. Analysis in qualitative research has by default additional subjective influences that must be controlled in a different way. Therefore, there are other quality criteria for this kind of research such as credibility, transferability, dependability, and confirmability. You can see each of them more in detail on this resource . 

Data Analysis Limitations & Barriers

Analyzing data is not an easy task. As you’ve seen throughout this post, there are many steps and techniques that you need to apply in order to extract useful information from your research. While a well-performed analysis can bring various benefits to your organization it doesn't come without limitations. In this section, we will discuss some of the main barriers you might encounter when conducting an analysis. Let’s see them more in detail. 

  • Lack of clear goals: No matter how good your data or analysis might be, if you don’t have clear goals or a hypothesis, the process might be worthless. While we mentioned some methods that don’t require a predefined hypothesis, it is always better to enter the analytical process with some clear guidelines about what you expect to get out of it, especially in a business context in which data is utilized to support important strategic decisions.
  • Objectivity: Arguably one of the biggest barriers when it comes to data analysis in research is to stay objective. When trying to prove a hypothesis, researchers might find themselves, intentionally or unintentionally, directing the results toward an outcome that they want. To avoid this, always question your assumptions and avoid confusing facts with opinions. You can also show your findings to a research partner or external person to confirm that your results are objective. 
  • Data representation: A fundamental part of the analytical procedure is the way you represent your data. You can use various graphs and charts to represent your findings, but not all of them will work for all purposes. Choosing the wrong visual can not only damage your analysis but also mislead your audience; therefore, it is important to understand when to use each type of visual depending on your analytical goals. Our complete guide on the types of graphs and charts lists 20 different visuals with examples of when to use them.
  • Flawed correlation : Misleading statistics can significantly damage your research. We’ve already pointed out a few interpretation issues previously in the post, but it is an important barrier that we can't avoid addressing here as well. Flawed correlations occur when two variables appear related to each other but they are not. Confusing correlations with causation can lead to a wrong interpretation of results which can lead to building wrong strategies and loss of resources, therefore, it is very important to identify the different interpretation mistakes and avoid them. 
  • Sample size: A very common barrier to a reliable and efficient analysis process is the sample size. In order for the results to be trustworthy, the sample size should be representative of what you are analyzing. For example, imagine you have a company of 1000 employees and you ask the question “do you like working here?” to 50 employees, of which 49 say yes, which means 98%. Now, imagine you ask the same question to all 1000 employees and 980 say yes, which also means 98%. Saying that 98% of employees like working in the company when the sample size was only 50 is not a representative or trustworthy conclusion. The significance of the results is far more reliable with a bigger sample size.
  • Privacy concerns: In some cases, data collection can be subjected to privacy regulations. Businesses gather all kinds of information from their customers from purchasing behaviors to addresses and phone numbers. If this falls into the wrong hands due to a breach, it can affect the security and confidentiality of your clients. To avoid this issue, you need to collect only the data that is needed for your research and, if you are using sensitive facts, make it anonymous so customers are protected. The misuse of customer data can severely damage a business's reputation, so it is important to keep an eye on privacy. 
  • Lack of communication between teams : When it comes to performing data analysis on a business level, it is very likely that each department and team will have different goals and strategies. However, they are all working for the same common goal of helping the business run smoothly and keep growing. When teams are not connected and communicating with each other, it can directly affect the way general strategies are built. To avoid these issues, tools such as data dashboards enable teams to stay connected through data in a visually appealing way. 
  • Innumeracy : Businesses are working with data more and more every day. While there are many BI tools available to perform effective analysis, data literacy is still a constant barrier. Not all employees know how to apply analysis techniques or extract insights from them. To prevent this from happening, you can implement different training opportunities that will prepare every relevant user to deal with data. 

Key Data Analysis Skills

As you've learned throughout this lengthy guide, analyzing data is a complex task that requires a lot of knowledge and skills. That said, thanks to the rise of self-service tools, the process is far more accessible and agile than it once was. Regardless, there are still some key skills that are valuable to have when working with data; we list the most important ones below.

  • Critical and statistical thinking: To successfully analyze data you need to be creative and think outside the box. Yes, that might sound like a strange statement considering that data is often tied to facts. However, a great level of critical thinking is required to uncover connections, come up with a valuable hypothesis, and extract conclusions that go a step beyond the surface. This, of course, needs to be complemented by statistical thinking and an understanding of numbers.
  • Data cleaning: Anyone who has ever worked with data will tell you that cleaning and preparation account for around 80% of a data analyst's work, so the skill is fundamental. What's more, failing to clean the data adequately can significantly damage the analysis, which can lead to poor decision-making in a business scenario. While there are multiple tools that automate the cleaning process and eliminate the possibility of human error, it is still a valuable skill to master.
  • Data visualization: Visuals make the information easier to understand and analyze, not only for professional users but especially for non-technical ones. Having the necessary skills to not only choose the right chart type but know when to apply it correctly is key. This also means being able to design visually compelling charts that make the data exploration process more efficient. 
  • SQL: The Structured Query Language or SQL is a programming language used to communicate with databases. It is fundamental knowledge as it enables you to update, manipulate, and organize data from relational databases which are the most common databases used by companies. It is fairly easy to learn and one of the most valuable skills when it comes to data analysis. 
  • Communication skills: This is a skill that is especially valuable in a business environment. Being able to clearly communicate analytical outcomes to colleagues is incredibly important, especially when the information you are trying to convey is complex for non-technical people. This applies to in-person communication as well as written format, for example, when generating a dashboard or report. While this might be considered a “soft” skill compared to the other ones we mentioned, it should not be ignored as you most likely will need to share analytical findings with others no matter the context. 

Data Analysis In The Big Data Environment

Big data is invaluable to today’s businesses, and by using different methods for data analysis, it’s possible to view your data in a way that can help you turn insight into positive action.

To inspire your efforts and put the importance of big data into context, here are some insights that you should know:

  • By 2026 the industry of big data is expected to be worth approximately $273.4 billion.
  • 94% of enterprises say that analyzing data is important for their growth and digital transformation. 
  • Companies that exploit the full potential of their data can increase their operating margins by 60% .
  • We have already discussed the benefits of artificial intelligence throughout this article; the industry's financial impact is expected to grow to $40 billion by 2025.

Data analysis concepts may come in many forms, but fundamentally, any solid methodology will help to make your business more streamlined, cohesive, insightful, and successful than ever before.

Key Takeaways From Data Analysis 

As we reach the end of our data analysis journey, here is a brief summary of the main methods and techniques for performing excellent analysis and growing your business.

17 Essential Types of Data Analysis Methods:

  • Cluster analysis
  • Cohort analysis
  • Regression analysis
  • Factor analysis
  • Neural Networks
  • Data Mining
  • Text analysis
  • Time series analysis
  • Decision trees
  • Conjoint analysis 
  • Correspondence Analysis
  • Multidimensional Scaling 
  • Content analysis 
  • Thematic analysis
  • Narrative analysis 
  • Grounded theory analysis
  • Discourse analysis 

Top 17 Data Analysis Techniques:

  • Collaborate on your needs
  • Establish your questions
  • Data democratization
  • Think of data governance 
  • Clean your data
  • Set your KPIs
  • Omit useless data
  • Build a data management roadmap
  • Integrate technology
  • Answer your questions
  • Visualize your data
  • Be careful with the interpretation
  • Consider autonomous technology
  • Build a narrative
  • Share the load
  • Data Analysis tools
  • Refine your process constantly 

We’ve pondered the data analysis definition and drilled down into the practical applications of data-centric analytics, and one thing is clear: by taking measures to arrange your data and making your metrics work for you, it’s possible to transform raw information into action - the kind that will push your business to the next level.

Yes, good data analytics techniques result in enhanced business intelligence (BI). To help you understand this notion in more detail, read our exploration of business intelligence reporting .

And, if you’re ready to perform your own analysis, drill down into your facts and figures while interacting with your data on astonishing visuals, you can try our software for a free, 14-day trial .


Qualitative vs. Quantitative Research | Differences, Examples & Methods

Published on April 12, 2019 by Raimo Streefkerk . Revised on June 22, 2023.

When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.

Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.

Quantitative research is at risk for research biases including information bias, omitted variable bias, sampling bias, or selection bias.

Qualitative research

Qualitative research is expressed in words. It is used to understand concepts, thoughts or experiences. This type of research enables you to gather in-depth insights on topics that are not well understood.

Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews that explore concepts and theories.

Table of contents

  • The differences between quantitative and qualitative research
  • Data collection methods
  • When to use qualitative vs. quantitative research
  • How to analyze qualitative and quantitative data
  • Other interesting articles
  • Frequently asked questions about qualitative and quantitative research

Quantitative and qualitative research use different research methods to collect and analyze data, and they allow you to answer different kinds of research questions.

Qualitative vs. quantitative research

Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).

Many data collection methods can be either qualitative or quantitative. For example, in surveys, observational studies or case studies , your data can be represented as numbers (e.g., using rating scales or counting frequencies) or as words (e.g., with open-ended questions or descriptions of what you observe).

However, some methods are more commonly used in one type or the other.

Quantitative data collection methods

  • Surveys: Lists of closed-ended or multiple-choice questions distributed to a sample (online, in person, or over the phone).
  • Experiments: Situations in which variables are controlled and manipulated to establish cause-and-effect relationships.
  • Observations: Observing subjects in a natural environment where variables can’t be controlled.

Qualitative data collection methods

  • Interviews: Asking open-ended questions verbally to respondents.
  • Focus groups: Discussion among a group of people about a topic to gather opinions that can be used for further research.
  • Ethnography: Participating in a community or organization for an extended period of time to closely observe culture and behavior.
  • Literature review: Survey of published works by other authors.

A rule of thumb for deciding whether to use qualitative or quantitative data is:

  • Use quantitative research if you want to confirm or test something (a theory or hypothesis )
  • Use qualitative research if you want to understand something (concepts, thoughts, experiences)

For most research topics you can choose a qualitative, quantitative or mixed methods approach . Which type you choose depends on, among other things, whether you’re taking an inductive vs. deductive research approach ; your research question(s) ; whether you’re doing experimental , correlational , or descriptive research ; and practical considerations such as time, money, availability of data, and access to respondents.

Quantitative research approach

You survey 300 students at your university and ask them questions such as: “on a scale from 1-5, how satisfied are you with your professors?”

You can perform statistical analysis on the data and draw conclusions such as: “on average students rated their professors 4.4”.

Qualitative research approach

You conduct in-depth interviews with 15 students and ask them open-ended questions such as: “How satisfied are you with your studies?”, “What is the most positive aspect of your study program?” and “What can be done to improve the study program?”

Based on the answers you get you can ask follow-up questions to clarify things. You transcribe all interviews using transcription software and try to find commonalities and patterns.

Mixed methods approach

You conduct interviews to find out how satisfied students are with their studies. Through open-ended questions you learn things you never thought about before and gain new insights. Later, you use a survey to test these insights on a larger scale.

It’s also possible to start with a survey to find out the overall trends, followed by interviews to better understand the reasons behind the trends.

Qualitative or quantitative data by itself can’t prove or demonstrate anything, but has to be analyzed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.

Analyzing quantitative data

Quantitative data is based on numbers. Simple math or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.

Applications such as Excel, SPSS, or R can be used to calculate things like:

  • Average scores ( means )
  • The number of times a particular answer was given
  • The correlation or causation between two or more variables
  • The reliability and validity of the results
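To make this concrete, here is a minimal sketch in Python with pandas (a scriptable alternative to Excel or SPSS) of the first three calculations in the list above. The tiny survey dataset is invented purely for illustration.

```python
import pandas as pd

# Hypothetical survey responses: a 1-5 satisfaction rating and weekly study hours
df = pd.DataFrame({
    "satisfaction": [4, 5, 3, 4, 5, 2, 4],
    "study_hours":  [10, 14, 6, 9, 15, 4, 11],
})

print(df["satisfaction"].mean())                   # average score (mean)
print(df["satisfaction"].value_counts())           # how often each answer was given
print(df["satisfaction"].corr(df["study_hours"]))  # Pearson correlation (not causation)
```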

Analyzing qualitative data

Qualitative data is more difficult to analyze than quantitative data. It consists of text, images or videos instead of numbers.

Some common approaches to analyzing qualitative data include:

  • Qualitative content analysis : Tracking the occurrence, position and meaning of words or phrases
  • Thematic analysis : Closely examining the data to identify the main themes and patterns
  • Discourse analysis : Studying how communication works in social contexts

Frequently asked questions about qualitative and quantitative research

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.



Data Analysis in Research: Types & Methods


Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

What is data analysis in research?

Definition of research in data analysis: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction through summarization and categorization, which helps find patterns and themes in the data for easy identification and linking. The third is the analysis itself, which researchers carry out in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that data analysis and data interpretation together represent the application of deductive and inductive logic to research.

Why analyze data in research?

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But, what if there is no question to ask? Well! It is possible to explore data even without a problem – we call it ‘Data Mining’, which often reveals some interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when the analysis began. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Types of data in research

Every kind of data describes things once a specific value has been assigned to it. For analysis, these values need to be organized, processed, and presented in a given context to make them useful. Data can take different forms; here are the primary data types.

  • Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data . Although you can observe this data, it is subjective and harder to analyze, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews , qualitative observation or using open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data . This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: questions about age, rank, cost, length, weight, scores, etc. all produce this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. The OMS (Outcomes Measurement Systems) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: Data presented in groups. An item included in categorical data, however, cannot belong to more than one group. Example: a person responding to a survey with their living style, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data (a minimal sketch follows this list).
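As a hedged illustration of that last point, the sketch below runs a chi-square test of independence with scipy on an invented contingency table (smoking habit by marital status); the counts are hypothetical, not real survey results.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = smoker / non-smoker, columns = married / single
table = [[20, 30],
         [40, 60]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")  # large p: no evidence of association
```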


Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complex information is an involved process; hence it is typically used for exploratory research and data analysis .

Finding patterns in the qualitative data

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and find repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.
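A minimal word-frequency sketch of this manual counting step, using only the Python standard library; the two responses below are invented stand-ins for the interview data described above.

```python
import re
from collections import Counter

responses = [
    "We need food and clean water",
    "Hunger is rising because food prices keep rising",
]

# Lowercase everything, extract words, and count them
words = re.findall(r"[a-z]+", " ".join(responses).lower())
print(Counter(words).most_common(5))  # surfaces repeated words such as "food" and "rising"
```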


The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is also one of the highly recommended  text analysis  methods used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, examining how one piece of text is similar to or different from another.

For example: to find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types .

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations in enormous datasets.


Methods used for data analysis in qualitative research

There are several techniques to analyze the data in qualitative research, but here are some commonly used methods:

  • Content Analysis:  It is widely accepted and the most frequently employed technique for data analysis in research methodology. It can be used to analyze the documented information from text, images, and sometimes from the physical items. It depends on the research questions to predict when and where to use this method.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observation, and  surveys . Most of the time, the stories or opinions shared by people are focused on finding answers to the research questions.
  • Discourse Analysis:  Similar to narrative analysis, discourse analysis is used to analyze the interactions with people. Nevertheless, this particular method considers the social context under which or within which the communication between the researcher and respondent takes place. In addition to that, discourse analysis also focuses on the lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory:  When you want to explain why a particular phenomenon happened, then using grounded theory for analyzing quality data is the best resort. Grounded theory is applied to study data about the host of similar cases occurring in different settings. When researchers are using this method, they might alter explanations or produce new ones until they arrive at some conclusion.


Data analysis in quantitative research

Preparing data for analysis

The first stage in research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to understand whether the collected data sample meets pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey, or that the interviewer asked every question devised in the questionnaire.

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in fields incorrectly or skip them accidentally. Data editing is a process wherein researchers confirm that the provided data is free of such errors. They conduct the necessary consistency and outlier checks to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to survey responses . If a survey is completed with a sample size of 1,000, the researcher might create age brackets to distinguish respondents by age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile (a short pandas sketch of this step follows).
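A hedged sketch of this coding step in pandas; the ages and bracket edges below are one arbitrary choice, not a prescribed scheme.

```python
import pandas as pd

ages = pd.Series([19, 23, 31, 45, 52, 38, 27, 61])  # hypothetical respondent ages

# Group raw ages into brackets so responses can be analyzed in small buckets
brackets = pd.cut(ages, bins=[18, 25, 35, 50, 65],
                  labels=["18-25", "26-35", "36-50", "51-65"])
print(brackets.value_counts().sort_index())
```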


Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical analysis is certainly the most favored way to analyze numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods are again classified into two groups: first, ‘descriptive statistics’, used to describe the data; second, ‘inferential statistics’, which help in comparing the data and generalizing from it.

Descriptive statistics

This method is used to describe the basic features of various types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. Descriptive analysis, however, does not go beyond summarizing the data at hand; any conclusions remain tied to the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • These measures are widely used to summarize a distribution by its central points.
  • Researchers use this method when they want to showcase the most commonly or averagely indicated response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the difference between the highest and lowest points.
  • Variance is the average squared difference between each observed score and the mean; the standard deviation is its square root.
  • It is used to identify the spread of scores by stating intervals.
  • Researchers use this method to show how spread out the data is. It helps them identify how widely scores are dispersed, which directly affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores helping researchers to identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
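The sketch below computes the dispersion and position measures just listed, using numpy on a small set of invented test scores.

```python
import numpy as np

scores = np.array([55, 62, 68, 70, 74, 80, 85, 91])  # hypothetical test scores

print(scores.max() - scores.min())          # range (high point minus low point)
print(scores.var(ddof=1))                   # sample variance
print(scores.std(ddof=1))                   # sample standard deviation
print(np.percentile(scores, [25, 50, 75]))  # quartile positions
```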

In quantitative research, descriptive analysis often gives absolute numbers, but those numbers alone are never sufficient to demonstrate the rationale behind them. It is therefore necessary to choose the method of research and data analysis best suited to your survey questionnaire and to the story researchers want to tell. For example, the mean is the best way to demonstrate students’ average scores in schools. It is better to rely on descriptive statistics when researchers intend to keep the research or outcome limited to the provided  sample  without generalizing it. For example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample representing that population. For example, you can ask some 100-odd audience members at a movie theater if they like the movie they are watching. Researchers then use inferential statistics on the collected  sample  to infer that about 80-90% of people like the movie.
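A rough sketch of the movie-theater example in plain Python, using a normal approximation to put an interval around the sample share; the counts are hypothetical and the approximation is a back-of-the-envelope choice, not the only valid method.

```python
import math

n, liked = 100, 85      # hypothetical sample: 85 of 100 viewers liked the movie
p_hat = liked / n
se = math.sqrt(p_hat * (1 - p_hat) / n)        # standard error of the sample proportion
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se  # approximate 95% interval
print(f"estimated share: {p_hat:.0%} (approx. 95% CI {lo:.0%} to {hi:.0%})")
```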

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and demonstrates something about the population parameter.
  • Hypothesis test: It’s about sampling research data to answer the survey research questions. For example, researchers might be interested to understand whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables,  cross-tabulation  is used to analyze the relationship between multiple variables. Suppose the provided data has age and gender categories presented in rows and columns. A two-dimensional cross-tabulation supports seamless data analysis and research by showing the number of males and females in each age category (a minimal sketch follows this list).
  • Regression analysis: To understand the strength of the relationship between variables, researchers commonly turn to regression analysis, which is also a type of predictive analysis. In this method, you have an essential factor called the dependent variable, along with one or more independent variables. You undertake efforts to find out the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to be ascertained in an error-free, random manner.
  • Frequency tables: This statistical procedure summarizes how often each value or category occurs in the data, presenting the counts (and often percentages) in a compact table.
  • Analysis of variance: This statistical procedure is used to test the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means research findings were significant. In many contexts, ANOVA testing and variance analysis are similar.
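A minimal sketch of the cross-tabulation bullet above, using pandas on a handful of invented respondents.

```python
import pandas as pd

df = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "F", "M", "F"],
    "age_group": ["18-25", "18-25", "26-35", "26-35", "26-35", "36-50", "36-50"],
})

# Two-dimensional contingency table: age categories in rows, gender in columns
print(pd.crosstab(df["age_group"], df["gender"]))
```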
Considerations in research data analysis

  • Researchers must have the necessary research skills to analyze and manipulate the data , and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Usually, research and data analytics projects differ by scientific discipline; therefore, getting statistical advice at the beginning of analysis helps design a survey questionnaire, select data collection  methods, and choose samples.


  • The primary aim of research data analysis is to derive insights that are unbiased. Any mistake in collecting data, selecting an analysis method, or choosing an  audience  sample, or approaching the data with a biased mind, will lead to a biased inference.
  • No degree of sophistication in research data and analysis can rectify poorly defined objective outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid that practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find a way to deal with everyday challenges like outliers, missing data, data alteration, data mining , and developing graphical representations.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in the hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.


PW Skills | Blog

Data Analysis Techniques in Research – Methods, Tools & Examples

Data analysis techniques in research are essential because they allow researchers to derive meaningful insights from data sets to support their hypotheses or research objectives.

Data Analysis Techniques in Research : While various groups, institutions, and professionals may have diverse approaches to data analysis, a universal definition captures its essence. Data analysis involves refining, transforming, and interpreting raw data to derive actionable insights that guide informed decision-making for businesses.


A straightforward illustration of data analysis emerges when we make everyday decisions, basing our choices on past experiences or predictions of potential outcomes.

If you want to learn more about this topic and acquire valuable skills that will set you apart in today’s data-driven world, we highly recommend enrolling in the Data Analytics Course by Physics Wallah . And as a special offer for our readers, use the coupon code “READER” to get a discount on this course.


What is Data Analysis?

Data analysis is the systematic process of inspecting, cleaning, transforming, and interpreting data with the objective of discovering valuable insights and drawing meaningful conclusions. This process involves several steps:

  • Inspecting : Initial examination of data to understand its structure, quality, and completeness.
  • Cleaning : Removing errors, inconsistencies, or irrelevant information to ensure accurate analysis.
  • Transforming : Converting data into a format suitable for analysis, such as normalization or aggregation.
  • Interpreting : Analyzing the transformed data to identify patterns, trends, and relationships.
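A hedged end-to-end sketch of these four steps on a tiny hypothetical dataset, using pandas; the z-score normalization in the transform step is one possible choice among many.

```python
import pandas as pd

df = pd.DataFrame({"score": [72.0, 85.0, None, 90.0, 85.0],
                   "group": ["A", "B", "B", "A", "B"]})

df.info()                                   # inspect: structure, types, completeness
df = df.dropna()                            # clean: drop the incomplete row
df["score_z"] = (df["score"] - df["score"].mean()) / df["score"].std()  # transform: z-scores
print(df.groupby("group")["score"].mean())  # interpret: compare patterns across groups
```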

Types of Data Analysis Techniques in Research

Data analysis techniques in research are categorized into qualitative and quantitative methods, each with its specific approaches and tools. These techniques are instrumental in extracting meaningful insights, patterns, and relationships from data to support informed decision-making, validate hypotheses, and derive actionable recommendations. Below is an in-depth exploration of the various types of data analysis techniques commonly employed in research:

1) Qualitative Analysis:

Definition: Qualitative analysis focuses on understanding non-numerical data, such as opinions, concepts, or experiences, to derive insights into human behavior, attitudes, and perceptions.

  • Content Analysis: Examines textual data, such as interview transcripts, articles, or open-ended survey responses, to identify themes, patterns, or trends.
  • Narrative Analysis: Analyzes personal stories or narratives to understand individuals’ experiences, emotions, or perspectives.
  • Ethnographic Studies: Involves observing and analyzing cultural practices, behaviors, and norms within specific communities or settings.

2) Quantitative Analysis:

Quantitative analysis emphasizes numerical data and employs statistical methods to explore relationships, patterns, and trends. It encompasses several approaches:

Descriptive Analysis:

  • Frequency Distribution: Represents the number of occurrences of distinct values within a dataset.
  • Central Tendency: Measures such as mean, median, and mode provide insights into the central values of a dataset.
  • Dispersion: Techniques like variance and standard deviation indicate the spread or variability of data.

Diagnostic Analysis:

  • Regression Analysis: Assesses the relationship between dependent and independent variables, enabling prediction or understanding causality.
  • ANOVA (Analysis of Variance): Examines differences between groups to identify significant variations or effects.

Predictive Analysis:

  • Time Series Forecasting: Uses historical data points to predict future trends or outcomes.
  • Machine Learning Algorithms: Techniques like decision trees, random forests, and neural networks predict outcomes based on patterns in data.
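As a minimal illustration of time-series forecasting, the sketch below makes a naive moving-average prediction of next month's value with pandas; the monthly sales figures are invented, and a real analysis would likely use a dedicated forecasting model.

```python
import pandas as pd

sales = pd.Series([100, 104, 110, 108, 115, 121],
                  index=pd.period_range("2023-01", periods=6, freq="M"))

# Naive forecast: average of the last three observed months
forecast = sales.rolling(window=3).mean().iloc[-1]
print(f"naive forecast for the next month: {forecast:.1f}")
```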

Prescriptive Analysis:

  • Optimization Models: Utilizes linear programming, integer programming, or other optimization techniques to identify the best solutions or strategies.
  • Simulation: Mimics real-world scenarios to evaluate various strategies or decisions and determine optimal outcomes.

Specific Techniques:

  • Monte Carlo Simulation: Models probabilistic outcomes to assess risk and uncertainty.
  • Factor Analysis: Reduces the dimensionality of data by identifying underlying factors or components.
  • Cohort Analysis: Studies specific groups or cohorts over time to understand trends, behaviors, or patterns within these groups.
  • Cluster Analysis: Classifies objects or individuals into homogeneous groups or clusters based on similarities or attributes.
  • Sentiment Analysis: Uses natural language processing and machine learning techniques to determine sentiment, emotions, or opinions from textual data.
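A hedged sketch of cluster analysis with scikit-learn's KMeans; the two-dimensional points are invented so that two homogeneous groups are easy to see.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 3], [2, 2],    # one tight group of points
              [8, 9], [9, 8], [8, 8]])   # a second, clearly separated group

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the same label within each group, e.g. [0 0 0 1 1 1]
```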


Data Analysis Techniques in Research Examples

To provide a clearer understanding of how data analysis techniques are applied in research, let’s consider a hypothetical research study focused on evaluating the impact of online learning platforms on students’ academic performance.

Research Objective:

Determine if students using online learning platforms achieve higher academic performance compared to those relying solely on traditional classroom instruction.

Data Collection:

  • Quantitative Data: Academic scores (grades) of students using online platforms and those using traditional classroom methods.
  • Qualitative Data: Feedback from students regarding their learning experiences, challenges faced, and preferences.

Data Analysis Techniques Applied:

1) Descriptive Analysis:

  • Calculate the mean, median, and mode of academic scores for both groups.
  • Create frequency distributions to represent the distribution of grades in each group.

2) Diagnostic Analysis:

  • Conduct an Analysis of Variance (ANOVA) to determine if there’s a statistically significant difference in academic scores between the two groups.
  • Perform Regression Analysis to assess the relationship between the time spent on online platforms and academic performance.

3) Predictive Analysis:

  • Utilize Time Series Forecasting to predict future academic performance trends based on historical data.
  • Implement Machine Learning algorithms to develop a predictive model that identifies factors contributing to academic success on online platforms.

4) Prescriptive Analysis:

  • Apply Optimization Models to identify the optimal combination of online learning resources (e.g., video lectures, interactive quizzes) that maximize academic performance.
  • Use Simulation Techniques to evaluate different scenarios, such as varying student engagement levels with online resources, to determine the most effective strategies for improving learning outcomes.

5) Specific Techniques:

  • Conduct Factor Analysis on qualitative feedback to identify common themes or factors influencing students’ perceptions and experiences with online learning.
  • Perform Cluster Analysis to segment students based on their engagement levels, preferences, or academic outcomes, enabling targeted interventions or personalized learning strategies.
  • Apply Sentiment Analysis on textual feedback to categorize students’ sentiments as positive, negative, or neutral regarding online learning experiences.

By applying a combination of qualitative and quantitative data analysis techniques, this research example aims to provide comprehensive insights into the effectiveness of online learning platforms.


Data Analysis Techniques in Quantitative Research

Quantitative research involves collecting numerical data to examine relationships, test hypotheses, and make predictions. Various data analysis techniques are employed to interpret and draw conclusions from quantitative data. Here are some key data analysis techniques commonly used in quantitative research:

1) Descriptive Statistics:

  • Description: Descriptive statistics are used to summarize and describe the main aspects of a dataset, such as central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution (skewness, kurtosis).
  • Applications: Summarizing data, identifying patterns, and providing initial insights into the dataset.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. This technique includes hypothesis testing, confidence intervals, t-tests, chi-square tests, analysis of variance (ANOVA), regression analysis, and correlation analysis.
  • Applications: Testing hypotheses, making predictions, and generalizing findings from a sample to a larger population.

3) Regression Analysis:

  • Description: Regression analysis is a statistical technique used to model and examine the relationship between a dependent variable and one or more independent variables. Linear regression, multiple regression, logistic regression, and nonlinear regression are common types of regression analysis .
  • Applications: Predicting outcomes, identifying relationships between variables, and understanding the impact of independent variables on the dependent variable.

4) Correlation Analysis:

  • Description: Correlation analysis is used to measure and assess the strength and direction of the relationship between two or more variables. The Pearson correlation coefficient, Spearman rank correlation coefficient, and Kendall’s tau are commonly used measures of correlation.
  • Applications: Identifying associations between variables and assessing the degree and nature of the relationship.
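The sketch below computes all three correlation measures named above with scipy; the paired observations are hypothetical.

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 7, 8]

print(pearsonr(x, y))    # Pearson r and p-value (strength of linear relationship)
print(spearmanr(x, y))   # Spearman rho (rank-based, monotonic relationship)
print(kendalltau(x, y))  # Kendall's tau (rank concordance)
```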

5) Factor Analysis:

  • Description: Factor analysis is a multivariate statistical technique used to identify and analyze underlying relationships or factors among a set of observed variables. It helps in reducing the dimensionality of data and identifying latent variables or constructs.
  • Applications: Identifying underlying factors or constructs, simplifying data structures, and understanding the underlying relationships among variables.

6) Time Series Analysis:

  • Description: Time series analysis involves analyzing data collected or recorded over a specific period at regular intervals to identify patterns, trends, and seasonality. Techniques such as moving averages, exponential smoothing, autoregressive integrated moving average (ARIMA), and Fourier analysis are used.
  • Applications: Forecasting future trends, analyzing seasonal patterns, and understanding time-dependent relationships in data.

7) ANOVA (Analysis of Variance):

  • Description: Analysis of variance (ANOVA) is a statistical technique used to analyze and compare the means of two or more groups or treatments to determine if they are statistically different from each other. One-way ANOVA, two-way ANOVA, and MANOVA (Multivariate Analysis of Variance) are common types of ANOVA.
  • Applications: Comparing group means, testing hypotheses, and determining the effects of categorical independent variables on a continuous dependent variable.
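A minimal one-way ANOVA sketch with scipy, comparing invented scores from three hypothetical teaching methods.

```python
from scipy.stats import f_oneway

method_a = [82, 85, 88, 90]   # hypothetical scores per group
method_b = [75, 78, 80, 79]
method_c = [88, 92, 90, 94]

f_stat, p_value = f_oneway(method_a, method_b, method_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests the group means differ
```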

8) Chi-Square Tests:

  • Description: Chi-square tests are non-parametric statistical tests used to assess the association between categorical variables in a contingency table. The Chi-square test of independence, goodness-of-fit test, and test of homogeneity are common chi-square tests.
  • Applications: Testing relationships between categorical variables, assessing goodness-of-fit, and evaluating independence.

These quantitative data analysis techniques provide researchers with valuable tools and methods to analyze, interpret, and derive meaningful insights from numerical data. The selection of a specific technique often depends on the research objectives, the nature of the data, and the underlying assumptions of the statistical methods being used.


Data Analysis Methods

Data analysis methods refer to the techniques and procedures used to analyze, interpret, and draw conclusions from data. These methods are essential for transforming raw data into meaningful insights, facilitating decision-making processes, and driving strategies across various fields. Here are some common data analysis methods:

1) Descriptive Statistics:

  • Description: Descriptive statistics summarize and organize data to provide a clear and concise overview of the dataset. Measures such as mean, median, mode, range, variance, and standard deviation are commonly used.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. Techniques such as hypothesis testing, confidence intervals, and regression analysis are used.

3) Exploratory Data Analysis (EDA):

  • Description: EDA techniques involve visually exploring and analyzing data to discover patterns, relationships, anomalies, and insights. Methods such as scatter plots, histograms, box plots, and correlation matrices are utilized.
  • Applications: Identifying trends, patterns, outliers, and relationships within the dataset.

4) Predictive Analytics:

  • Description: Predictive analytics use statistical algorithms and machine learning techniques to analyze historical data and make predictions about future events or outcomes. Techniques such as regression analysis, time series forecasting, and machine learning algorithms (e.g., decision trees, random forests, neural networks) are employed.
  • Applications: Forecasting future trends, predicting outcomes, and identifying potential risks or opportunities.

5) Prescriptive Analytics:

  • Description: Prescriptive analytics involve analyzing data to recommend actions or strategies that optimize specific objectives or outcomes. Optimization techniques, simulation models, and decision-making algorithms are utilized.
  • Applications: Recommending optimal strategies, decision-making support, and resource allocation.

6) Qualitative Data Analysis:

  • Description: Qualitative data analysis involves analyzing non-numerical data, such as text, images, videos, or audio, to identify themes, patterns, and insights. Methods such as content analysis, thematic analysis, and narrative analysis are used.
  • Applications: Understanding human behavior, attitudes, perceptions, and experiences.

7) Big Data Analytics:

  • Description: Big data analytics methods are designed to analyze large volumes of structured and unstructured data to extract valuable insights. Technologies such as Hadoop, Spark, and NoSQL databases are used to process and analyze big data.
  • Applications: Analyzing large datasets, identifying trends, patterns, and insights from big data sources.

8) Text Analytics:

  • Description: Text analytics methods involve analyzing textual data, such as customer reviews, social media posts, emails, and documents, to extract meaningful information and insights. Techniques such as sentiment analysis, text mining, and natural language processing (NLP) are used.
  • Applications: Analyzing customer feedback, monitoring brand reputation, and extracting insights from textual data sources.

These data analysis methods are instrumental in transforming data into actionable insights, informing decision-making processes, and driving organizational success across various sectors, including business, healthcare, finance, marketing, and research. The selection of a specific method often depends on the nature of the data, the research objectives, and the analytical requirements of the project or organization.


Data Analysis Tools

Data analysis tools are essential instruments that facilitate the process of examining, cleaning, transforming, and modeling data to uncover useful information, make informed decisions, and drive strategies. Here are some prominent data analysis tools widely used across various industries:

1) Microsoft Excel:

  • Description: A spreadsheet software that offers basic to advanced data analysis features, including pivot tables, data visualization tools, and statistical functions.
  • Applications: Data cleaning, basic statistical analysis, visualization, and reporting.

2) R Programming Language:

  • Description: An open-source programming language specifically designed for statistical computing and data visualization.
  • Applications: Advanced statistical analysis, data manipulation, visualization, and machine learning.

3) Python (with Libraries like Pandas, NumPy, Matplotlib, and Seaborn):

  • Description: A versatile programming language with libraries that support data manipulation, analysis, and visualization.
  • Applications: Data cleaning, statistical analysis, machine learning, and data visualization.

4) SPSS (Statistical Package for the Social Sciences):

  • Description: A comprehensive statistical software suite used for data analysis, data mining, and predictive analytics.
  • Applications: Descriptive statistics, hypothesis testing, regression analysis, and advanced analytics.

5) SAS (Statistical Analysis System):

  • Description: A software suite used for advanced analytics, multivariate analysis, and predictive modeling.
  • Applications: Data management, statistical analysis, predictive modeling, and business intelligence.

6) Tableau:

  • Description: A data visualization tool that allows users to create interactive and shareable dashboards and reports.
  • Applications: Data visualization , business intelligence , and interactive dashboard creation.

7) Power BI:

  • Description: A business analytics tool developed by Microsoft that provides interactive visualizations and business intelligence capabilities.
  • Applications: Data visualization, business intelligence, reporting, and dashboard creation.

8) SQL (Structured Query Language) Databases (e.g., MySQL, PostgreSQL, Microsoft SQL Server):

  • Description: Database management systems that support data storage, retrieval, and manipulation using SQL queries.
  • Applications: Data retrieval, data cleaning, data transformation, and database management.

9) Apache Spark:

  • Description: A fast and general-purpose distributed computing system designed for big data processing and analytics.
  • Applications: Big data processing, machine learning, data streaming, and real-time analytics.

10) IBM SPSS Modeler:

  • Description: A data mining software application used for building predictive models and conducting advanced analytics.
  • Applications: Predictive modeling, data mining, statistical analysis, and decision optimization.

These tools serve various purposes and cater to different data analysis needs, from basic statistical analysis and data visualization to advanced analytics, machine learning, and big data processing. The choice of a specific tool often depends on the nature of the data, the complexity of the analysis, and the specific requirements of the project or organization.


Importance of Data Analysis in Research

The importance of data analysis in research cannot be overstated; it serves as the backbone of any scientific investigation or study. Here are several key reasons why data analysis is crucial in the research process:

  • Data analysis helps ensure that the results obtained are valid and reliable. By systematically examining the data, researchers can identify any inconsistencies or anomalies that may affect the credibility of the findings.
  • Effective data analysis provides researchers with the necessary information to make informed decisions. By interpreting the collected data, researchers can draw conclusions, make predictions, or formulate recommendations based on evidence rather than intuition or guesswork.
  • Data analysis allows researchers to identify patterns, trends, and relationships within the data. This can lead to a deeper understanding of the research topic, enabling researchers to uncover insights that may not be immediately apparent.
  • In empirical research, data analysis plays a critical role in testing hypotheses. Researchers collect data to either support or refute their hypotheses, and data analysis provides the tools and techniques to evaluate these hypotheses rigorously.
  • Transparent and well-executed data analysis enhances the credibility of research findings. By clearly documenting the data analysis methods and procedures, researchers allow others to replicate the study, thereby contributing to the reproducibility of research findings.
  • In fields such as business or healthcare, data analysis helps organizations allocate resources more efficiently. By analyzing data on consumer behavior, market trends, or patient outcomes, organizations can make strategic decisions about resource allocation, budgeting, and planning.
  • In public policy and social sciences, data analysis is instrumental in developing and evaluating policies and interventions. By analyzing data on social, economic, or environmental factors, policymakers can assess the effectiveness of existing policies and inform the development of new ones.
  • Data analysis allows for continuous improvement in research methods and practices. By analyzing past research projects, identifying areas for improvement, and implementing changes based on data-driven insights, researchers can refine their approaches and enhance the quality of future research endeavors.

However, it is important to remember that mastering these techniques requires practice and continuous learning. That’s why we highly recommend the Data Analytics Course by Physics Wallah . Not only does it cover all the fundamentals of data analysis, but it also provides hands-on experience with various tools such as Excel, Python, and Tableau. Plus, if you use the “ READER ” coupon code at checkout, you can get a special discount on the course.


Data Analysis Techniques in Research FAQs

What are the 5 techniques for data analysis?

The five techniques for data analysis include: descriptive analysis, diagnostic analysis, predictive analysis, prescriptive analysis, and qualitative analysis.

What are techniques of data analysis in research?

Techniques of data analysis in research encompass both qualitative and quantitative methods. These techniques involve processes like summarizing raw data, investigating causes of events, forecasting future outcomes, offering recommendations based on predictions, and examining non-numerical data to understand concepts or experiences.

What are the 3 methods of data analysis?

The three primary methods of data analysis are: qualitative analysis, quantitative analysis, and mixed-methods analysis.

What are the four types of data analysis techniques?

The four types of data analysis techniques are: descriptive analysis, diagnostic analysis, predictive analysis, and prescriptive analysis.


  • Open access
  • Published: 27 February 2024

Physician resilience and perceived quality of care among medical doctors with training in psychosomatic medicine during the COVID-19 pandemic: a quantitative and qualitative analysis

  • Christian Fazekas 1 ,
  • Maximilian Zieser 2 ,
  • Barbara Hanfstingl 3 ,
  • Janika Saretzki 4 ,
  • Evelyn Kunschitz 5 , 6 ,
  • Luise Zieser-Stelzhammer 7 ,
  • Dennis Linder 1 , 8 &
  • Franziska Matzer 1  

BMC Health Services Research volume  24 , Article number:  249 ( 2024 ) Cite this article


At an individual level, physician resilience protects against burnout and against its known negative effects on individual physicians, patient safety, and quality of care. However, it remains uncertain whether physician resilience also correlates with maintaining a high level of healthcare quality during crises such as a pandemic. This study aimed to investigate whether higher resilience among physicians, who had received training in resilience-related competences in the past, would be associated with higher quality of care delivered during the COVID-19 pandemic.

This study enrolled physicians working in family medicine, psychiatry, internal medicine, and other medical specialties, who had obtained at least one of three consecutive diplomas in psychosomatic medicine in the past. Participants completed a quantitative and qualitative anonymous online survey. Resilience was measured using the Connor-Davidson Resilience Scale, and healthcare quality was assessed through single-item quality indicators, including perceived quality of care, professional autonomy, adequate time for patient care, and job satisfaction.

The study included 229 physicians (70 males/159 females) with additional training in psychosomatic medicine, working in family medicine (42.5%), psychiatry (28.1%), internal medicine (7.0%), or other medical specialties (22.4%). Participants represented four intensity levels of training background (level 1 to level 4: 9.2%, 32.3%, 46.3%, and 12.2% of participants). Training background in psychosomatic medicine was positively associated with resilience (B = 0.08, SE = 0.04, p  <.05). Resilience and training background independently predicted perceived quality of care, even after controlling for variables such as own health concerns, involvement in the treatment of COVID-19 patients, financial strain, percentage of working hours spent on patient care, age, and gender (resilience: B = 0.33, SE = 0.12, p  <.01; training background: B = 0.17, SE = 0.07, p  <.05). Both resilience and training background predicted job satisfaction (resilience: B = 0.42, SE = 0.12, p  <.001; training background: B = 0.18, SE = 0.07, p  <.05), while resilience alone predicted professional autonomy (B = 0.27, SE = 0.12, p  <.05). In response to an open question about their resources, resilient physicians more frequently reported applying conscious resilient skills/emotion regulation ( p  <.05) and personal coping strategies ( p  <.01) compared to less resilient medical doctors.

Physician resilience appears to play a significant role in the perceived quality of patient care, professional autonomy, and job satisfaction during healthcare crises.

Peer Review reports

Introduction

Resilience is the ability of an individual to respond to stress and adversity in an adaptive way such that goals are achieved at minimal psychological and physical costs and mental well-being rapidly “bounces back” [ 1 ]. As resilience can counteract negative effects of workplace stress, the concept of physician resilience has emerged in response to high prevalence rates of physician burnout and physician stress [ 2 , 3 , 4 ], which seem to have substantially increased during the COVID-19 pandemic [ 5 , 6 , 7 ].

Negative effects of burnout include personal consequences, such as depression, substance use, disrupted relationships and suicide; burnout impacts on professional attitude and performance, with subsequent lower quality of care, higher rates of medical errors, decreased patient satisfaction, decreased productivity, and increased clinician turnover [ 8 , 9 , 10 , 11 , 12 ]. A large systematic review and meta-analysis including 170 observational studies reported doubled odds of patient safety incidents and almost fourfold odds of decreased job satisfaction associated with physician burnout, in addition to other aspects of reduced quality of care and increased career disengagement [ 13 ].

The COVID-19 pandemic imposed high stress levels on frontline healthcare workers accompanied by reduced professional quality of life and increased risk for the onset of depression and anxiety disorders [ 14 , 15 ]. Consequently, building psychological resilience emerged as an essential focus in response to these challenges, as it may serve as a protective factor against the risk of developing burnout and its negative consequences on individuals and the health care system [ 14 , 16 ]. Higher personal resilience was found to be associated with lower stress levels and lower levels of anxiety, COVID-19 related fear, depression, fatigue, and sleep disturbances during the COVID-19 pandemic [ 17 , 18 , 19 ]. Besides the well-known positive effects of resilience on physicians’ well-being, there is some evidence suggesting that it could also protect against detrimental effects of a health care crisis by maintaining a high level of health care quality.

Health care quality is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes [ 20 ]. One study reported that physician resilience contributed to maintaining health care quality during a financial crisis in the health care system in Portugal [ 21 ]. Another study reported an association between resilience among nurses and perceived quality of care during the COVID-19 pandemic [ 22 ]. A further study on health care workers demonstrates the influence of resilience in reducing burnout during the pandemic [ 23 ]. However, studies investigating specifically an influence of physician resilience on perceived quality of delivered care during this pandemic seem to be widely lacking.

The main aim of this study was (1) to evaluate an assumed association of physician resilience and training background in psychosomatic medicine with perceived quality of care during the COVID-19 pandemic while controlling for potential confounding variables (age, gender, number of patients, treatment of COVID-19 patients, own health concerns, financial strains). We also aimed at (2) investigating the individual and combined contributions of resilience and training background to professional autonomy, sufficient time for patient care and job satisfaction as secondary dependent variables. Participants in the study were enrolled among medical doctors who had participated in a graded resilience-related training, which is an integral part of a consecutive three tier continuing medical education (CME) program in psychosomatic medicine in Austria [ 24 ]. The participants’ selection allowed us to look for (3) a possible association of the consecutive training levels with reported physician resilience. Finally, the qualitative part of the study aimed at (4) exploring self-reported challenges and resources during the COVID-19 pandemic and (5) analyzing differences according to high and low resilience subgroups and gender.

Sample and procedure

In this study, we aimed to evaluate physician resilience, training background, and work-related experiences during the COVID-19 pandemic. Medical specialists with an additional certified training background in psychosomatic medicine were chosen as the study cohort because these long-term CME programs also include resilience building and preventative measures and techniques. In Austria, there are three consecutive levels of long-term training in psychosomatic medicine, with durations of 1 year (PSY-1, Diploma for Psychosocial Medicine), an additional 2 years (PSY-2, Diploma for Psychosomatic Medicine), and a further 3 years (PSY-3, Diploma for Psychotherapeutic Medicine). Regarding personal development, the training involves supervision, Balint group work, and participation in self-awareness groups. It also promotes self-management, communication skills, and psychotherapeutic techniques; the third training level leads to full psychotherapeutic competence [ 24 ]. For the purposes of the present study, a fourth level termed PSY-4 was defined for participants who reported an additional certification in the psychosomatic and psychotherapeutic field beyond PSY-3.

The study was approved by the ethics committee of the Medical University of Graz (32-534ex 19/20). Informed consent was obtained from all participants before data collection. The study was performed via an online survey with the intention of reaching all 2807 Austrian medical doctors with an additional training background in psychosomatic medicine. A response rate of about 10% was assumed. As inclusion criteria, participants had to be certified in their specific medical field and had to be currently working in their profession. To reach the target group, physicians received an invitation to participate in the study via personal email or within a newsletter from their corresponding Federal Association of Medical Doctors and the Austrian Society for Psychosomatics and Psychotherapeutic Medicine (ÖGPPM). In this way, they also received a token created via the online survey tool LimeSurvey to guarantee that only members of the target group could participate. Anonymized quantitative and qualitative data were collected between July and September 2020; data collection therefore took place shortly after the first COVID-19 lockdown period in Austria (March/April 2020).

Resilience

Trait resilience was assessed using the German 10-item version of the Connor-Davidson Resilience Scale (CD-RISC) [ 25 , 26 ]. The German translation of this unidimensional instrument, as applied in this study, has shown very good psychometric properties with high internal consistency (Cronbach’s α = 0.84). The statements of the self-report assessment are rated on a 5-point Likert scale (example item: “Even if there are obstacles, I believe I can achieve my goals”), yielding a total sum score ranging from 10 to 50. For analysis, item values were averaged, resulting in a scale ranging from 1 to 5, with higher values indicating higher resilience.
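
To make the scoring concrete, the following sketch computes the sum score, the averaged 1–5 scale, and the shift to the conventional 0–40 CD-RISC-10 metric used for norm comparisons. The data are simulated and the column names (cd1–cd10) are illustrative, not taken from the study’s survey.

```r
# Illustrative CD-RISC-10 scoring; simulated data, hypothetical column names.
set.seed(1)
survey <- as.data.frame(matrix(sample(1:5, 10 * 20, replace = TRUE),
                               nrow = 20,
                               dimnames = list(NULL, paste0("cd", 1:10))))

survey$cd_sum  <- rowSums(survey[, paste0("cd", 1:10)])    # sum score, 10-50
survey$cd_mean <- rowMeans(survey[, paste0("cd", 1:10)])   # averaged scale, 1-5

# Shifting each 1-5 item to 0-4 maps the 10-50 sum onto the conventional
# 0-40 CD-RISC-10 metric; e.g. a mean of 4.1 corresponds to (4.1 - 1) * 10 = 31.
survey$cd_0to40 <- survey$cd_sum - 10

head(survey[, c("cd_sum", "cd_mean", "cd_0to40")])
```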

Perceived quality of care, professional autonomy, time for patient care, and job satisfaction

We applied four one-item outcome measures that represent quality indicators for health care systems and health care quality and that have been used and validated previously [ 27 , 28 , 29 , 30 ]. Three of these outcome variables encompassed statements about perceived quality of care (“It is possible to provide high quality care to all of my patients”), adequate time for patient care (“I have adequate time to spend with my patients during a typical patient visit”), and professional autonomy (“I have the freedom to make clinical decisions that meet my patients’ needs”). Items were scored on a five-point scale ranging from 1 = ‘strongly disagree’ to 5 = ‘strongly agree’.

Job satisfaction was measured by the question “On the whole, how satisfied were you/are you with your job?” This item was scored on a five-point scale ranging from 1 = ‘very dissatisfied’ to 5 = ‘very satisfied’ [ 27 , 31 ].

Answers to each of these four outcome measures were given in a tabular format. The questions were administered only once, after the first lockdown; however, they were answered not only for the present but also retrospectively for two periods of time: the time before the COVID-19 pandemic and the time during the initial lockdown in Austria.

Sociodemographic and professional characteristics

In addition to the items of the CD-RISC and the four one-item outcome measures mentioned above, further items on sociodemographic data, medical specialty, training background in psychosomatic medicine, and work-related data were developed for this study. An English-language version of the newly developed parts of the survey is provided as supplemental material in the Additional File 1 . We asked participants to indicate the percentage of total work time typically spent on direct patient care. In addition, we inquired about the average number of hours worked during three specific time periods (before the COVID-19 pandemic, during the initial lockdown from mid-March to late April 2020, and at the time of questionnaire completion). We additionally asked whether COVID-19 patients had been treated during the lockdown (yes/no), whether there were concerns about one’s own health during the lockdown (4-point rating scale ranging from “yes” to “no”), and whether there was financial strain in the current situation (7-point rating scale ranging from “0” = no agreement to “6” = full agreement).

Qualitative questions

Challenges during the crisis were assessed by the open question “Please indicate the three most difficult job-related challenges during the COVID-19 crisis”. Resources were assessed by the open question “Please indicate three things that helped you most in your professional practice during the COVID-19 crisis”.

See Additional File 1 : Survey.

Statistical analysis

All collected data were checked before processing. Data collection was carried out with the online survey tool LimeSurvey, hosted by the Medical University of Graz. We subsequently stored, processed, and analyzed the data on the server park of the Medical University of Graz, which is reliably protected from external access.

As descriptive statistics, we first calculated mean values of all variables of interest across participants’ levels of training in psychosomatic medicine. As there appeared to be a distinct positive linear trend in the dependent variables across the four training groups, we subsequently treated training level as a continuous variable in the analysis. These descriptive data are presented as supplemental material in the Additional File. Moreover, we present simple correlations between our variables of interest.

For our main results, we used linear multiple regressions on the four dependent variables quality of care, professional autonomy, adequate time for patient care (hereinafter time for patients), and job satisfaction. The dependent variables entered the models as participant means of the three measurements; for example, quality of care entered each regression as a participant’s mean response for the periods before, during, and after the COVID-19 lockdown. In addition, we also conducted linear regressions with resilience as the dependent variable. As one of the covariates, we used concerns about one’s health due to COVID-19 during the lockdown (health concerns). Previous studies showed that fear of infection can influence the willingness to provide patient care [ 32 ]. Health concerns may thus also play an important role in the relationships assessed here. Further, we controlled for five additional variables that may act as confounders: involvement in the treatment of COVID-19 patients, financial strain as perceived at the time of completing the survey, the percentage of working time spent on patient care, age (as a continuous variable, measured in 6 categories from under 30 to over 69, see Table  1 ), and gender.

To check the conditions for applying conventional linear models, we tested for appropriate model specification using the Ramsey Regression Equation Specification Error (RESET) test, for normality using the Shapiro-Wilk test, and for homoscedasticity using the Breusch-Pagan test. We found no violations of the linearity or homoscedasticity assumptions (all p  >.05). Concerning normality, the Shapiro-Wilk test was significant for all four models, indicating non-normal distributions of residuals. However, given the sample size of 214 participants, we are confident that conventional linear regressions are appropriate as opposed to alternative methods [ 33 ].
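
As an illustration of this modeling step, the sketch below builds the participant mean of the three quality-of-care ratings, fits one of the described regressions, and runs the three diagnostic tests; resettest() and bptest() come from the lmtest package, while shapiro.test() is base R. All data are simulated and all variable names are hypothetical stand-ins for the study variables.

```r
library(lmtest)  # resettest() (Ramsey RESET) and bptest() (Breusch-Pagan)

set.seed(1)
n <- 214
# Simulated stand-in for the study data; every variable name is hypothetical.
d <- data.frame(
  resilience   = rnorm(n, mean = 4.1, sd = 0.4),
  training     = sample(1:4, n, replace = TRUE),      # PSY-1 .. PSY-4
  health_conc  = sample(1:4, n, replace = TRUE),
  covid_care   = rbinom(n, 1, 0.27),
  fin_strain   = sample(0:6, n, replace = TRUE),
  pct_pat_care = runif(n, 0, 100),
  age_cat      = sample(1:6, n, replace = TRUE),
  gender       = factor(sample(c("female", "male"), n, replace = TRUE)),
  qoc_before   = sample(1:5, n, replace = TRUE),
  qoc_lockdown = sample(1:5, n, replace = TRUE),
  qoc_now      = sample(1:5, n, replace = TRUE)
)

# Dependent variable: each participant's mean over the three time periods.
d$qoc_mean <- rowMeans(d[, c("qoc_before", "qoc_lockdown", "qoc_now")])

m <- lm(qoc_mean ~ resilience + training + health_conc + covid_care +
          fin_strain + pct_pat_care + age_cat + gender, data = d)
summary(m)

# Assumption checks as described in the text:
resettest(m)                  # model specification (linearity)
shapiro.test(residuals(m))    # normality of residuals
bptest(m)                     # homoscedasticity
```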

By design, the previously mentioned regressions on the participant means are not able to capture differences between the three time periods. We thus supplemented the analyses using repeated measures ANOVAs to account for the multiple responses regarding the dependent variables, once concerning the time of completing the questionnaire, once concerning the time of the lockdown, and once concerning the time before the pandemic.

Importantly, results of these rANOVAs can be differentiated into “between-subject” effects, which represent the effect of our independent variables on the mean of the three measurements per dependent variable exactly as captured by the regressions described above, and “within-subject” effects. In our case, the within-subject effects represent differences between the time periods which the three measurements represent. We also tested for interaction effects between the time period and our main independent variables. Results from all repeated measures ANOVAs are presented in the Additional File.

The aforementioned statistical analyses were conducted using R version 4.2.2 and the packages lme4 version 1.1-31 [ 34 ] as well as lmerTest version 3.1-3 [ 35 ] to compute mixed models and rANOVA parameters, respectively.
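
The paper does not spell out the model formulas, but a plausible sketch of the within/between decomposition with lme4/lmerTest looks like the following: the three measurements per participant are reshaped to long format and modeled with a random intercept per participant, so that anova() yields between-subject, within-subject (period), and interaction F-tests. The sketch continues from the simulated data frame d above; its structure is an assumption, not the authors’ exact specification.

```r
library(lme4)      # mixed models: lmer()
library(lmerTest)  # F-tests via anova() on lmer fits

# Continues from the simulated data frame 'd' of the previous sketch.
d$id <- seq_len(nrow(d))

# Reshape the three quality-of-care ratings per participant to long format.
long <- reshape(d, direction = "long",
                varying = c("qoc_before", "qoc_lockdown", "qoc_now"),
                v.names = "qoc", timevar = "period",
                times   = c("before", "lockdown", "now"), idvar = "id")
long$period <- factor(long$period, levels = c("before", "lockdown", "now"))

# Random intercept per participant; 'period' is the within-subject factor
# and interacts with the two main predictors.
mm <- lmer(qoc ~ (resilience + training) * period + health_conc + covid_care +
             fin_strain + pct_pat_care + age_cat + gender + (1 | id),
           data = long)

anova(mm)  # between-subject, within-subject (period), and interaction effects
```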

Qualitative data analysis (QDA) and an additional frequency analysis were conducted using the statistical software program MAXQDA 2022. One of the authors (FM) developed the initial coding framework including code definitions and examples, employing a thematic coding approach [ 36 ]; thus, the categories were drawn inductively out of the answers. After critical discussion within the study group and adaption of the initial coding framework, a second rater (JS) independently coded the texts based on the existing coding framework.

After the second coding, interrater reliability was calculated using Cohen’s Kappa statistic and reached 0.95 (range 0.83–1.0), with a Kappa of more than 0.8 representing an almost perfect match between two raters [ 37 ]. Finally, any discrepancies in coding (texts with a match below a Kappa of 0.9 between the two raters) were resolved through critical discussion until consensus was reached. For analyzing differences in self-reported challenges and resources, physician resilience (high vs. low resilience) and gender (male/female) were used as independent variables and the categories of the QDA as dependent variables. Groups with high vs. low resilience were determined by a median split of the CD-RISC. Differences between the categories were tested by Chi-square tests using IBM SPSS 26; a p -value of less than 0.05 was considered significant.
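
These interrater and group comparisons were computed in MAXQDA and IBM SPSS 26; for illustration only, the equivalent statistics can be reproduced in R as sketched below, using kappa2() from the irr package for two-rater Cohen’s kappa and base chisq.test() for the group comparison. The codings, CD-RISC scores, and mention probabilities are all simulated.

```r
library(irr)  # kappa2() for two-rater Cohen's kappa

set.seed(1)
# Simulated codings of 100 text segments by two raters (illustrative only).
codes  <- c("workload", "uncertainty", "coping", "social support")
rater1 <- sample(codes, 100, replace = TRUE)
rater2 <- rater1
disagree <- sample(100, 5)  # inject a few disagreements
rater2[disagree] <- sample(codes, 5, replace = TRUE)

kappa2(data.frame(rater1, rater2))  # Cohen's kappa for the two raters

# Median split into high/low resilience, then a chi-square test on whether
# a given QDA category was mentioned (simulated mention probabilities).
cd_sum    <- sample(29:50, 100, replace = TRUE)
res_group <- ifelse(cd_sum > median(cd_sum), "high", "low")
mentioned <- rbinom(100, 1, prob = ifelse(res_group == "high", 0.5, 0.3))

chisq.test(table(res_group, mentioned))
```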

Sample description

The final sample consisted of N  = 229 respondents (response rate: 8.2%), with a higher proportion of female participants (69.4%). Physicians were working in the fields of family medicine (42.5%), psychiatry (28.1%), internal medicine (7.0%), and other medical specialties (22.4%), with most of them working outside hospitals (79.5%) and spending most of their working time on direct patient care (72.1%). Around 27% had treated COVID-19 patients and around 40% reported concerns about their own health. Mean working hours were lower during the initial lockdown but had returned to the initial level at the time of questionnaire completion. The perceived financial strain of the sample was rather low (mean = 2 on a scale from 0 to 6). Concerning resilience, we found a mean of 4.1, which corresponds to a value of 31 on a scale of 0 to 40 and lies within the typical range of an adult population [ 38 ].

Details of all sociodemographic and work-related sample characteristics are presented in Table  1 .

Quantitative results

Correlations of training level and resilience with outcome variables

For a first evaluation of the investigated associations, simple correlation coefficients were calculated using the participant means of the outcome measures over the three time periods. Correlations between training levels in psychosomatic medicine, resilience, all outcome measures (perceived quality of care, professional autonomy, time for patients, job satisfaction), and control variables (own health concerns, involvement in the treatment of COVID-19 patients, financial strain, percentage of working hours spent on patient care, age, and gender) are shown in Table  2 . Correlation coefficients indicate strong, positive correlations (0.47 to 0.68) between the four outcome measures and positive correlations of the outcomes with training levels as well as resilience, with the exception of resilience and time for patients. Moreover, we found a positive association between psychosomatic training and resilience, and a negative association between resilience and health concerns. With regard to control variables, we found no association of the outcome variables with gender or with the proportion of work time spent on patient care.

Resilience and training levels as predictors of perceived quality of care and further quality indicators

As our main outcomes of interest, we used the dependent variables perceived quality of care, professional autonomy, time for patients, and job satisfaction, each measured three times, reflecting participants’ assessments before the COVID-19 pandemic, during the first lockdown, and at the time of completing the survey. In the regressions presented here, we averaged the three time periods for each participant.

As the main predictors, we used resilience and training level in psychosomatic medicine, both entered as continuous variables. As further covariates, we used concerns about one’s health due to COVID-19 during the lockdown (health concerns), involvement in the treatment of COVID-19 patients, financial strain as perceived at the time of completing the survey, the percentage of working time spent on patient care, age (as a continuous variable, measured in 6 categories from under 30 to over 69, see Table  1 ), and gender.

Results from the regressions are presented in Table  3 . We found that resilience and psychosomatic training both significantly contribute to explaining the perceived quality of care provided to patients. Moreover, we found that resilience and psychosomatic training are positively associated with job satisfaction. Furthermore, resilience is also significantly associated with professional autonomy.

In order to assess the relevance of the investigated time periods for the reported results, we conducted rANOVAs as additional analyses. While our main findings (i.e., the between-subject effects described above) remained unchanged, there were significant differences between the three periods (i.e., within-subject effects), as all four dependent variables were perceived as significantly lower during the lockdown. Moreover, we found interactions between time period and psychosomatic training for professional autonomy and for time for patients. These analyses are presented in detail as supplemental material in the Additional File 2 .

See Additional File 2 : Training levels in psychosomatic medicine and differences in time periods.

Association of training levels with reported physician resilience

As shown in the correlation table (Table  2 ), training levels in psychosomatic medicine and resilience have a small but significant positive correlation. To test whether this relationship holds when control variables are included, we conducted regressions with resilience as the dependent variable, once without and once with control variables (see Table  4 ). We found that psychosomatic training remained positively associated with resilience when the set of control variables was included.

Regarding quality of care and the other outcome variables, resilience appeared to act both as a predictor and as a weak mediator, capturing a small proportion of the effect of training on the dependent variables in addition to its independent effect. Indeed, regressions with resilience removed from the models (not tabulated) show slightly larger effects of training on the dependent variables. However, this potential mediation effect was not significant (all p  >.05) when tested using structural equation models with maximum likelihood estimation (results not tabulated). Overall, while there is a significant association between resilience and training levels in psychosomatic medicine, both variables make much stronger independent contributions to explaining the four outcome measures, as presented in Table  3 .
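
The paper does not name the SEM software used for this mediation test; the sketch below shows one plausible specification using the R package lavaan with maximum likelihood estimation. Variable names and the simulated effect sizes are illustrative only, not the study’s estimates.

```r
library(lavaan)

set.seed(1)
n <- 214
# Simulated data with illustrative (not estimated) effect sizes.
training   <- sample(1:4, n, replace = TRUE)
resilience <- 3.8 + 0.05 * training + rnorm(n, sd = 0.4)
qoc        <- 2.0 + 0.20 * training + 0.30 * resilience + rnorm(n, sd = 0.8)
d_sem      <- data.frame(training, resilience, qoc)

model <- "
  resilience ~ a * training                    # path a: training -> resilience
  qoc        ~ b * resilience + c * training   # path b and direct effect c
  indirect := a * b                            # mediated effect via resilience
  total    := c + a * b
"

fit <- sem(model, data = d_sem, estimator = "ML")
summary(fit, standardized = TRUE)  # tests the indirect (mediation) effect
```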

Qualitative results

Most respondents answered the open questions concerning challenges during the COVID-19 pandemic ( n  = 194, 84.7%) and their most important resources ( n  = 192, 83.8%). Results of QDA regarding physician-reported challenges and their most important resources during the COVID-19 crisis are presented in Tables  5 and 6 .

Self-reported challenges and resources of physicians with high and low resilience

Comparisons of challenges and resources regarding resilience levels were based on a median split of the sample (Median = 41.0, Range = 29–50). Participants with a resilience sum score of 42 or more were considered to have “high resilience” ( n  = 45) whereas those with a value of 41 or less were considered to have “low resilience” ( n  = 55).

Physicians with low vs. high resilience were compared regarding their reported challenges and resources. Chi-square tests revealed that, within the mentioned challenges, there were no statistically significant differences between physicians with high or low resilience.

As far as the most helpful resources are concerned, physicians with high vs. low resilience differed in the categories resilience/emotion regulation (Chi-square = 5.909, p  =.015) and personal coping strategies (Chi-square = 6.747, p  =.009): physicians with high resilience scores more often reported employing strategies of emotion regulation and having personal coping strategies (e.g., hobbies and sports). Figures  1 and 2 show the QDA categories of self-reported difficulties and resources, comparing physicians with high vs. low resilience values.

Figure 1. QDA frequency analysis of self-reported challenges according to resilience level

Figure 2. QDA frequency analysis of self-reported resources according to resilience level. * p  <.05, ** p  <.01

Gender differences regarding self-reported challenges and resources

Gender differences regarding self-reported challenges and resources were calculated using Chi-square tests and revealed that males and females experienced similar challenges and resources during the COVID-19 pandemic. No significant differences were found when comparing the number of male and female physicians reporting on specific categories ( p  >.05).

Promoting resilience is recognized as a key strategy in medicine to counteract burnout and its known negative effects on health care providers, patients, and health care quality [ 3 , 8 , 39 ]. Recently, it has been shown that resilience mitigated burnout symptoms triggered by the COVID-19 pandemic in health care workers [ 23 ]. To our knowledge, however, data about the association of physician resilience with the perceived quality of delivered care during a health care crisis have been scarce. In the present study, we investigated this assumed association in a specific cohort of medical doctors during the COVID-19 pandemic [ 6 , 7 ]. More specifically, we examined whether and, if so, to what extent resilience and postgraduate training in psychosomatic medicine in Austria were related to perceived quality of care and to further indicators of health care quality, i.e., professional autonomy, adequate time for patient care, and job satisfaction.

We found that resilience and training level in psychosomatic medicine each separately predicted the perceived quality of delivered care when controlling for age, gender, number of patients, treatment of COVID-19 patients, own health concerns, and financial strains, as well as for each other. Resilience also predicted professional autonomy and job satisfaction, but not adequate time for patients, during the COVID-19 pandemic. These results are in line with the sparse literature suggesting that physician resilience is associated with maintaining high quality of care during times of crisis [ 21 , 22 ]. Similarly, a recent study among oncology professionals reported a protective role of resilience for well-being and job performance during the COVID-19 pandemic [ 40 ].

In our study, resilience predicted three of four indicators of health care quality, with adequate time for patients being the exception, a result likely influenced by “systemic” conditions due to the COVID-19 pandemic. While the other indicators of health care quality (quality of care, professional autonomy, and job satisfaction) may be influenced by internal factors such as resilience, adequate time for patient care may depend more on external, organizational, and institutional factors. These results imply that personal characteristics of individual physicians, such as resilience, promote competence in dealing with challenging professional tasks in biopsychosocial medicine, particularly during pronounced health care crises.

The findings of the qualitative part of this study substantiate the main quantitative findings. Physicians with high resilience scores more often reported employing strategies of emotion regulation when asked to indicate the three things that helped them most in their professional practice during the COVID-19 crisis. The following verbatim quotes answering this question give an insight into physicians’ resources categorized as “resilience and emotion regulation”: “my adaptability to unfamiliar circumstances”; “I become creative and alert in critical situations and deal with patients in a calming way”; “my calm manner”; “my basically optimistic attitude”; “my ability of self-reflection”. Physicians with high resilience scores also more frequently reported relying on personal coping strategies such as cultivating hobbies and practicing sports. Examples of this category, “personal coping strategies”, are: “almost daily short hikes”; “regular movement with my children and dog”; “mindfulness meditation”; “good and healthy food”; “regeneration in the evening with good Netflix series”; “manual work at home”.

Moreover, the amount of training in psychosomatic medicine was also positively associated with perceived quality of care and job satisfaction. Psychosomatic training may thus foster physicians’ positive coping with challenging professional situations. Similar positive effects of long-term training in psychosomatic medicine in Austria have been reported before, including increased empathy for patients, the provision of adequate time for patient encounters, increased interdisciplinary cooperation with mental health professionals, and improved job satisfaction after the training compared to before [ 41 , 42 , 43 ].

Alongside the independent contributions of resilience and training background to quality of care, we found that more training was also associated with higher resilience levels. Although we cannot postulate causal relationships, we tested a potential mediation effect of training background on quality of care through increased resilience; i.e., training in psychosomatic medicine might have increased physicians’ capacity to deal with and recover from stressful events, which in turn promoted quality of care. Although we did not find a significant effect in this respect, a potential mediation of the relationship between postgraduate training background and quality of care through resilience may deserve further attention, particularly in programs explicitly promoting resilience.

Successful interventions to promote individual physician resilience most likely require a medium- to long-term perspective. Although the Austrian PSY curricula do not explicitly target an increase in resilience, they include several components that may help to do so, such as supervision, self-reflection, and self-regulation skills [ 24 , 43 ]. Taking into account the long duration of psychosomatic training and the slow rise in mean resilience scores over training levels (means from PSY-1 to PSY-4: M = 4.02, 4.04, 4.13, and 4.23, respectively), we assume that it may indeed be difficult to foster resilience through short-term trainings, as is often expected [ 44 , 45 ]. Resilience, as a complex psychological construct with many influencing factors, is strongly connected to personal growth and traits, which are generally slow-changing [ 46 ], and best practice guidance on how to improve resilience among physicians is still lacking [ 47 ]. The results of this study could help inspire future designs of resilience trainings. Supervision, self-reflection, and self-regulation skills, including support for the successful management of difficult patient situations as a medical professional [ 1 , 2 , 39 ], may constitute important aspects of psychosomatic training for fostering resilience in medical doctors.

As demonstrated by the COVID-19 crisis, individual resilience of health care workers is linked to institutional resilience and work-related stressors [ 48 , 49 ]. Therefore, long-term trainings for individual resilience in combination with interventions for institutional resilience and a reduction of stressors will be needed to ensure healthy work environments and sustainable quality of care.

This study combines quantitative and qualitative data collected during a particularly challenging period for physicians in Austria and can thus be regarded as an important contribution to research on the highly relevant relationships between quality of care, resilience, and training background.

It seems plausible that the reported results on physician resilience predicting perceived quality of care, professional autonomy, and job satisfaction can be generalized to other health care systems; nevertheless, further research in other health care settings should substantiate this assumption. The generalizability of the reported findings on training background in psychosomatic medicine as a predictor of quality of care, however, appears to be limited to health care systems offering comparable training programs, which cover psychosomatic medicine and provide supervision, self-reflection, and training in self-regulation skills to a similar or higher extent [ 50 ]. In addition, there are other limitations to be considered when interpreting the results of this study. First, the study used a cross-sectional design without true longitudinal measurement of the three time points. Estimates for outcome variables before and during the lockdown may therefore be subject to retrospective bias. Nevertheless, the study was able to collect relevant data despite the rapid onset of the COVID-19 pandemic and the resulting changes in physicians’ work.

Secondly, the response rate was lower than expected. Of altogether 2792 Austrian doctors with a psychosomatic training background, only 229 participated in the study. This may have been largely due to the heterogeneity of the invitations to participate, as some of the cooperating medical chambers sent out personal invitations while others announced the study only in a newsletter. We also assume that many medical doctors were experiencing survey fatigue, as many COVID-related studies were being conducted at that time. As with all voluntary surveys, the results may be prone to selection bias. For example, physicians more sensitive to biopsychosocial aspects of their profession may have been more likely to respond to the survey invitation. This may have contributed to the comparatively large proportion of study participants trained in psychotherapeutic medicine. Due to the low response rate and potential selection bias, the reported results may not be transferable to the total group of medical doctors trained in psychosomatic medicine.

Third, findings in this study cohort may not apply to the same extent to medical specialists in other health care fields. Aiming to investigate the relevance of resilience and potential resilience-related training, we focused on the specific subgroup of medical doctors with a certified training in psychosomatic medicine, potentially limiting the scope of our findings. While psychosomatic training cannot be directly equated with resilience training, we are confident that at least parts of the training curricula directly relate to resilience-relevant skills and that the chosen population was thus suitable for our research questions.

As another limitation, this study reports on perceived quality of care and does not include objective quality indicators.

Finally, the study lacks a control group. Although the inclusion of a matched control group would have strengthened the study, time constraints prohibited recruiting a well-matched control group from all medical fields represented in the study population. Priority was thus given to launching the survey while the immediate effects of the COVID-19 pandemic and the first lockdown in Austria were still salient for practitioners.

Physician resilience and training level in psychosomatic medicine are both independently and significantly associated with perceived quality of patient care and job satisfaction in times of a health care crisis. Promotion of resilience among health-care workers may require long-term trainings and interventions in which institutional and individual resilience interventions should complement each other.

Data availability

Data of the quantitative and qualitative parts of the study will be provided upon request by the corresponding author.

Epstein RM, Krasner MS. Physician resilience: what it means, why it matters, and how to promote it. Acad Med. 2013;88(3):301–3. https://doi.org/10.1097/ACM.0b013e318280cff0 .

Jensen PM, Trollope-Kumar K, Waters H, Everson J. Building physician resilience. Can Fam Physician. 2008;54(5):722–9. PMID: 18474706; PMCID: PMC2377221.

Rotenstein LS, Torre M, Ramos MA, Rosales RC, Guille C, Sen S, Mata DA. Prevalence of Burnout among Physicians: a systematic review. JAMA. 2018;320(11):1131–50. https://doi.org/10.1001/jama.2018.12777 . PMID: 30326495; PMCID: PMC6233645.

Shen X, Xu H, Feng J, Ye J, Lu Z, Gan Y. The global prevalence of burnout among general practitioners: a systematic review and meta-analysis. Fam Pract. 2022;39(5):943–950. https://doi.org/10.1093/fampra/cmab180 . PMID: 35089320.

Alkhamees AA, Aljohani MS, Kalani S, Ali AM, Almatham F, Alwabili A, Alsughier NA, Rutledge T. Physician’s burnout during the COVID-19 pandemic: a systematic review and Meta-analysis. Int J Environ Res Public Health. 2023;20(5):4598. https://doi.org/10.3390/ijerph20054598 . PMID: 36901612; PMCID: PMC10001574.

Weilenmann S, Ernst J, Petry H, Pfaltz MC, Sazpinar O, Gehrke S, Paolercio F, von Känel R, Spiller TR. Health Care workers’ Mental Health during the first weeks of the SARS-CoV-2 pandemic in Switzerland-A cross-sectional study. Front Psychiatry. 2021;12:594340. https://doi.org/10.3389/fpsyt.2021.594340 . PMID: 33815162; PMCID: PMC8012487.

Schaefert R, Stein B, Meinlschmidt G, Roemmel N, Huber CG, Hepp U, Saillant S, Fazekas C, Vitinius F. COVID-19-Related Psychosocial Care in General hospitals: results of an online survey of Psychosomatic, Psychiatric, and Psychological Consultation and Liaison Services in Germany, Austria, and Switzerland. Front Psychiatry. 2022;13:870984. https://doi.org/10.3389/fpsyt.2022.870984 . PMID: 35815043; PMCID: PMC9270003.

Mehta LS, Murphy DJ Jr. Strategies to prevent burnout in the cardiovascular health-care workforce. Nat Rev Cardiol. 2021;18(7):455–6. https://doi.org/10.1038/s41569-021-00553-0 . PMID: 33864029; PMCID: PMC8050818.

Shanafelt T, Goh J, Sinsky C. The business case for investing in physician well-being. JAMA Intern Med. 2017;177:1826–32. https://doi.org/10.1001/jamainternmed.2017.4340 .

Williams ES, et al. Understanding physicians’ intentions to withdraw from practice: the role of job satisfaction, job stress, mental and physical health. Health Care Manage Rev. 2010;35:105–115. https://doi.org/10.1097/01.HMR.0000304509.58297.6f .

Dewa CS, Loong D, Bonato S, Thanh NX, Jacobs P. How does burnout affect physician productivity? A systematic literature review. BMC Health Serv Res. 2014;14:325. https://doi.org/10.1186/1472-6963-14-325 .

Shanafelt TD, et al. Satisfaction with work-life balance and the career and retirement plans of US oncologists. J Clin Oncol. 2014;32:1127–35. https://doi.org/10.1200/JCO.2013.53.4560 .

Hodkinson A, Zhou A, Johnson J, Geraghty K, Riley R, Zhou A, Panagopoulou E, Chew-Graham CA, Peters D, Esmail A, Panagioti M. Associations of physician burnout with career engagement and quality of patient care: systematic review and meta-analysis. BMJ. 2022;378:e070442. https://doi.org/10.1136/bmj-2022-070442 . PMID: 36104064; PMCID: PMC9472104.

Coco M, Guerrera CS, Santisi G, Riggio F, Grasso R, Di Corrado D, Di Nuovo S, Ramaci T. Psychosocial impact and role of Resilience on Healthcare Workers during COVID-19 pandemic. Sustainability. 2021;13(13):7096. https://doi.org/10.3390/su13137096 .

Varrasi S, Guerrera CS, Platania GA, et al. Professional quality of life and psychopathological symptoms among first-line healthcare workers facing COVID-19 pandemic: an exploratory study in an Italian southern hospital. Health Psychol Res. 2023;11. https://doi.org/10.52965/001c.67961 .

West CP, Dyrbye LN, Sinsky C, Trockel M, Tutty M, Nedelec L, Carlasare LE, Shanafelt TD. Resilience and burnout among Physicians and the General US Working Population. JAMA Netw Open. 2020;3(7):e209385. https://doi.org/10.1001/jamanetworkopen.2020.9385 . PMID: 32614425; PMCID: PMC7333021.

Huffman EM, Athanasiadis DI, Anton NE, Haskett LA, Doster DL, Stefanidis D, Lee NK. How resilient is your team? Exploring healthcare providers’ well-being during the COVID-19 pandemic. Am J Surg. 2021;221(2):277–84. https://doi.org/10.1016/j.amjsurg.2020.09.005 . Epub 2020 Sep 12. PMID: 32994041; PMCID: PMC7486626.

Havnen A, Anyan F, Hjemdal O, Solem S, Gurigard Riksfjord M, Hagen K. Resilience moderates negative outcome from stress during the COVID-19 pandemic: a Moderated-Mediation Approach. Int J Environ Res Public Health. 2020;17(18):6461. https://doi.org/10.3390/ijerph17186461 . PMID: 32899835; PMCID: PMC7558712.

Midorikawa H, Tachikawa H, Kushibiki N, Wataya K, Takahashi S, Shiratori Y, Nemoto K, Sasahara S, Doki S, Hori D, Matsuzaki I, Arai T, Yamagata K. Association of fear of COVID-19 and resilience with psychological distress among health care workers in hospitals responding to COVID-19: analysis of a cross-sectional study. Front Psychiatry. 2023;14:1150374. https://doi.org/10.3389/fpsyt.2023.1150374 . PMID: 37181870; PMCID: PMC10172588.

World Health Organization. Quality of Care. [Internet]. Available from: https://www.who.int/health-topics/quality-of-care#tab=tab_1 .

Russo G, Pires CA, Perelman J, Gonçalves L, Barros PP. Exploring public sector physicians’ resilience, reactions and coping strategies in times of economic crisis; findings from a survey in Portugal’s capital city area. BMC Health Serv Res. 2017;17(1):207. Published 2017 Mar 15. https://doi.org/10.1186/s12913-017-2151-1 .

Labrague LJ, de Los Santos JAA. Resilience as a mediator between compassion fatigue, nurses’ work outcomes, and quality of care during the COVID-19 pandemic. Appl Nurs Res. 2021;61:151476. https://doi.org/10.1016/j.apnr.2021.151476 . Epub 2021 Jul 7. PMID: 34544570; PMCID: PMC8448586.

Ferreira P, Gomes S. The role of resilience in reducing burnout: a study with healthcare workers during the COVID-19 pandemic. Soc Sci. 2021;10:317. https://doi.org/10.3390/socsci10090317 .

Fazekas C, Leitner A. Towards implementing the biopsychosocial factor in national health care systems: the role of postgraduate training in Austria. Psychother Psychosom. 2012;81(6):391–3. https://doi.org/10.1159/000341183 . Epub 2012 Sep 20. PMID: 23007697.

Connor KM, Davidson JRT. Development of a new resilience scale: the Connor-Davidson Resilience Scale (CD-RISC). Depress Anxiety. 2003;18(2):76–82. https://doi.org/10.1002/da.10113 .

Sarubin N, Gutt D, Giegling I, Bühner M, Hilbert S, Krähenmann O, Wolf M, Jobst A, Sabaß L, Rujescu D, Falkai P, Padberg F. Erste Analyse Der Psychometrischen Eigenschaften Und Struktur Der deutschsprachigen 10- und 25-Item Version Der Connor-Davidson Resilience Scale (CD-RISC). Z Für Gesundheitspsychologie. 2015;23(3):112–22. https://doi.org/10.1026/0943-8149/a000142 .

Tyssen R, Palmer KS, Solberg IB, Voltmer E, Frank E. Physicians’ perceptions of quality of care, professional autonomy, and job satisfaction in Canada, Norway, and the United States. BMC Health Serv Res. 2013;13:516. https://doi.org/10.1186/1472-6963-13-516 . PMID: 24330820; PMCID: PMC3904199.

Stoddard JJ, Hargraves JL, Reed M, Vratil A. Managed care, professional autonomy, and income: effects on physician career satisfaction. J Gen Intern Med. 2001;16:675–84. https://doi.org/10.1111/j.1525-1497.2001.01206.x .

DeVoe J, Fryer GE Jr, Hargraves JL, Phillips RL, Green LA. Does career dissatisfaction affect the ability of family physicians to deliver high-quality patient care? J Fam Pract. 2002;51(3):223–8. PMID: 11978232.

Landon BE, Reschovsky J, Blumenthal D. Changes in career satisfaction among primary care and specialist physicians, 1997–2001. JAMA. 2003 Jan 22–29;289(4):442-9. https://doi.org/10.1001/jama.289.4.442 . PMID: 12533123.

Grembowski D, Ulrich CM, Paschane D, Diehr P, Katon W, Martin D, Patrick DL, Velicer C. Managed care and primary physician satisfaction. J Am Board Fam Pract. 2003 Sep-Oct;16(5):383–93. https://doi.org/10.1111/j.1525-1497.2003.20505.x . PMID: 14645328.

Fazekas C, Diamond M, Möse JR, Neubauer AC. AIDS and Austrian physicians. AIDS Educ Prev. 1992;4(4):279–94. PMID: 1472414.

Knief U, Forstmeier W. Violating the normality assumption may be the lesser of two evils. Behav Res Methods. 2021;53(6):2576–90. https://doi.org/10.3758/s13428-021-01587-5 . Epub 2021 May 7. PMID: 33963496; PMCID: PMC8613103.

Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J. Stat. Soft. [Internet]. 2015 Oct. 7 [cited 2023 Jul. 6];67(1):1–48. Available from: https://www.jstatsoft.org/index.php/jss/article/view/v067i01 .

Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest Package: Tests in Linear Mixed Effects Models. J. Stat. Soft. [Internet]. 2017 Dec. 6 [cited 2023 Jul. 6];82(13):1–26. Available from: https://www.jstatsoft.org/index.php/jss/article/view/v082i13 .

Kuckartz U. Einführung in die computergestützte Analyse Qualitativer Daten. Wiesbaden: VS Verlag für Sozialwissenschaften; 2007.

Landis JR, Koch GG. The measurement of Observer Agreement for Categorical Data. Biometrics. 1977;33(1):159. https://doi.org/10.2307/2529310 .

Davidson JRT. Connor-Davidson Resilience Scale (CD-RISC) Manual. Unpublished. 08-19-2018, accessible at www.cd-risc.com: https://www.connordavidson-resiliencescale.com/CD-RISC%20Manual%2008-19-18.pdf .

Mehta LS, Elkind MSV, Achenbach S, Pinto FJ, Poppas A. Clinician Well-Being: addressing global needs for improvements in the Health Care Field: a Joint Statement from the American College of Cardiology, American Heart Association, European Society of Cardiology, and World Heart Federation. Glob Heart. 2021;16(1):48. https://doi.org/10.5334/gh.1067 . PMID: 34381670; PMCID: PMC8284506.

Banerjee S, Lim KHJ, Murali K, Kamposioras K, Punie K, Oing C, O’Connor M, Thorne E, Devnani B, Lambertini M, Westphalen CB, Garrido P, Amaral T, Morgan G, Haanen JBAG, Hardy C. The impact of COVID-19 on oncology professionals: results of the ESMO Resilience Task Force survey collaboration. ESMO Open. 2021;6(2):100058. https://doi.org/10.1016/j.esmoop.2021.100058 .

Langewitz WA, Edlhaimb HP, Höfner C, Koschier A, Nübling M, Leitner A. Evaluation of a two year curriculum in psychosocial and psychosomatic medicine– handling emotions and communicating in a patient centred manner. Psychother Psychosom Med. 2010;60:451–6.

Fazekas C, Matzer F, Greimel ER, Moser G, Stelzig M, Langewitz W, Loewe B, Pieringer W, Jandl-Jager E. Psychosomatic medicine in primary care: influence of training. Wien Klin Wochenschr. 2009;121(13–14):446–53. https://doi.org/10.1007/s00508-009-1176-9 . PMID: 19657607.

Edlhaimb HP, Söllner W. Die ärztliche Fort- Und Weiterbildung: Deutschland, Schweiz, Österreich. Österreich. In: Adler RH, Herzog W, Joraschky P, Köhle K, Langwitz W, Söllner W, Wesiack W, editors. Uexküll. Psychosomatische Medizin. Theoretische Modelle und klinische Praxis. (7.Aufl.) München. Urban & Fischer; 2011. pp. 1243–7.

Kunzler AM, Helmreich I, Chmitorz A, König J, Binder H, Wessa M, Lieb K. Psychological interventions to foster resilience in healthcare professionals. Cochrane Database Syst Rev. 2020;7(7):CD012527. https://doi.org/10.1002/14651858.CD012527.pub2 . PMID: 32627860; PMCID: PMC8121081.

Pollock A, Campbell P, Cheyne J, Cowie J, Davis B, McCallum J, McGill K, Elders A, Hagen S, McClurg D, Torrens C, Maxwell M. Interventions to support the resilience and mental health of frontline health and social care professionals during and after a disease outbreak, epidemic or pandemic: a mixed methods systematic review. Cochrane Database Syst Rev. 2020;11(11):CD013779. https://doi.org/10.1002/14651858.CD013779 . PMID: 33150970; PMCID: PMC8226433.

Robertson HD, Elliott AM, Burton C, Iversen L, Murchie P, Porteous T, Matheson C. Resilience of primary healthcare professionals: a systematic review. Br J Gen Pract. 2016;66(647):e423–33. https://doi.org/10.3399/bjgp16X685261 .

Fox S, Lydon S, Byrne D, Madden C, Connolly F, O’Connor P. A systematic review of interventions to foster physician resilience. Postgrad Med J. 2018;94(1109):162–170. https://doi.org/10.1136/postgradmedj-2017-135212 . Epub 2017 Oct 10. PMID: 29018095.

Wu AW, Connors C, Everly GS Jr. COVID-19: peer support and crisis communication strategies to promote institutional resilience. Ann Intern Med. 2020;172(12):822–3. https://doi.org/10.7326/M20-1236 .

Wu AW, Connors CA, Norvell M. Adapting RISE: meeting the needs of healthcare workers during the COVID-19 pandemic. Int Rev Psychiatry. 2021;33(8):711–17. https://doi.org/10.1080/09540261.2021.2013783 . Epub 2022 Jan 4. PMID: 35412425.

Zipfel S, Herzog W, Kruse J, Henningsen P. Psychosomatic Medicine in Germany: More Timely than Ever. Psychother Psychosom. 2016;85(5):262-9. https://doi.org/10.1159/000447701 . Epub 2016 Aug 11. PMID: 27509065.

Acknowledgements

The authors want to express their gratitude to all participating physicians as well as to the Austrian Medical Association and its Department of Psychosocial, Psychosomatic and Psychotherapeutic Medicine, all Federal Medical Associations and the Austrian Society for Psychosomatics and Psychotherapeutic Medicine for supporting the dissemination of the survey.

No external funding was received for this study.

Author information

Authors and Affiliations

Department of Psychiatry, Psychosomatics and Psychotherapy, Division of Medical Psychology, Psychosomatics and Psychotherapy, Medical University of Graz, Auenbruggerplatz 3, 8036, Graz, Austria

Christian Fazekas, Dennis Linder & Franziska Matzer

Independent researcher, Vienna, Austria

Maximilian Zieser

Department of Psychology, University of Klagenfurt, Klagenfurt, Austria

Barbara Hanfstingl

Department of Psychology, University of Graz, Graz, Austria

Janika Saretzki

II. Medical Department for Cardiology, Hanusch Hospital, Vienna, Austria

Evelyn Kunschitz

Karl Landsteiner Institute for Scientific Research in Clinical Cardiology, Hanusch Hospital, Vienna, Austria

Department of Psychosocial, Psychosomatic and Psychotherapeutic Medicine, Austrian Medical Association, Vienna, Austria

Luise Zieser-Stelzhammer

Ben Gurion University of the Negev, Beer-Sheva, Israel

Dennis Linder

Contributions

All authors contributed to the conception and design of the study and to the development of the survey questionnaire. CF obtained the ethics approval. MZ programmed the online survey. CF and LZ-S contributed to the dissemination of the survey. MZ performed the statistical analysis. FM developed the coding framework for the qualitative analysis, discussed it with CF and JS, and analyzed the qualitative data. FM and JS independently rated the texts based on the coding framework. CF and FM wrote the first draft of the manuscript. MZ, BH, DL, EK and JS wrote additional sections of the manuscript. All authors contributed to the interpretation of the data and manuscript revision, and read and approved the final manuscript.

Corresponding author

Correspondence to Christian Fazekas .

Ethics declarations

Ethics approval and consent to participate

The study was approved by the ethics committee of the Medical University of Graz (32-534ex 19/20). Informed consent was obtained from all participants before data collection. All methods were performed in accordance with the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Fazekas, C., Zieser, M., Hanfstingl, B. et al. Physician resilience and perceived quality of care among medical doctors with training in psychosomatic medicine during the COVID-19 pandemic: a quantitative and qualitative analysis. BMC Health Serv Res 24 , 249 (2024). https://doi.org/10.1186/s12913-024-10681-1

Received : 14 August 2023

Accepted : 03 February 2024

Published : 27 February 2024

DOI : https://doi.org/10.1186/s12913-024-10681-1

Keywords

  • Continuing medical education
  • COVID-19 pandemic
  • Quality of care
  • Psychosomatic medicine

Research on intercropping from 1995 to 2021: a worldwide bibliographic review

  • Review Article
  • Published: 26 February 2024

Cite this article

  • Yurui Tang,
  • Yurong Qiu,
  • Yabing Li,
  • Huasen Xu &
  • Xiao-Fei Li (ORCID: orcid.org/0000-0002-5530-1578)

Background and aims

Intercropping or crop mixture is an agroecological strategy to optimize resource-use efficiency and crop yield. In recent decades, therefore, intercropping has gained increasing attention as a more sustainable land management alternative to monoculture-oriented intensive agriculture. However, few studies have attempted to perform a comprehensive and systematic review on this subject from a bibliometric perspective.

Methods

This study carried out a quantitative bibliometric analysis to explore the characteristics of publications, research hotspots, and future frontiers of global intercropping research from 1995 to 2021. Publication data were obtained from the Web of Science Core Collection database.
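
The review does not reproduce its analysis pipeline here; as a rough illustration, a typical workflow of this kind with the R package bibliometrix (Aria and Cuccurullo 2017) might look as follows. The input file name is a placeholder for a Web of Science Core Collection export, and the journal names are only examples.

```r
library(bibliometrix)

# "savedrecs.txt" is a placeholder for a Web of Science Core Collection
# plain-text export of the retrieved intercropping records.
M <- convert2df("savedrecs.txt", dbsource = "wos", format = "plaintext")

# Descriptive indicators: annual scientific production, most productive
# countries and institutions, most cited sources, and so on.
results <- biblioAnalysis(M)
summary(results, k = 10)

# h-index of selected sources (journal names as stored in M$SO).
Hindex(M, field = "source",
       elements = c("FIELD CROPS RESEARCH", "PLANT AND SOIL"))$H
```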

Results

A total of 7574 publications on intercropping research were identified. The field entered a rapid development stage in 2007, with Chinese scholars and research institutes contributing the most. The two journals with the highest h-index and global total citations are Field Crops Research and Plant and Soil. Research themes on intercropping have evolved over time, with the focus shifting from yield and interspecific plant interactions to sustainable agriculture. Moreover, keyword analysis showed that research frontiers were mainly concentrated on sustainable intensification, microbial communities, climate change adaptation, biodiversity, and soil fertility.
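
The h-index reported for journals above is the largest h such that h publications have at least h citations each; a minimal sketch of that definition, with a made-up citation vector:

```r
# h-index: the largest h such that h publications have >= h citations each.
h_index <- function(citations) {
  sorted <- sort(citations, decreasing = TRUE)
  sum(sorted >= seq_along(sorted))
}

h_index(c(10, 8, 5, 4, 3))  # returns 4: four papers have >= 4 citations
```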

Conclusions

The intercropping field continues to develop, attracting the attention of scholars around the world, and its research topics are expanding as research output increases. Future research will pay more attention to long-term agricultural sustainability.



Li B, Li YY, Wu HM, Zhang FF, Li CJ, Li XX, Lambers H, Li L (2016) Root exudates drive interspecific facilitation by enhancing nodulation and N 2 fixation. Proc Natl Acad Sci USA 113:6496–6501. https://doi.org/10.1073/pnas.1523580113

Li C, Hoffland E, Kuyper TW, Yang Y, Li H, Zhang C, Zhang F, van der Werf W (2020a) Yield gain, complementarity and competitive dominance in intercropping in China: A meta-analysis of drivers of yield gain using additive partitioning. Eur J Agron 113:125987. https://doi.org/10.1016/j.eja.2019.125987

Li C, Hoffland E, Kuyper TW, Yu Y, Zhang C, Li H, Zhang F, van der Werf W (2020b) Syndromes of production in intercropping impact yield gains. Nat Plants 6:653–660. https://doi.org/10.1038/s41477-020-0680-9

Li L, Werf WVD, Zhang F (2021a) Crop diversity and sustainable agriculture: mechanisms, designs and applications. Front Agr Sci Eng 8:359–361. https://doi.org/10.15302/J-FASE-2021417

Li XF, Wang ZG, Bao XG, Sun JH, Yang SC, Wang P, Wang CB, Wu JP, Liu XR, Tian XL, Wang Y, Li JP, Wang Y, Xia HY, Mei PP, Wang XF, Zhao JH, Yu RP, Zhang WP, Che ZX, Gui LG, Callaway RM, Tilman D, Li L (2021b) Long-term increased grain yield and soil fertility from intercropping. Nat Sustain 4:943–950. https://doi.org/10.1038/s41893-021-00767-7

Lichtenberg EM, Kennedy CM, Kremen C, Batary P, Berendse F, Bommarco R, Bosque-Perez NA, Carvalheiro LG, Snyder WE, Williams NM, Winfree R, Klatt BK, Astrom S, Benjamin F, Brittain C, Chaplin-Kramer R, Clough Y, Danforth B, Diekoetter T, Eigenbrode SD, Ekroos J, Elle E, Freitas BM, Fukuda Y, Gaines-Day HR, Grab H, Gratton C, Holzschuh A, Isaacs R, Isaia M, Jha S, Jonason D, Jones VP, Klein A-M, Krauss J, Letourneau DK, Macfadyen S, Mallinger RE, Martin EA, Martinez E, Memmott J, Morandin L, Neame L, Otieno M, Park MG, Pfiffner L, Pocock MJO, Ponce C, Potts SG, Poveda K, Ramos M, Rosenheim JA, Rundlof M, Sardinas H, Saunders ME, Schon NL, Sciligo AR, Sidhu CS, Steffan-Dewenter I, Tscharntke T, Vesely M, Weisser WW, Wilson JK, Crowder DW (2017) A global synthesis of the effects of diversified farming systems on arthropod diversity within fields and across agricultural landscapes. Global Change Biol 23(11):4946–4957. https://doi.org/10.1111/gcb.13714

Lithourgidis A, Dordas C, Damalas C, Vlachostergios D (2011) Annual intercrops: an alternative pathway for sustainable agriculture. Aust J Crop Sci 5:396–410

Google Scholar  

Liu X, Zhang L, Hong S (2011) Global biodiversity research during 1900–2009: a bibliometric analysis. Biodivers Conserv 20:807–826. https://doi.org/10.1007/s10531-010-9981-z

Liu K, Guan X, Li G, Duan M, Li Y, Hong Y, Lin M, Fu R, Yu F (2022a) Publication characteristics, topic trends and knowledge domains of karst ecological restoration: a bibliometric and knowledge mapping analysis from 1991 to 2021. Plant Soil 475:169–189. https://doi.org/10.1007/s11104-022-05345-0

Liu Z, Cheng Y, Hai Y, Chen Y, Liu T (2022b) Developments in congenital scoliosis and related research from 1992 to 2021: a thirty-year bibliometric analysis. World Neurosurg 164:e24–e44. https://doi.org/10.1016/j.wneu.2022.02.117

Louarn G, Pereira-Lopès E, Fustec J, Mary B, Voisin A-S, de Faccio Carvalho PC, Gastal F (2015) The amounts and dynamics of nitrogen transfer to grasses differ in alfalfa and white clover-based grass-legume mixtures as a result of rooting strategies and rhizodeposit quality. Plant Soil 389:289–305. https://doi.org/10.1007/s11104-014-2354-8

MacLaren C, Mead A, van Balen D, Claessens L, Etana A, de Haan J, Haagsma W, Jäck O, Keller T, Labuschagne J, Myrbeck Å, Necpalova M, Nziguheba G, Six J, Strauss J, Swanepoel PA, Thierfelder C, Topp C, Tshuma F, Verstegen H, Walker R, Watson C, Wesselink M, Storkey J (2022) Long-term evidence for ecological intensification as a pathway to sustainable agriculture. Nat Sustain 5:770–779. https://doi.org/10.1038/s41893-022-00911-x

Malézieux E, Crozat Y, Dupraz C, Laurans M, Makowski D, Ozier-Lafontaine H, Rapidel B, Tourdonnet S, Valantin-Morison M (2009) Mixing plant species in cropping systems: concepts, tools and models. A review. Agron Sustain Dev 29:43–62. https://doi.org/10.1051/agro:2007057

Mao G, Zou H, Chen G, Du H, Zuo J (2015) Past, current and future of biomass energy research: a bibliometric analysis. Renew Sust Energ Rev 52:1823–1833. https://doi.org/10.1016/j.rser.2015.07.141

Marcos-Pérez M, Sánchez-Navarro V, Zornoza R (2023) Intercropping systems between broccoli and fava bean can enhance overall crop production and improve soil fertility. Sci Hortic 312:111834. https://doi.org/10.1016/j.scienta.2023.111834

Martin-Guay MO, Paquette A, Dupras J, Rivest D (2018) The new Green Revolution: sustainable intensification of agriculture by intercropping. Sci Total Environ 615:767–772. https://doi.org/10.1016/j.scitotenv.2017.10.024

Article   ADS   PubMed   Google Scholar  

Milojevic S (2015) Quantifying the cognitive extent of science. J Informetr 9:962–973. https://doi.org/10.1016/j.joi.2015.10.005

Mourao PR, Martinho VD (2020) Forest entrepreneurship: a bibliometric analysis and a discussion about the co-authorship networks of an emerging scientific field. J Clean Prod 256:120413. https://doi.org/10.1016/j.jclepro.2020.120413

Oerke EC (2005) Crop losses to pests. J Agr Sci 144:31–43. https://doi.org/10.1017/s0021859605005708

Pan XY, Lv JL, Dyck M, He HL (2021) Bibliometric analysis of soil nutrient research between 1992 and 2020. Agriculture 11:223. https://doi.org/10.3390/agriculture11030223

Pelzer E, Hombert N, Jeuffroy M-H, Makowski D (2014) Meta-analysis of the effect of nitrogen fertilization on annual cereal-legume intercrop production. Agron J 106:1775–1786. https://doi.org/10.2134/agronj13.0590

Petticrew M, Roberts H (2006) Systematic reviews in the social sciences: a practical guide. Blackwell Publishing, Oxford. https://doi.org/10.1002/9780470754887

Book   Google Scholar  

Putnam AR, Duke WB (1978) Allelopathy in agroecosystems. Annu Rev Phytopathol 16:431–451. https://doi.org/10.1146/annurev.py.16.090178.002243

Raseduzzaman M, Jensen ES (2017) Does intercropping enhance yield stability in arable crop production? A meta-analysis. Eur J Agron 91:25–33. https://doi.org/10.1016/j.eja.2017.09.009

Reckling M, Albertsson J, Vermue A, Carlsson G, Watson CA, Justes E, Bergkvist G, Jensen ES, Topp CFE (2022) Diversification improves the performance of cereals in European cropping systems. Agron Sustain Dev 42:118. https://doi.org/10.1007/s13593-022-00850-z

Renard D, Tilman D (2019) National food production stabilized by crop diversity. Nature 571:257–260. https://doi.org/10.1038/s41586-019-1316-y

Rockström J, Williams J, Daily G, Noble A, Matthews N, Gordon L, Wetterstrand H, DeClerck F, Shah M, Steduto P, de Fraiture C, Hatibu N, Unver O, Bird J, Sibanda L, Smith J (2017) Sustainable intensification of agriculture for human prosperity and global sustainability. Ambio 46:4–17. https://doi.org/10.1007/s13280-016-0793-6

Romanelli JP, Meli P, Naves RP, Alves MC, Rodrigues RR (2021) Reliability of evidence-review methods in restoration ecology. Conserv Biol 35:142–154. https://doi.org/10.1111/cobi.13661

Song L, Niu Z, Chen S, Zhao S, Qiu Z, Wang Y, Hua X, Ding Z, Ma Q (2023) Effects of pea-tea intercropping on rhizosphere soil microbial communities. Plant Soil. https://doi.org/10.1007/s11104-023-06321-y

Stomph T, Dordas C, Baranger A, de Rijk J, Dong B, Evers J, Gu C, Li L, Simon J, Jensen ES, Wang Q, Wang Y, Wang Z, Xu H, Zhang C, Zhang L, Zhang W-P, Bedoussac L, van der Werf W (2020) Designing intercrops for high yield, yield stability and efficient use of resources: are there principles? Adv Agron 160:1–50. https://doi.org/10.1016/bs.agron.2019.10.002

Tilman D, Cassman KG, Matson PA, Naylor R, Polasky S (2002) Agricultural sustainability and intensive production practices. Nature 418:671–677. https://doi.org/10.1038/nature01014

van Eck NJ, Waltman L (2010) Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84:523–538. https://doi.org/10.1007/s11192-009-0146-3

van der Werf W, Li C, Cong W-F, Zhang F (2020) Intercropping enables a sustainable intensification of agriculture. Front Agr Sci Eng 7:254–256. https://doi.org/10.15302/j-fase-2020352

Wang GZ, Li XG, Xi XQ, Cong W-F (2022) Crop diversification reinforces soil microbiome functions and soil health. Plant Soil 476:375–383. https://doi.org/10.1007/s11104-022-05436-y

Weih M, Karley A, Newton A, Kiær L, Scherber C, Rubiales D, Adam E, Ajal J, Brandmeier J, Pappagallo S, Villegas-Fernández A, Reckling M, Tavoletti S (2021) Grain yield stability of cereal-legume intercrops is greater than sole crops in more productive conditions. Agriculture 11:255. https://doi.org/10.3390/agriculture11030255

Weston LA, Duke SO (2003) Weed and crop allelopathy. Crit Rev Plant Sci 22:367–389. https://doi.org/10.1080/713610861

Wu J, Bao X, Zhang J, Lu B, Zhang W, Callaway RM, Li L (2022) Temporal stability of productivity is associated with complementarity and competitive intensities in intercropping. Ecol Appl 33:e2731. https://doi.org/10.1002/eap.2731

Xu Z, Li C, Zhang C, Yu Y, van der Werf W, Zhang F (2020) Intercropping maize and soybean increases efficiency of land and fertilizer nitrogen use; a meta-analysis. Field Crops Res 246:107661. https://doi.org/10.1016/j.fcr.2019.107661

Xue YF, Xia HY, Christie P, Zhang Z, Li L, Tang CX (2016) Crop acquisition of phosphorus, iron and zinc from soil in cereal/legume intercropping systems: a critical review. Ann Bot 117:363–377. https://doi.org/10.1093/aob/mcv182

Yin W, Chai Q, Zhao C, Yu A, Fan Z, Hu F, Fan H, Guo Y, Coulter JA (2020) Water utilization in intercropping: a review. Agric Water Manage 241:106335. https://doi.org/10.1016/j.agwat.2020.106335

Yu Y, Stomph TJ, Makowski D, van der Werf W (2015) Temporal niche differentiation increases the land equivalent ratio of annual intercrops: a meta-analysis. Field Crops Res 184:133–144. https://doi.org/10.1016/j.fcr.2015.09.010

Yu Y, Stomph TJ, Makowski D, Zhang LZ, van der Werf W (2016) A meta-analysis of relative crop yields in cereal/legume mixtures suggests options for management. Field Crops Res 198:269–279. https://doi.org/10.1016/j.fcr.2016.08.001

Yu RP, Yang H, Xing Y, Zhang WP, Lambers H, Li L (2022) Belowground processes and sustainability in agroecosystems with intercropping. Plant Soil 476:263–288. https://doi.org/10.1007/s11104-022-05487-1

Zhang F, Li L (2003) Using competitive and facilitative interactions in intercropping systems enhances crop productivity and nutrient-use efficiency. Plant Soil 248:305–312. https://doi.org/10.1023/A:1022352229863

Zhang WP, Liu GC, Sun JH, Fornara D, Zhang LZ, Zhang FF, Li L (2016) Temporal dynamics of nutrient uptake by neighboring plant species: evidence from intercropping. Funct Ecol 31:469–479. https://doi.org/10.1111/1365-2435.12732

Zhang Y, Pu S, Lv X, Gao Y, Ge L (2020) Global trends and prospects in microplastics research: a bibliometric analysis. J Hazard Mater 400:123110. https://doi.org/10.1016/j.jhazmat.2020.123110

Zhou XG, Zhang JY, Rahman MKU, Gao DM, Wei Z, Wu FZ, Dini-Andreote F (2023) Interspecific plant interaction via root exudates structures the disease suppressiveness of rhizosphere microbiomes. Mol Plant 16:849–864. https://doi.org/10.1016/j.molp.2023.03.009

Zhuang Y, Liu X, Nguyen T, He Q, Hong S (2013) Global remote sensing research trends during 1991–2010: a bibliometric analysis. Scientometrics 96:203–219. https://doi.org/10.1007/s11192-012-0918-z

IMAGES

  1. Sampling Quantitative Techniques For Data Analysis

  2. Quantitative Research 1

  3. What Is Data Analysis In Quantitative Research

  4. Quantitative Analysis

  5. Quantitative research tools for data analysis

  6. Week 12: Quantitative Research Methods

VIDEO

  1. Qualitative Research Data Analysis

  2. Data Analysis

  3. Research Methodology and Data Analysis - Refresher Course

  4. Quantitative vs qualitative data presentation

  5. Qualitative Data Analysis Procedures

COMMENTS

  1. Quantitative Data Analysis Methods & Techniques 101


  2. Data Analysis in Quantitative Research

    Data Analysis in Quantitative Research, by Yong Moon Jung (reference work entry, 2019). Quantitative data analysis serves as part of an essential process of evidence-making in health and social sciences.

  3. A Really Simple Guide to Quantitative Data Analysis

    Data analysis is important as it paves the way for drawing conclusions from a research study. Despite being a mouthful, quantitative data analysis simply means analyzing data that is...

  4. Data Analysis Techniques for Quantitative Study

    This chapter describes the types of data analysis techniques in quantitative research and sampling strategies suitable for quantitative studies, particularly probability sampling, to produce credible and trustworthy explanations of a phenomenon. Initially, it briefly describes the measurement levels of variables.

  5. Quantitative Data Analysis: A Comprehensive Guide

    Step 1: Data Collection. Before beginning the analysis process, you need data. Data can be collected through rigorous quantitative research, which includes methods such as interviews, focus groups, surveys, and questionnaires. Step 2: Data Cleaning. A minimal code sketch of these two steps appears after this list.

  6. What Is Quantitative Research?

    Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.

  7. 15 Data Analysis I: Overview of Data Analysis Strategies

    Multimethod and Mixed Methods Research Analysis Strategies: ... it shows the complexities and challenges involved in integrating qualitative and quantitative data: issues about linking data sets, similarity (or not) of units of analysis, and concepts and meaning. It draws some conclusions and sets out some future directions for MMMR.

  8. Quantitative Data Analysis: A Complete Guide

    Here's how to make sense of your company's numbers in just four steps: 1. Collect data. Before you can actually start the analysis process, you need data to analyze. This involves conducting quantitative research and collecting numerical data from various sources, including: Interviews or focus groups.

  9. The 7 Most Useful Data Analysis Techniques [2024 Guide]

    The data analysis process: in order to gain meaningful insights from data, data analysts perform a rigorous step-by-step process. We go over this in detail in our step-by-step guide to the data analysis process, but, to briefly summarize, it generally consists of the following phases: defining the question...

  10. Research Design: Decide on your Data Analysis Strategy

    The last step of designing your research is planning your data analysis strategies. In this video, we'll take a look at some common approaches for both quant...

  11. A Practical Guide to Writing Quantitative and Qualitative Research

    A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. ... In quantitative research, ...

  12. Part II: Data Analysis Methods in Quantitative Research

    Part II: Data Analysis Methods in Quantitative Research. We started this module with levels of measurement as a way to categorize our data. Data analysis is directed toward answering the original research question and achieving the study purpose (or aim).

  13. Quantitative Research

    Here are some commonly used quantitative research analysis methods. Statistical analysis is the most common: it involves using statistical tools and techniques to analyze the numerical data collected during the research process.

  14. 101 Guide to Quantitative Data Analysis [Methods + Techniques]

    Quantitative data analysis comes with the challenge of analyzing large datasets consisting of numeric variables and statistics. Researchers often get overwhelmed by various techniques, methods, and data sources. At the same time, the importance of data collection and analysis drastically increases. It helps improve current products/services, identify the potential for a new product, understand ...

  15. PDF Data Analysis Techniques for Quantitative Study

    Statistical analysis techniques for quantitative studies can describe data, generate hypotheses, or test hypotheses. Figure 16.3 shows the schematic representation of the data analysis process for quantitative study design. Descriptive statistics summarize and describe a group's characteristics or compare groups and are described in the next ...

  16. Data Collection

    Data Collection | Definition, Methods & Examples, by Pritha Bhandari (revised June 21, 2023). Data collection is a systematic process of gathering observations or measurements. Whether you are performing research for business, governmental or academic purposes, data collection allows you to gain first-hand knowledge and original insights into your research problem.

  17. Quantitative Data Analysis

    Offers a guide through the essential steps required in quantitative data analysis. Helps in choosing the right method before starting the data collection process. ... executing and reporting appropriate data analysis methods to answer their research questions. It provides readers with a basic understanding of the steps that each method involves ...

  18. What is data analysis? Methods, techniques, types & how-to

    Gaining a better understanding of different techniques and methods in quantitative research, as well as qualitative insights, will give your analysis efforts a more clearly defined direction, so it's worth taking the time to let this knowledge sink in. Additionally, you will be able to create a comprehensive analytical report that...

  19. (PDF) Quantitative Data Analysis

    Descriptive analysis is a quantitative data analysis approach that helps researchers present data in an easily understood, quantitative format, assisting in the interpretation and ... A minimal descriptive-statistics sketch in code appears after this list.

  20. A practical guide to data analysis in general literature reviews

    This article is a practical guide to conducting data analysis in general literature reviews. The general literature review is a synthesis and analysis of published research on a relevant clinical issue, and is a common format for academic theses at the bachelor's and master's levels in nursing, physiotherapy, occupational therapy, public health and other related fields.

  21. Qualitative vs. Quantitative Research

    Qualitative vs. Quantitative Research | Differences, Examples & Methods, by Raimo Streefkerk (revised June 22, 2023).

  22. Data Analysis in Research: Types & Methods

    Definition of research in data analysis: according to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

  23. Data Analysis Techniques In Research

    Data analysis techniques in quantitative research include: 1) descriptive statistics, 2) inferential statistics, 3) regression analysis, 4) correlation analysis, 5) factor analysis, and 6) time series analysis. A hedged inferential-statistics sketch in code appears after this list.

  24. Physician resilience and perceived quality of care among medical

    Qualitative data analysis (QDA) and an additional frequency analysis were conducted using the statistical software program MAXQDA 2022. ... This study combines quantitative and qualitative data collected during a particularly challenging period of time for physicians in Austria and can thus be regarded as an important contribution to research on ...

  25. Research on intercropping from 1995 to 2021: a worldwide ...

    This study carried out a quantitative bibliometric analysis to explore the characteristics of publications, research hotspots, and future frontiers regarding intercropping globally from 1995 to 2021. Data on the publications were obtained from the Web of Science Core Collection database. (Full citation: Tang, Y., Qiu, Y., Li, Y. et al. Research on intercropping from 1995 to 2021: a worldwide bibliographic review. Plant Soil, 2024. https://doi.org/10.1007/s11104-024-06542-9)
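
To make the "collect, then clean" steps described in item 5 concrete, here is a minimal sketch in Python using pandas. Everything in it is assumed purely for illustration: the file name survey_responses.csv and the columns age and satisfaction are invented, and none of the sources above prescribe this exact workflow.

    import pandas as pd

    # Hypothetical survey export; file name and column names are invented for illustration.
    df = pd.read_csv("survey_responses.csv")

    # Data cleaning: remove duplicate submissions and rows missing key fields.
    df = df.drop_duplicates()
    df = df.dropna(subset=["age", "satisfaction"])

    # Force the age column to be numeric; unparseable entries become NaN and are dropped.
    df["age"] = pd.to_numeric(df["age"], errors="coerce")
    df = df.dropna(subset=["age"])

    print(df.describe())  # quick numeric summary of the cleaned data

Cleaning comes before analysis for a reason: summary statistics and tests computed on duplicated or unparseable rows are not trustworthy.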
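
Item 19 mentions descriptive analysis. As a minimal, self-contained sketch of what that looks like in code, Python's standard-library statistics module produces the usual summary figures; the scores list below is made-up sample data, not taken from any source above.

    import statistics

    scores = [4, 5, 5, 6, 7, 7, 7, 9]  # made-up sample data

    print("mean:  ", statistics.mean(scores))    # 6.25
    print("median:", statistics.median(scores))  # 6.5 (midpoint of the sorted values)
    print("mode:  ", statistics.mode(scores))    # 7, the most frequent value
    print("stdev: ", statistics.stdev(scores))   # sample standard deviation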
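
Item 23 lists inferential techniques such as inferential statistics and correlation analysis. The sketch below shows what two of these look like in Python with scipy (assumed to be installed); the group measurements and the hours/score pairs are invented for illustration, so treat this as an illustrative sketch rather than a recipe from any of the sources above.

    from scipy import stats

    # Made-up measurements for two independent groups.
    group_a = [12.1, 11.8, 13.0, 12.5, 12.9]
    group_b = [10.9, 11.2, 11.5, 10.7, 11.0]

    # Inferential statistics: two-sample t-test comparing the group means.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Correlation analysis: Pearson's r between two made-up variables.
    hours_studied = [1, 2, 3, 4, 5]
    test_score = [52, 60, 63, 71, 74]
    r, p = stats.pearsonr(hours_studied, test_score)
    print(f"r = {r:.2f}, p = {p:.4f}")

A small p-value in the t-test suggests the two groups' means genuinely differ, and an r close to 1 indicates a strong positive linear relationship; both are inferences about a wider population drawn from sample data.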