What Is Quantitative Research? | Definition, Uses & Methods

Published on June 12, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.

Quantitative research is the opposite of qualitative research, which involves collecting and analyzing non-numerical data (e.g., text, video, or audio).

Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, and so on. Examples of quantitative research questions include:

  • What is the demographic makeup of Singapore in 2020?
  • How has the average temperature changed globally over the last century?
  • Does environmental pollution affect the prevalence of honey bees?
  • Does working from home increase productivity for people with long commutes?

Table of contents

  • Quantitative research methods
  • Quantitative data analysis
  • Advantages of quantitative research
  • Disadvantages of quantitative research
  • Other interesting articles
  • Frequently asked questions about quantitative research

You can use quantitative research methods for descriptive, correlational or experimental research.

  • In descriptive research , you simply seek an overall summary of your study variables.
  • In correlational research , you investigate relationships between your study variables.
  • In experimental research , you systematically examine whether there is a cause-and-effect relationship between variables.

Correlational and experimental research can both be used to formally test hypotheses, or predictions, using statistics. The results may be generalized to broader populations based on the sampling method used.

To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).

Note that quantitative research is at risk for certain research biases, including information bias, omitted variable bias, sampling bias, and selection bias. Be aware of these potential biases as you collect and analyze your data so that they do not skew your results.

Once data is collected, you may need to process it before it can be analyzed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions.
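
For instance, a five-point agreement scale recorded as words can be recoded as numbers before analysis. Below is a minimal Python sketch; the scale labels and numeric codes are illustrative assumptions rather than anything prescribed by the article.

```python
# Recode Likert-style survey responses from words to numbers before analysis.
# The five-point scale and its numeric codes are illustrative assumptions.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(responses):
    """Map text responses to numeric scores, ignoring case and extra whitespace."""
    return [LIKERT_CODES[r.strip().lower()] for r in responses]

raw_answers = ["Agree", "Strongly agree", "Neutral", "Disagree", "Agree"]
scores = code_responses(raw_answers)
print(scores)                      # [4, 5, 3, 2, 4]
print(sum(scores) / len(scores))   # mean of the coded responses: 3.6
```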

Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualize your data and check for any trends or outliers.

Using inferential statistics, you can make predictions or generalizations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter.

For example, you might first use descriptive statistics to get a summary of the data: find the mean (average) and the mode (most frequent rating) of procrastination scores in each of two study groups, and plot the data to see if there are any outliers.
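
As a rough sketch of both steps, the snippet below computes the mean and mode for two invented groups of ratings and then runs an independent-samples t-test as the inferential step; the data and the choice of test are assumptions for illustration only.

```python
# Minimal sketch: descriptive statistics for two groups, then an
# independent-samples t-test as the inferential step.
# The ratings and the choice of test are invented for illustration.
from statistics import mean, mode

from scipy import stats

group_a = [6, 7, 5, 8, 6, 7, 6]   # e.g., procrastination ratings, group A
group_b = [4, 5, 5, 3, 4, 6, 4]   # e.g., procrastination ratings, group B

for name, data in (("A", group_a), ("B", group_b)):
    print(f"group {name}: mean={mean(data):.2f}, mode={mode(data)}")

# Inferential step: is the difference between the group means larger than
# we would expect by chance?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```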

You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.

Quantitative research is often used to standardize data collection and generalize findings. Strengths of this approach include:

  • Replication

Repeating the study is possible because of standardized data collection protocols and tangible definitions of abstract concepts.

  • Direct comparisons of results

The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.

  • Large samples

Data from large samples can be processed and analyzed using reliable and consistent procedures through quantitative data analysis.

  • Hypothesis testing

Using formalized and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.

Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:

  • Superficiality

Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.

  • Narrow focus

Predetermined variables and measurement procedures can mean that you ignore other relevant observations.

  • Structural bias

Despite standardized procedures, structural biases can still affect quantitative research. Missing data, imprecise measurements, or inappropriate sampling methods are biases that can lead to the wrong conclusions.

  • Lack of context

Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalize the variables that you want to measure.
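
As a toy illustration of operationalization, the sketch below combines three hypothetical 0–10 measures of social anxiety into a single composite score; the measures, scales, and equal weighting are assumptions, not a validated instrument.

```python
# Toy illustration of operationalization: combine three hypothetical measures
# of social anxiety (each scored 0-10) into a single composite score.
# The measures, scales, and equal weighting are assumptions, not a validated scale.
def social_anxiety_score(self_rating, avoidance, physical_symptoms):
    """Average three operationalized measures, each on a 0-10 scale."""
    measures = (self_rating, avoidance, physical_symptoms)
    if not all(0 <= m <= 10 for m in measures):
        raise ValueError("each measure must be on a 0-10 scale")
    return sum(measures) / len(measures)

print(social_anxiety_score(self_rating=7, avoidance=5, physical_symptoms=6))  # 6.0
```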

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Cite this Scribbr article

Bhandari, P. (2023, June 22). What Is Quantitative Research? | Definition, Uses & Methods. Scribbr. Retrieved March 27, 2024, from https://www.scribbr.com/methodology/quantitative-research/

Quantitative and Qualitative Research

What is Quantitative Research?

Quantitative methodology is the dominant research framework in the social sciences. It refers to a set of strategies, techniques and assumptions used to study psychological, social and economic processes through the exploration of numeric patterns. Quantitative research gathers a range of numeric data. Some of the numeric data is intrinsically quantitative (e.g. personal income), while in other cases the numeric structure is imposed (e.g. ‘On a scale from 1 to 10, how depressed did you feel last week?’). The collection of quantitative information allows researchers to conduct simple to extremely sophisticated statistical analyses that aggregate the data (e.g. averages, percentages), show relationships among the data (e.g. ‘Students with lower grade point averages tend to score lower on a depression scale’) or compare across aggregated data (e.g. the USA has a higher gross domestic product than Spain). Quantitative research includes methodologies such as questionnaires, structured observations or experiments and stands in contrast to qualitative research. Qualitative research involves the collection and analysis of narratives and/or open-ended observations through methodologies such as interviews, focus groups or ethnographies.

Coghlan, D., & Brydon-Miller, M. (2014). The SAGE encyclopedia of action research (Vols. 1–2). London: SAGE Publications Ltd. doi:10.4135/9781446294406
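
Relationships of the kind described in the paragraph above (e.g., grade point average versus depression-scale score) are typically quantified with a correlation coefficient. Below is a minimal sketch with invented data, assuming SciPy is available.

```python
# Minimal sketch: quantify a relationship between two numeric variables with a
# Pearson correlation. The GPA and depression-scale values below are invented.
from scipy import stats

gpa        = [2.1, 2.5, 2.8, 3.2, 3.5, 3.9]
depression = [5,   8,   10,  13,  14,  18]   # higher = more depressive symptoms

r, p_value = stats.pearsonr(gpa, depression)
# A positive r mirrors the example above: lower GPAs go with lower depression scores.
print(f"r = {r:.2f}, p = {p_value:.4f}")
```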

What is the purpose of quantitative research?

The purpose of quantitative research is to generate knowledge and create understanding about the social world. Quantitative research is used by social scientists, including communication researchers, to observe phenomena or occurrences affecting individuals. Social scientists are concerned with the study of people. Quantitative research is a way to learn about a particular group of people, known as a sample population. Using scientific inquiry, quantitative research relies on data that are observed or measured to examine questions about the sample population.

Allen, M. (2017). The SAGE encyclopedia of communication research methods (Vols. 1–4). Thousand Oaks, CA: SAGE Publications, Inc. doi:10.4135/9781483381411

How do I know if the study is a quantitative design? What type of quantitative study is it?

Quantitative Research Designs: Descriptive non-experimental, Quasi-experimental or Experimental?

Studies do not always explicitly state which research design is being used, so you will need to know how to decipher the design type yourself.

University of Texas Arlington Libraries, https://libguides.uta.edu/quantitative_and_qualitative_research

Quantitative research

  • Affiliation: Faculty of Health and Social Care, University of Hull, Hull, England.
  • PMID: 25828021
  • DOI: 10.7748/ns.29.31.44.e8681

This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

Keywords: Experiments; measurement; nursing research; quantitative research; reliability; surveys; validity.

Chapter 3. Introduction to Quantitative Research and Data

The foundation of any e-book analysis framework rests on knowledge of the general e-book landscape and the existing information needs of a local user community. From this starting point, quantitative methods, such as cost analysis, can provide evidence for collection development initiatives and demonstrate how they align with patrons’ needs and the overarching goals of library administrators or funding agencies.

Essentially, “data stands in place of reality we wish to study. We cannot simply know a phenomenon, but we can attempt to capture it as data which represents the reality we have experienced . . . and are trying to explain.” 1 The data collected through quantitative investigations provides a baseline for future evaluation and evidence for when and how patrons make use of electronic collections, and promotes data-driven decisions throughout collection development departments. To get the most mileage out of the time and resources invested in quantitative investigations, it is essential to first understand what quantitative research is and what types of questions it can answer.

What Is Quantitative Research?

In the most basic terms, quantitative research methods are concerned with collecting and analyzing data that is structured and can be represented numerically. 2 One of the central goals is to build accurate and reliable measurements that allow for statistical analysis.

Because quantitative research focuses on data that can be measured, it is very effective at answering the “what” or “how” of a given situation. Questions are direct and quantifiable, often containing phrases such as “what percentage?”, “what proportion?”, “to what extent?”, “how many?”, or “how much?”

Quantitative research allows librarians to learn more about the demographics of a population, measure how many patrons use a service or product, examine attitudes and behaviors, document trends, or explain what is known anecdotally. Measurements like frequencies (i.e., counts), percentages, proportions, and relationships provide the means to quantify these variables and supply evidence about them.

Findings generated from quantitative research uncover behaviors and trends. However, it is important to note that they do not provide insight into why people think, feel, or act in certain ways. In other words, quantitative research highlights trends across data sets or study groups, but not the motivation behind observed behaviors. To fill in these knowledge gaps, qualitative studies like focus groups, interviews, or open-ended survey questions are effective.

Whenever I sit down to a new quantitative research project and begin to think about my goals and objectives, I like to keep a small cheat sheet on my desk to remind me of the trends quantitative data can uncover and the stories that I can tell with study conclusions. This serves as one quick strategy that keeps my thoughts focused and prevents scope creep as I discuss project plans with various stakeholders.

Quantitative Research Cheat Sheet

Six key characteristics of quantitative research:

  • It deals with numbers to assess information.
  • Data can be measured and quantified.
  • It aims to be objective.
  • Findings can be evaluated using statistical analysis.
  • It represents complex problems through variables.
  • Results can be summarized, compared, or generalized.

Quantitative findings can provide evidence or answers in the following areas:

  • Demonstrate to what extent services and collections are used and accessed.
  • Back up claims about use and impact.
  • Provide evidence for how the budget is spent and whether adjustments should be made.
  • Demonstrate return on investment when presenting budget figures.
  • Inform decisions regarding packages and subscriptions that are or are not worth pursuing.
  • Demonstrate evidence for trends and prove or discount what is known anecdotally.
  • Provide a method to make information accessible to audiences.
  • Provide evidence of success and highlight areas where unmet information needs exist.

Main advantages of quantitative research:

  • Findings can be generalized to a specific population.
  • Data sets are large, and findings are representative of a population.
  • Documentation regarding the research framework and methods can be shared and replicated.
  • Standardized approaches permit the study to be replicated over time.

Main limitations of quantitative research:

  • Data does not provide evidence for why populations think, feel, or act in certain ways.
  • Specific demographic groups, particularly vulnerable or disadvantaged groups, may be difficult to reach.
  • Studies can be time consuming and require data collection over long periods of time. 3

Quantitative Research in Information Management Environments

In the current information landscape, a wealth of quantitative data sources is available to librarians. One of the challenges surrounding quantitative research in the information management profession is “how to make sense of all these data sources and use them in a way that supports effective decision-making.” 4

Most libraries pay for and receive materials through multiple routes. As a result, a quantitative research framework for e-book collections often consists of two central components: an examination of resource allocations and expenditures from funds, endowments, or gifts; and an examination of titles received through firm orders, subscriptions, packages, and large aggregated databases. 5 In many cases, examining funds and titles according to subject areas adds an extra layer of knowledge that can provide evidence for teaching, learning, or research activities in a specific field or justify requests for budget increases. 6

Many of the quantitative research projects that I have conducted over the past four years are in direct response to an inquiry from library administrators. In most cases, I have been asked to provide evidence for collection development activities that support expressed information needs, justify expenditures, or project annual increases in preparation for a new fiscal year. Study results are often expected to describe or weigh several courses of action in the short and long term. Essentially, my work is categorized into three basic concepts related to library management:

  • Distinguish between recurrent and capital expenditure and projects, and between past, present, and future states.
  • Accommodate priorities and determine how resources are spread across collections.
  • Indicate the ways of allocating resources at input, monitor performance, and assess performance at output. 7

To assist in my prep work for a quantitative research project, I put together a file of background information about my library system and local user community to ensure that the project supports institutional goals and aligns with the general direction of programs and services on campus. Below are seven categories of information that I have on file at all times:

  • the institutional identity of the library
  • the stakeholder groups to be served
  • collection resources
  • financial resources
  • library personnel
  • facilities and equipment
  • the various programs and services related to the quantitative investigation 8

Typically, I take a day or two at the beginning of each fiscal year to update this information and ensure that it accurately reflects the landscape of collections and services available at CUL. From this starting point, it is simple to look at new project descriptions and think about the data required to support high-level decisions regarding the allocation of resources, to assess the effectiveness of collections and services, or to measure the value and impact of collections.

A wealth of local and external data sources is available to librarians, and each one can be used to tell a story about collection size, value, and impact. All that is required is an understanding of what the data measures and how different sources can be combined to tell a story about a user community.

Definitions of Local and External Data Sources

The remaining sections of this issue of Library Technology Reports discuss how I use quantitative data, what evidence I have uncovered to support e-book collection decisions, and how I apply quantitative findings in practical library settings. For the purposes of these discussions, I will use the following terminology:

Bibliographic record: A library catalog record that represents a specific title or resource.

Catalog clickthroughs: Counts of patron use of the catalog to access electronic full texts.

Citation analysis: Measurement of the impact of an article based on the number of times it has been cited.

Consortia reports: Consolidated usage reports for consortia. Often used to view usage linked to each individual consortium member.

COUNTER (Counting Online Usage of Networked Electronic Resources): An international initiative to improve the reliability of online usage statistics by providing a Code of Practice that standardizes the collection of usage data. It works to ensure vendor usage data is credible and comparable.

Cost data: Factual information concerning the cost of library materials, annual budget allocations, and general acquisitions budget.

FTE (full-time equivalent): The number of full-time faculty and students working or studying at a specific institution.

IP (Internet Protocol) address: A numerical label usually assigned to a library router or firewall that provides access to a private network (e.g., school or library network).

Link resolver statistics: Information regarding the pathways users take to access electronic resources.

Overlap data: Measurement of the degree of duplication across a collection.

Publication analysis: Measurement of impact by counting the research output of an author. Metrics include the number of peer-reviewed articles, coauthor collaborations, publication patterns, and extent of interdisciplinary research.

Title lists: Lists of e-book titles available in subscriptions, databases, or packages. These lists are generated and maintained by vendors and publishers.

Turnaway statistics: The number of patrons denied access to a specific title.

Vendor use data: Electronic use statistics provided by vendors.

Indicators and Performance Measures That Support Quantitative Research

I regularly use several indicators and performance measures to analyze e-book collections. Local and external data sources (listed in the section above) inform these investigations and provide the necessary “ingredients” to conduct cost analysis, examine return on investment, or measure the value of e-book collections to the community at CUL. Below is a breakdown of how I classify data and relate it to different indicators. 9

Input Cost Measures

Data source: Cost data pulled from Voyager reports (or your institution’s ILS).

In general, cost data demonstrates how funds are allocated across a budget. Analysis can identify areas where additional resources are required, monitor cost changes over time, and flag collection areas where funds can be pulled (e.g., overbudgeted funds, subject areas that no longer support the curriculum, etc.) and “reinvested” in the collection to support current information needs.

Each of the investigations described in the following chapter began with a review of cost data. I relied on a basic knowledge of how e-book acquisition budgets are distributed across subject areas or pooled to purchase interdisciplinary materials. Essentially, these investigations involved the identification of fund codes linked to subject areas, expenditures across set date ranges (e.g., calendar years, fiscal years, academic years), and bulk versus long-tail purchases.

Tip: When working with cost data and examining input cost measures, I have found it helpful to categorize data by fund type. E-book collections at CUL are often built with general income (GI) funds, endowments, and gifts. Policies and procedures regarding how funds can be transferred and what materials can be purchased impact how resources are allocated to build e-book collections. Before beginning a cost analysis project at your institution, it may be helpful to review the policies in place and determine how they relate to overarching institutional goals and collection priorities.
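
As a minimal sketch of this kind of roll-up, the snippet below totals invented payment records by fund type and fiscal year; the record layout, fund codes, fund types, and amounts are assumptions rather than the author's actual workflow.

```python
# Minimal sketch: roll up e-book expenditures by fund type and fiscal year.
# The record layout, fund codes, fund types, and amounts are invented.
from collections import defaultdict

payments = [
    {"fund": "HIST-GI",   "fund_type": "general income", "fiscal_year": 2023, "amount": 1200.00},
    {"fund": "HIST-END",  "fund_type": "endowment",      "fiscal_year": 2023, "amount":  450.00},
    {"fund": "CHEM-GI",   "fund_type": "general income", "fiscal_year": 2024, "amount": 2100.00},
    {"fund": "CHEM-GIFT", "fund_type": "gift",           "fiscal_year": 2024, "amount":  300.00},
]

totals = defaultdict(float)
for p in payments:
    totals[(p["fiscal_year"], p["fund_type"])] += p["amount"]

for (year, fund_type), amount in sorted(totals.items()):
    print(f"{year}  {fund_type:<15} ${amount:,.2f}")
```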

Collection Output Measures

Data sources: Cost data, title lists, overlap data, bibliographic records (particularly subject headings).

Collection output measures are related to the quantity and quality of output. Examples include the number of e-book titles included in a subscription or package deal acquired by a library, the number of e-book records acquired over a given period of time, the number of publishers and unique subject areas represented in an e-book collection, the currency of information (e.g., publication year), and the percentage of title overlap, or duplication, within a collection.

At this stage in my cost analysis projects, it is often necessary to combine data to create a snapshot of how funds flow in and out of subject areas to acquire research and teaching materials. For example, many of our large e-book packages are interdisciplinary. By pulling cost data, I can determine how the total cost was split across subject divisions based on fund code counts. Then, I break title lists apart by subject to determine what percentage of total content relates to each library division. By comparing the cost breakdown and title list breakdown, it is possible to determine what percentage of total content each library division receives and if it is on par with the division’s financial contribution.
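
Below is a minimal sketch of that comparison, using invented divisions and figures; in practice the shares would come from your own cost data and title lists.

```python
# Minimal sketch: compare each division's share of a package's cost with its
# share of the package's titles. Divisions and figures are invented.
cost_share   = {"Humanities": 5000.0, "Sciences": 9000.0, "Social Sciences": 6000.0}
title_counts = {"Humanities": 1400,   "Sciences": 1100,   "Social Sciences": 1500}

total_cost   = sum(cost_share.values())
total_titles = sum(title_counts.values())

for division in cost_share:
    paid     = cost_share[division] / total_cost
    received = title_counts[division] / total_titles
    note = "receives less content than it pays for" if paid > received else "on par or better"
    print(f"{division}: pays {paid:.0%} of cost, receives {received:.0%} of titles ({note})")
```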

Effectiveness Measures and Indicators

Data sources: Cost data, title lists, COUNTER reports, vendor reports, consortia reports, resolver statistics, turnaway statistics, Google Analytics.

Examining input and output measures is an effective way of determining how budgets are allocated and the quantity and quality of materials available to patrons. To develop a quantitative baseline for the general value of e-book collections, measures like rate of use, cost per use, and turnaway rates can be very effective.

Again, this form of analysis relies on data from multiple sources. The ability to combine cost data, title lists, and COUNTER data (or vendor data) has yielded actionable results at my library. For instance, I combine data from these three sources to measure the value of databases. By pulling cost data covering three fiscal years and matching title lists against COUNTER reports, I have been able to examine trends in annual increase rates, examine overlap between subscriptions in the same subject area, and calculate cost per use to determine what percentage of the user community makes use of subscriptions.
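
Below is a minimal sketch of the cost-per-use calculation described above, with invented costs and use counts standing in for real cost data and COUNTER reports.

```python
# Minimal sketch: cost per use for one subscription across three fiscal years.
# Costs and COUNTER-style use counts are invented for illustration.
costs = {2022: 10500.00, 2023: 11025.00, 2024: 11576.00}
uses  = {2022: 4200,     2023: 3900,     2024: 4800}

previous_cost = None
for year in sorted(costs):
    cost_per_use = costs[year] / uses[year]
    line = f"{year}: ${cost_per_use:.2f} per use"
    if previous_cost is not None:
        line += f" (annual cost +{costs[year] / previous_cost - 1:.1%} vs prior year)"
    print(line)
    previous_cost = costs[year]
```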

Finally, by looking at turnaway statistics (also found in COUNTER data), it is possible to determine if sufficient access is provided to users. For instance, I look at turnaway statistics to evaluate if e-books listed on course reading lists provide sufficient access to a class of students over a semester. In cases where access is limited to a single user, I may look at the budget to find areas where funds can be shifted to purchase simultaneous usage instead.

Together, the data sets mentioned above provide evidence for how funds are invested, if they are invested in materials that are heavily used by patrons, and if access models are suited to the needs of the local user community.

In some cases, particularly when dealing with foreign language materials, I have encountered challenges because COUNTER data is not provided, and in some cases, it is difficult to obtain vendor reports as well. In the absence of usage data, I have experimented with link resolver statistics to determine what information they provide about user activities and the value of e-book materials.

Link resolver statistics provide information about the pathways users take to access electronic resources. 10 Resolver statistics show that a patron made a “request” via the link resolver and started the process of trying to view a full text. If the patron successfully accesses the full text, this is counted as a “clickthrough.”

It is important to note that link resolver statistics and usage statistics (like COUNTER) are not comparable because they measure different activities. Link resolvers measure attempts to connect while usage data measures usage activity. However, comparing sets of link resolver statistics against each other may provide insight into which resources patrons attempt to access most frequently. This can provide a ballpark idea of resource value in cases where usage statistics are not available.

Domain Measures

Data sources: FTE (full-time equivalent), IP address, demographic information.

Domain measures relate to the user community served by a library. They include total population, demographic information, attributes (e.g., undergraduate level, graduate level), and information needs.

In my work, domain measures impact subscription or package costs because campus-wide access is often priced according to FTE. Due to the size of CUL’s student body, access to essential collections can become extremely expensive and fall outside of the budget range. When this occurs, examining patron access by IP address has opened the door to negotiation, particularly when dealing with content that is discipline-specific. For instance, when negotiating subscription prices for science materials, IP data provided evidence that usage is concentrated at the library router located in the Science and Engineering Library. This allowed science selectors to negotiate pricing models based around the FTE of natural science programs as opposed to the campus community as a whole.

Cost-Effectiveness Indicators

Data sources: COUNTER reports, vendor reports, turnaway statistics, citation analysis, publication analysis.

Cost-effectiveness indicators are related to measures like cost per use and ultimately examine the general return on investment. They evaluate the financial resources invested in a product and determine if the investment brings added value to the existing collection.

In my work, I often combine cost data with usage data to calculate cost per use and also capture usage trends spanning at least three calendar years. The results provide a benchmark regarding whether the financial investment in the product is equivalent to its general “demand” within the user community. A recent project with colleagues at the science and medical science libraries has examined how to use citation and publication data to determine general impact of electronic resources.

Challenges Presented by Quantitative Research

One of the challenges surrounding quantitative research in library environments is a lack of standardization across data sets, particularly vendor reports. The general situation has improved in recent years due to widespread compliance with the COUNTER Code of Practice, but there is still work to be done. It is difficult to interpret the meaning of vendor usage data that is still not COUNTER-compliant because clear definitions of use do not exist. This can create significant roadblocks when running quantitative projects that examine multiple e-book collections to get a sense of comparative value.

Also, usage data is generated outside of libraries by publishers or aggregators and vendors. Factors like turnover, company mergers, or password changes result in significant time lags between when usage statistics are generated and when libraries receive them. Also, some vendors pull down usage statistics after a period of months. In most cases, librarians need statistics captured over two or three years to meet reporting requirements, and data dating back this far can be difficult to obtain. Finally, annual usage statistics are provided according to calendar year. However, librarians look at usage by fiscal year and academic year as well. In many cases, this means that multiple usage reports have to be stitched together in order to capture the appropriate timeframe for reporting purposes. This process is labor intensive and takes a considerable amount of time to complete.
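
As a minimal sketch of that stitching step, the snippet below sums invented monthly use counts from two calendar-year reports into a July–June fiscal-year total; the month keys, counts, and fiscal-year boundary are assumptions.

```python
# Minimal sketch: stitch monthly use counts from two calendar-year reports into
# a July-June fiscal-year total. The month keys and counts are invented.
monthly_uses = {
    (2023, 7): 310, (2023, 8): 280, (2023, 9): 450, (2023, 10): 470,
    (2023, 11): 430, (2023, 12): 250, (2024, 1): 400, (2024, 2): 380,
    (2024, 3): 420, (2024, 4): 410, (2024, 5): 300, (2024, 6): 260,
}

def fiscal_year_total(uses, fy_start_year, start_month=7):
    """Sum uses from start_month of fy_start_year up to start_month of the next year."""
    total = 0
    for (year, month), count in uses.items():
        in_first_half  = (year == fy_start_year and month >= start_month)
        in_second_half = (year == fy_start_year + 1 and month < start_month)
        if in_first_half or in_second_half:
            total += count
    return total

print("FY 2023/24 total uses:", fiscal_year_total(monthly_uses, 2023))  # 4360
```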

These challenges emphasize an ongoing need to build positive working relationships with publishers, aggregators, and vendors to discuss challenges and develop solutions that benefit all stakeholders. It is important to note that libraries have valuable information that is not available to content providers, namely how e-books are discovered and used. Strong relationships allow for the transparent exchange of information between all parties, which ultimately benefits patrons by providing a seamless e-book experience.

Designing a Quantitative Research Framework

As mentioned earlier in this chapter, data stands in place of a reality we wish to study, quantify, and explain. In order to prevent scope creep and pull together bodies of data that add value to local work environments, it is essential to begin any quantitative research project with a set of clearly defined objectives, a strong understanding of the stakeholder group or audience, and knowledge of local information needs. These bits of information serve as markers to measure progress and ensure the project stays on track.

It is tempting to dive straight into a project and investigate if anecdotal information or assumptions are correct, but time spent developing a project outline is never wasted. The development of a successful plan requires “a clear idea of what it is to be achieved among the stakeholders. Clearly articulated objectives are the engine that drives the assessment process. This is one of the most difficult but most rewarding stages of the assessment process.” 11 Creating a roadmap for research projects can save countless hours down the line and ensures the correct quantitative method is selected. The plan also provides focus when the analysis phase of a project begins. Keep in mind that the data set you end up working with will be large; approaching it with stated goals and objectives saves significant amounts of time, which is especially important when working under a tight deadline!

Below is a checklist that I use at the beginning of any research project. It is based on recommendations made by Bakkalbasi, Sundre, and Fulcher. 12

While goals and objectives are closely related, they are not the same. Project goals should state exactly what you hope to learn or demonstrate through your research. Objectives state what you will assess or measure in order to achieve your overarching project goal.

Example of a project goal:

  • To learn what activities local patrons engage in when using library facilities.

Example of project objectives and checklist questions:

  • Consider how results may support improvement of collection development initiatives or lead to evaluation of existing workflows, policies, and procedures.
  • What questions and/or evidence are required by stakeholders?
  • What information do stakeholders require to make decisions?
  • How will results support the improvement of collection development initiatives?
  • How will results be made accessible to stakeholders?
  • Are the results intended for internal use, or will they be shared with the professional community?
  • Will findings be used to support grant or funding applications?
  • Is there a stated project deadline? What methods or resources will allow you to collect data, conduct analysis, and provide findings within the stated timeframe?
  • Does the project coincide with other activities that may require your attention (e.g., fiscal year, subscription renewal period)?
  • Are there colleagues at the library who may be able to provide assistance given the timeline of the project?
  • What data collected through the study cannot be shared with external stakeholders (e.g., cost data, FOIP compliance, etc.)?
  • Are there any permissions required before study results can be disseminated to external stakeholders?
  • Is clearance required to collect data from a user community?
  • What data sources are most valued and meaningful to your library?
  • What data sources will allow results to be applied at your library?
  • What data collection methods will be most effective?
  • What data collection methods will provide valid and reliable results?
  • Are there parameters such as specific fiscal years, calendar years, or academic years that you are required to report on?
  • How will data be summarized and described?
  • What features of the data set are most relevant to project objectives and goals?
  • What are the relationships between different data sets?
  • How is data evaluated?
  • How is data interpreted into meaningful results and conclusions?
  • What are the recommendations for action or improvements?
  • How will findings be communicated to stakeholders?

The data sets collected through quantitative methods are large and can easily be examined from a variety of perspectives. As the project develops, mentally frame emerging trends into a story that can be shared with stakeholders. This process determines how results will ultimately be applied to collection development initiatives. Background knowledge of the local patron community and institutional goals serves as a compass; use it to shape results that bring value to your library or the greater professional community.

From my experience, each quantitative project that I work on allows me to expand my skill sets and understand how I can structure my daily activities to support overarching institutional goals. During many projects, I have encountered unexpected challenges or had to improvise when quantitative methods did not yield expected results (e.g., low survey response rates). However, each challenge equipped me to take on larger projects, better understand how our budget is structured, or build stronger relationships with patrons and colleagues.

One skill that has been invaluable to my work is the ability to develop a quantitative research plan. I hope that by sharing this structure, along with performance measures and data sources that I use, readers have a behind-the-scenes view of my process and all of the moving parts that I work with to conduct e-book collection analysis. And of course, now to the fun part! It is time to get down to the nitty-gritty and demonstrate how I conduct analysis to inform budget decisions and collection development activities at CUL.

  • Bob Matthews and Liz Ross, Research Methods: A Practical Guide for the Social Sciences (Harlow, UK: Pearson Education, 2010), 45.
  • Ibid., 465.
  • Based on information provided by Stephen A. Roberts, Financial and Cost Management for Libraries and Information Management Services (London: Bowker-Saur, 1998), 140–41.
  • Darby Orcutt, Library Data: Empowering Practice and Persuasion (Santa Barbara, CA: Libraries Unlimited, 2009), 106.
  • Northwestern University Libraries, “DataBank: How to Interpret Your Data: Financial Support,” LibGuide, last updated December 8, 2015, http://libguides.northwestern.edu/c.php?g=115065&p=748741.
  • Roberts, Financial and Cost Management , 132.
  • For further information regarding indicators and performance measures, please see Roberts, Financial and Cost Management , 140–41.
  • Orcutt, Library Data , 107.
  • Nisa Bakkalbasi, Donna Sundre, and Kenton Fulcher, “Assessing Assessment: A Framework to Evaluate Assessment Practices and Progress for Library Collections and Services,” in Proceedings of the 2012 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment, October 29–31, 2012, Charlottesville, VA , ed. Steve Hiller, Martha Kyrillidou, Angela Pappalardo, Jim Self, and Amy Yeager (Washington, DC: Association of Research Libraries, 2013), 538-545.
  • Based on information provided by Matthews and Ross, Research Methods , 345.

Research for Medical Imaging and Radiation Sciences, pp. 71–96

Quantitative and Qualitative Research Methods

  • Andrew England
  • First Online: 03 January 2022

Quantitative research uses methods that seek to explain phenomena by collecting numerical data, which are then analysed mathematically, typically by statistics. With quantitative approaches, the data produced are always numerical; if there are no numbers, then the methods are not quantitative. Many phenomena lend themselves to quantitative methods because the relevant information is already available numerically. Qualitative methods provide a mechanism to provide answers based on the collection of non-numerical data (i.e., words, actions, behaviours). Both quantitative and qualitative methodologies are important in medical imaging and radiation therapy. In some instances, both quantitative and qualitative approaches can be combined into a mixed-methods approach. This chapter discusses all methodological approaches to research from both medical imaging and radiation therapy perspectives.

  • Quantitative research
  • Qualitative research
  • Mixed methods research
  • Experimental studies
  • Randomised controlled trials
  • Quasi-experimental studies
  • Thematic analysis
  • Statistical analysis

England, A. (2021). Quantitative and Qualitative Research Methods. In: Seeram, E., Davidson, R., England, A., McEntee, M.F. (eds) Research for Medical Imaging and Radiation Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-79956-4_5

Qualitative vs Quantitative Research Methods & Data Analysis

Saul Mcleod, PhD, and Olivia Guy-Evans, MSc, Simply Psychology

What is the difference between quantitative and qualitative?

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed in numerical terms. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.

Qualitative research, on the other hand, collects non-numerical data such as words, images, and sounds. The focus is on exploring subjective experiences, opinions, and attitudes, often through observation and interviews.

Qualitative research aims to produce rich and detailed descriptions of the phenomenon being studied, and to uncover new insights and meanings.

Quantitative data is information about quantities, and therefore numbers; qualitative data is descriptive and concerns phenomena that can be observed but not measured, such as language.

What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Denzin and Lincoln (1994, p. 2)

Interest in qualitative data came about as a result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the scientific approach taken by psychologists such as the behaviorists (e.g., Skinner).

Since psychologists study people, the traditional approach to science is not seen as an appropriate way of carrying out research, since it fails to capture the totality of human experience and the essence of being human. Exploring participants' experiences is known as a phenomenological approach (see Humanism).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Typical qualitative research questions ask what an experience feels like, how people talk about something, how they make sense of an experience, and how events unfold for people.

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews , documents, focus groups , case study research , and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and in consequence, how they act within the social world.

The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience. Denzin and Lincoln (1994, p. 14)

Here are some examples of qualitative data:

Interview transcripts: Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns, and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations: The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews: These generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words, and helps the researcher develop a real sense of a person's understanding of a situation.

Diaries or journals: Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis.

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded.


Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made to a wider context than the one studied with confidence.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of an area is necessary to interpret qualitative data. Great care must be taken when doing so, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by the scientific, more positivistic inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables , make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomena across different settings/contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative information.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).
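As a minimal sketch of this kind of coding (the question wording, response options, and pandas column names below are invented for illustration, not taken from any particular study):

```python
import pandas as pd

# Hypothetical raw questionnaire responses: one closed question and one 1-5 rating scale
responses = pd.DataFrame({
    "works_from_home": ["yes", "no", "yes", "yes", "no"],  # closed question
    "productivity_rating": [4, 2, 5, 3, 4],                # 1-5 rating scale (already numeric)
})

# Code the closed question into numbers so it can be analysed statistically
responses["works_from_home_coded"] = responses["works_from_home"].map({"no": 0, "yes": 1})

print("Proportion answering 'yes':", responses["works_from_home_coded"].mean())
print("Mean productivity rating:", responses["productivity_rating"].mean())
```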

Experimental methods limit the ways in which research participants can react to and express appropriate social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including mental health research. Here are a few examples:

One example is the Experiences in Close Relationships Scale (ECR), a self-report questionnaire widely used to assess adult attachment styles .

The ECR provides quantitative data that can be used to assess attachment styles and predict relationship outcomes.

Neuroimaging data: Neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function.

This data can be analyzed to identify brain regions involved in specific mental processes or disorders.

The Beck Depression Inventory (BDI), for example, is a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals.

The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher scores indicating more severe depressive symptoms. 
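As a simple illustration of how such item ratings become quantitative data, here is a minimal sketch; the 21-item length and 0–3 scoring follow the description above, but the individual ratings are invented:

```python
# Hypothetical ratings for one respondent on a 21-item questionnaire scored 0-3 per item
item_scores = [1, 0, 2, 1, 0, 3, 1, 2, 0, 1, 1, 0, 2, 1, 0, 1, 2, 0, 1, 1, 2]

assert len(item_scores) == 21, "expected 21 items"
assert all(0 <= score <= 3 for score in item_scores), "each item is scored 0 to 3"

# The total score is the simple sum of item scores; the possible range is 0 to 63
total_score = sum(item_scores)
print(f"Total score: {total_score} / 63")
```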

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized control study).
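As a minimal sketch of this distinction (the scores and group sizes are invented; SciPy's independent-samples t-test is used here as one common inferential test):

```python
import statistics
from scipy import stats

# Hypothetical outcome scores for an intervention group and a control group (invented data)
intervention = [14, 18, 16, 20, 17, 15, 19]
control = [12, 13, 15, 11, 14, 13, 12]

# Descriptive statistics: summarise each group
for name, scores in (("intervention", intervention), ("control", control)):
    print(f"{name}: mean = {statistics.mean(scores):.1f}, sd = {statistics.stdev(scores):.1f}")

# Inferential statistics: independent-samples t-test comparing the two group means
result = stats.ttest_ind(intervention, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```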

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meaning of the questions they may have for those participants (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of a focus on theory or hypothesis testing rather than on theory or hypothesis generation.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS. Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics. Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research: What method for nursing? Journal of Advanced Nursing, 20(4), 716–721.

Denscombe, M. (2010). The good research guide: For small-scale social research. McGraw Hill.

Denzin, N., & Lincoln, Y. (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Glaser, B. G., Strauss, A. L., & Strutzel, E. (1968). The discovery of grounded theory: Strategies for qualitative research. Nursing Research, 17(4), 364.

Minichiello, V. (1990). In-depth interviewing: Researching people. Longman Cheshire.

Punch, K. (1998). Introduction to social research: Quantitative and qualitative approaches. London: Sage.

Further Information

  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis


BMJ Global Health, 4(Suppl 1), 2019

Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods

1 School of Social Sciences, Bangor University, Wales, UK

Andrew Booth

2 School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Graham Moore

3 School of Social Sciences, Cardiff University, Wales, UK

Kate Flemming

4 Department of Health Sciences, The University of York, York, UK

Özge Tunçalp

5 Department of Reproductive Health and Research including UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland

Elham Shakibazadeh

6 Department of Health Education and Promotion, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran

Associated Data

bmjgh-2018-000893supp001.pdf

bmjgh-2018-000893supp002.pdf

bmjgh-2018-000893supp003.pdf

bmjgh-2018-000893supp005.pdf

bmjgh-2018-000893supp004.pdf

Guideline developers are increasingly dealing with more difficult decisions concerning whether to recommend complex interventions in complex and highly variable health systems. There is greater recognition that both quantitative and qualitative evidence can be combined in a mixed-method synthesis and that this can be helpful in understanding how complexity impacts on interventions in specific contexts. This paper aims to clarify the different purposes, review designs, questions, synthesis methods and opportunities to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of guidelines developed by WHO, which incorporated quantitative and qualitative evidence, are used to illustrate possible uses of mixed-method reviews and evidence. Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Consideration is given to the opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence. Recommendations are made concerning the future development of methods to better address questions in systematic reviews and guidelines that adopt a complexity perspective.

Summary box

  • When combined in a mixed-method synthesis, quantitative and qualitative evidence can potentially contribute to understanding how complex interventions work and for whom, and how the complex health systems into which they are implemented respond and adapt.
  • The different purposes and designs for combining quantitative and qualitative evidence in a mixed-method synthesis for a guideline process are described.
  • Questions relevant to gaining an understanding of the complexity of complex interventions and the wider health systems within which they are implemented that can be addressed by mixed-method syntheses are presented.
  • The practical methodological guidance in this paper is intended to help guideline producers and review authors commission and conduct mixed-method syntheses where appropriate.
  • If more mixed-method syntheses are conducted, guideline developers will have greater opportunities to access this evidence to inform decision-making.

Introduction

Recognition has grown that while quantitative methods remain vital, they are usually insufficient to address research questions related to complex health systems. 1 Quantitative methods rely on an ability to anticipate what must be measured in advance. Introducing change into a complex health system gives rise to emergent reactions, which cannot be fully predicted in advance. Emergent reactions can often only be understood through combining quantitative methods with a more flexible qualitative lens. 2 Adopting a more pluralist position opens up a diverse range of research options to the researcher, depending on the research question being investigated. 3–5 As a consequence, where a research study sits within the multitude of methods available is driven by the question being asked, rather than any particular methodological or philosophical stance. 6

Publication of guidance on designing complex intervention process evaluations and other works advocating mixed-methods approaches to intervention research have stimulated better quality evidence for synthesis. 1 7–13 Methods for synthesising qualitative 14 and mixed-method evidence have been developed or are in development. Mixed-method research and review definitions are outlined in box 1 .

Defining mixed-method research and reviews

Pluye and Hong 52 define mixed-methods research as “a research approach in which a researcher integrates (a) qualitative and quantitative research questions, (b) qualitative research methods* and quantitative research designs, (c) techniques for collecting and analyzing qualitative and quantitative evidence, and (d) qualitative findings and quantitative results”. A mixed-method synthesis can integrate quantitative, qualitative and mixed-method evidence or data from primary studies.† Mixed-method primary studies are usually disaggregated into quantitative and qualitative evidence and data for the purposes of synthesis. Thomas and Harden further define three ways in which reviews are mixed. 53

  • The types of studies included and hence the type of findings to be synthesised (ie, qualitative/textual and quantitative/numerical).
  • The types of synthesis method used (eg, statistical meta-analysis and qualitative synthesis).
  • The mode of analysis: theory testing AND theory building.

*A qualitative study is one that uses qualitative methods of data collection and analysis to produce a narrative understanding of the phenomena of interest. Qualitative methods of data collection may include, for example, interviews, focus groups, observations and analysis of documents.

†The Cochrane Qualitative and Implementation Methods group coined the term ‘qualitative evidence synthesis’ to mean that the synthesis could also include qualitative data. For example, qualitative data from case studies, grey literature reports and open-ended questions from surveys. ‘Evidence’ and ‘data’ are used interchangeably in this paper.

This paper is one of a series that aims to explore the implications of complexity for systematic reviews and guideline development, commissioned by WHO. This paper is concerned with the methodological implications of including quantitative and qualitative evidence in mixed-method systematic reviews and guideline development for complex interventions. The guidance was developed through a process of bringing together experts in the field, literature searching and consensus building with end users (guideline developers, clinicians and reviewers). We clarify the different purposes, review designs, questions and synthesis methods that may be applicable to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of WHO guidelines that incorporated quantitative and qualitative evidence are used to illustrate possible uses of mixed-method reviews and mechanisms of integration ( table 1 , online supplementary files 1–3 ). Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process are presented. Specific considerations when using an evidence to decision framework such as the Developing and Evaluating Communication strategies to support Informed Decisions and practice based on Evidence (DECIDE) framework 15 or the new WHO-INTEGRATE evidence to decision framework 16 at the review design and evidence to decision stage are outlined. See online supplementary file 4 for an example of a health systems DECIDE framework and Rehfuess et al 16 for the new WHO-INTEGRATE framework. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence in guidelines of complex interventions that take a complexity perspective and health systems focus.

Designs and methods and their use or applicability in guidelines and systematic reviews taking a complexity perspective

Supplementary data

Taking a complexity perspective.

The first paper in this series 17 outlines aspects of complexity associated with complex interventions and health systems that can potentially be explored by different types of evidence, including synthesis of quantitative and qualitative evidence. Petticrew et al 17 distinguish between a complex interventions perspective and a complex systems perspective. A complex interventions perspective defines interventions as having “implicit conceptual boundaries, representing a flexible, but common set of practices, often linked by an explicit or implicit theory about how they work”. A complex systems perspective differs in that “ complexity arises from the relationships and interactions between a system’s agents (eg, people, or groups that interact with each other and their environment), and its context. A system perspective conceives the intervention as being part of the system, and emphasises changes and interconnections within the system itself”. Aspects of complexity associated with implementation of complex interventions in health systems that could potentially be addressed with a synthesis of quantitative and qualitative evidence are summarised in table 2 . Another paper in the series outlines criteria used in a new evidence to decision framework for making decisions about complex interventions implemented in complex systems, against which the need for quantitative and qualitative evidence can be mapped. 16 A further paper 18 that explores how context is dealt with in guidelines and reviews taking a complexity perspective also recommends using both quantitative and qualitative evidence to better understand context as a source of complexity. Mixed-method syntheses of quantitative and qualitative evidence can also help with understanding of whether there has been theory failure and or implementation failure. The Cochrane Qualitative and Implementation Methods Group provide additional guidance on exploring implementation and theory failure that can be adapted to address aspects of complexity of complex interventions when implemented in health systems. 19

Health-system complexity-related questions that a synthesis of quantitative and qualitative evidence could address (derived from Petticrew et al 17 )

It may not be apparent which aspects of complexity or which elements of the complex intervention or health system can be explored in a guideline process, or whether combining qualitative and quantitative evidence in a mixed-method synthesis will be useful, until the available evidence is scoped and mapped. 17 20 A more extensive lead in phase is typically required to scope the available evidence, engage with stakeholders and to refine the review parameters and questions that can then be mapped against potential review designs and methods of synthesis. 20 At the scoping stage, it is also common to decide on a theoretical perspective 21 or undertake further work to refine a theoretical perspective. 22 This is also the stage to begin articulating the programme theory of the complex intervention that may be further developed to refine an understanding of complexity and show how the intervention is implemented in and impacts on the wider health system. 17 23 24 In practice, this process can be lengthy, iterative and fluid with multiple revisions to the review scope, often developing and adapting a logic model 17 as the available evidence becomes known and the potential to incorporate different types of review designs and syntheses of quantitative and qualitative evidence becomes better understood. 25 Further questions, propositions or hypotheses may emerge as the reviews progress and therefore the protocols generally need to be developed iteratively over time rather than a priori.

Following a scoping exercise and definition of key questions, the next step in the guideline development process is to identify existing or commission new systematic reviews to locate and summarise the best available evidence in relation to each question. For example, case study 2, ‘Optimising health worker roles for maternal and newborn health through task shifting’, included quantitative reviews that did and did not take an additional complexity perspective, and qualitative evidence syntheses that were able to explain how specific elements of complexity impacted on intervention outcomes within the wider health system. Further understanding of health system complexity was facilitated through the conduct of additional country-level case studies that contributed to an overall understanding of what worked and what happened when lay health worker interventions were implemented. See table 1 online supplementary file 2 .

There are a few existing examples, which we draw on in this paper, but integrating quantitative and qualitative evidence in a mixed-method synthesis is relatively uncommon in a guideline process. Box 2 includes a set of key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in mixed-methods design might ask. Subsequent sections provide more information and signposting to further reading to help address these key questions.

Key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in a mixed-methods design might ask

  • WHAT: What types of questions are being asked? Compound questions requiring both quantitative and qualitative evidence? Questions requiring mixed-methods studies? Separate quantitative and qualitative questions?
  • WHAT: What types of studies and data will be included? Separate quantitative and qualitative research studies? Related quantitative and qualitative research studies? Mixed-methods studies? Quantitative unpublished data and/or qualitative unpublished data, eg, narrative survey data?
  • WHEN: When will the quantitative and qualitative evidence be brought together? Throughout the review? Following separate reviews? At the question point? At the synthesis point? At the evidence to recommendations stage? Or a combination?
  • HOW: How will the evidence be integrated and presented? Narrative synthesis or summary? Quantitising approach, eg, frequency analysis? Qualitising approach, eg, thematic synthesis? Tabulation? Logic model? Conceptual model/framework? Graphical approach?

  • WHICH: Which mixed-method designs, methodologies and methods best fit into a guideline process to inform recommendations?

Complexity-related questions that a synthesis of quantitative and qualitative evidence can potentially address

Petticrew et al 17 define the different aspects of complexity and examples of complexity-related questions that can potentially be explored in guidelines and systematic reviews taking a complexity perspective. Relevant aspects of complexity outlined by Petticrew et al 17 are summarised in table 2 below, together with the corresponding questions that could be addressed in a synthesis combining qualitative and quantitative evidence. Importantly, the aspects of complexity and their associated concepts of interest have however yet to be translated fully in primary health research or systematic reviews. There are few known examples where selected complexity concepts have been used to analyse or reanalyse a primary intervention study. Most notable is Chandler et al 26 who specifically set out to identify and translate a set of relevant complexity theory concepts for application in health systems research. Chandler then reanalysed a trial process evaluation using selected complexity theory concepts to better understand the complex causal pathway in the health system that explains some aspects of complexity in table 2 .

Rehfuess et al 16 also recommend upfront consideration of the WHO-INTEGRATE evidence to decision criteria when planning a guideline and formulating questions. The criteria reflect WHO norms and values and take account of a complexity perspective. The framework can be used by guideline development groups as a menu to decide which criteria to prioritise, and which study types and synthesis methods can be used to collect evidence for each criterion. Many of the criteria and their related questions can be addressed using a synthesis of quantitative and qualitative evidence: the balance of benefits and harms, human rights and sociocultural acceptability, health equity, societal implications and feasibility (see table 3). Similar aspects in the DECIDE framework 15 could also be addressed using synthesis of qualitative and quantitative evidence.

Integrate evidence to decision framework criteria, example questions and types of studies to potentially address these questions (derived from Rehfuess et al 16)

GIS, Geographical Information System; RCT, randomised controlled trial.

Questions as anchors or compasses

Questions can serve as an ‘anchor’ by articulating the specific aspects of complexity to be explored (eg, Is successful implementation of the intervention context dependent?). 27 Anchor questions such as “How does intervention x impact on socioeconomic inequalities in health behaviour/outcome x” are the kind of health system question that requires a synthesis of both quantitative and qualitative evidence and hence a mixed-method synthesis. Quantitative evidence can quantify the difference in effect, but does not answer the question of how . The ‘how’ question can be partly answered with quantitative and qualitative evidence. For example, quantitative evidence may reveal where socioeconomic status and inequality emerges in the health system (an emergent property) by exploring questions such as “ Does patterning emerge during uptake because fewer people from certain groups come into contact with an intervention in the first place? ” or “ are people from certain backgrounds more likely to drop out, or to maintain effects beyond an intervention differently? ” Qualitative evidence may help understand the reasons behind all of these mechanisms. Alternatively, questions can act as ‘compasses’ where a question sets out a starting point from which to explore further and to potentially ask further questions or develop propositions or hypotheses to explore through a complexity perspective (eg, What factors enhance or hinder implementation?). 27 Other papers in this series provide further guidance on developing questions for qualitative evidence syntheses and guidance on question formulation. 14 28

For anchor and compass questions, additional application of a theory (eg, complexity theory) can help focus evidence synthesis and presentation to explore and explain complexity issues. 17 21 Development of a review specific logic model(s) can help to further refine an initial understanding of any complexity-related issues of interest associated with a specific intervention, and if appropriate the health system or section of the health system within which to contextualise the review question and analyse data. 17 23–25 Specific tools are available to help clarify context and complex interventions. 17 18

Even if a complexity perspective, and certain criteria within evidence to decision frameworks, are deemed relevant and desirable by guideline developers, such a perspective can only be pursued if the evidence is available. Careful scoping using knowledge maps or scoping reviews will help inform development of questions that are answerable with available evidence. 20 If evidence of effect is not available, then a different approach will be required to develop questions leading to a more general narrative understanding of what happened when complex interventions were implemented in a health system (such as in case study 3—risk communication guideline). This does not mean that the original questions, for which no evidence was found when scoping the literature, were unimportant. An important function of creating a knowledge map is also to identify gaps to inform a future research agenda.

Table 2 and online supplementary files 1–3 outline examples of questions in the three case studies, which were all ‘COMPASS’ questions for the qualitative evidence syntheses.

Types of integration and synthesis designs in mixed-method reviews

The shift towards integration of qualitative and quantitative evidence in primary research has, in recent years, begun to be mirrored within research synthesis. 29–31 The natural extension to undertaking quantitative or qualitative reviews has been the development of methods for integrating qualitative and quantitative evidence within reviews, and within the guideline process using evidence to decision-frameworks. Advocating the integration of quantitative and qualitative evidence assumes a complementarity between research methodologies, and a need for both types of evidence to inform policy and practice. Below, we briefly outline the current designs for integrating qualitative and quantitative evidence within a mixed-method review or synthesis.

One of the early approaches to integrating qualitative and quantitative evidence detailed by Sandelowski et al 32 advocated three basic review designs: segregated, integrated and contingent designs, which have been further developed by Heyvaert et al 33 ( box 3 ).

Segregated, integrated and contingent designs 32 33

Segregated design.

Conventional separate distinction between quantitative and qualitative approaches based on the assumption they are different entities and should be treated separately; can be distinguished from each other; their findings warrant separate analyses and syntheses. Ultimately, the separate synthesis results can themselves be synthesised.

Integrated design

The methodological differences between qualitative and quantitative studies are minimised as both are viewed as producing findings that can be readily synthesised into one another because they address the same research purposes and questions. Transformation involves either turning qualitative data into quantitative form (quantitising) or turning quantitative findings into qualitative form (qualitising) to facilitate their integration (a minimal quantitising sketch follows box 3 below).

Contingent design

Takes a cyclical approach to synthesis, with the findings from one synthesis informing the focus of the next synthesis, until all the research objectives have been addressed. Studies are not necessarily grouped and categorised as qualitative or quantitative.
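To make the quantitising transformation mentioned in box 3 concrete, here is a minimal Python sketch; the codes and extracts are invented, and the frequency counts stand in for the kind of quantified output that could then sit alongside quantitative results:

```python
from collections import Counter

# Hypothetical qualitative codes applied to interview extracts during a synthesis (invented labels)
coded_extracts = [
    "barrier: staff workload", "facilitator: community trust", "barrier: staff workload",
    "barrier: cost", "facilitator: community trust", "facilitator: training",
    "barrier: staff workload",
]

# Quantitising: transform the coded qualitative data into frequency counts
frequencies = Counter(coded_extracts)
for code, count in frequencies.most_common():
    print(f"{code}: appears in {count} extracts")
```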

A recent review of more than 400 systematic reviews 34 combining quantitative and qualitative evidence identified two main synthesis designs—convergent and sequential. In a convergent design, qualitative and quantitative evidence is collated and analysed in a parallel or complementary manner, whereas in a sequential synthesis, the collation and analysis of quantitative and qualitative evidence takes place in a sequence with one synthesis informing the other ( box 4 ). 6 These designs can be seen to build on the work of Sandelowski et al , 32 35 particularly in relation to the transformation of data from qualitative to quantitative (and vice versa) and the sequential synthesis design, with a cyclical approach to reviewing that evokes Sandelowski’s contingent design.

Convergent and sequential synthesis designs 34

Convergent synthesis design.

Qualitative and quantitative research is collected and analysed at the same time in a parallel or complementary manner. Integration can occur at three points:

a. Data-based convergent synthesis design

All included studies are analysed using the same methods and results presented together. As only one synthesis method is used, data transformation occurs (qualitised or quantitised). Usually addresses one review question.

b. Results-based convergent synthesis design

Qualitative and quantitative data are analysed and presented separately but integrated using a further synthesis method, eg, narratively, in tables or matrices, or by reanalysing evidence (see the matrix sketch after box 4 below). The results of both syntheses are combined in a third synthesis. Usually addresses an overall review question with subquestions.

c. Parallel-results convergent synthesis design

Qualitative and quantitative data are analysed and presented separately with integration occurring in the interpretation of results in the discussion section. Usually addresses two or more complementary review questions.

Sequential synthesis design

A two-phase approach, data collection and analysis of one type of evidence (eg, qualitative), occurs after and is informed by the collection and analysis of the other type (eg, quantitative). Usually addresses an overall question with subquestions with both syntheses complementing each other.
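As an illustration of results-based integration via tables or matrices, a juxtaposition matrix can place quantitative results next to related qualitative findings so that the third synthesis can read across both. All intervention components, effect estimates and findings below are invented for the example:

```python
import pandas as pd

# Hypothetical juxtaposition matrix used in a third, cross-study synthesis (all entries invented)
matrix = pd.DataFrame({
    "intervention_component": ["home visits", "SMS reminders", "peer support"],
    "quantitative_result": ["RR 0.82 (0.70 to 0.96)", "RR 0.97 (0.85 to 1.10)", "no trials identified"],
    "qualitative_finding": [
        "valued when delivered by trusted local workers",
        "often missed where phones are shared within households",
        "acceptability depends on perceived credibility of peers",
    ],
})

print(matrix.to_string(index=False))
```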

The three case studies ( table 1 , online supplementary files 1–3 ) illustrate the diverse combination of review designs and synthesis methods that were considered the most appropriate for specific guidelines.

Methods for conducting mixed-method reviews in the context of guidelines for complex interventions

In this section, we draw on examples where specific review designs and methods have been or can be used to explore selected aspects of complexity in guidelines or systematic reviews. We also identify other review methods that could potentially be used to explore aspects of complexity. Of particular note, we could not find any specific examples of systematic methods to synthesise highly diverse research designs as advocated by Petticrew et al 17 and summarised in tables 2 and 3 . For example, we could not find examples of methods to synthesise qualitative studies, case studies, quantitative longitudinal data, possibly historical data, effectiveness studies providing evidence of differential effects across different contexts, and system modelling studies (eg, agent-based modelling) to explore system adaptivity.

There are different ways that quantitative and qualitative evidence can be integrated into a review and then into a guideline development process. In practice, some methods enable integration of different types of evidence in a single synthesis, while in other methods, the single systematic review may include a series of stand-alone reviews or syntheses that are then combined in a cross-study synthesis. Table 1 provides an overview of the characteristics of different review designs and methods and guidance on their applicability for a guideline process. Designs and methods that have already been used in WHO guideline development are described in part A of the table. Part B outlines a design and method that can be used in a guideline process, and part C covers those that have the potential to integrate quantitative, qualitative and mixed-method evidence in a single review design (such as meta-narrative reviews and Bayesian syntheses), but their application in a guideline context has yet to be demonstrated.

Points of integration when integrating quantitative and qualitative evidence in guideline development

Depending on the review design (see boxes 3 and 4 ), integration can potentially take place at a review team and design level, and more commonly at several key points of the review or guideline process. The following sections outline potential points of integration and associated practical considerations when integrating quantitative and qualitative evidence in guideline development.

Review team level

In a guideline process, it is common for syntheses of quantitative and qualitative evidence to be done separately by different teams and then to integrate the evidence. A practical consideration relates to the organisation, composition and expertise of the review teams and ways of working. If the quantitative and qualitative reviews are being conducted separately and then brought together by the same team members, who are equally comfortable operating within both paradigms, then a consistent approach across both paradigms becomes possible. If, however, a team is being split between the quantitative and qualitative reviews, then the strengths of specialisation can be harnessed, for example, in quality assessment or synthesis. Optimally, at least one, if not more, of the team members should be involved in both quantitative and qualitative reviews to offer the possibility of making connections throughout the review and not simply at pre-agreed junctures. This mirrors O’Cathain’s conclusion that mixed-methods primary research tends to work only when there is a principal investigator who values and is able to oversee integration. 9 10 While the above decisions have been articulated in the context of two types of evidence, variously quantitative and qualitative, they equally apply when considering how to handle studies reporting a mixed-method study design, where data are usually disaggregated into quantitative and qualitative for the purposes of synthesis (see case study 3—risk communication in humanitarian disasters).

Question formulation

Clearly specified key question(s), derived from a scoping or consultation exercise, will make it clear if quantitative and qualitative evidence is required in a guideline development process and which aspects will be addressed by which types of evidence. For the remaining stages of the process, as documented below, a review team faces challenges as to whether to handle each type of evidence separately, regardless of whether sequentially or in parallel, with a view to joining the two products on completion or to attempt integration throughout the review process. In each case, the underlying choice is of efficiencies and potential comparability vs sensitivity to the underlying paradigm.

Once key questions are clearly defined, the guideline development group typically needs to consider whether to conduct a single sensitive search to address all potential subtopics (lumping) or whether to conduct specific searches for each subtopic (splitting). 36 A related consideration is whether to search separately for qualitative, quantitative and mixed-method evidence ‘streams’ or whether to conduct a single search and then identify specific study types at the subsequent sifting stage. These two considerations often mean a trade-off between a single search process involving very large numbers of records or a more protracted search process retrieving smaller numbers of records. Both approaches have advantages and choice may depend on the respective availability of resources for searching and sifting.

Screening and selecting studies

Closely related to decisions around searching are considerations relating to screening and selecting studies for inclusion in a systematic review. An important consideration here is whether the review team will screen records for all review types, regardless of their subsequent involvement (‘altruistic sifting’), or specialise in screening for the study type with which they are most familiar. The risk of missing relevant reports might be minimised by whole team screening for empirical reports in the first instance and then coding them for a specific quantitative, qualitative or mixed-methods report at a subsequent stage.

Assessment of methodological limitations in primary studies

Within a guideline process, review teams may be more limited in their choice of instruments to assess methodological limitations of primary studies as there are mandatory requirements to use the Cochrane risk of bias tool 37 to feed into Grading of Recommendations Assessment, Development and Evaluation (GRADE) 38 or to select from a small pool of qualitative appraisal instruments in order to apply the Confidence in the Evidence from Reviews of Qualitative Research (GRADE-CERQual) approach 39 to assess the overall certainty or confidence in findings. The Cochrane Qualitative and Implementation Methods Group has recently issued guidance on the selection of appraisal instruments and core assessment criteria. 40 The Mixed-Methods Appraisal Tool, which is currently undergoing further development, offers a single quality assessment instrument for quantitative, qualitative and mixed-methods studies. 41 Other options include using corresponding instruments from within the same ‘stable’, for example, using different Critical Appraisal Skills Programme instruments. 42 While using instruments developed by the same team or organisation may achieve a degree of epistemological consonance, benefits may come more from consistency of approach and reporting rather than from a shared view of quality. Alternatively, a more paradigm-sensitive approach would involve selecting the best instrument for each respective review while deferring challenges from later heterogeneity of reporting.

Data extraction

The way in which data and evidence are extracted from primary research studies for review will be influenced by the type of integrated synthesis being undertaken and the review purpose. Initially, decisions need to be made regarding the nature and type of data and evidence that are to be extracted from the included studies. Method-specific reporting guidelines 43 44 provide a good template as to what quantitative and qualitative data it is potentially possible to extract from different types of method-specific study reports, although in practice reporting quality varies. Online supplementary file 5 provides a hypothetical example of the different types of studies from which quantitative and qualitative evidence could potentially be extracted for synthesis.

The decisions around what data or evidence to extract will be guided by how ‘integrated’ the mixed-method review will be. For those reviews where the quantitative and qualitative findings of studies are synthesised separately and integrated at the point of findings (eg, segregated or contingent approaches or sequential synthesis design), separate data extraction approaches will likely be used.

Where integration occurs during the process of the review (eg, integrated approach or convergent synthesis design), an integrated approach to data extraction may be considered, depending on the purpose of the review. This may involve the use of a data extraction framework, the choice of which needs to be congruent with the approach to synthesis chosen for the review. 40 45 The integrative or theoretical framework may be decided on a priori if a pre-developed theoretical or conceptual framework is available in the literature. 27 The development of a framework may alternatively arise from the reading of the included studies, in relation to the purpose of the review, early in the process. The Cochrane Qualitative and Implementation Methods Group provide further guidance on extraction of qualitative data, including use of software. 40

Synthesis and integration

Relatively few synthesis methods start off being integrated from the beginning, and these methods have generally been subject to less testing and evaluation particularly in a guideline context (see table 1 ). A review design that started off being integrated from the beginning may be suitable for some guideline contexts (such as in case study 3—risk communication in humanitarian disasters—where there was little evidence of effect), but in general if there are sufficient trials then a separate systematic review and meta-analysis will be required for a guideline. Other papers in this series offer guidance on methods for synthesising quantitative 46 and qualitative evidence 14 in reviews that take a complexity perspective. Further guidance on integrating quantitative and qualitative evidence in a systematic review is provided by the Cochrane Qualitative and Implementation Methods Group. 19 27 29 40 47

Types of findings produced by specific methods

It is highly likely (unless there are well-designed process evaluations) that the primary studies may not themselves seek to address the complexity-related questions required for a guideline process, in which case review authors will need to configure the available evidence and transform the evidence through the synthesis process to produce explanations, propositions and hypotheses (ie, findings) that were not obvious at primary study level. It is important that guideline commissioners, developers and review authors are aware that specific methods are intended to produce a type of finding with a specific purpose (such as developing new theory in the case of meta-ethnography). 48 Case study 1 (antenatal care guideline) provides an example of how a meta-ethnography was used to develop a new theory as an end product, 48 49 as well as framework synthesis which produced descriptive and explanatory findings that were more easily incorporated into the guideline process. 27 The definitions (box 5) may be helpful when defining the different types of findings.

Different levels of findings

Descriptive findings —qualitative evidence-driven translated descriptive themes that do not move beyond the primary studies.

Explanatory findings —may either be at a descriptive or theoretical level. At the descriptive level, qualitative evidence is used to explain phenomena observed in quantitative results, such as why implementation failed in specific circumstances. At the theoretical level, the transformed and interpreted findings that go beyond the primary studies can be used to explain the descriptive findings. The latter description is generally the accepted definition in the wider qualitative community.

Hypothetical or theoretical finding —qualitative evidence-driven transformed themes (or lines of argument) that go beyond the primary studies. Although similar, Thomas and Harden 56 make a distinction in the purposes between two types of theoretical findings: analytical themes and the product of meta-ethnographies, third-order interpretations. 48

Analytical themes are a product of interrogating descriptive themes by placing the synthesis within an external theoretical framework (such as the review question and subquestions) and are considered more appropriate when a specific review question is being addressed (eg, in a guideline or to inform policy). 56

Third-order interpretations come from translating studies into one another while preserving the original context and are more appropriate when a body of literature is being explored in and of itself with broader or emergent review questions. 48

Bringing mixed-method evidence together in evidence to decision (EtD) frameworks

A critical element of guideline development is the formulation of recommendations by the Guideline Development Group, and EtD frameworks help to facilitate this process. 16 The EtD framework can also be used as a mechanism to integrate and display quantitative and qualitative evidence and findings mapped against the EtD framework domains with hyperlinks to more detailed evidence summaries from contributing reviews (see table 1). It is commonly the EtD framework that enables the findings of the separate quantitative and qualitative reviews to be brought together in a guideline process. Specific challenges when populating the DECIDE evidence to decision framework 15 were noted in case study 3 (risk communication in humanitarian disasters) as there was an absence of intervention effect data and the interventions to communicate public health risks were context specific and varied. These problems would not, however, have been addressed by substitution of the DECIDE framework with the new INTEGRATE 16 evidence to decision framework. A different type of EtD framework needs to be developed for reviews that do not include sufficient evidence of intervention effect.

Mixed-method review and synthesis methods are generally the least developed of all systematic review methods. It is acknowledged that methods for combining quantitative and qualitative evidence are generally poorly articulated. 29 50 There are however some fairly well-established methods for using qualitative evidence to explore aspects of complexity (such as contextual, implementation and outcome complexity), which can be combined with evidence of effect (see sections A and B of table 1 ). 14 There are good examples of systematic reviews that use these methods to combine quantitative and qualitative evidence, and examples of guideline recommendations that were informed by evidence from both quantitative and qualitative reviews (eg, case studies 1–3). With the exception of case study 3 (risk communication), the quantitative and qualitative reviews for these specific guidelines have been conducted separately, and the findings subsequently brought together in an EtD framework to inform recommendations.

Other mixed-method review designs have potential to contribute to understanding of complex interventions and to explore aspects of wider health systems complexity but have not been sufficiently developed and tested for this specific purpose, or used in a guideline process (section C of table 1 ). Some methods such as meta-narrative reviews also explore different questions to those usually asked in a guideline process. Methods for processing (eg, quality appraisal) and synthesising the highly diverse evidence suggested in tables 2 and 3 that are required to explore specific aspects of health systems complexity (such as system adaptivity) and to populate some sections of the INTEGRATE EtD framework remain underdeveloped or in need of development.

In addition to the methodological development mentioned above, there is as yet no GRADE approach 38 for assessing confidence in findings developed from combined quantitative and qualitative evidence. Another paper in this series outlines how to deal with complexity and the grading of different types of quantitative evidence, 51 and the GRADE CERQual approach for qualitative findings is described elsewhere, 39 but both of these approaches are applied to method-specific rather than mixed-method findings. An unofficial adaptation of GRADE was therefore used in the risk communication guideline, which reported mixed-method findings. There is also no reporting guideline for mixed-method reviews, 47 so for now reports will need to conform to the relevant reporting requirements of the respective method-specific guidelines. There is a need to further adapt and test DECIDE, 15 WHO-INTEGRATE 16 and other types of evidence to decision frameworks to accommodate evidence from mixed-method syntheses that do not set out to determine the statistical effects of interventions, and to accommodate circumstances in which there are no trials.

When conducting quantitative and qualitative reviews that will subsequently be combined, there are specific considerations for managing and integrating the different types of evidence throughout the review process. We have summarised the different options for combining qualitative and quantitative evidence in mixed-method syntheses from which guideline developers and systematic reviewers can choose, and have outlined the opportunities to integrate evidence at different stages of the review and guideline development process.

Review commissioners, authors and guideline developers generally have less experience of combining qualitative and quantitative evidence in mixed-method reviews. In particular, there is a relatively small group of reviewers who are skilled at undertaking fully integrated mixed-method reviews. Commissioning additional qualitative and mixed-method reviews adds cost, and large, complex mixed-method reviews generally take more time to complete. Careful consideration therefore needs to be given to which guidelines would benefit most from additional qualitative and mixed-method syntheses. More training is required to develop capacity, and processes need to be developed for preparing the guideline panel to consider and use mixed-method evidence in its decision-making.

This paper has presented how qualitative and quantitative evidence, combined in mixed-method reviews, can help in understanding aspects of complex interventions and the systems within which they are implemented. There are further opportunities to use these methods, and to develop them further, to look more widely at additional aspects of complexity. There is a range of review designs and synthesis methods to choose from, depending on the question being asked or the questions that may emerge during the conduct of the synthesis. Additional methods need to be developed (or existing methods further adapted) to synthesise the full range of diverse evidence desirable for exploring the complexity-related questions that arise when complex interventions are implemented into health systems. We encourage review commissioners and authors, and guideline developers, to consider using mixed-method reviews and syntheses in guidelines and to report on their usefulness in the guideline development process.

Handling editor: Soumyadeep Bhaumik

Contributors: JN, AB, GM, KF, ÖT and ES drafted the manuscript. All authors contributed to paper development and writing and agreed the final manuscript. Anayda Portela and Susan Norris from WHO managed the series. Helen Smith was series Editor. We thank all those who provided feedback on various iterations.

Funding: Funding provided by the World Health Organization Department of Maternal, Newborn, Child and Adolescent Health through grants received from the United States Agency for International Development and the Norwegian Agency for Development Cooperation.

Disclaimer: ÖT is a staff member of WHO. The author alone is responsible for the views expressed in this publication and they do not necessarily represent the decisions or policies of WHO.

Competing interests: No financial interests declared. JN, AB and ÖT have an intellectual interest in GRADE CERQual; and JN has an intellectual interest in the iCAT_SR tool.

Patient consent: Not required.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: No additional data are available.

Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
