
Research methods--quantitative, qualitative, and more: overview.

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. 

As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the 'how' for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more. This guide is an introduction, but if you don't see what you need here, contact your subject librarian and/or check whether a library research guide answers your question.

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. See the online guide to this one-stop collection; some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace--a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From this link, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: Supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: A simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): Provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: A UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA), a program for easy web-based analysis of survey data

Consultants

  • D-Lab/Data Science Discovery Consultants: Request help with your research project from peer consultants.
  • Research data management (RDM) consulting: Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services: A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB / CPHS: Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships): OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects: Sponsored Projects works with researchers applying for major external grants.
  • Last Updated: Apr 3, 2023 3:14 PM
  • URL: https://guides.lib.berkeley.edu/researchmethods


Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
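The contrast above can be illustrated with a short Python sketch. All data here are made up; simple random sampling stands in for probability sampling, and a take-the-first-hundred rule stands in for convenience (non-probability) sampling:

```python
import random

# Hypothetical population of 1,000 student IDs (illustrative only).
population = list(range(1, 1001))

# Probability sampling: simple random sampling gives every member
# an equal chance of selection.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=100)

# Non-probability sampling: a convenience sample of whoever is
# easiest to reach -- here, just the first 100 IDs. Cheap, but
# systematically unrepresentative.
convenience_sample = population[:100]

print(len(sample), len(set(sample)))  # 100 distinct members
```

`random.sample` draws without replacement, so the random sample can, in principle, contain any member of the population; the convenience sample can only ever contain the first hundred IDs, which is exactly the kind of bias the text warns about.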

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
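As a minimal sketch of operationalisation, an abstract concept like "social anxiety" might be scored as the mean of several self-rating items. The item names and values below are illustrative assumptions, not an established instrument:

```python
# Hypothetical operationalisation of "social anxiety" as the mean of
# five self-rating items (1 = never, 5 = always). Item names and
# values are illustrative, not an established scale.
responses = {
    "avoids_crowds": 4,
    "fears_judgement": 5,
    "heart_races_in_groups": 3,
    "declines_invitations": 4,
    "rehearses_conversations": 2,
}

# The fuzzy concept becomes a single measurable indicator.
anxiety_score = sum(responses.values()) / len(responses)
print(anxiety_score)  # 3.6
```

A validated questionnaire would go further (reverse-scored items, subscales, established cut-offs), but the core move is the same: defining in advance exactly which observations will stand in for the concept.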

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
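The three descriptive summaries above can be computed with Python's standard library alone; the scores here are invented for illustration:

```python
import statistics
from collections import Counter

scores = [4, 7, 7, 8, 8, 8, 9, 10]  # made-up test scores

distribution = Counter(scores)   # frequency of each score
mean = statistics.mean(scores)   # central tendency
stdev = statistics.stdev(scores) # variability (sample standard deviation)

print(dict(distribution))      # {4: 1, 7: 2, 8: 3, 9: 1, 10: 1}
print(mean, round(stdev, 2))   # 7.625 1.77
```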

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
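As a sketch of one such comparison test, the equal-variance independent-samples t statistic can be computed by hand with the standard library. The group scores are invented; in practice you would use a statistics library, which also gives you the p-value:

```python
import statistics

# Made-up outcome scores for two groups (illustrative only).
group_a = [12, 14, 11, 15, 13, 14]
group_b = [10, 9, 11, 8, 10, 12]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled variance, then the equal-variance independent-samples t statistic.
pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t = (mean_a - mean_b) / (pooled_var * (1 / n_a + 1 / n_b)) ** 0.5

print(round(t, 2))  # 3.8
```

The statistic alone is not a complete test: you would compare it against a t distribution with n_a + n_b - 2 degrees of freedom to obtain a p-value, which is where the test's distributional assumptions come in.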

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/research-methods/research-design/


What Is Research, and Why Do People Do It?

  • Open Access
  • First Online: 03 December 2022

James Hiebert, Jinfa Cai, Stephen Hwang, Anne K. Morris & Charles Hohensee

Part of the book series: Research in Mathematics Education ((RME))


Abstract

Every day people do research as they gather information to learn about something of interest. In the scientific world, however, research means something different than simply gathering information. Scientific research is characterized by its careful planning and observing, by its relentless efforts to understand and explain, and by its commitment to learn from everyone else seriously engaged in research. We call this kind of research scientific inquiry and define it as “formulating, testing, and revising hypotheses.” By “hypotheses” we do not mean the hypotheses you encounter in statistics courses. We mean predictions about what you expect to find and rationales for why you made these predictions. Throughout this and the remaining chapters we make clear that the process of scientific inquiry applies to all kinds of research studies and data, both qualitative and quantitative.


Part I. What Is Research?

Have you ever studied something carefully because you wanted to know more about it? Maybe you wanted to know more about your grandmother’s life when she was younger so you asked her to tell you stories from her childhood, or maybe you wanted to know more about a fertilizer you were about to use in your garden so you read the ingredients on the package and looked them up online. According to the dictionary definition, you were doing research.

Recall your high school assignments asking you to “research” a topic. The assignment likely included consulting a variety of sources that discussed the topic, perhaps including some “original” sources. Often, the teacher referred to your product as a “research paper.”

Were you conducting research when you interviewed your grandmother or wrote high school papers reviewing a particular topic? Our view is that you were engaged in part of the research process, but only a small part. In this book, we reserve the word “research” for what it means in the scientific world, that is, for scientific research or, more pointedly, for scientific inquiry.

Exercise 1.1

Before you read any further, write a definition of what you think scientific inquiry is. Keep it short—two to three sentences. You will periodically update this definition as you read this chapter and the remainder of the book.

This book is about scientific inquiry—what it is and how to do it. For starters, scientific inquiry is a process, a particular way of finding out about something that involves a number of phases. Each phase of the process constitutes one aspect of scientific inquiry. You are doing scientific inquiry as you engage in each phase, but you have not done scientific inquiry until you complete the full process. Each phase is necessary but not sufficient.

In this chapter, we set the stage by defining scientific inquiry—describing what it is and what it is not—and by discussing what it is good for and why people do it. The remaining chapters build directly on the ideas presented in this chapter.

A first thing to know is that scientific inquiry is not all or nothing. “Scientificness” is a continuum. Inquiries can be more scientific or less scientific. What makes an inquiry more scientific? You might be surprised there is no universally agreed upon answer to this question. None of the descriptors we know of are sufficient by themselves to define scientific inquiry. But all of them give you a way of thinking about some aspects of the process of scientific inquiry. Each one gives you different insights.


Exercise 1.2

As you read about each descriptor below, think about what would make an inquiry more or less scientific. If you think a descriptor is important, use it to revise your definition of scientific inquiry.

Creating an Image of Scientific Inquiry

We will present three descriptors of scientific inquiry. Each provides a different perspective and emphasizes a different aspect of scientific inquiry. We will draw on all three descriptors to compose our definition of scientific inquiry.

Descriptor 1. Experience Carefully Planned in Advance

Sir Ronald Fisher, often called the father of modern statistical design, once referred to research as “experience carefully planned in advance” (1935, p. 8). He said that humans are always learning from experience, from interacting with the world around them. Usually, this learning is haphazard rather than the result of a deliberate process carried out over an extended period of time. Research, Fisher said, was learning from experience, but experience carefully planned in advance.

This phrase can be fully appreciated by looking at each word. The fact that scientific inquiry is based on experience means that it is based on interacting with the world. These interactions could be thought of as the stuff of scientific inquiry. In addition, it is not just any experience that counts. The experience must be carefully planned . The interactions with the world must be conducted with an explicit, describable purpose, and steps must be taken to make the intended learning as likely as possible. This planning is an integral part of scientific inquiry; it is not just a preparation phase. It is one of the things that distinguishes scientific inquiry from many everyday learning experiences. Finally, these steps must be taken beforehand and the purpose of the inquiry must be articulated in advance of the experience. Clearly, scientific inquiry does not happen by accident, by just stumbling into something. Stumbling into something unexpected and interesting can happen while engaged in scientific inquiry, but learning does not depend on it and serendipity does not make the inquiry scientific.

Descriptor 2. Observing Something and Trying to Explain Why It Is the Way It Is

When we were writing this chapter and googled “scientific inquiry,” the first entry was: “Scientific inquiry refers to the diverse ways in which scientists study the natural world and propose explanations based on the evidence derived from their work.” The emphasis is on studying, or observing, and then explaining . This descriptor takes the image of scientific inquiry beyond carefully planned experience and includes explaining what was experienced.

According to the Merriam-Webster dictionary, “explain” means “(a) to make known, (b) to make plain or understandable, (c) to give the reason or cause of, and (d) to show the logical development or relations of” (Merriam-Webster, n.d.). We will use all these definitions. Taken together, they suggest that to explain an observation means to understand it by finding reasons (or causes) for why it is as it is. In this sense of scientific inquiry, the following are synonyms: explaining why, understanding why, and reasoning about causes and effects. Our image of scientific inquiry now includes planning, observing, and explaining why.


We need to add a final note about this descriptor. We have phrased it in a way that suggests “observing something” means you are observing something in real time—observing the way things are or the way things are changing. This is often true. But, observing could mean observing data that already have been collected, maybe by someone else making the original observations (e.g., secondary analysis of NAEP data or analysis of existing video recordings of classroom instruction). We will address secondary analyses more fully in Chap. 4 . For now, what is important is that the process requires explaining why the data look like they do.

We must note that for us, the term “data” is not limited to numerical or quantitative data such as test scores. Data can also take many nonquantitative forms, including written survey responses, interview transcripts, journal entries, video recordings of students, teachers, and classrooms, text messages, and so forth.


Exercise 1.3

What are the implications of the statement that just “observing” is not enough to count as scientific inquiry? Does this mean that a detailed description of a phenomenon is not scientific inquiry?

Find sources that define research in education that differ with our position, that say description alone, without explanation, counts as scientific research. Identify the precise points where the opinions differ. What are the best arguments for each of the positions? Which do you prefer? Why?

Descriptor 3. Updating Everyone’s Thinking in Response to More and Better Information

This descriptor focuses on a third aspect of scientific inquiry: updating and advancing the field’s understanding of phenomena that are investigated. This descriptor foregrounds a powerful characteristic of scientific inquiry: the reliability (or trustworthiness) of what is learned and the ultimate inevitability of this learning to advance human understanding of phenomena. Humans might choose not to learn from scientific inquiry, but history suggests that scientific inquiry always has the potential to advance understanding and that, eventually, humans take advantage of these new understandings.

Before exploring these bold claims a bit further, note that this descriptor uses “information” in the same way the previous two descriptors used “experience” and “observations.” These are the stuff of scientific inquiry and we will use them often, sometimes interchangeably. Frequently, we will use the term “data” to stand for all these terms.

An overriding goal of scientific inquiry is for everyone to learn from what one scientist does. Much of this book is about the methods you need to use so others have faith in what you report and can learn the same things you learned. This aspect of scientific inquiry has many implications.

One implication is that scientific inquiry is not a private practice. It is a public practice available for others to see and learn from. Notice how different this is from everyday learning. When you happen to learn something from your everyday experience, often only you gain from the experience. The fact that research is a public practice means it is also a social one. It is best conducted by interacting with others along the way: soliciting feedback at each phase, taking opportunities to present work-in-progress, and benefitting from the advice of others.

A second implication is that you, as the researcher, must be committed to sharing what you are doing and what you are learning in an open and transparent way. This allows all phases of your work to be scrutinized and critiqued. This is what gives your work credibility. The reliability or trustworthiness of your findings depends on your colleagues recognizing that you have used all appropriate methods to maximize the chances that your claims are justified by the data.

A third implication of viewing scientific inquiry as a collective enterprise is the reverse of the second—you must be committed to receiving comments from others. You must treat your colleagues as fair and honest critics even though it might sometimes feel otherwise. You must appreciate their job, which is to remain skeptical while scrutinizing what you have done in considerable detail. To provide the best help to you, they must remain skeptical about your conclusions (when, for example, the data are difficult for them to interpret) until you offer a convincing logical argument based on the information you share. A rather harsh but good-to-remember statement of the role of your friendly critics was voiced by Karl Popper, a well-known twentieth century philosopher of science: “. . . if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can” (Popper, 1968, p. 27).

A final implication of this third descriptor is that, as someone engaged in scientific inquiry, you have no choice but to update your thinking when the data support a different conclusion. This applies to your own data as well as to those of others. When data clearly point to a specific claim, even one that is quite different from what you expected, you must reconsider your position. If the outcome is replicated multiple times, you need to adjust your thinking accordingly. Scientific inquiry does not let you pick and choose which data to believe; it mandates that everyone update their thinking when the data warrant an update.

Doing Scientific Inquiry

We define scientific inquiry in an operational sense—what does it mean to do scientific inquiry? What kind of process would satisfy all three descriptors: carefully planning an experience in advance; observing and trying to explain what you see; and contributing to updating everyone’s thinking about an important phenomenon?

We define scientific inquiry as formulating, testing, and revising hypotheses about phenomena of interest.

Of course, we are not the only ones who define it in this way. The definition of the scientific method posted by the editors of Britannica is: “a researcher develops a hypothesis, tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments” (Britannica, n.d.).

(Figure: The scientific method as defined by the editors of Britannica, in which a hypothesis is developed, tested, and modified on the basis of the outcomes of the tests and experiments.)

Notice how defining scientific inquiry this way satisfies each of the descriptors. “Carefully planning an experience in advance” is exactly what happens when formulating a hypothesis about a phenomenon of interest and thinking about how to test it. “Observing a phenomenon” occurs when testing a hypothesis, and “explaining” what is found is required when revising a hypothesis based on the data. Finally, “updating everyone’s thinking” comes from comparing publicly the original with the revised hypothesis.

Doing scientific inquiry, as we have defined it, underscores the value of accumulating knowledge rather than generating random bits of knowledge. Formulating, testing, and revising hypotheses is an ongoing process, with each revised hypothesis begging for another test, whether by the same researcher or by new researchers. The editors of Britannica signaled this cyclic process by adding the following phrase to their definition of the scientific method: “The modified hypothesis is then retested, further modified, and tested again.” Scientific inquiry creates a process that encourages each study to build on the studies that have gone before. Through collective engagement in this process of building study on top of study, the scientific community works together to update its thinking.

Before exploring more fully the meaning of “formulating, testing, and revising hypotheses,” we need to acknowledge that this is not the only way researchers define research. Some researchers prefer a less formal definition, one that includes more serendipity, less planning, less explanation. You might have come across more open definitions such as “research is finding out about something.” We prefer the tighter hypothesis formulation, testing, and revision definition because we believe it provides a single, coherent map for conducting research that addresses many of the thorny problems educational researchers encounter. We believe it is the most useful orientation toward research and the most helpful to learn as a beginning researcher.

A final clarification of our definition is that it applies equally to qualitative and quantitative research. This is a familiar distinction in education that has generated much discussion. You might think our definition favors quantitative methods over qualitative methods because the language of hypothesis formulation and testing is often associated with quantitative methods. In fact, we do not favor one method over another. In Chap. 4, we will illustrate how our definition fits research using a range of quantitative and qualitative methods.

Exercise 1.4

Look for ways to extend what the field knows in an area that has already received attention by other researchers. Specifically, you can search for a program of research carried out by more experienced researchers that has some revised hypotheses that remain untested. Identify a revised hypothesis that you might like to test.

Unpacking the Terms Formulating, Testing, and Revising Hypotheses

To get a full sense of the definition of scientific inquiry we will use throughout this book, it is helpful to spend a little time with each of the key terms.

We first want to make clear that we use the term “hypothesis” as it is defined in most dictionaries and as it is used in many scientific fields rather than as it is usually defined in educational statistics courses. By “hypothesis,” we do not mean a null hypothesis that is accepted or rejected by statistical analysis. Rather, we use “hypothesis” in the sense conveyed by the following definitions: “An idea or explanation for something that is based on known facts but has not yet been proved” (Cambridge University Press, n.d.), and “An unproved theory, proposition, or supposition, tentatively accepted to explain certain facts and to provide a basis for further investigation or argument” (Agnes & Guralnik, 2008).

We distinguish two parts to “hypotheses.” Hypotheses consist of predictions and rationales. Predictions are statements about what you expect to find when you inquire about something. Rationales are explanations for why you made the predictions you did, why you believe your predictions are correct. So, for us “formulating hypotheses” means making explicit predictions and developing rationales for the predictions.

“Testing hypotheses” means making observations that allow you to assess in what ways your predictions were correct and in what ways they were incorrect. In education research, it is rarely useful to think of your predictions as either right or wrong. Because of the complexity of most issues you will investigate, most predictions will be right in some ways and wrong in others.

By studying the observations you make (data you collect) to test your hypotheses, you can revise your hypotheses to better align with the observations. This means revising your predictions plus revising your rationales to justify your adjusted predictions. Even though you might not run another test, formulating revised hypotheses is an essential part of conducting a research study. Comparing your original and revised hypotheses informs everyone of what you learned by conducting your study. In addition, a revised hypothesis sets the stage for you or someone else to extend your study and accumulate more knowledge of the phenomenon.

We should note that not everyone makes a clear distinction between predictions and rationales as two aspects of hypotheses. In fact, common, non-scientific uses of the word “hypothesis” may limit it to only a prediction or only an explanation (or rationale). We choose to explicitly include both prediction and rationale in our definition of hypothesis, not because we assert this should be the universal definition, but because we want to foreground the importance of both parts acting in concert. Using “hypothesis” to represent both prediction and rationale could hide the two aspects, but we make them explicit because they provide different kinds of information. It is usually easier to make predictions than develop rationales because predictions can be guesses, hunches, or gut feelings about which you have little confidence. Developing a compelling rationale requires careful thought, reading what other researchers have found, and talking with your colleagues. Often, while you are developing your rationale you will find good reasons to change your predictions. Developing good rationales is the engine that drives scientific inquiry. Rationales are essentially descriptions of how much you know about the phenomenon you are studying. Throughout this guide, we will elaborate on how developing good rationales drives scientific inquiry. For now, we simply note that it can sharpen your predictions and help you to interpret your data as you test your hypotheses.

(Figure: The two parts of a hypothesis, the prediction and the rationale, and the different kinds of information each provides.)

Hypotheses in education research take a variety of forms or types. This is because there are a variety of phenomena that can be investigated. Investigating educational phenomena is sometimes best done using qualitative methods, sometimes using quantitative methods, and most often using mixed methods (e.g., Hay, 2016; Weis et al., 2019a; Weisner, 2005). This means that, given our definition, hypotheses are equally applicable to qualitative and quantitative investigations.

Hypotheses take different forms when they are used to investigate different kinds of phenomena. Two very different activities in education research are conducting experiments and conducting descriptive studies. In an experiment, a hypothesis makes a prediction about anticipated changes, say the changes that occur when a treatment or intervention is applied. You might investigate, for example, how students’ thinking changes during a particular kind of instruction.

A second type of hypothesis, relevant for descriptive research, makes a prediction about what you will find when you investigate and describe the nature of a situation. The goal is to understand a situation as it exists rather than to understand a change from one situation to another. In this case, your prediction is what you expect to observe. Your rationale is the set of reasons for making this prediction; it is your current explanation for why the situation will look like it does.

You will probably read, if you have not already, that some researchers say you do not need a prediction to conduct a descriptive study. We will discuss this point of view in Chap. 2. For now, we simply claim that scientific inquiry, as we have defined it, applies to all kinds of research studies. Descriptive studies, like other kinds of studies, not only benefit from formulating, testing, and revising hypotheses but require it.

One reason we define research as formulating, testing, and revising hypotheses is that if you think of research in this way you are less likely to go wrong. It is a useful guide for the entire process, as we will describe in detail in the chapters ahead. For example, as you build the rationale for your predictions, you are constructing the theoretical framework for your study (Chap. 3). As you work out the methods you will use to test your hypothesis, every decision you make will be based on asking, “Will this help me formulate or test or revise my hypothesis?” (Chap. 4). As you interpret the results of testing your predictions, you will compare them to what you predicted and examine the differences, focusing on how you must revise your hypotheses (Chap. 5). By anchoring the process to formulating, testing, and revising hypotheses, you will make smart decisions that yield a coherent and well-designed study.

Exercise 1.5

Compare the concept of formulating, testing, and revising hypotheses with the descriptions of scientific inquiry contained in Scientific Research in Education (NRC, 2002). How are they similar or different?

Exercise 1.6

Provide an example to illustrate and emphasize the differences between everyday learning/thinking and scientific inquiry.

Learning from Doing Scientific Inquiry

We noted earlier that a measure of what you have learned by conducting a research study is found in the differences between your original hypothesis and your revised hypothesis based on the data you collected to test your hypothesis. We will elaborate this statement in later chapters, but we preview our argument here.

Even before collecting data, scientific inquiry requires cycles of making a prediction, developing a rationale, refining your predictions, reading and studying more to strengthen your rationale, refining your predictions again, and so forth. And, even if you have run through several such cycles, you still will likely find that when you test your prediction you will be partly right and partly wrong. The results will support some parts of your predictions but not others, or the results will “kind of” support your predictions. A critical part of scientific inquiry is making sense of your results by interpreting them against your predictions. Carefully describing what aspects of your data supported your predictions, what aspects did not, and what data fell outside of any predictions is not an easy task, but you cannot learn from your study without doing this analysis.

(Figure: The cycle of making a prediction, developing a rationale, and refining both, repeated multiple times before testing.)

Analyzing the matches and mismatches between your predictions and your data allows you to formulate different rationales that would have accounted for more of the data. The best revised rationale is the one that accounts for the most data. Once you have revised your rationales, you can think about the predictions they best justify or explain. It is by comparing your original rationales to your new rationales that you can sort out what you learned from your study.

Suppose your study was an experiment. Maybe you were investigating the effects of a new instructional intervention on students’ learning. Your original rationale was your explanation for why the intervention would change the learning outcomes in a particular way. Your revised rationale explained why the changes that you observed occurred like they did and why your revised predictions are better. Maybe your original rationale focused on the potential of the activities if they were implemented in ideal ways and your revised rationale included the factors that are likely to affect how teachers implement them. By comparing the before and after rationales, you are describing what you learned—what you can explain now that you could not before. Another way of saying this is that you are describing how much more you understand now than before you conducted your study.

Revised predictions based on carefully planned and collected data usually exhibit some of the following features compared with the originals: more precision, more completeness, and broader scope. Revised rationales have more explanatory power and become more complete, more aligned with the new predictions, sharper, and overall more convincing.

Part II. Why Do Educators Do Research?

Doing scientific inquiry is a lot of work. Each phase of the process takes time, and you will often cycle back to improve earlier phases as you engage in later phases. Because of the significant effort required, you should make sure your study is worth it. So, from the beginning, you should think about the purpose of your study. Why do you want to do it? And, because research is a social practice, you should also think about whether the results of your study are likely to be important and significant to the education community.

If you are doing research in the way we have described—as scientific inquiry—then one purpose of your study is to understand, not just to describe or evaluate or report. As we noted earlier, when you formulate hypotheses, you are developing rationales that explain why things might be like they are. In our view, trying to understand and explain is what separates research from other kinds of activities, like evaluating or describing.

One reason understanding is so important is that it allows researchers to see how or why something works like it does. When you see how something works, you are better able to predict how it might work in other contexts, under other conditions. And, because conditions, or contextual factors, matter a lot in education, gaining insights into applying your findings to other contexts increases the contributions of your work and its importance to the broader education community.

Consequently, the purposes of research studies in education often include the more specific aim of identifying and understanding the conditions under which the phenomena being studied work like the observations suggest. A classic example of this kind of study in mathematics education was reported by William Brownell and Harold Moser in 1949. They were trying to establish which method of subtracting whole numbers could be taught most effectively—the regrouping method or the equal additions method. However, they realized that effectiveness might depend on the conditions under which the methods were taught—“meaningfully” versus “mechanically.” So, they designed a study that crossed the two instructional approaches with the two different methods (regrouping and equal additions). Among other results, they found that these conditions did matter. The regrouping method was more effective under the meaningful condition than the mechanical condition, but the same was not true for the equal additions algorithm.

What do education researchers want to understand? In our view, the ultimate goal of education is to offer all students the best possible learning opportunities. So, we believe the ultimate purpose of scientific inquiry in education is to develop understanding that supports the improvement of learning opportunities for all students. We say “ultimate” because there are lots of issues that must be understood to improve learning opportunities for all students. Hypotheses about many aspects of education are connected, ultimately, to students’ learning. For example, formulating and testing a hypothesis that preservice teachers need to engage in particular kinds of activities in their coursework in order to teach particular topics well is, ultimately, connected to improving students’ learning opportunities. So is hypothesizing that school districts often devote relatively few resources to instructional leadership training or hypothesizing that positioning mathematics as a tool students can use to combat social injustice can help students see the relevance of mathematics to their lives.

We do not exclude the importance of research on educational issues more removed from improving students’ learning opportunities, but we do think the argument for their importance will be more difficult to make. If there is no way to imagine a connection between your hypothesis and improving learning opportunities for students, even a distant connection, we recommend you reconsider whether it is an important hypothesis within the education community.

Notice that we said the ultimate goal of education is to offer all students the best possible learning opportunities. For too long, educators have been satisfied with a goal of offering rich learning opportunities for lots of students, sometimes even for just the majority of students, but not necessarily for all students. Evaluations of success often are based on outcomes that show high averages. In other words, if many students have learned something, or even a smaller number have learned a lot, educators may have been satisfied. The problem is that there is usually a pattern in the groups of students who receive lower quality opportunities—students of color and students who live in poor areas, urban and rural. This is not acceptable. Consequently, we emphasize the premise that the purpose of education research is to offer rich learning opportunities to all students.

One way to make sure you will be able to convince others of the importance of your study is to consider investigating some aspect of teachers’ shared instructional problems. Historically, researchers in education have set their own research agendas, regardless of the problems teachers are facing in schools. It is increasingly recognized that teachers have had trouble applying to their own classrooms what researchers find. To address this problem, a researcher could partner with a teacher—better yet, a small group of teachers—and talk with them about instructional problems they all share. These discussions can create a rich pool of problems researchers can consider. If researchers pursued one of these problems (preferably alongside teachers), the connection to improving learning opportunities for all students could be direct and immediate. “Grounding a research question in instructional problems that are experienced across multiple teachers’ classrooms helps to ensure that the answer to the question will be of sufficient scope to be relevant and significant beyond the local context” (Cai et al., 2019b, p. 115).

As a beginning researcher, determining the relevance and importance of a research problem is especially challenging. We recommend talking with advisors, other experienced researchers, and peers to test the educational importance of possible research problems and topics of study. You will also learn much more about the issue of research importance when you read Chap. 5.

Exercise 1.7

Identify a problem in education that is closely connected to improving learning opportunities and a problem that has a less close connection. For each problem, write a brief argument (like a logical sequence of if-then statements) that connects the problem to all students’ learning opportunities.

Part III. Conducting Research as a Practice of Failing Productively

Scientific inquiry involves formulating hypotheses about phenomena that are not fully understood—by you or anyone else. Even if you are able to inform your hypotheses with lots of knowledge that has already been accumulated, you are likely to find that your prediction is not entirely accurate. This is normal. Remember, scientific inquiry is a process of constantly updating your thinking. More and better information means revising your thinking, again, and again, and again. Because you never fully understand a complicated phenomenon and your hypotheses never produce completely accurate predictions, it is easy to believe you are somehow failing.

The trick is to fail upward, to fail to predict accurately in ways that inform your next hypothesis so you can make a better prediction. Some of the best-known researchers in education have been open and honest about the many times their predictions were wrong and, based on the results of their studies and those of others, they continuously updated their thinking and changed their hypotheses.

A striking example of publicly revising (actually reversing) hypotheses due to incorrect predictions is found in the work of Lee J. Cronbach, one of the most distinguished educational psychologists of the twentieth century. In 1957, Cronbach delivered his presidential address to the American Psychological Association. Titling it “The Two Disciplines of Scientific Psychology,” Cronbach proposed a rapprochement between two research approaches—correlational studies that focused on individual differences and experimental studies that focused on instructional treatments controlling for individual differences. (We will examine different research approaches in Chap. 4.) If these approaches could be brought together, reasoned Cronbach (1957), researchers could find interactions between individual characteristics and treatments (aptitude-treatment interactions or ATIs), fitting the best treatments to different individuals.

In 1975, after years of research by many researchers looking for ATIs, Cronbach acknowledged that the evidence for simple, useful ATIs had not been found. Even when trying to find interactions between a few variables that could provide instructional guidance, the analysis, said Cronbach, creates “a hall of mirrors that extends to infinity, tormenting even the boldest investigators and defeating even ambitious designs” (Cronbach, 1975, p. 119).

As he was reflecting back on his work, Cronbach (1986) recommended moving away from documenting instructional effects through statistical inference (an approach he had championed for much of his career) and toward approaches that probe the reasons for these effects, approaches that provide a “full account of events in a time, place, and context” (Cronbach, 1986, p. 104). This is a remarkable change in hypotheses, a change based on data and made fully transparent. Cronbach understood the value of failing productively.

Closer to home, in a less dramatic example, one of us began a line of scientific inquiry into how to prepare elementary preservice teachers to teach early algebra. Teaching early algebra meant engaging elementary students in early forms of algebraic reasoning. Such reasoning should help them transition from arithmetic to algebra. To begin this line of inquiry, a set of activities for preservice teachers were developed. Even though the activities were based on well-supported hypotheses, they largely failed to engage preservice teachers as predicted because of unanticipated challenges the preservice teachers faced. To capitalize on this failure, follow-up studies were conducted, first to better understand elementary preservice teachers’ challenges with preparing to teach early algebra, and then to better support preservice teachers in navigating these challenges. In this example, the initial failure was a necessary step in the researchers’ scientific inquiry and furthered the researchers’ understanding of this issue.

We present another example of failing productively in Chap. 2 . That example emerges from recounting the history of a well-known research program in mathematics education.

Making mistakes is an inherent part of doing scientific research. Conducting a study is rarely a smooth path from beginning to end. We recommend that you keep the following things in mind as you begin a career of conducting research in education.

First, do not get discouraged when you make mistakes; do not fall into the trap of feeling like you are not capable of doing research because you make too many errors.

Second, learn from your mistakes. Do not ignore your mistakes or treat them as errors that you simply need to forget and move past. Mistakes are rich sites for learning—in research just as in other fields of study.

Third, by reflecting on your mistakes, you can learn to make better mistakes, mistakes that inform you about a productive next step. You will not be able to eliminate your mistakes, but you can set a goal of making better and better mistakes.

Exercise 1.8

How does scientific inquiry differ from everyday learning in giving you the tools to fail upward? You may find helpful perspectives on this question in other resources on science and scientific inquiry (e.g., Failure: Why Science Is So Successful by Firestein, 2015).

Exercise 1.9

Use what you have learned in this chapter to write a new definition of scientific inquiry. Compare this definition with the one you wrote before reading this chapter. If you are reading this book as part of a course, compare your definition with your colleagues’ definitions. Develop a consensus definition with everyone in the course.

Part IV. Preview of Chap. 2

Now that you have a good idea of what research is, at least of what we believe research is, the next step is to think about how to actually begin doing research. This means how to begin formulating, testing, and revising hypotheses. As for all phases of scientific inquiry, there are lots of things to think about. Because it is critical to start well, we devote Chap. 2 to getting started with formulating hypotheses.

Agnes, M., & Guralnik, D. B. (Eds.). (2008). Hypothesis. In Webster’s new world college dictionary (4th ed.). Wiley.


Britannica. (n.d.). Scientific method. In Encyclopaedia Britannica. Retrieved July 15, 2022, from https://www.britannica.com/science/scientific-method

Brownell, W. A., & Moser, H. E. (1949). Meaningful vs. mechanical learning: A study in grade III subtraction. Duke University Press.

Cai, J., Morris, A., Hohensee, C., Hwang, S., Robison, V., Cirillo, M., Kramer, S. L., & Hiebert, J. (2019b). Posing significant research questions. Journal for Research in Mathematics Education, 50(2), 114–120. https://doi.org/10.5951/jresematheduc.50.2.0114


Cambridge University Press. (n.d.). Hypothesis. In Cambridge dictionary. Retrieved July 15, 2022, from https://dictionary.cambridge.org/us/dictionary/english/hypothesis

Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116–127.

Cronbach, L. J. (1986). Social inquiry by and for earthlings. In D. W. Fiske & R. A. Shweder (Eds.), Metatheory in social science: Pluralisms and subjectivities (pp. 83–107). University of Chicago Press.

Hay, C. M. (Ed.). (2016). Methods that matter: Integrating mixed methods for more effective social science research. University of Chicago Press.

Merriam-Webster. (n.d.). Explain. In Merriam-Webster.com dictionary . Retrieved July 15, 2022, from https://www.merriam-webster.com/dictionary/explain

National Research Council. (2002). Scientific research in education . National Academy Press.

Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P., Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R. K., Rumbaut, R. G., Sloane, F., Weisner, T. S., & Wilson, J. (2019a). Mixed methods for studies that address broad and enduring issues in education research. Teachers College Record, 121, 100307.

Weisner, T. S. (Ed.). (2005). Discovering successful pathways in children’s development: Mixed methods in the study of childhood and family life. University of Chicago Press.


Author information

Authors and Affiliations

School of Education, University of Delaware, Newark, DE, USA

James Hiebert, Anne K. Morris & Charles Hohensee

Department of Mathematical Sciences, University of Delaware, Newark, DE, USA

Jinfa Cai & Stephen Hwang


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2023 The Author(s)

About this chapter

Hiebert, J., Cai, J., Hwang, S., Morris, A.K., Hohensee, C. (2023). What Is Research, and Why Do People Do It?. In: Doing Research: A New Researcher’s Guide. Research in Mathematics Education. Springer, Cham. https://doi.org/10.1007/978-3-031-19078-0_1


DOI : https://doi.org/10.1007/978-3-031-19078-0_1

Published : 03 December 2022

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-19077-3

Online ISBN : 978-3-031-19078-0




What is Research: Definition, Methods, Types & Examples

The search for knowledge is closely linked to the object of study; that is, to the reconstruction of the facts that will provide an explanation to an observed event and that at first sight can be considered as a problem. It is very human to seek answers and satisfy our curiosity. Let’s talk about research.

Content Index

  • What is Research?
  • What are the characteristics of research?
  • Qualitative methods
  • Quantitative methods
  • 8 tips for conducting accurate research

Research is the careful consideration of study regarding a particular concern or research problem using scientific methods. According to the American sociologist Earl Robert Babbie, “research is a systematic inquiry to describe, explain, predict, and control the observed phenomenon. It involves inductive and deductive methods.”

Inductive methods analyze an observed event, while deductive methods verify the observed event. Inductive approaches are associated with qualitative research , and deductive methods are more commonly associated with quantitative analysis .

Research is conducted with a purpose to:

  • Identify potential and new customers
  • Understand existing customers
  • Set pragmatic goals
  • Develop productive market strategies
  • Address business challenges
  • Put together a business expansion plan
  • Identify new business opportunities
What are the characteristics of research?

  • Good research follows a systematic approach to capture accurate data. Researchers must observe ethics and a code of conduct while making observations and drawing conclusions.
  • The analysis is based on logical reasoning and involves both inductive and deductive methods.
  • Real-time data and knowledge are derived from actual observations in natural settings.
  • All collected data are analyzed in depth so that no anomalies are associated with them.
  • It creates a path for generating new questions; existing data help create more research opportunities.
  • It is analytical and uses all the available data so that there is no ambiguity in inference.
  • Accuracy is one of the most critical aspects of research: the information must be accurate and correct. For example, laboratories provide a controlled environment for collecting data. Accuracy is reflected in the instruments used, their calibration, and the experiment's final result.

What is the purpose of research?

There are three main purposes:

  • Exploratory: As the name suggests, researchers conduct exploratory studies to explore a group of questions. The answers and analytics may not offer a conclusion to the perceived problem. It is undertaken to handle new problem areas that haven’t been explored before. This exploratory data analysis process lays the foundation for more conclusive data collection and analysis.

  • Descriptive: This type focuses on expanding knowledge of current issues through a process of data collection. Descriptive research describes the behavior of a sample population, and only one variable is required to conduct the study. The three primary purposes of descriptive studies are describing, explaining, and validating the findings. For example, a study might investigate whether top-level management leaders in the 21st century possess the moral right to receive a considerable share of company profits.

  • Explanatory: Causal research or explanatory research is conducted to understand the impact of specific changes in existing standard procedures. Running experiments is the most popular form. For example, a study that is conducted to understand the effect of rebranding on customer loyalty.

The research process begins by asking the right questions and choosing an appropriate method to investigate the problem. After collecting answers to your questions, you can analyze the findings or observations to draw reasonable conclusions.

When it comes to customers and market studies, the more thorough your questions, the better the analysis. You get essential insights into brand perception and product needs by thoroughly collecting customer data through surveys and questionnaires . You can use this data to make smart decisions about your marketing strategies to position your business effectively.

To make sense of your study and get insights faster, it helps to use a research repository as a single source of truth in your organization and manage your research data in one centralized data repository .

Types of research methods and Examples

Research methods are broadly classified as Qualitative and Quantitative .

Both methods have distinctive properties and data collection methods.

Qualitative research is a method that collects data using conversational techniques, usually open-ended questions. The responses collected are essentially non-numerical. This method helps a researcher understand what participants think and why they think in a particular way.

Types of qualitative methods include:

  • One-to-one Interview
  • Focus Groups
  • Ethnographic studies
  • Text Analysis

Quantitative methods deal with numbers and measurable forms. They use a systematic way of investigating events or data and answer questions about relationships among measurable variables in order to explain, predict, or control a phenomenon.

Types of quantitative methods include:

  • Survey research
  • Descriptive research
  • Correlational research

LEARN MORE: Descriptive Research vs Correlational Research
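As an illustration of the correlational approach mentioned above, the sketch below computes a Pearson correlation between two measured variables. The data and variable names are invented for illustration only; real studies would use validated instruments and larger samples.

```python
# Hypothetical example: correlate survey satisfaction scores with repeat purchases.
# All data and variable names here are illustrative assumptions, not real results.
from statistics import mean, stdev

satisfaction = [7, 8, 6, 9, 5, 8, 7, 9]     # survey responses on a 1-10 scale
repeat_buys  = [3, 4, 2, 5, 1, 4, 2, 5]     # purchases in the following quarter

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(satisfaction, repeat_buys)
print(f"r = {r:.2f}")  # values near +1 or -1 indicate a strong linear relationship
```

Note that a correlational design like this can describe and predict a relationship, but on its own it cannot establish causation; that is the territory of explanatory (causal) research.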

Remember, research is only valuable and useful when it is valid, accurate, and reliable. Incorrect results can lead to customer churn and a decrease in sales.

It is essential to ensure that your data is:

  • Valid – founded, logical, rigorous, and impartial.
  • Accurate – free of errors and including required details.
  • Reliable – other people who investigate in the same way can produce similar results.
  • Timely – current and collected within an appropriate time frame.
  • Complete – includes all the data you need to support your business decisions.
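Checks like the ones in this list can be automated when screening survey data. The sketch below mirrors the completeness, validity, and timeliness criteria; the field names, score range, and date window are assumptions chosen for illustration.

```python
# Hypothetical data-quality screen for survey records, mirroring the checklist
# above: complete, valid (in range), and timely. Field names, the 1-10 score
# range, and the 2024 collection window are illustrative assumptions.
from datetime import date

REQUIRED = ("respondent_id", "score", "collected_on")

def is_clean(record, earliest=date(2024, 1, 1), latest=date(2024, 12, 31)):
    """Return True if the record is complete, in range, and timely."""
    if any(record.get(k) is None for k in REQUIRED):        # complete
        return False
    if not 1 <= record["score"] <= 10:                      # valid range
        return False
    return earliest <= record["collected_on"] <= latest     # timely

records = [
    {"respondent_id": 1, "score": 7,  "collected_on": date(2024, 3, 2)},
    {"respondent_id": 2, "score": 14, "collected_on": date(2024, 3, 3)},  # out of range
    {"respondent_id": 3, "score": 9,  "collected_on": None},              # incomplete
]
clean = [r for r in records if is_clean(r)]
print(len(clean))  # only the first record passes
```

Screening out invalid or incomplete records before analysis helps keep the later inferences free of the anomalies described earlier.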

8 tips for conducting accurate research

  • Identify the main trends and issues, opportunities, and problems you observe. Write a sentence describing each one.
  • Keep track of the frequency with which each of the main findings appears.
  • Make a list of your findings from the most common to the least common.
  • Evaluate a list of the strengths, weaknesses, opportunities, and threats identified in a SWOT analysis .
  • Prepare conclusions and recommendations about your study.
  • Act on your strategies.
  • Look for gaps in the information, and consider doing additional inquiry if necessary.
  • Plan to review the results and consider efficient methods to analyze and interpret them.

Review your goals before drawing any conclusions about your study. Consider how the process you have completed and the data you have gathered help answer your questions, and ask yourself whether what your analysis revealed facilitates the identification of your conclusions and recommendations.


Academic Success Center

Research Writing and Analysis


Purpose Statement Overview

The purpose statement succinctly explains (in no more than one page) the objectives of the research study. These objectives must directly address the problem and help close the stated gap.

Good purpose statements:

  • Flow from the problem statement and actually address the proposed problem
  • Are concise and clear
  • Answer the question ‘Why are you doing this research?’
  • Match the methodology (similar to research questions)
  • Have a ‘hook’ to get the reader’s attention
  • Set the stage by clearly stating, “The purpose of this (qualitative or quantitative) study is to ...”

In PhD studies, the purpose usually involves applying a theory to solve the problem. In other words, the purpose tells the reader what the goal of the study is, and what your study will accomplish, through which theoretical lens. The purpose statement also includes brief information about direction, scope, and where the data will come from.

A problem and gap in combination can lead to different research objectives and, hence, different purpose statements. In the example above, where the problem was the severe underrepresentation of female CEOs in Fortune 500 companies and the identified gap was the lack of research on male-dominated boards, one purpose might be to explore implicit biases in male-dominated boards through the lens of feminist theory. Another purpose might be to determine how board members rated female and male candidates on scales of competency, professionalism, and experience to predict which candidate would be selected for the CEO position. The first purpose may involve a qualitative ethnographic study in which the researcher observes board meetings and hiring interviews; the second may involve a quantitative regression analysis. The outcomes will be very different, so it is important that you determine exactly how you want to address the problem and help close the gap!

The purpose of the study must not only align with the problem and address a gap; it must also align with the chosen research method. In fact, the DP/DM template requires you to name the  research method at the very beginning of the purpose statement. The research verb must match the chosen method. In general, quantitative studies involve “closed-ended” research verbs such as determine , measure , correlate , explain , compare , validate , identify , or examine ; whereas qualitative studies involve “open-ended” research verbs such as explore , understand , narrate , articulate [meanings], discover , or develop .

A qualitative purpose statement following the problem statement (assumed here to be low well-being among financial sector employees) and gap (lack of research on followers of mid-level managers) might start like this:

In response to declining levels of employee well-being, the purpose of the qualitative phenomenology was to explore and understand the lived experiences related to the well-being of the followers of novice mid-level managers in the financial services industry. The levels of follower well-being have been shown to correlate to employee morale, turnover intention, and customer orientation (Eren et al., 2013). A combined framework of Leader-Member Exchange (LMX) Theory and the employee well-being concept informed the research questions and supported the inquiry, analysis, and interpretation of the experiences of followers of novice managers in the financial services industry.

A quantitative purpose statement for the same problem and gap might start like this:

In response to declining levels of employee well-being, the purpose of the quantitative correlational study was to determine which leadership factors predict employee well-being of the followers of novice mid-level managers in the financial services industry. Leadership factors were measured by the Leader-Member Exchange (LMX) assessment framework  by Mantlekow (2015), and employee well-being was conceptualized as a compound variable consisting of self-reported turnover-intent and psychological test scores from the Mental Health Survey (MHS) developed by Johns Hopkins University researchers.

Both of these purpose statements reflect viable research strategies, and both align with the problem and gap, so it is up to the researcher to design a study in a manner that reflects personal preferences and desired study outcomes. Note that the quantitative purpose statement incorporates operationalized concepts, or variables, that reflect the way the researcher intends to measure the key concepts under study, whereas the qualitative purpose statement does not translate the concepts under study into variables but instead aims to explore and understand the core research phenomenon.
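To make the idea of operationalization concrete, the sketch below combines two measured components into one compound "well-being" variable, loosely following the quantitative example above (reverse-scored turnover intent plus a psychological test score). All names, scales, weights, and data are illustrative assumptions, not part of any real instrument.

```python
# Hypothetical sketch of operationalizing a compound variable: "employee
# well-being" combines reverse-scored turnover intent with a mental-health
# test score. Scales, weights, and data are assumptions for illustration.
from statistics import mean, stdev

def zscores(values):
    """Standardize a sample to mean 0 and standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

turnover_intent = [2, 5, 1, 4, 3]        # 1-5 scale; higher = more likely to leave
mhs_score       = [70, 40, 85, 50, 60]   # 0-100; higher = better mental health

# Reverse the sign of turnover intent so higher always means better well-being,
# then average the standardized components into one composite per employee.
well_being = [
    (-zt + zm) / 2
    for zt, zm in zip(zscores(turnover_intent), zscores(mhs_score))
]
print([round(w, 2) for w in well_being])
```

Standardizing before combining keeps the two components on a comparable scale; in an actual study, the weighting and scoring rules would be justified by the instruments' documentation and prior literature.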

Best Practices for Writing your Purpose Statement

Always keep in mind that the dissertation process is iterative, and your writing, over time, will be refined as clarity is gradually achieved. Most of the time, greater clarity for the purpose statement and other components of the Dissertation is the result of a growing understanding of the literature in the field. As you increasingly master the literature you will also increasingly clarify the purpose of your study.

The purpose statement should flow directly from the problem statement. There should be clear and obvious alignment between the two and that alignment will get tighter and more pronounced as your work progresses.

The purpose statement should specifically address the reason for conducting the study, with emphasis on the word specifically. There should not be any doubt in your readers' minds as to the purpose of your study. To achieve this level of clarity you will need to ensure there is no doubt in your own mind as to the purpose of your study.

Many researchers benefit from pausing their work during the research process when insight strikes and writing about it while it is still fresh in their minds. This can help clarify all aspects of a dissertation, including its purpose.

Your Chair and your committee members can help you to clarify your study's purpose, so carefully attend to any feedback they offer.

The purpose statement should reflect the research questions and vice versa. The chain of alignment that began with the research problem description and continues on to the research purpose, research questions, and methodology must be respected at all times during dissertation development. You are to succinctly describe the overarching goal of the study that reflects the research questions. Each research question narrows and focuses the purpose statement. Conversely, the purpose statement encompasses all of the research questions.

Identify in the purpose statement the research method as quantitative, qualitative, or mixed (i.e., “The purpose of this [qualitative/quantitative/mixed] study is to ...”).

Avoid the use of the phrase “research study” since the two words together are redundant.

Follow the initial declaration of purpose with a brief overview of how, with what instruments/data, with whom and where (as applicable) the study will be conducted. Identify variables/constructs and/or phenomenon/concept/idea. Since this section is to be a concise paragraph, emphasis must be placed on the word brief. However, adding these details will give your readers a very clear picture of the purpose of your research.

Developing the purpose section of your dissertation is usually not achieved in a single flash of insight. The process involves a great deal of reading to find out what other scholars have done to address the research topic and problem you have identified. The purpose section of your dissertation could well be the most important paragraph you write during your academic career, and every word should be carefully selected. Think of it as the DNA of your dissertation. Everything else you write should emerge directly and clearly from your purpose statement. In turn, your purpose statement should emerge directly and clearly from your research problem description. It is good practice to print out your problem statement and purpose statement and keep them in front of you as you work on each part of your dissertation in order to ensure alignment.

It is helpful to collect several dissertations similar to the one you envision creating. Extract the problem descriptions and purpose statements of other dissertation authors and compare them in order to sharpen your thinking about your own work.  Comparing how other dissertation authors have handled the many challenges you are facing can be an invaluable exercise. Keep in mind that individual universities use their own tailored protocols for presenting key components of the dissertation so your review of these purpose statements should focus on content rather than form.

Once your purpose statement is set it must be consistently presented throughout the dissertation. This may require some recursive editing because the way you articulate your purpose may evolve as you work on various aspects of your dissertation. Whenever you make an adjustment to your purpose statement you should carefully follow up on the editing and conceptual ramifications throughout the entire document.

In establishing your purpose you should NOT advocate for a particular outcome. Research should be done to answer questions, not prove a point. As a researcher, you are to inquire with an open mind, and even when you come to the work with clear assumptions, your job is to test the validity of the conclusions reached. For example, you would not say the purpose of your research project is to demonstrate that there is a relationship between two variables. Such a statement presupposes you know the answer before your research is conducted and promotes or supports (advocates on behalf of) a particular outcome. A more appropriate purpose statement would be to examine or explore the relationship between two variables.

Your purpose statement should not imply that you are going to prove something. You may be surprised to learn that we cannot prove anything in scholarly research for two reasons. First, in quantitative analyses, statistical tests calculate the probability that something is true rather than establishing it as true. Second, in qualitative research, the study can only purport to describe what is occurring from the perspective of the participants. Whether or not the phenomenon they are describing is true in a larger context is not knowable. We cannot observe the phenomenon in all settings and in all circumstances.

Writing your Purpose Statement

It is important to distinguish in your mind the differences between the Problem Statement and Purpose Statement.

The Problem Statement is why I am doing the research

The Purpose Statement is what type of research I am doing to fit or address the problem

The Purpose Statement includes:

  • Method of Study
  • Specific Population

Remember, as you contemplate what to include in your purpose statement and then write it: the purpose statement is a concise paragraph that describes the intent of the study, and it should flow directly from the problem statement. It should specifically address the reason for conducting the study and reflect the research questions. Further, it should identify the research method as qualitative, quantitative, or mixed. Then provide a brief overview of how the study will be conducted, with what instruments or data collection methods, and with whom (subjects) and where (as applicable). Finally, identify the variables, constructs, and/or phenomenon/concept/idea under study.

Qualitative Purpose Statement

Creswell's (2002) suggestions for writing purpose statements in qualitative research include using deliberate phrasing to alert the reader to the purpose statement. Verbs that indicate what will take place in the research, and non-directional language that does not suggest an outcome, are key. A purpose statement should focus on a single idea or concept, with a broad definition of the idea or concept. How the concept will be investigated should also be included, as well as the participants in the study and the location of the research, to give the reader a sense of with whom and where the study will take place.

Creswell (2003) advised the following script for purpose statements in qualitative research:

“The purpose of this qualitative_________________ (strategy of inquiry, such as ethnography, case study, or other type) study is (was? will be?) to ________________ (understand? describe? develop? discover?) the _________________(central phenomenon being studied) for ______________ (the participants, such as the individual, groups, organization) at __________(research site). At this stage in the research, the __________ (central phenomenon being studied) will be generally defined as ___________________ (provide a general definition)” (pg. 90).

Quantitative Purpose Statement

Creswell (2003) notes vast differences between the purpose statements written for qualitative research and those written for quantitative research, particularly with respect to language and the inclusion of variables. The comparison of variables is often a focus of quantitative research, with the variables distinguishable by either their temporal order or how they are measured. As with qualitative purpose statements, Creswell (2003) recommends the use of deliberate language to alert the reader to the purpose of the study, but quantitative purpose statements also include the theory or conceptual framework guiding the study, the variables being studied, and how those variables are related.

Creswell (2003) suggests the following script for drafting purpose statements in quantitative research:

“The purpose of this _____________________ (experiment? survey?) study is (was? will be?) to test the theory of _________________that _________________ (compares? relates?) the ___________(independent variable) to _________________________(dependent variable), controlling for _______________________ (control variables) for ___________________ (participants) at _________________________ (the research site). The independent variable(s) _____________________ will be generally defined as _______________________ (provide a general definition). The dependent variable(s) will be generally defined as _____________________ (provide a general definition), and the control and intervening variables(s), _________________ (identify the control and intervening variables) will be statistically controlled in this study” (pg. 97).

Sample Purpose Statements

  • The purpose of this qualitative study was to determine how participation in service-learning in an alternative school impacted students academically, civically, and personally.  There is ample evidence demonstrating the failure of schools for students at-risk; however, there is still a need to demonstrate why these students are successful in non-traditional educational programs like the service-learning model used at TDS.  This study was unique in that it examined one alternative school’s approach to service-learning in a setting where students not only serve, but faculty serve as volunteer teachers.  The use of a constructivist approach in service-learning in an alternative school setting was examined in an effort to determine whether service-learning participation contributes positively to academic, personal, and civic gain for students, and to examine student and teacher views regarding the overall outcomes of service-learning.  This study was completed using an ethnographic approach that included observations, content analysis, and interviews with teachers at The David School.
  • The purpose of this quantitative non-experimental cross-sectional linear multiple regression design was to investigate the relationship among early childhood teachers’ self-reported assessment of multicultural awareness as measured by responses from the Teacher Multicultural Attitude Survey (TMAS) and supervisors’ observed assessment of teachers’ multicultural competency skills as measured by the Multicultural Teaching Competency Scale (MTCS) survey. Demographic data such as number of multicultural training hours, years teaching in Dubai, curriculum program at current school, and age were also examined and their relationship to multicultural teaching competency. The study took place in the emirate of Dubai where there were 14,333 expatriate teachers employed in private schools (KHDA, 2013b).
  • The purpose of this quantitative, non-experimental study is to examine the degree to which stages of change, gender, acculturation level and trauma types predicts the reluctance of Arab refugees, aged 18 and over, in the Dearborn, MI area, to seek professional help for their mental health needs. This study will utilize four instruments to measure these variables: University of Rhode Island Change Assessment (URICA: DiClemente & Hughes, 1990); Cumulative Trauma Scale (Kira, 2012); Acculturation Rating Scale for Arabic Americans-II Arabic and English (ARSAA-IIA, ARSAA-IIE: Jadalla & Lee, 2013), and a demographic survey. This study will examine 1) the relationship between stages of change, gender, acculturation levels, and trauma types and Arab refugees’ help-seeking behavior, 2) the degree to which any of these variables can predict Arab refugee help-seeking behavior.  Additionally, the outcome of this study could provide researchers and clinicians with a stage-based model, TTM, for measuring Arab refugees’ help-seeking behavior and lay a foundation for how TTM can help target the clinical needs of Arab refugees. Lastly, this attempt to apply the TTM model to Arab refugees’ condition could lay the foundation for future research to investigate the application of TTM to clinical work among refugee populations.
  • The purpose of this qualitative, phenomenological study is to describe the lived experiences of LLM for 10 EFL learners in rural Guatemala and to utilize that data to determine how it conforms to, or possibly challenges, current theoretical conceptions of LLM. In accordance with Morse’s (1994) suggestion that a phenomenological study should utilize at least six participants, this study utilized semi-structured interviews with 10 EFL learners to explore why and how they have experienced the motivation to learn English throughout their lives. The methodology of horizontalization was used to break the interview protocols into individual units of meaning before analyzing these units to extract the overarching themes (Moustakas, 1994). These themes were then interpreted into a detailed description of LLM as experienced by EFL students in this context. Finally, the resulting description was analyzed to discover how these learners’ lived experiences with LLM conformed with and/or diverged from current theories of LLM.
  • The purpose of this qualitative, embedded, multiple case study was to examine how both parent-child attachment relationships are impacted by the quality of the paternal and maternal caregiver-child interactions that occur throughout a maternal deployment, within the context of dual-military couples. In order to examine this phenomenon, an embedded, multiple case study was conducted, utilizing an attachment systems metatheory perspective. The study included four dual-military couples who experienced a maternal deployment to Operation Iraqi Freedom (OIF) or Operation Enduring Freedom (OEF) when they had at least one child between 8 weeks-old to 5 years-old.  Each member of the couple participated in an individual, semi-structured interview with the researcher and completed the Parenting Relationship Questionnaire (PRQ). “The PRQ is designed to capture a parent’s perspective on the parent-child relationship” (Pearson, 2012, para. 1) and was used within the proposed study for this purpose. The PRQ was utilized to triangulate the data (Bekhet & Zauszniewski, 2012) as well as to provide some additional information on the parents’ perspective of the quality of the parent-child attachment relationship in regards to communication, discipline, parenting confidence, relationship satisfaction, and time spent together (Pearson, 2012). The researcher utilized the semi-structured interview to collect information regarding the parents' perspectives of the quality of their parental caregiver behaviors during the deployment cycle, the mother's parent-child interactions while deployed, the behavior of the child or children at time of reunification, and the strategies or behaviors the parents believe may have contributed to their child's behavior at the time of reunification. 
The results of this study may be utilized by the military, and by civilian providers, to develop proactive and preventive measures that both providers and parents can implement, to address any potential adverse effects on the parent-child attachment relationship, identified through the proposed study. The results of this study may also be utilized to further refine and understand the integration of attachment theory and systems theory, in both clinical and research settings, within the field of marriage and family therapy.

  • Last Updated: Apr 12, 2024 11:40 AM
  • URL: https://resources.nu.edu/researchtools



Why Should We Study the Use of Research Evidence as a Behavior?

Research on the use of research in policy and practice has grown rapidly in recent years across a broad range of academic disciplines and professional fields. As a result, we now know more than ever before about what use of research looks like and what can be done to promote research use among policymakers and practitioners.

At the same time, further progress on this front is impeded by the persistent challenge of studying and measuring the use of research evidence. Elizabeth Farley-Ripple has argued that research use remains elusive as a target of measurement largely because it is not well defined or fully connected with actual practice. In particular, research use appears to be a multi-dimensional construct that involves a sequence of actions (e.g., searching, filtering, and interpreting research), different levels of use (specific vs. general), and different decision making contexts (e.g., individual vs. group decisions). This, she suggests, makes it difficult to establish a standard measure of research evidence use for comparing and synthesizing the findings of research studies on this topic.

Drew Gitomer and colleagues have argued that standardizing methods and measures for studying the use of research evidence makes little sense, since different methods and measures are needed to illuminate our understanding of research evidence use across different actors, settings, and decision making processes. Taken together, they argue, the scientific research on this topic can illuminate our understanding of the processes and contexts of research use, how research is used to inform decisions, the impact of research use, and the structures and relationships that influence research use. However, it is clear that in order to influence conditions that improve the use of research in policy and practice, we must find a way to connect these pieces together.

Think “Use,” Not “Evidence”

I’d like to propose that a crucial first step in this direction is to recognize that research use is largely under-conceptualized and under-operationalized in current research on this topic. The primary reason is that use is often measured by tracking the movement of evidence from research producers to users. That is, we infer research use when research evidence is present in records and documents created by users or in the accounts they provide.

Such a conception and operationalization of research use has a number of important limitations. First, it tends to reproduce an artificial dichotomization of use vs. non-use, whereas use is more appropriately measured on a continuum—for example, as a function of dimensions of engagement with research evidence (e.g., systematic, critical, deliberate, generalized, habitual, etc.)—much like the way we rank research evidence as a function of rigor and consensus among scientists. Second, it imposes a normative expectation regarding what counts as use, which is derived from the norms and practices of research producers but is often divorced from the constraints imposed on such use of research evidence in reality, such as the feasibility, acceptability, and perceived utility of using evidence. Put differently, research use is determined by the needs, capabilities, and circumstances of users, not by the characteristics and availability of research evidence. Third, any measure of use that is limited to research evidence necessarily excludes user-generated evidence that may be equally or more consequential to our understanding of users’ decisions and actions. This is particularly true regarding experiential evidence, or evidence that is based on the professional insights, understanding, skills, and expertise that users accumulate over time and that forms their tacit knowledge. Lastly, it seems rather intuitive to align the measurement of research use with what users actually do with research evidence. In many policy and practice fields, for example, use of research evidence is not limited to that which informs the thoughts and actions of individuals. Rather, evidence is also frequently shared or exchanged with others, whether to make them aware of problems, to persuade or influence their thoughts and actions, or to negotiate solutions. Capturing and representing the “social life of research evidence” requires measures of use that are far more dynamic than the ones commonly used.

Evidence Use is Best Measured as Behavior

If the use of research evidence is inherently about what users do with evidence, then it makes complete sense to study it as a behavior. The most obvious advantage of doing so is that we already have valid and reliable tools—theories, models, frameworks, methods, and measures—to measure and analyze behavior in a systematic way. These can be rather easily adapted to create measures of the who, what, why, how, when, and where aspects of evidence use that are comparable across different actors, settings, and circumstances as well as over time.

Beyond offering potentially effective solutions to core measurement challenges, a behavioral approach to research evidence use has considerable synergistic power as a frame of reference for connecting and organizing the results of different studies and for decoupling evidence use from both the factors that influence use and the outcomes of using research evidence. In essence, all human behaviors can be predicted from the combination of three elements: capacity, motivation, and opportunity to act. Capacity is generally defined as the individual’s psychological and physical capacity to engage in the activity concerned. It is a function of having the necessary knowledge and skills, but also the tools needed to perform the behavior. Motivation is defined as the cognitive and affective processes that energize and guide a person’s behavior. It is a function of held attitudes, perceptions, and emotions regarding the enactment of a specific behavior, but can also be induced externally through the use of incentives and disincentives. Opportunity is defined as objective factors in a person’s environment (physical, legal, economic, social, and cultural) that enable or impede the enactment of the behavior. This framework can be productively employed to organize and synthesize major findings of studies that examine the use of research evidence, as I recently did with regard to data-informed decision making in educational settings. It can be equally useful for guiding the development, implementation, and evaluation of programs and interventions to improve the use of research evidence, for example, capacity-building interventions.
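
The capacity-motivation-opportunity framing above can be made concrete with a small sketch. This is purely illustrative: the 0-1 scales, the class name, and the multiplicative scoring rule are my assumptions for the sake of the example, not part of the framework as published.

```python
from dataclasses import dataclass

@dataclass
class EvidenceUseProfile:
    """Illustrative profile of one potential research user (hypothetical scales)."""
    capacity: float     # knowledge, skills, and tools (assumed 0-1)
    motivation: float   # attitudes, perceptions, incentives (assumed 0-1)
    opportunity: float  # environmental enablers (assumed 0-1)

    def enactment_score(self) -> float:
        # Multiplicative combination: if any one element is near zero,
        # the behavior is unlikely no matter how strong the other two are.
        return self.capacity * self.motivation * self.opportunity

# A skilled but unmotivated analyst vs. a motivated analyst with modest skills:
skilled_unmotivated = EvidenceUseProfile(capacity=0.9, motivation=0.1, opportunity=0.8)
motivated_generalist = EvidenceUseProfile(capacity=0.5, motivation=0.9, opportunity=0.8)
```

Under this toy scoring rule, raising the weakest element does the most good, which is one way to reason about where a capacity-building intervention should be aimed.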

Finally, a behavioral approach to the conceptualization and operationalization of research evidence use has significant potential to facilitate a more complete and nuanced understanding of the mechanisms that underlie the use of research evidence. These include cognitive mechanisms (e.g., information processing and learning), social or relational mechanisms (e.g., diffusion, contagion, and social influence), and structural mechanisms (e.g., institutionalization of evidence use in policies, procedures, professional norms, and interactions with the external environment), as well as potential interactions among these mechanisms. The insights generated from studying these mechanisms are already informing the development of decision and behavior support systems that accelerate the transfer of research-based knowledge into practice.

The frameworks and tools of behavioral science have significant potential to overcome persistent challenges regarding the measurement, tracking, and analysis of research evidence use. The same frameworks and tools can be employed synergistically to connect and synthesize existing pools of scientific knowledge on the topic and develop effective interventions to improve research evidence use.


Jaffe K , Greene AK , Chen L, et al. Genetic Researchers’ Use of and Interest in Research With Diverse Ancestral Groups. JAMA Netw Open. 2024;7(4):e246805. doi:10.1001/jamanetworkopen.2024.6805

Genetic Researchers’ Use of and Interest in Research With Diverse Ancestral Groups

  • 1 Department of Health Promotion and Policy, University of Massachusetts, Amherst
  • 2 Center for Bioethics and Social Sciences in Medicine, University of Michigan Medical School, Ann Arbor
  • 3 Department of Obstetrics and Gynecology, University of Michigan Medical School, Ann Arbor
  • 4 Department of Health Behavior and Health Education, University of Michigan School of Public Health, Ann Arbor
  • 5 Department of Internal Medicine, University of Michigan Medical School, Ann Arbor
  • 6 Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas
  • 7 Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor

Question   Are genetic researchers interested in research with diverse ancestral groups, and how can data stewards encourage that use?

Findings   In this survey study of 294 genetic researchers, significantly more respondents reported working with data from European ancestral populations than any other ancestral population, and European samples were more likely to be considered by researchers as adequate across data-steward type. Most researchers were interested in using more diverse ancestral populations and reported that increasing ancestral diversity of existing databases would enable such research.

Meaning   These findings suggest that there are specific gaps in access to and composition of genetic databases, underscoring the need to boost diversity in existing research samples to improve inclusivity in genetic research practices.

Importance   Genetic researchers must have access to databases populated with data from diverse ancestral groups to ensure research is generalizable or targeted for historically excluded communities.

Objective   To determine genetic researchers’ interest in doing research with diverse ancestral populations, which database stewards offer adequate samples, and additional facilitators for use of diverse ancestral data.

Design, Setting, and Participants   This survey study was conducted from June to December 2022 and was part of an exploratory sequential mixed-methods project in which previous qualitative results informed survey design. Eligible participants included genetic researchers who held US academic affiliations and conducted research using human genetic databases.

Exposure   Internet-administered survey to genetic research professionals.

Main Outcomes and Measures   The survey assessed respondents’ experience and interest in research with diverse ancestral data, perceptions of adequacy of diverse data across database stewards (ie, private, government, or consortia), and identified facilitators for encouraging use of diverse ancestral data. Descriptive statistics, χ2 tests, and z tests were used to describe respondents’ perspectives and experiences.

Results   A total of 294 researchers (171 men [58.5%]; 121 women [41.2%]) were included in the study, resulting in a response rate of 20.4%. Across seniority level, 109 respondents (37.1%) were senior researchers, 85 (28.9%) were mid-level researchers, 71 (24.1%) were junior researchers, and 27 (9.2%) were trainees. Significantly more respondents worked with data from European ancestral populations (261 respondents [88.8%]) compared with any other ancestral population. Respondents who had not done research with Indigenous ancestral groups (210 respondents [71.4%]) were significantly more likely to report interest in doing so than not (121 respondents [41.2%] vs 89 respondents [30.3%]; P < .001). Respondents reported discrepancies in the adequacy of ancestral populations, with significantly more reporting European samples as adequate across consortium (203 respondents [90.6%]), government (200 respondents [89.7%]), and private (42 respondents [80.8%]) databases, compared with any other ancestral population. There were no significant differences in reported adequacy of ancestral populations across database stewards. A majority of respondents without access to adequate diverse samples reported that increasing the ancestral diversity of existing databases (201 respondents [68.4%]) and increasing access to databases that are already diverse (166 respondents [56.5%]) would increase the likelihood of their using a more diverse sample.

Conclusions and Relevance   In this survey study of US genetic researchers, respondents reported existing databases only provide adequate ancestral samples for European populations, despite their interest in other ancestral populations. These findings suggest there are specific gaps in access to and composition of genetic databases, highlighting the urgent need to boost diversity in research samples to improve inclusivity in genetic research practices.

In the era of precision medicine, genomic data are increasingly critical to refining and improving health care delivery.1 Through advances in translational research, genomic databanks can be used to associate genetic variation with both disease risk and treatment response (eg, pharmacogenomics).2

However, genomic databases generally lack demographic diversity across a number of variables. For example, the vast majority of genome-wide association studies (GWAS) are conducted with European ancestral populations,3 with more than 80% of individuals in GWAS being of European descent.4 In contrast, populations of African ancestral descent represent just 2% of the overall samples.4,5 This is particularly problematic given the high genetic diversity of populations of African ancestry and their data’s unique ability to contribute to advances in genomic medicine.6 For example, in a recent analysis of the National Human Genome Research Institute and European Bioinformatics Institute GWAS Catalogue,7 data from the 2.4% of participants of African ancestry contributed to 7% of associations of genetic variance with traits.

It is unclear whether the populations represented in genomic data also generally vary by type of database, including those managed by consortia (eg, 1000 Genomes Project or ENCODE), government (eg, UK Biobank, All of Us Research Hub, or the Database of Genotypes and Phenotypes), or private entities (eg, 23andMe, Ambry, or Ancestry DNA). There is also limited information regarding the ancestral representativeness of private databases; however, past work7-9 has indicated that these databases, like others, are largely composed of European contributors. These disparities in the ancestral diversity of genomic data can affect the communities to which genomic research will generalize or can be targeted for and, consequently, can affect health equity in genomic medicine.10,11

To address this homogeneity, government initiatives have sought to increase the representation of populations historically underrepresented in biomedical research. For example, as part of President Obama’s Precision Medicine Initiative, the $2 billion All of Us research program aims to recruit 1 million participants and build a cohort comprising more than 45% individuals from self-identified racial and ethnic minority groups (which can be associated with ancestral background) and more than 75% from populations generally underrepresented in research.12,13 Other efforts to diversify genetic databases include the recent publication of the Pangenome.14

Nevertheless, researchers remain limited in their access to demographically diverse genomic data. Our previous qualitative work15 with genetic researchers indicated a reinforcing, cyclical pattern: researchers know genetic databases will not be ancestrally diverse, and therefore do not prioritize such diversity in database selection. Others have normatively argued that racial and ethnic minority scientists are more likely to conduct research with racial and ethnic minority populations, so the lack of diversity in the scientific population compounds the lack of diversity in research sample populations.16

To support the effort toward increasing the generalizability of genomic research across diverse ancestries, we revisited these challenges through an online survey of academic US genetic researchers who use human genomic data from consortium, government, or private databases. The survey focused on researchers’ perceptions of genomic databases and views on genomic data sharing broadly. Here, we queried respondents’ experience and interest in research with diverse ancestral groups because there is potentially a difference between the relative representativeness of ancestries within a database and access to adequate data necessary for diverse genomic analytic methods.16,17 We also assessed the adequacy of representation of diverse ancestral populations by database steward type (ie, private, government, and consortia). Last, we explored facilitators to encourage their use.

This survey study was deemed exempt from full review by the University of Michigan Medical School institutional review board because the study involved a survey and information was collected so that participant identity could not be readily obtained in accordance with the Common Rule. The study followed the American Association for Public Opinion Research (AAPOR) reporting guideline for survey studies. Potential respondents were eligible to complete the online Qualtrics survey if they had an affiliation with a US academic institution, published research utilizing a genetic database for human research, and had experience using consortia, government, and/or privately managed databases. The first page of the survey provided information regarding informed consent. A participant was deemed to have consented to the study if they proceeded to the next page.

We used several different methods to recruit genetic researchers. Our first cohort was built through a systematic search of PubMed indexed articles that used human genetic databases. We focused on original research articles using human genetic data published between January 2017 and March 2021, in which the first or last author had a US academic affiliation. From this comprehensive search, we compiled a database of 1993 US genetic researchers, including their institutional contact information. We emailed all researchers in June 2022. We did a second wave of email recruitment to nonresponders in November 2022, including 2 follow-up emails in December 2022. Additionally, we sent a postcard with a QR code linking to the survey to approximately one-third of the November recruits, but this approach had a negligible impact on response, so we followed up with the remainder of our list by email only.

We additionally recruited, via a descriptive link to the survey, in the American Society of Human Genetics (ASHG) September 2022 and November 2022 email newsletters and by distributing postcards at both the ASHG and the American Society for Bioethics and Humanities annual conferences in October 2022. Due to a lack of member tracking at the professional organizational level, we were unable to establish the baseline of readers and attendees who met inclusion criteria through this method. All respondents were offered a $25 gift card for completion.

We designed the survey instrument as part of an exploratory sequential mixed-methods project, in which previous qualitative results,15 also from US academic genetic researchers, informed the design and inclusion of key measures and the direction of this study. These novel measures assessed (1) general characteristics of the different kinds of genetic databases that respondents used (held by consortium, government, and private stewards [ie, the entities that manage and oversee many data resources]), (2) respondents’ perceptions of those different kinds of genetic databases, (3) perceived obstacles to their own research, and (4) general views on genetic data sharing (full survey available in the eAppendix in Supplement 1). We conducted 10 cognitive interviews to further refine the instrument.

Of key relevance to this analysis, we assessed which ancestral populations respondents had analyzed in their research, including African, American Indian or Alaskan Native, Arab or Middle Eastern, Asian, European, Hispanic or Latin American, mixed ancestries, Native Pacific or Pacific Islander, or other Indigenous populations. We also asked all respondents about their research interests related to different ancestral populations, the adequacy of database samples of ancestral populations across different data steward types (ie, consortium, government, and private) in their experience, and facilitators to conducting research with diverse ancestral populations. We also collected respondents’ own demographic characteristics, including self-identified gender, race, ethnicity, and career stage. Race and ethnicity categories for respondents included African American or Black, Asian, Hispanic or Latino, Indigenous, non-Hispanic White, multiracial (reported as mixed race on the survey), or none listed. Race and ethnicity information was collected because we wanted to understand whether respondent race and ethnicity might be associated with deciding to work with data from different ancestral groups. Survey measures are included in the eAppendix in Supplement 1.

While considering potential measures that would facilitate the incorporation of additional ancestral populations into their work, respondents were presented with several options derived from our initial qualitative analysis,14 including (1) increasing the ancestral diversity of existing databases; (2) increasing access to ancestrally diverse databases; (3) additional methods development to support research in additional populations; (4) additional funding opportunities; (5) additional publication opportunities; (6) additional demographic data being included in the database; or (7) something else, with an option for open-text response.

We used descriptive statistics to describe respondents’ perspectives and experiences, and χ2 tests to understand whether respondents’ demographics or seniority were associated with those variables. We used z tests to compare the proportions of respondents who had experience and/or interest in research in different ancestral populations. We also assessed the adequacy of genetic databases by ancestral population and by type of data steward as perceived by respondents for their own research, respondents’ interest in working with different ancestral populations, and measures that would aid them in working with more diverse databases. We used SPSS version 28 (IBM) to complete all statistical analyses and to generate figures. All P values were 2-sided, and we considered P < .05 significant after a Bonferroni adjustment for multiple comparisons if warranted. Data analysis was conducted from April 2023 to March 2024.
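
The proportion comparisons described above can be sketched with nothing beyond the standard library. A pooled two-proportion z test is one common textbook formulation; the article does not specify its exact test procedure, and the counts below are illustrative placeholders, so this sketch should not be expected to reproduce the reported P values.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Pooled two-proportion z test; returns (z statistic, two-sided P value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided P value from the standard normal: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

def bonferroni_threshold(alpha: float, n_comparisons: int) -> float:
    """Bonferroni-adjusted per-comparison significance threshold."""
    return alpha / n_comparisons

# Illustrative counts (not the study's data): 50/100 vs 30/100.
z, p = two_proportion_z_test(50, 100, 30, 100)
```

With these placeholder counts the test yields z ≈ 2.89, and with, say, 8 comparisons the Bonferroni threshold drops from .05 to .00625, which is the sense in which the article's "significant after a Bonferroni adjustment" criterion tightens with the number of tests.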

Between June and December 2022, 1336 genetic researchers (sampled via PubMed) opened our email, 373 opened the link to our survey, and 273 eligible respondents (20.4%) completed all questions (Table 1). An additional 21 participants fully completed our survey via our ASHG recruitment. In total, we had an analytic sample of 294 respondents (171 men [58.5%]; 121 women [41.2%]) (Table 2). The sample was majority non-Hispanic White (179 respondents [63.0%]), likely reflective of the field itself. Of all respondents, 4 (1.4%) identified as African American or Black, 70 (24.6%) as Asian, 12 (4.2%) as Hispanic or Latino, 9 (3.2%) as multiracial, 179 (63.0%) as non-Hispanic White, and 10 (3.4%) did not list a race or ethnicity. Respondents ranged in seniority level, with 109 senior respondents (37.1%), 85 mid-level respondents (28.9%), 71 junior respondents (24.1%), and 27 trainee or student respondents (9.2%).

Respondents reported that most of their genetic research used data from participants of European ancestry (261 respondents [88.8%]). Significantly fewer respondents reported working with any other ancestral population as compared with European ancestry populations (Figure 1).

If a respondent reported that they had not used a specific ancestral population in the past, we queried whether they would be interested in doing so if they could. All descriptive results are displayed in Figure 1. A total of 210 respondents (71.4%) reported not having used samples from Indigenous ancestral groups (a variable we combined post hoc with Native Pacific or Pacific Islander and other Indigenous populations); these respondents were significantly more likely to report interest in such research than no interest (121 respondents [41.2%] vs 89 respondents [30.3%]; P < .001). If respondents had not used samples from European ancestral groups in the past (33 respondents [11.2%]), they were significantly more likely to report not being interested in such research than being interested (31 respondents [10.5%] vs 2 respondents [0.7%]; P < .001) (Figure 1). Using χ2 tests, we found no significant association of respondents’ own reported demographic characteristics (ie, race or ethnicity, gender, and seniority) with the reported diversity of ancestral populations represented in their research.

Respondents who reported research experience with a particular ancestral population and data steward were asked to evaluate the adequacy of the database steward’s sample for their respective research (ie, adequate, inadequate, or unsure). Respondents reported significant discrepancies in the adequacy of represented ancestral populations, with significantly more reporting European samples to be adequate across consortium (203 respondents [90.6%]), government (200 respondents [89.7%]), and private (42 respondents [80.8%]) databases as compared with any other ancestral population. None of the other descriptive differences were statistically significant (Figure 2). In contrast with past reports focusing on the diversity of private databases, we found no significant differences in reported adequacy of ancestral populations across database stewards.

For respondents who reported that they had not used a specific ancestral population in the past and indicated that they were interested in doing so (211 respondents [71.8%]), we asked what would increase the probability that they could. A majority of respondents overall reported that increasing the ancestral diversity of existing databases (201 respondents [68.4%]) and increasing access to databases that are already ancestrally diverse (166 respondents [56.5%]) would increase the likelihood of their using more diverse ancestral populations. Others reported that additional funding opportunities (138 respondents [46.9%]), additional demographic data being included in the database (113 respondents [38.4%]), additional methods development (100 respondents [34.0%]), and additional publication opportunities (39 respondents [13.3%]) would increase the probability (Figure 3).

In this survey study, the majority of US academic genetic researchers who responded had experience with, or interest in, using data from diverse ancestral groups. However, respondents reported that data sets across steward types were inadequate for their research for all ancestral groups except European. Researchers thought that increasing the diversity of existing databases and access to existing diverse databases would be most likely to facilitate this research.

Researchers were significantly more likely to report working with European populations as compared with any other ancestral group, but the majority of respondents reported having worked with, or an interest in working with, all ancestral populations presented. Researchers who reported that they had not worked with diverse ancestral groups but were interested in doing so likely face different kinds of barriers related to the overall adequacy of population samples. For instance, the 0.7% of respondents who reported that they had no experience working with European populations but were interested were unlikely to be impacted by a lack of adequate samples, whereas respondents who reported that they had no experience working with African, Hispanic, and Indigenous populations likely were. In addition, while some have normatively argued that the demographic characteristics of researchers are associated with use of diverse data sets, 17 our results showed no significant association of researcher demographics with the diversity of ancestral populations with whom they had done research. This finding highlights that a desire to work with diverse data may be a common, shared perspective in addition to the lack of access being a shared barrier.

Genetic researchers’ perceptions of the adequacy of ancestrally diverse samples reinforce previous findings demonstrating the lack of ancestral diversity across genomic databases.3,5,10,18 European populations were the only group for which a majority of researchers reported adequate access across respondents and database steward types. Notably, while government initiatives have dedicated substantial resources to increasing racial and ethnic diversity in genetic databases, few respondents reported databases from non-European ancestral populations to be adequate in government or (often government-funded) consortium databases. Crucially, this suggests that ensuring some representation of diverse ancestral populations in a given genetic data set does not necessarily mean that the samples of those ancestral populations are adequate for researchers’ needs.

Focusing on the population of respondents who were interested in conducting research with an ancestral group with whom they had not worked, the primary reported facilitator was to increase the ancestral diversity of existing databases. In other words, respondents want existing databases to be more diverse—a finding we can contextualize within previous work,15 in which genetic researchers had an interest in using ancestrally diverse data, but this was not considered a priority when choosing a database. Thus, the research incentive structure is such that if data stewards themselves do not prioritize diversifying databases that researchers are already choosing, that prioritization will not happen at the researcher level. This might limit the impact of databases, such as All of Us, which exist independently, instead of integrating diverse data into more commonly used databases.

Respondents also reported that increasing access to existing databases would facilitate more research with diverse ancestral populations. It may be that particular databases are prohibitively expensive or have greater restrictions on use and publishing, rendering them inaccessible. Some respondents also reported that increasing funding opportunities would help them conduct more research with diverse populations. Interpreted in the context of our previous qualitative findings,15 these funds would likely go toward supporting access to more costly data or providing the research infrastructure and personnel to support the intensive data cleaning and analysis required to harmonize data from less represented ancestral populations across databases.15

This study has some limitations. First, we sampled a small and specific group of US research professionals. It is difficult to establish the total number of researchers using human genetic data with an academic affiliation in the US, so the extent to which these results are generalizable is limited. However, we did recruit respondents through multiple pathways, including the primary organization through which US genetic researchers are affiliated. Second, the potential for self-selection bias was a limitation. Genetic researchers who have stronger opinions about genetic databases may have been more likely to take the survey. That said, we did not disclose the content of the survey (eg, interest in data diversity) in the recruitment email. Third, there may have been some confusion between how we defined our 3 data stewards of interest (government, private, and consortium). To mitigate this impact, we did not use data from partial survey responses because several respondents reported that this confusion was why they did not complete the instrument. Fourth, we did not limit our analysis to only researchers who were specifically doing ancestral research. Fifth, respondents took the survey based on past research experience and their responses may not have captured the most recent government initiatives to emphasize and fund diverse genetic databases.

In this survey study of US genetic researchers, respondents reported that existing databases provide adequate ancestral research data only for European populations, despite their interest in other ancestral populations. Adequately representative genetic data from diverse ancestral populations are essential for health equity and for ensuring research outcomes are either directed toward or generalizable to diverse patient populations. Government-funded genetic databases in particular should be representative of the populations they serve and foster health innovations that benefit all communities. It is important to note, however, that diversity of ancestral groups represented in genomic databases is a necessary but not sufficient benchmark of the extent to which databases facilitate meaningful research with historically underrepresented groups. By understanding genetic researchers’ experiences working with data from different ancestral populations—as well as their perceptions of both sample adequacy and facilitators for doing genetic research with non-European populations—we can begin to identify specific gaps in the accessibility and composition of genetic databases and provide more robust support to researchers seeking to work with diverse ancestral populations.

Accepted for Publication: February 18, 2024.

Published: April 16, 2024. doi:10.1001/jamanetworkopen.2024.6805

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2024 Jaffe K et al. JAMA Network Open.

Corresponding Author: Kayte Spector-Bagdady, JD, MBe, Center for Bioethics & Social Sciences in Medicine, University of Michigan Medical School, 2800 Plymouth Rd, Bldg 14, G016, Ann Arbor, MI 48109 ( [email protected] ).

Author Contributions: Professor Spector-Bagdady had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Jaffe, Greene, Roberts, Zikmund-Fisher, McGuire, Spector-Bagdady.

Acquisition, analysis, or interpretation of data: Greene, Chen, Ryan, Krenz, Thomas, Marsh, Spector-Bagdady.

Drafting of the manuscript: Jaffe, Greene, Spector-Bagdady.

Critical review of the manuscript for important intellectual content: Greene, Chen, Ryan, Krenz, Roberts, Zikmund-Fisher, McGuire, Thomas, Marsh, Spector-Bagdady.

Statistical analysis: Greene, Chen, Spector-Bagdady.

Obtained funding: Roberts, Spector-Bagdady.

Administrative, technical, or material support: Greene, Thomas.

Supervision: McGuire, Spector-Bagdady.

Conflict of Interest Disclosures: Dr McGuire reported receiving personal fees from Geisinger Research, Morgridge Institute for Research, Danaher, Greenwall Foundation Board, and Nature Genomics outside the submitted work. No other disclosures were reported.

Funding/Support: This work was supported by the National Human Genome Research Institute (grant No. K01HG010496), the National Center for Advancing Translational Sciences (grant Nos. UL1TR002240 and R01TR004244), the National Institute of Mental Health (grant No. R01MH126937), the National Cancer Institute (grant No. R01CA237118), and the Center for Bioethics & Social Sciences in Medicine at the University of Michigan Medical School.

Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The contents of this paper do not necessarily reflect the opinion of the funders of the research.

Data Sharing Statement: See Supplement 2 .



Types of Research Designs Compared | Guide & Examples

Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023.

When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do.

There are many ways to categorize different types of research. The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by:

  • The type of knowledge you aim to produce
  • The type of data you will collect and analyze
  • The sampling methods, timescale and location of the research

This article takes a look at some common distinctions made between different types of research and outlines the key differences between them.

Table of contents

  • Types of research aims
  • Types of research data
  • Types of sampling, timescale, and location
  • Other interesting articles

The first thing to consider is what kind of knowledge your research aims to contribute.


The next thing to consider is what type of data you will collect. Each kind of data is associated with a range of specific research methods and procedures.

Finally, you have to consider three closely related questions: how will you select the subjects or participants of the research? When and how often will you collect data from your subjects? And where will the research take place?

Keep in mind that the methods you choose bring with them different risk factors and types of research bias. Biases aren’t completely avoidable, but if left unchecked they can heavily impact the validity and reliability of your findings.

Choosing between all these different research types is part of the process of creating your research design, which determines exactly how your research will be conducted. But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study.

Read more about creating a research design

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Cite this Scribbr article


McCombes, S. (2023, June 22). Types of Research Designs Compared | Guide & Examples. Scribbr. Retrieved April 15, 2024, from https://www.scribbr.com/methodology/types-of-research/



April 16, 2024


New guidelines reflect growing use of AI in health care research

by NDORMS, University of Oxford


The widespread use of artificial intelligence (AI) in medical decision-making tools has led to an update of the TRIPOD guidelines for reporting clinical prediction models. The new TRIPOD+AI guidelines are launched in the BMJ today.

The TRIPOD guidelines (short for Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis Or Diagnosis) were developed in 2015 to improve the diagnostic and prognostic tools used by doctors. Widely adopted by medical practitioners to estimate the probability that a specific condition is present or may occur in the future, they have helped improve the transparency and accuracy of decision-making and significantly improve patient care.

But research methods have moved on since 2015, and we are witnessing an acceleration of studies that are developing prediction models using AI, specifically machine learning methods. Transparency is one of the six core principles underpinning the WHO guidance on ethics and governance of artificial intelligence for health. TRIPOD+AI has therefore been developed to provide a framework and set of reporting standards to boost reporting of studies developing and evaluating AI prediction models regardless of the modeling approach.

The TRIPOD+AI guidelines were developed by a consortium of international investigators, led by researchers from the University of Oxford alongside researchers from other leading institutions across the world, health care professionals, industry, regulators, and journal editors. The development of the new guidance was informed by research highlighting poor and incomplete reporting of AI studies, a Delphi survey, and an online consensus meeting.

Gary Collins, Professor of Medical Statistics at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, and lead researcher in TRIPOD, says, "There is enormous potential for artificial intelligence to improve health care from earlier diagnosis of patients with lung cancer to identifying people at increased risk of heart attacks. We're only just starting to see how this technology can be used to improve patient outcomes.

"Deciding whether to adopt these tools is predicated on transparent reporting. Transparency enables errors to be identified, facilitates appraisal of methods and ensures effective oversight and regulation. Transparency can also create more trust and influence patient and public acceptability of the use of prediction models in health care."

The TRIPOD+AI statement consists of a 27-item checklist that supersedes TRIPOD 2015. The checklist details reporting recommendations for each item and is designed to help researchers, peer reviewers, editors, policymakers and patients understand and evaluate the quality of the study methods and findings of AI-driven research.

A key change in TRIPOD+AI has been an increased emphasis on trustworthiness and fairness. Prof. Carl Moons, UMC Utrecht said, "While these are not new concepts in prediction modeling, AI has drawn more attention to these as reporting issues. A reason for this is that many AI algorithms are developed on very specific data sets that are sometimes not even from studies or could simply be drawn from the internet.

"We also don't know which groups or subgroups were included. So to ensure that studies do not discriminate against any particular group or create inequalities in health care provision, and to ensure decision-makers can trust the source of the data, these factors become more important."

Dr. Xiaoxuan Liu and Prof. Alastair Denniston, directors of the NIHR Incubator for Regulatory Science in AI & Digital Health care and co-authors of TRIPOD+AI, explained, "Many of the most important applications of AI in medicine are based on prediction models. We were delighted to support the development of TRIPOD+AI, which is designed to improve the quality of evidence in this important area of AI research."

TRIPOD 2015 helped change the landscape of clinical research reporting, bringing minimum reporting standards to prediction models. The original guidelines have been cited over 7,500 times, featured in multiple journal instructions to authors, and been included in WHO and NICE briefing documents.

"I hope TRIPOD+AI will lead to a marked improvement in reporting, reduce waste from incompletely reported research, and enable stakeholders to arrive at an informed judgment, based on full details of the AI technology's potential to improve patient care and outcomes, that cuts through the hype in AI-driven health care innovations," concluded Collins.


Two key brain systems are central to psychosis, Stanford Medicine-led study finds

When the brain has trouble filtering incoming information and predicting what’s likely to happen, psychosis can result, Stanford Medicine-led research shows.

April 11, 2024 - By Erin Digitale


People with psychosis have trouble filtering relevant information (mesh funnel) and predicting rewarding events (broken crystal ball), creating a complex inner world. Emily Moskal

Inside the brains of people with psychosis, two key systems are malfunctioning: a “filter” that directs attention toward important external events and internal thoughts, and a “predictor” composed of pathways that anticipate rewards.

Dysfunction of these systems makes it difficult to know what’s real, manifesting as hallucinations and delusions. 

The findings come from a Stanford Medicine-led study, published April 11 in Molecular Psychiatry, that used brain scan data from children, teens and young adults with psychosis. The results confirm an existing theory of how breaks with reality occur.

“This work provides a good model for understanding the development and progression of schizophrenia, which is a challenging problem,” said lead author Kaustubh Supekar, PhD, clinical associate professor of psychiatry and behavioral sciences.

The findings, observed in individuals with a rare genetic disease called 22q11.2 deletion syndrome who experience psychosis as well as in those with psychosis of unknown origin, advance scientists’ understanding of the underlying brain mechanisms and theoretical frameworks related to psychosis.

During psychosis, patients experience hallucinations, such as hearing voices, and hold delusional beliefs, such as thinking that people who are not real exist. Psychosis can occur on its own and is a hallmark of certain serious mental illnesses, including bipolar disorder and schizophrenia. Schizophrenia is also characterized by social withdrawal, disorganized thinking and speech, and a reduction in energy and motivation.

It is challenging to study how schizophrenia begins in the brain. The condition usually emerges in teens or young adults, most of whom soon begin taking antipsychotic medications to ease their symptoms. When researchers analyze brain scans from people with established schizophrenia, they cannot distinguish the effects of the disease from the effects of the medications. They also do not know how schizophrenia changes the brain as the disease progresses. 

To get an early view of the disease process, the Stanford Medicine team studied young people aged 6 to 39 with 22q11.2 deletion syndrome, a genetic condition with a 30% risk for psychosis, schizophrenia or both. 


Brain function in 22q11.2 patients who have psychosis is similar to that in people with psychosis of unknown origin, they found. And these brain patterns matched what the researchers had previously theorized was generating psychosis symptoms.

“The brain patterns we identified support our theoretical models of how cognitive control systems malfunction in psychosis,” said senior study author Vinod Menon, PhD, the Rachael L. and Walter F. Nichols, MD, Professor; a professor of psychiatry and behavioral sciences; and director of the Stanford Cognitive and Systems Neuroscience Laboratory.

Thoughts that are not linked to reality can capture the brain’s cognitive control networks, he said. “This process derails the normal functioning of cognitive control, allowing intrusive thoughts to dominate, culminating in symptoms we recognize as psychosis.”

Cerebral sorting  

Normally, the brain’s cognitive filtering system — aka the salience network — works behind the scenes to selectively direct our attention to important internal thoughts and external events. With its help, we can dismiss irrational thoughts and unimportant events and focus on what’s real and meaningful to us, such as paying attention to traffic so we avoid a collision.

The ventral striatum, a small brain region, and associated brain pathways driven by dopamine, play an important role in predicting what will be rewarding or important. 

For the study, the researchers assembled as much functional MRI brain-scan data as possible from young people with 22q11.2 deletion syndrome, totaling 101 individuals scanned at three different universities. (The study also included brain scans from several comparison groups without 22q11.2 deletion syndrome: 120 people with early idiopathic psychosis, 101 people with autism, 123 with attention deficit/hyperactivity disorder and 411 healthy controls.) 

The genetic condition, characterized by deletion of part of the 22nd chromosome, affects 1 in every 2,000 to 4,000 people. In addition to the 30% risk of schizophrenia or psychosis, people with the syndrome can also have autism or attention deficit hyperactivity disorder, which is why these conditions were included in the comparison groups.

The researchers used a type of machine learning algorithm called a spatiotemporal deep neural network to characterize patterns of brain function in all patients with 22q11.2 deletion syndrome compared with healthy subjects. With a cohort of patients whose brains were scanned at the University of California, Los Angeles, they developed an algorithmic model that distinguished brain scans from people with 22q11.2 deletion syndrome versus those without it. The model predicted the syndrome with greater than 94% accuracy. They validated the model in additional groups of people with or without the genetic syndrome who had received brain scans at UC Davis and Pontificia Universidad Católica de Chile, showing that in these independent groups, the model sorted brain scans with 84% to 90% accuracy.
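The train-at-one-site, validate-at-independent-sites pattern described above can be sketched with a toy stand-in. Everything below is illustrative, not the study's data or method: synthetic two-dimensional "scan features" replace functional MRI data, and a nearest-centroid classifier replaces the spatiotemporal deep neural network. The point is only the evaluation design: fit on one cohort, then measure accuracy on an independently generated cohort.

```python
import random

random.seed(0)  # deterministic toy data

def make_site(n_per_class, shift):
    """Synthetic 2-D 'scan features' for one site: class 0 centered at the
    origin, class 1 shifted by `shift` along both axes (unit Gaussian noise)."""
    data = []
    for label in (0, 1):
        for _ in range(n_per_class):
            point = (random.gauss(label * shift, 1.0),
                     random.gauss(label * shift, 1.0))
            data.append((point, label))
    return data

def fit_centroids(train):
    """Per-class mean feature vector -- a minimal classifier."""
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    counts = {0: 0, 1: 0}
    for (x, y), label in train:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {c: (sums[c][0] / counts[c], sums[c][1] / counts[c]) for c in sums}

def accuracy(model, test_set):
    """Fraction of test points whose nearest centroid matches their label."""
    def predict(p):
        return min(model, key=lambda c: (p[0] - model[c][0]) ** 2
                                        + (p[1] - model[c][1]) ** 2)
    return sum(predict(p) == label for p, label in test_set) / len(test_set)

# Fit on one cohort, then evaluate on an independently generated cohort,
# mirroring the study's train-at-one-site / validate-elsewhere design.
train_cohort = make_site(100, shift=2.0)
independent_cohort = make_site(100, shift=2.0)
model = fit_centroids(train_cohort)
acc = accuracy(model, independent_cohort)  # well above chance (0.5)
```

Accuracy on a held-out cohort from a different site is the honest figure of merit here, because it guards against the model learning site-specific scanner artifacts rather than the condition itself; this is why the study's 84% to 90% accuracy at UC Davis and Pontificia Universidad Católica de Chile matters more than the 94% at the training site.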

The researchers then used the model to investigate which brain features play the biggest role in psychosis. Prior studies of psychosis had not given consistent results, likely because their sample sizes were too small. 


Comparing brain scans from 22q11.2 deletion syndrome patients who had and did not have psychosis, the researchers showed that the brain areas contributing most to psychosis are the anterior insula (a key part of the salience network or “filter”) and the ventral striatum (the “reward predictor”); this was true for different cohorts of patients.

In comparing the brain features of people with 22q11.2 deletion syndrome and psychosis against people with psychosis of unknown origin, the model found significant overlap, indicating that these brain features are characteristic of psychosis in general.

A second mathematical model, trained to distinguish all subjects with 22q11.2 deletion syndrome and psychosis from those who have the genetic syndrome but without psychosis, selected brain scans from people with idiopathic psychosis with 77.5% accuracy, again supporting the idea that the brain’s filtering and predicting centers are key to psychosis.

Furthermore, this model was specific to psychosis: It could not classify people with idiopathic autism or ADHD.

“It was quite exciting to trace our steps back to our initial question — ‘What are the dysfunctional brain systems in schizophrenia?’ — and to discover similar patterns in this context,” Menon said. “At the neural level, the characteristics differentiating individuals with psychosis in 22q11.2 deletion syndrome are mirroring the pathways we’ve pinpointed in schizophrenia. This parallel reinforces our understanding of psychosis as a condition with identifiable and consistent brain signatures.” However, these brain signatures were not seen in people with the genetic syndrome but no psychosis, holding clues to future directions for research, he added.

Applications for treatment or prevention

In addition to supporting the scientists’ theory about how psychosis occurs, the findings have implications for understanding the condition — and possibly preventing it.

“One of my goals is to prevent or delay development of schizophrenia,” Supekar said. The fact that the new findings are consistent with the team’s prior research on which brain centers contribute most to schizophrenia in adults suggests there may be a way to prevent it, he said. “In schizophrenia, by the time of diagnosis, a lot of damage has already occurred in the brain, and it can be very difficult to change the course of the disease.”

“What we saw is that, early on, functional interactions among brain regions within the same brain systems are abnormal,” he added. “The abnormalities do not start when you are in your 20s; they are evident even when you are 7 or 8.”


The researchers plan to use existing treatments, such as transcranial magnetic stimulation or focused ultrasound, targeted at these brain centers in young people at risk of psychosis, such as those with 22q11.2 deletion syndrome or with two parents who have schizophrenia, to see if they prevent or delay the onset of the condition or lessen symptoms once they appear. 

The results also suggest that using functional MRI to monitor brain activity at the key centers could help scientists investigate how existing antipsychotic medications are working. 

Although it’s still puzzling why someone becomes untethered from reality — given how risky it seems for one’s well-being — the “how” is now understandable, Supekar said. “From a mechanistic point of view, it makes sense,” he said.

“Our discoveries underscore the importance of approaching people with psychosis with compassion,” Menon said, adding that his team hopes their work not only advances scientific understanding but also inspires a cultural shift toward empathy and support for those experiencing psychosis. 

“I recently had the privilege of engaging with individuals from our department’s early psychosis treatment group,” he said. “Their message was clear and powerful: ‘We share more similarities than differences. Like anyone, we experience our own highs and lows.’ Their words were a heartfelt appeal for greater empathy and understanding toward those living with this condition. It was a call to view psychosis through a lens of empathy and solidarity.”

Researchers contributed to the study from UCLA, Clinica Alemana Universidad del Desarrollo, Pontificia Universidad Católica de Chile, the University of Oxford and UC Davis.

The study was funded by the Stanford Maternal and Child Health Research Institute’s Uytengsu-Hamilton 22q11 Neuropsychiatry Research Program, FONDECYT (the National Fund for Scientific and Technological Development of the government of Chile), ANID-Chile (the Chilean National Agency for Research and Development) and the U.S. National Institutes of Health (grants AG072114, MH121069, MH085953 and MH101779).

Erin Digitale

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.


Neurol Res Pract

How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data.

Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others, and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped; but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Fig. 2: Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with ‘theoretical’ terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common packages being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
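The sorting function that coding provides can be illustrated with a minimal sketch: an index from code to coded segments, built across several data sources. The segments, sources and code names below are invented for illustration; in practice this indexing is what dedicated software performs at scale.

```python
from collections import defaultdict

# Hypothetical coded segments: (source, text, codes) from several data sources.
segments = [
    ("SOP", "Tele-neurology consult is requested via the intranet form.",
     {"tele-neurology", "SOP-process"}),
    ("observation", "Resident phones the stroke centre directly for a consult.",
     {"tele-neurology", "workaround"}),
    ("interview_staff", "The form is too slow at night, so we just call.",
     {"tele-neurology", "workaround", "time-pressure"}),
    ("interview_patient", "I waited a long time before anyone examined me.",
     {"delay"}),
]

# Coding makes raw data sortable: build an index code -> matching segments.
index = defaultdict(list)
for source, text, codes in segments:
    for code in codes:
        index[code].append((source, text))

# Extract every segment tagged "tele-neurology", across all data sources.
for source, text in index["tele-neurology"]:
    print(f"[{source}] {text}")
```

Once segments are indexed this way, grouping and summarising codes into categories (the synthesis step described above) amounts to working over the keys of the index rather than the raw transcripts.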

Fig. 3: From data collection to data analysis

Attributions for icons: see Fig. 2; also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed methods designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. the Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, and the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and the population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
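The stopping rule just described can be sketched as a simple loop: analyse each new batch of data, and stop sampling once a batch contributes no codes that are not already in the codebook. The batches and code names below are invented for illustration; in a real study the judgement of saturation involves the whole research team, not just a count of new codes.

```python
# Minimal sketch of iterative sampling until saturation: collection stops when
# a new batch of interviews yields no codes absent from the codebook so far.
batches = [
    {"transport", "cost"},               # interviews 1-5
    {"transport", "trust", "language"},  # interviews 6-10
    {"cost", "trust"},                   # interviews 11-15: nothing new
]

codebook = set()
rounds = 0
for batch in batches:
    new_codes = batch - codebook  # relevant new information in this batch
    codebook |= batch
    rounds += 1
    if not new_codes:  # saturation: no relevant new information found
        break

print(f"Saturation after {rounds} rounds; codebook: {sorted(codebook)}")
```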

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).
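For the EVT example, such a purposive sampling frame can be sketched as the cross-product of the dimensions named above (professional group and observation shift), so that every relevant variant appears at least once. The category labels are illustrative assumptions, not prescriptions.

```python
from itertools import product

# Sketch of a maximum-variation sampling frame for the hypothetical EVT study:
# cross all relevant groups with all observation shifts.
groups = ["neurologist", "radiologist", "nurse", "patient/relative"]
shifts = ["day", "night", "weekend"]

sampling_frame = list(product(groups, shifts))
for group, shift in sampling_frame:
    print(f"recruit at least one {group} observed during the {shift} shift")
```

Laying out the frame explicitly makes it easy to see, during the iterative process, which cells are still empty and therefore where further sampling is needed.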

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length of an interview is for patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process, when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
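One way to check whether codes are being applied consistently is a simple segment-level percent agreement between the two coders, sketched below with invented codes. This is only an illustrative measure; chance-corrected statistics such as Cohen's kappa are often reported in practice, and disagreements feed into the team discussions described above rather than being resolved mechanically.

```python
# Two coders' codes for the same five transcript segments (invented data).
coder_a = ["delay", "workaround", "delay", "SOP-process", "workaround"]
coder_b = ["delay", "workaround", "time-pressure", "SOP-process", "workaround"]

# Segment-level percent agreement between the independent codings.
matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Segment-level agreement: {agreement:.0%}")
```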

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which two coders’ coding overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. Such scores can therefore be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate the people who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for interpreting the data, e.g. on whether something might have been meant as a joke [ 18 ].
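Where a report does include such a score, it is commonly Cohen’s kappa, i.e. agreement between two coders corrected for chance. A minimal sketch with invented codes (the labels and segments below are hypothetical):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: share of segments given the same code.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from each coder's marginal label rates.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(codes_a) | set(codes_b)
    )
    return (observed - expected) / (1 - expected)

# Invented codes assigned by two coders to six transcript segments.
coder_a = ["barrier", "barrier", "facilitator", "context", "barrier", "facilitator"]
coder_b = ["barrier", "context", "facilitator", "context", "barrier", "facilitator"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.75
```

As the text notes, such a number says little by itself; it only becomes meaningful alongside an account of how coding disagreements were discussed and resolved.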

Not being quantitative research

Being qualitative rather than quantitative research should not be used as an assessment criterion irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In that case, the same criterion should be applied to quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

Acknowledgements

Abbreviations

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Availability of data and materials

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Office of Science Policy

Biosafety and Biosecurity Policy

Life sciences research is essential to protecting global health security by helping us to understand the fundamental nature of human-pathogen interactions and informing public health and preparedness efforts, such as the development of vaccines and medical countermeasures. OSP develops policies to preserve the benefits of this research while minimizing its potential misuse.

  • Dual Use Research of Concern (DURC)

Research Involving Enhanced Potential Pandemic Pathogens (ePPP)

  • NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules (NIH Guidelines)

Links to Other Biosafety Resources

FAQs and Fact Sheets

  • IBC Administration
  • Externally Administered IBCs
  • IBC Meetings and Minutes

Incident Reporting

  • Fact Sheet on OSP Review of Requests to Lower the Minimum Required Biosafety Containment Level for Research Subject to the NIH Guidelines
  • Interim Laboratory Biosafety Guidance for Research with SARS-CoV-2 and IBC Requirements under the  NIH Guidelines
  • Animal Experiments Under the NIH Guidelines
  • Animal Activities Table
  • Factsheet on Release of Client-Owned Animals After Participation in Research Subject to the NIH Guidelines
  • Toxin Experiments
  • Major Actions Under the  NIH Guidelines
  • Lentiviral Containment Guidance
  • Amendments to the NIH Guidelines  Regarding Research Involving Gene Drive Modified Organisms
  • Biosafety Considerations for Contained Research Involving Gene Drive Modified Organisms
  • April 2019 Amendment of the  NIH Guidelines

Dual Use Research of Concern

Dual Use Research of Concern (DURC) is life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security. The United States Government’s oversight of DURC is aimed at preserving the benefits of life sciences research while minimizing the risk of misuse of the knowledge, information, products, or technologies provided by such research.

Watch the video “Dual Use Research: A Dialogue”

U.S. Government DURC Policies

  • United States Government Policy for Oversight of Life Sciences Dual Use Research of Concern (March 2012)
  • United States Government Policy for Institutional Oversight of Life Sciences Dual Use Research of Concern (September 2014)
  • Companion Guide to U.S. government policies for oversight of DURC

External Resources

S3: Science, Safety, and Security

The U.S. Government and the Department of Health and Human Services define enhanced potential pandemic pathogen (ePPP) research as research that may be reasonably anticipated to create, transfer or use potential pandemic pathogens resulting from the enhancement of a pathogen’s transmissibility and/or virulence in humans.

ePPP research can help us prepare for the next pandemic, for example by informing public health and preparedness efforts including surveillance and the development of vaccines and medical countermeasures. However, such research requires strict oversight and may only be conducted with appropriate biosafety and biosecurity measures.

The HHS  Framework for Guiding Funding Decisions about Proposed Research Involving Enhanced Potential Pandemic Pathogens (HHS P3CO Framework)  was established in 2017 to guide HHS funding decisions on proposed ePPP research and aims to preserve the benefits of life sciences research involving ePPPs while minimizing potential biosafety and biosecurity risks. The HHS P3CO Framework is responsive to and in accordance with the  Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight  issued by the White House Office of Science and Technology Policy following a three-year, public deliberative process .

Department of Health and Human Services P3CO Framework

Department of Health and Human Services Framework for Guiding Funding Decisions about Proposed Research Involving Enhanced Potential Pandemic Pathogens

U.S. Government Policy on Enhanced PPP Research

Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight

Potential Pandemic Pathogen Care and Oversight (P3CO) Policy Development

NSABB Recommendations for the Evaluation and Oversight of Proposed Gain-of-Function Research

U.S. Government Gain-of-Function Deliberative Process and Research Funding Pause on Selected Gain-of-Function Research Involving Influenza, MERS, and SARS Viruses

FAQs on the U.S. government Gain-of-function Deliberative Process and Research Funding Pause

Symposia Summaries and Commissioned Reports

1st National Academies Symposium Summary (December 15-16, 2014) – Potential Risks and Benefits of Gain-of-Function Research: Summary of a Workshop

2nd National Academies Symposium Summary (March 10-11, 2016) – Gain-of-Function Research: Summary of the Second Symposium

Risk and Benefit Analysis of Gain of Function Research – Final Report  (Gryphon Scientific)

Gain-of-Function Research: Ethical Analysis  (Professor Michael J. Selgelid)

Additional Material

NIH Director’s Statement on Funding Pause on Certain Types of Gain-of-Function Research

NIH Director’s Statement on Lifting of NIH Funding Pause on Gain-of-Function Research

NIH Director’s Statement on NIH’s commitment to transparency on research involving potential pandemic pathogens

Supplemental Information on the Risk and Benefit Analysis of Gain-of-Function Research

Gain-of-Function Deliberative Process Written Public Comments (Nov 10, 2014 – June 8, 2016)

NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules (NIH Guidelines)

  • NIH Guidelines – April 2024 (PDF) (Printer friendly for duplex printing)
  • Federal Register Notice – April 2024

The NIH Guidelines require that any “significant problems, violations, or any significant research-related accidents and illnesses” be reported to OSP within 30 days. Appendix G of the NIH Guidelines specifies certain types of accidents that must be reported on a more expedited basis. Specifically, Appendix G-II-B-2-k requires that spills and accidents in BL2 laboratories resulting in an overt exposure be immediately reported to OSP (as well as the IBC). In addition, Appendices G-II-C-2-q and G-II-D-2-k require that spills or accidents occurring in high containment (BL3 or BL4) laboratories resulting in an overt or potential exposure be immediately reported to OSP (as well as the IBC and BSO).

  • Incident Reporting FAQs – December 2023
  • Incident Reporting Template – April 2019

Incident reports may be released to the public in full. Please note that incident reports should not include personally identifiable information or any information that you do not wish to make public. Proprietary, classified, confidential, or sensitive information should not be included in the report. If it is necessary to include such information, please clearly mark it as such so that it can be considered for redaction in accordance with Freedom of Information Act exemptions.

IBC RMS and Registration Information

  • Institutional Biosafety Committee Registration Management System (IBC-RMS)

IBC Self-Assessment Tool

  • IBC Self-Assessment Tool – April 2024

Investigator Brochure

  • Investigator Responsibilities under the  NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules  – October 2021

Additional Resources

  • CDC Biosafety Resources and Tools
  • American Biological Safety Association (ABSA)
  • AIHA Home Page
  • American Society for Microbiology
  • The American Society of Gene and Cell Therapy
  • Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) Website
  • The Centers for Disease Control and Prevention (CDC) Website
  • The US Department of Health and Human Services (HHS)
  • The Office for Human Research Protections (OHRP)
  • The Federal Register Website
  • The Office of Laboratory Animal Welfare
  • The Animal and Plant Health Inspection Service (APHIS) Website
  • Biosafety in Microbiological and Biomedical Laboratories (BMBL)
  • Risk Group Classification for Infectious Agents (ABSA)
  • Select Agent Program
  • Association for the Accreditation of Human Research Protection Programs
  • Biosafety Discussion List

White-sounding names get called back for jobs more than Black ones, a new study finds

Joe Hernandez


A sign seeking job applicants is seen in the window of a restaurant in Miami, Florida, on May 5, 2023. (Joe Raedle/Getty Images)

Twenty years ago, two economists responded to a slew of help-wanted ads in Boston and Chicago newspapers using a set of fictitious names to test for racial bias in the job market.

The watershed study found that applicants with names suggesting they were white got 50% more callbacks from employers than those whose names indicated they were Black.

Researchers at the University of California, Berkeley and the University of Chicago recently took that premise and expanded on it, filing 83,000 fake job applications for 11,000 entry-level positions at a variety of Fortune 500 companies.

Their working paper, published this month and titled "A Discrimination Report Card," found that the typical employer called back the presumably white applicants around 9% more than Black ones. That number rose to roughly 24% for the worst offenders.

The research team initially conducted its experiment in 2021, but their new paper names the 97 companies they included in the study and assigns them grades representing their level of bias, thanks to a new methodology the researchers developed.

"Putting the names out there in the public domain is to move away from a lot of the performative allyship that you see with these companies, saying, 'Oh, we value inclusivity and diversity,'" said Pat Kline, a University of California, Berkeley economics professor who worked on the study. "We're trying to create kind of an objective ground truth here."

The names that researchers tested include some used in the 2004 study as well as others culled from a database of speeding tickets in North Carolina. A name was classified as "racially distinctive" if more than 90% of people with that name shared the same race.
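The 90% threshold can be stated as a simple predicate over name-by-race counts. A sketch with hypothetical counts (the study's actual data sources were the earlier paper and a North Carolina speeding-ticket database):

```python
def racially_distinctive(race_counts, threshold=0.9):
    """True if one race accounts for more than `threshold` of the
    people holding this name. Counts are hypothetical examples."""
    total = sum(race_counts.values())
    return max(race_counts.values()) / total > threshold

print(racially_distinctive({"white": 950, "black": 50}))   # True
print(racially_distinctive({"white": 600, "black": 400}))  # False
```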

Applicants with names such as Brad and Greg were up against Darnell and Lamar. Amanda and Kristen competed for jobs with Ebony and Latoya.

What the researchers found was that some firms called back Black applicants considerably less, while race played little to no factor in the hiring processes at other firms.

Dorianne St Fleur, a career coach and workplace consultant, said she wasn't surprised by the findings showing fewer callbacks for presumed Black applicants at some companies.

"I know the study focused on entry-level positions. Unfortunately it doesn't stop there. I've seen it throughout the organization all the way up into the C-suite," she said.

St Fleur, who primarily coaches women of color, said many of her clients have the right credentials and experience for certain jobs but aren't being hired.

"They are sending out dozens, hundreds of resumes and receiving nothing back," she said.

What the researchers found

Much of a company's bias in hiring could be explained by its industry, the study found. Auto dealers and retailers of car parts were the least likely to call back Black applicants, with Genuine Auto Parts (which distributes NAPA products) and the used car retailer AutoNation scoring the worst on the study's "discrimination report card."

"We are always evaluating our practices to ensure inclusivity and break down barriers, and we will continue to do so," Heather Ross, vice president of strategic communications at Genuine Parts Company, said in an email.

AutoNation did not reply to a request for comment.

The companies that performed best in the analysis included Charter/Spectrum, Dr. Pepper, Kroger and Avis-Budget.


Several patterns emerged when the researchers looked at the companies that had the lowest "contact gap" between white and Black applicants.

Federal contractors and more profitable companies called back applicants from the two racial groups at more similar rates. Firms with more centralized human resources departments and policies also exhibited less racial bias, which Kline says may indicate that a standardized hiring workflow involving multiple employees could help reduce discrimination.
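The per-firm "contact gap" measure, the difference in callback rates between applications bearing distinctively white and distinctively Black names, can be sketched with made-up data (firm names and outcomes below are invented, not from the paper):

```python
# Each row: (firm, name_race, called_back). Invented illustration data.
applications = [
    ("FirmA", "white", True), ("FirmA", "black", False),
    ("FirmA", "white", True), ("FirmA", "black", True),
    ("FirmB", "white", True), ("FirmB", "black", True),
    ("FirmB", "white", False), ("FirmB", "black", False),
]

def contact_gap(rows, firm):
    """Callback rate for white-coded names minus the rate for
    Black-coded names at one firm."""
    def rate(race):
        hits = [cb for f, r, cb in rows if f == firm and r == race]
        return sum(hits) / len(hits)
    return rate("white") - rate("black")

print(contact_gap(applications, "FirmA"))  # 1.0 - 0.5 = 0.5
print(contact_gap(applications, "FirmB"))  # 0.5 - 0.5 = 0.0
```

The paper's actual grading methodology is more involved (it has to separate real bias from sampling noise across 83,000 applications), but the gap above is the quantity being estimated.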

When it came to the sex of applicants, most companies didn't discriminate when calling back job-seekers.

Still, some firms preferred one sex over another in screening applicants: manufacturing companies called back people with male names at higher rates, while clothing stores showed a bias toward female applicants.

What can workplaces — and workers — do

Kline said the research team hoped the public would focus as much on the companies doing a good job as on those doing a bad one, since the better performers have potentially found ways to remove or limit racial bias from the hiring process.

"Even if it's true, from these insights in psychology and behavioral economics, that individuals are inevitably going to carry biases along with them, it's not automatic that those individual biases will translate into organizational biases, on average," he said.

St Fleur said there are several strategies companies can use to cut down on bias in the hiring process, including training staff and involving multiple recruiters in callback decisions.

Companies should also collect data about which candidates make it through the hiring process and consider standardizing or anonymizing that process, she added.

St Fleur also said she often tells her job-seeking clients that it's not their fault that they aren't getting called back for open positions they believe they're qualified for.

"The fact that you're not getting callbacks does not mean you suck, you're not a good worker, you don't deserve this thing," she said. "It's just the nature of the systemic forces at play, and this is what we have to deal with."

Still, she said job candidates facing bias in the hiring process can lean on their network for new opportunities, prioritize inclusive companies when applying for work and even consider switching industries or locations.

April 16, 2024

UK study will look at unintended consequences of classifying controlled substances

Kara Cook, a Ph.D. student at the University of Kentucky, began studying the effects of scheduling gabapentin in Kentucky for a deeply personal reason. Ridofranz, iStock/Getty Images Plus

Researchers’ reasons for launching a study are sometimes purely intellectual: They may be intrigued by an inexplicable observation or challenged by a compelling question.

Other times, the spark may be deeply personal.

Such was the case for University of Kentucky doctoral student Kara Cook.

Darren’s story

Cook’s brother-in-law, Darren Noble, suffered from a chronic autoimmune disease called Sjögren’s syndrome. Symptoms of this rare condition can include dry mouth and eyes, fatigue and muscle and joint pain.

In Darren’s case, the pain increased to the point where it became nearly intractable. Only opioid analgesics were up to the task of managing his pain.

During the course of Darren’s treatment, stricter quantity limits were placed on opioid prescriptions. As a result, he could no longer receive an adequate supply of pain medicine. Under his doctors’ care, Darren tried steroids and other drug therapies as well as surgical treatment, but nothing could quash his chronic pain.

Eventually, his doctors admitted him to a hospice program. With hospice’s focus on palliative care, Darren could once again receive doses of opioid medications that were safe and effective for him.

Then Darren came down with COVID-19. Given his already compromised health, the COVID-19 symptoms rapidly grew worse. Overcome with fear that a trip to the hospital would make him ineligible for hospice care and the medicines that made his life bearable, Darren deferred in-patient care.

Days later, he died of complications of the COVID-19 infection.

“Drug scheduling is intended to protect people from the most harmful drugs," Cook said of classifying drugs based on their misuse potential and restricting access depending on a drug’s classification. "Scheduling is meant to make our lives safer, our lives better. In my brother-in-law’s case, increased restrictions on opioid prescribing made his life worse and may have contributed to his death.”

And so the seed was sown.

Nonmedical gabapentin use in Kentucky

Shortly after, Cook and her adviser, Assistant Professor Rachel Vickers-Smith, Ph.D., of the UK College of Public Health’s Department of Epidemiology and Environmental Health, were looking at data derived from the Social Networks among Appalachian People (SNAP) study regarding the use of the drug gabapentin in Kentucky.

Gabapentin is an anticonvulsant drug also used to treat certain types of pain. According to the federal government, gabapentin isn’t a controlled substance. But because its potential for misuse has become increasingly evident, some states (Kentucky being the first) have listed gabapentin as a Schedule V drug with the intent of restricting access and curtailing use without a prescription.

But that’s not what happened. After scheduling, Cook noticed that nonmedical gabapentin use was still increasing in the Commonwealth. What had changed, however, was that people were no longer getting gabapentin from their doctors and pharmacies but rather from family, friends and people who sell drugs.

Although the circumstances surrounding Darren’s situation and post-scheduling gabapentin trends in Kentucky are quite different, both point to Cook’s general hypothesis: While tightening access to a drug seems to be an obvious and eminently sensible approach to reducing misuse, scheduling decisions can have unforeseen consequences.

Cook was recently awarded a UK Substance Use Priority Research Area (SUPRA) Graduate Student Grant to explore this hypothesis in her study titled “The Effects of Scheduling on Nonmedical Use of Gabapentin in Kentucky.”

“My study is about gabapentin, but that’s largely because it’s a recently scheduled drug that we can get a lot of data on,” explained Cook. “The study is really about the effect of scheduling.”

Cook’s study has two aims. The first is to determine whether gabapentin-involved overdose deaths changed in Kentucky after it became a Schedule V drug in July 2017. She will accomplish this by comparing data from 12 months before and 12 months after the drug was scheduled.
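A minimal sketch of that before/after comparison, with invented monthly counts (the real analysis would use Kentucky overdose-death data and appropriate statistical tests, not a raw difference in means):

```python
# Hypothetical monthly counts of gabapentin-involved overdose deaths,
# 12 months on either side of the July 2017 scheduling date.
before = [4, 5, 3, 6, 4, 5, 7, 4, 5, 6, 4, 5]  # Jul 2016 - Jun 2017
after  = [5, 6, 6, 7, 5, 8, 6, 7, 6, 8, 7, 6]  # Jul 2017 - Jun 2018

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

# A positive difference would mean deaths rose after scheduling.
print(round(mean_after - mean_before, 2))
```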

For the second aim, Cook will collect qualitative data by interviewing 30 Kentucky residents who have taken gabapentin for nonmedical reasons. Using this novel approach of gathering data directly from people who use drugs, she hopes to gauge post-scheduling changes in accessibility, street price, source, frequency and amount of use, and reason for use.

Cook has taught college-level math and statistics for nearly 30 years, and in addition to being a Ph.D. student in the College of Public Health, she is currently a lecturer in the  UK College of Arts and Sciences’ Department of Statistics .

With her strong background in statistics, Cook is quite comfortable with the quantitative methods required for her first aim. However, the interview-based approach of the second aim opens new territory for Cook.

“I’ve had to learn a lot about qualitative methods, and it’s exciting to step outside my mathy little world and explore the messy real world. It does seem strange,” Cook joked, “to be doing exactly what I tell my statistics students not to do.”

“But the interviews will be potentially illuminating,” Cook added. “Very few qualitative studies have looked at drug scheduling from the perspective of people who use drugs.”

The findings of her study could — in the best possible way — complicate our picture of the effects of drug scheduling. Ultimately, research such as Cook’s could help reshape scheduling practices to maximize their benefits while minimizing unanticipated and potentially unwelcome consequences.

“You can never tell what the impact of your research will be,” Cook said. “But I’m hoping — my whole family is hoping — that something good will come from this research. Then we all can feel that the tragedy of Darren’s death served as the genesis for positive change.”

Research reported in this publication was supported by the National Institute on Drug Abuse of the National Institutes of Health under Award Numbers R01DA024598 and R01DA033862. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Steve Claas (College of Public Health)

  • Substance Use Disorder



Scam or Not

Is Apple Cider Vinegar Really a Cure-All?

It has been said to help with weight loss, blood sugar control, acne and more. But experts say the science is more nuanced.

A photo illustration of a clear, uncorked potion bottle. Apple cider vinegar is in the bottle, and smoke emanates from the top of it.

By Alice Callahan

On TikTok, a man swirls a tablespoon of apple cider vinegar into a cup of water, drinks it and eats two slices of pizza. Then, he tests his blood sugar. “These are the best results of all,” he says, showing a much lower spike on a blood sugar graph than when he ate the pizza without the vinegar.

In other posts, TikTok users rave about apple cider vinegar’s remarkable ability to help them lose weight, settle their stomachs and — when applied to their skin — clear their acne and eczema.

Apple cider vinegar has been used as a home remedy for healing wounds, quelling coughs and soothing stomachaches for thousands of years, said Carol Johnston, a professor of nutrition at Arizona State University.

But while some of apple cider vinegar’s health claims have a little science behind them, Dr. Johnston said, many claims haven’t been studied at all. Here’s what we know about apple cider vinegar — and some important cautions to keep in mind if you try it.

How might apple cider vinegar benefit health?

Apple cider vinegar is made via fermentation, in which yeast and bacteria convert carbohydrates first into alcohol and then into acetic acid, which gives vinegar its pungent taste and odor and potentially, research suggests, its health benefits, Dr. Johnston said.

Social media proponents often recommend using unpasteurized and unfiltered versions, which contain a haze of bacteria and undigested carbohydrates called “the mother,” said Dr. Chris Damman, an associate professor of gastroenterology at the University of Washington School of Medicine. But there’s no evidence that these “raw” apple cider vinegars are healthier than regular ones, he said.

Vinegars made from apples and other fruits also contain compounds called polyphenols, which have antioxidant and anti-inflammatory properties and might contribute to their potential health benefits, he said.

What does the research suggest?

Blood sugar control.

In the early 2000s, Dr. Johnston, who had been studying how certain diets could help manage Type 2 diabetes, came across a study from 1988 showing that acetic acid could lower blood sugar spikes in rats after they were given a starch solution.

She was intrigued and decided to test the idea in people with Type 2 diabetes and insulin resistance.

Since then, Dr. Johnston and other researchers have found in small limited studies that drinking one to two tablespoons of apple cider or other types of vinegar mixed with water just before high-carbohydrate meals resulted in less drastic blood sugar spikes than meals without vinegar did.

Some studies suggest that vinegar may slow the movement of food through the digestive tract and interfere with certain enzymes that break carbohydrates down into simple sugars, resulting in lower blood sugar spikes.

But more research is needed to show that apple cider vinegar is safe and beneficial for long-term use, said Paul Gill, a researcher at Monash University in Australia.

Weight loss

Several small, short-term studies in adults who were classified as overweight or obese have found associations between apple cider vinegar and weight loss.

In a 2009 study of 155 adults in Japan, for instance, researchers found that those who drank two tablespoons of apple cider vinegar in water every day for three months lost about four pounds. And in one 2024 trial of 120 people aged 12 to 25 in Lebanon, researchers reported that those who took one tablespoon of apple cider vinegar with water each morning for three months lost an average of 15 pounds.

But the one study that tracked participants after they stopped taking apple cider vinegar found that, on average, they regained most of the weight within a month. And just as many studies on similar groups of people have found no links to weight loss.

Given the lack of robust data and the short time frames of the studies, Beth Czerwony, a dietitian at the Cleveland Clinic, said that she did not recommend that her patients use apple cider vinegar for weight loss.

If vinegar does indeed help people lose weight, it may do so by slowing digestion, which can make you feel fuller for longer, she said.

Animal research has also shown that acetic acid can reduce the accumulation of fat in certain tissues and may help increase the secretion of hormones that signal fullness. So while the evidence in humans is mixed, it’s plausible that vinegar could help with weight loss, Dr. Damman said.

Acid reflux and digestion

Tamara Duker Freuman, a dietitian in New York City who specializes in digestive conditions, said that many of her patients remark that drinking apple cider vinegar before or after meals reduces their symptoms of acid reflux.

“I believe them,” she said. But, she noted, “hundreds of other patients with horrible reflux” have said that vinegar worsened their symptoms.

Unfortunately, there’s no good research on vinegar and digestive health, said Dr. Nitin K. Ahuja, a gastroenterologist at Penn Medicine.

People who use vinegar to treat reflux, which is commonly caused by stomach acid escaping into the esophagus, say that the acid from the vinegar prompts the stomach to produce less acid, Dr. Ahuja said. But, he added, there’s no supportive data, and “mechanistically, it doesn’t make sense” that adding acid to the stomach will somehow help to control it.

Studies performed in petri dishes suggest that apple cider vinegar can kill certain microbes, which could potentially create gut microbiome changes that might reduce bloating, Dr. Ahuja said. But again, he added, this has not been studied in humans.

If you have frequent or severe reflux symptoms, get treatment from a doctor, he said.

Skin conditions

Dilute apple cider vinegar has long been applied to the skin as a home remedy for eczema, said Dr. Lydia Luu, a dermatologist at the University of Virginia School of Medicine. After several patients asked about such a treatment, she and her colleagues decided to test it.

In their 2019 study, the researchers asked 22 participants, half of whom had eczema, to soak one arm in tap water and one arm in dilute apple cider vinegar for 10 minutes per day for two weeks. Afterward, there were no differences between the participants’ skin in terms of its pH, microbes or its ability to retain moisture — all of which are typically altered in eczema. Sixteen of the study participants reported symptoms like mild burning or itching, mostly on the arm treated with vinegar; one developed severe itching, moderate burning and a small sore; and another developed a raised rash.

Apple cider vinegar “is not very helpful for eczema, unfortunately,” Dr. Luu said — and could make your symptoms worse.

Some of Dr. Luu’s patients “swear by” apple cider vinegar for wart removal, and TikTok is teeming with videos suggesting such a treatment for acne or dark spots or to remove skin tags. But there aren’t good studies about these uses, Dr. Luu said, and apple cider vinegar can cause chemical burns and skin scarring.

Is it safe to try?

Consuming apple cider vinegar, even when diluted, can interact with certain medications, including some drugs for diabetes and the heart, as well as diuretics. Apple cider vinegar may also lower blood potassium, which can be a problem for those who already have low levels, Ms. Czerwony said. So check with your doctor before trying it, she said.

The same advice goes for using apple cider vinegar on your skin, Dr. Luu said. A primary care doctor or dermatologist can likely recommend safer, more effective treatments.

If you want to use vinegar to control your blood sugar, Dr. Johnston suggested diluting one to two tablespoons of any type of vinegar in water and drinking it, but not exceeding two to four tablespoons in a day. Even when diluted, vinegar can erode tooth enamel, so she recommended drinking it with a straw.

If you drink it undiluted, you run the risk of corroding your esophagus lining too, Dr. Ahuja said.

“Don’t just shoot it,” Dr. Johnston added.

A safer and tastier approach, Dr. Damman suggested, is to use apple cider vinegar in your cooking. Mix it into a vinaigrette or sushi rice, pair it with olive oil as a dip for bread, or incorporate it into a refreshing fizzy drink. If there are any health benefits to be reaped, he said, you’ll likely get them this way, too.

Alice Callahan is a Times reporter covering nutrition and health. She has a Ph.D. in nutrition from the University of California, Davis.

