
Guide to ethical approval

  • Justin Nowell , specialist registrar, London Deanery
  • 1 Department of Cardiothoracic Surgery, St George’s Hospital, London SW17 0QT
  • justin.nowell{at}stgeorges.nhs.uk

For research involving patients, whether directly or indirectly, or involving NHS facilities, the ethical review process is mandatory. Justin Nowell takes you through it

Why read about ethics?

Many health professionals undertake a formal period of research at some stage in their careers, whether to strengthen a personal portfolio, to raise the profile of a new department, or simply to pursue a question that interests them. If you are thinking about research in the NHS, you need to think carefully about ethics. The process is identical for every chief investigator, whether general practitioner or nurse, physiotherapist or professor of medicine. It is wise to plan your ethics application early, well before your proposed start date. The requirement for ethical approval applies not only to interventions such as clinical trials but also to questionnaires, case note reviews, telephone surveys, and the collection of samples or data. Many people find the process of applying for NHS ethical approval intimidating: a galaxy of red tape is guaranteed to dampen the enthusiasm of even the most ardent researchers, however brilliant their ideas. Some knowledge of the application and review process can ease this burden, and advance preparation may save you a lot of wasted time.

Do I need ethical approval?

Traditionally, medical students are taught that the cornerstones of good ethics are beneficence, non-maleficence, autonomy, justice, dignity, and truthfulness, and that activities which breach these six principles undermine ethical practice. It is not always clear, however, how to translate such lofty ideals into better research. The General Medical Council (GMC) advises that research involving people directly or indirectly is vital in improving care and reducing uncertainty for patients now and in the future, and in improving the health of the population as a whole. GMC guidance1 requires that if you are involved in designing, organising, or doing research you must:

Put the protection of the participants’ interests first

Act with honesty and integrity

Follow the appropriate national research governance guidelines and the guidance in Research: the role and responsibilities of doctors.2

Ethical review from the appropriate NHS research ethics committee is required for any research involving:

Patients and users of the NHS. This includes all potential research participants recruited by virtue of the patient’s or user’s past or present treatment by, or use of, the NHS. It includes NHS patients treated under contracts with private sector institutions

Individuals identified as potential research participants because of their status as relatives or carers of patients and users of the NHS, as defined above

Access to data, organs, or other bodily material of past and present NHS patients

Fetal material and in vitro fertilisation involving NHS patients

Those who have died recently in NHS premises

The use of, or potential access to, NHS premises or facilities

NHS staff recruited as research participants by virtue of their professional role

Healthy volunteers, if done on NHS premises.

Application form

Obtaining ethical approval is divided into national and local stages. The first task is to complete an application form. This has recently changed from the National Research Ethics Service form to the new Integrated Research Application System.3 This is much more than a form: it is an integrated dataset designed to meet the requirements of a number of review bodies. Detailed guidance notes are included, but completion is time consuming, so save completed sections as you go and return to them later.

The aim of the ethics review process is to protect participants and promote good quality research. With this in mind, there are some searching questions to answer. The form has four parts, A to D, and some additional forms. Part A comprises the generic, core information, and your answers to it automatically generate the appropriate header sections and datasets for the remainder of the application. Part A has 78 questions, although not all will need to be answered; the form automatically filters out those that do not apply. Resist the temptation to cut and paste large sections of your protocol: it will be obvious at the review meeting if you do. You are asked to write a comprehensive lay summary, and it is worth paying special attention to this task. Since 1 May 2008 lay summaries have been published, and publication will soon be extended to include summary ethical opinions.4

Part B, comprising 25 questions, asks specifically about the product or device to be tested, tissue collection, and information security measures. Part C contains an overview of all research sites. Part D is the declarations section. There are also Research Tissue Bank and Medicines and Healthcare products Regulatory Agency forms to complete if relevant.

Applicant’s checklist

After completing the application form you must complete an applicant’s checklist. The checklist specifies supporting documents to be attached, including patient information sheets, consent forms, sometimes a letter from a statistician, and investigator and subinvestigator CVs. Online guidance is available on the format of protocols and other documents, but if you are unsure ask someone experienced; patient advisory groups are helpful in drafting documents designed to inform patients.

There is no prohibition on asking individual members of your local research ethics committee for advice; several of them are likely to be local health professionals. Probably the most valuable advice is to remember to identify all documents with a date and version number.

Where to apply

If you are unsure where to apply, the National Research Ethics Service website4 lists the contact details of all local research ethics offices, which will be able to advise where to submit and how to book a review slot. Usually this will be your local research ethics committee. The website also contains the standard operating procedures for ethics committees.

Provided the applicant’s checklist is complete, a reference number is issued, and the forms are locked, printed, and signed. If you do make a mistake, you can ring the National Research Ethics Service helpline and ask for the form to be unlocked. Deliver the signed hard copy forms with supporting documents to the designated research ethics committee office. Local research ethics committees consider around five to 10 studies a month; if they are fully booked, you may be asked to return the following month. There is no waiting list system for the research ethics committee: it is first come, first served. If time is tight, you could opt at this stage to submit to another research ethics committee within the same domain (the area served by the same strategic health authority). Check local arrangements for submission carefully, because procedures may vary slightly; in some NHS trusts the research and development department wants to scrutinise the application before submission.

Once the application form and checklist have been approved you will receive a letter from the research ethics committee stating that the application is validated and giving a date for review. Ethics committees must provide an opinion within 60 days of validating an application.

Site specific information

The final online forms are the site specific information forms, which are submitted once the application is validated. The form has two purposes: one is to obtain NHS permission (a universal requirement) and the other is to request site specific assessment. Some types of study, such as questionnaires and surveys, are designated site specific assessment exempt. For most studies, however, once ethical approval is obtained local site specific assessment approval is required using the form. This is designed to ensure that individual sites have appropriate local resources to support the study safely.

The committee has a maximum of 18 members and one third of these are lay members. The investigator will receive an invitation to the meeting, and although attendance is not compulsory, it is advisable. The committee will consider the application for up to half an hour and then call in the investigator to answer questions. If you do attend, it will expedite the process as you may be able to clarify points raised by individual research ethics committee members.

Notification of decision

After the review meeting you will be informed of the committee’s provisional opinion, which may be subject to certain conditions or to the provision of further information. If the committee has serious concerns, complete resubmission may be requested. The committee will confirm its decision in writing within 10 days.

Request for further information

The committee may request further information or revision of documents before granting a final favourable ethical opinion. You will also be reminded that further consents may be required (site specific approval for other sites, research and development, Medicines and Healthcare products Regulatory Agency) before the study can begin.

Final approval and beyond

The research ethics committee will confirm the final ethical opinion in writing. Subsequent amendments to the study protocol must be formally notified to the committee. You are obliged to start your research within 12 months of a favourable ethical review. You must provide safety and progress reports as specified. You should also notify the committee when your study ends.

1. Complete the online application form

2. Complete the applicant’s checklist

3. Decide where to apply and book a review slot

4. Make your submission

5. Validation and review date

6. Complete the site specific information form

7. Review meeting and provisional ethical opinion

8. Request for further written information

9. Final ethical opinion

10. After approval

Competing interests: None declared.

1. General Medical Council. Good medical practice. London: GMC, 2006.
2. General Medical Council. Research: the role and responsibilities of doctors. London: GMC, 2002.
3. Integrated Research Application System. www.myresearchproject.org.uk
4. National Research Ethics Service. www.nres.npsa.nhs.uk



NIH Clinical Research Trials and You: Guiding Principles for Ethical Research

Pursuing Potential Research Participants Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. There are precautions researchers can take – in the planning, implementation and follow-up of studies – to protect these participants in research. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

  • Social and clinical value
  • Scientific validity
  • Fair subject selection
  • Favorable risk-benefit ratio
  • Independent review
  • Informed consent
  • Respect for potential and enrolled subjects

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it is a waste of resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study, not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk-benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term; they can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed, and is the risk-benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without a penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research


This page last reviewed on March 16, 2016

  • Open access
  • Published: 18 August 2017

Improving the process of research ethics review

  • Stacey A. Page   ORCID: orcid.org/0000-0001-6494-3671 1 , 2 &
  • Jeffrey Nyeboer 3  

Research Integrity and Peer Review volume  2 , Article number:  14 ( 2017 ) Cite this article


Research Ethics Boards, or Institutional Review Boards, protect the safety and welfare of human research participants. These bodies are responsible for providing an independent evaluation of proposed research studies, ultimately ensuring that the research does not proceed unless standards and regulations are met.

Concurrent with the growing volume of human participant research, the workload and responsibilities of Research Ethics Boards (REBs) have continued to increase. Dissatisfaction with the review process, particularly the time interval from submission to decision, is common within the research community, but there has been little systematic effort to examine REB processes that may contribute to inefficiencies. We offer a model illustrating REB workflow, stakeholders, and accountabilities.

Better understanding of the components of the research ethics review will allow performance targets to be set, problems identified, and solutions developed, ultimately improving the process.


Instances of research misconduct and abuse of research participants have established the need for research ethics oversight to protect the rights and welfare of study participants and the integrity of the research enterprise [ 1 , 2 ]. In response to such egregious events, national and international regulations have emerged that are intended to protect research participants (e.g. [ 3 , 4 , 5 ]).

Research Ethics Boards (REBs) also known as Institutional Review Boards (IRBs) and Research Ethics Committees (RECs) are charged with ensuring that research is planned and conducted in accordance with such laws and regulatory standards. In protecting the rights and welfare of participants, REBs must weigh possible harms to individuals against the plausible societal benefits of the research. They must ensure fair participant selection and, where applicable, confirm that appropriate provisions are in place for obtaining participant consent.

REBs often operate under the auspices of post-secondary institutions. Larger universities may support multiple REBs serving different research areas, such as medical and health research or social science, psychology, and humanities research. Boards are constituted of people from a variety of backgrounds, each of whom contributes specific expertise to review and discussion, and members are appointed to the Board through established institutional practice. Most Board members bring a sincere interest and commitment to their roles; for university faculty, Board membership may fulfil a service requirement that is part of their academic responsibilities.

The Canadian Tri-Council Policy Statement (TCPS2) advances a voluntary, self-governing model for REBs and institutions. The TCPS2 is a joint policy of Canada’s three federal research agencies (Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council), and institutional and researcher adherence to the policy standards is a condition of funding. Recognizing the independence of REBs in their decision-making, institutions are required to support their functioning. Central to the agreement is that institutions conducting research must establish an REB and ensure that it has the “necessary and sufficient ongoing financial and administrative resources” to fulfil its duties (TCPS2 [ 3 ] p. 68). A similar requirement for support of IRB functioning is included in the US Common Rule (45 CFR 46.103 [ 5 ]). The operationalization of “necessary and sufficient” is subjective and likely to vary widely. To the extent that the desired outcomes (i.e. timely reviews and approvals) depend on the allocation of these resources, they too will vary.

Time and research ethics review

From the academic hallways to the literature, characterizations of REBs and the research ethics review process are seldom complimentary. While numerous criticisms have been levelled, it is the time to decision that is most consistently maligned [ 6 , 7 , 8 , 9 , 10 , 11 ].

Factors associated with lengthy review time include incomplete or poorly completed applications [ 7 , 12 , 13 ], lack of administrative support [ 14 ], inadequately trained REB members [ 15 ], REB member competing commitments, expanding oversight requirements, and the sheer volume of applications [ 16 , 17 , 18 ]. Nevertheless, objective data on the inner workings of REBs are lacking [ 6 , 19 , 20 ].

Consequences of slow review times include centres’ withdrawing from multisite trials or limiting their participation in available trials [ 21 , 22 ], loss of needed research resources [ 23 ], and recruitment challenges in studies dependent on seasonal factors [ 24 ]. Lengthy time to study approval may ultimately delay patient access to potentially effective therapies [ 8 ].

Some jurisdictions have moved to regionalize or consolidate ethics review, using a centralized ethics review of protocols conducted on several sites. This enhances review efficiency for multisite research by removing the need for repeating reviews across centres [ 9 , 25 , 26 , 27 , 28 ]. Recommendations for systemic improvement include better standardization of review practices, enhanced training for REB members, and requiring accreditation of review boards [ 9 ].

The research ethics review processes are not well understood, and no gold standard exists against which to evaluate board practices [ 19 , 20 ]. Consequently, there is little information on how REBs may systematically improve their methods and outcomes. This paper presents a model based on stakeholder responsibilities in the process of research ethics review and illustrates how each makes contributions to the time an application spends in this process. This model focusses on REBs operating under the auspices of academic institutions, typical in Canada and the USA.

Modelling the research ethics review process

The research ethics review process may appear to some like the proverbial black box. An application is submitted and considered and a decision is made:

SUBMIT > REVIEW > DECISION

In reality, the first step to understanding and improving the process is recognizing that research ethics review involves more than just the REB. Contributing to the overall efficiency—or inefficiency—of the review are other stakeholders and their roles in the development and submission of the application and the subsequent movement of the application back and forth between PIs, administrative staff, reviewers, the Board, and the Chair, until ideally the application is deemed ready for approval.

Identifying how a research ethics review progresses permits better understanding of the workflow, including the administrative and technological supports, roles, and responsibilities. The goal is to determine where challenges in the system exist so they can be remediated and efficiencies gained.

One way of understanding details of the process is to model it. We have used a modelling approach based in part on a method advanced by Ishikawa and further developed by the second author (JN) [ 29 , 30 ]. Traditionally, the Ishikawa “fishbone” or cause and effect diagram has been used to represent the components of a manufacturing enterprise and its application facilitates understanding how the elements of an operation may cause inefficiencies. This modelling provides a means of analysing process dispersion (e.g. who is accountable for what specific outcomes) and is frequently used when trying to understand time delays in undertakings.

In our model (Fig. 1), “Categories” represent key role actions that trigger a subsequent series of work activities. The “Artefacts” are the products resulting from a set of completed activities and reflect staged movement in the process. Implicit in the model is a temporal sequence and the passage of time, represented by the arrows.

Fig. 1: Basic business activity model

Applying this strategy to facilitate understanding of time delays in ethics review requires that the problem (i.e. time) be considered in the context of all stakeholders. This includes those involved in the development and submission of the application, those involved in the administrative movement of the application through the system, those involved in the substantive consideration and deliberation of the application, and those involved in the final decision-making.

The model developed (Fig. 2) was based primarily on a review of the lead author’s (SP) institution’s REB application process. The model is generally consistent with the processes and practices of several other REBs with which she has had experience over the past 20 years.

Fig. 2: Research ethics activity model

What this model illustrates is that the research ethics review process is complex. There are numerous stakeholders involved, each of whom bears a portion of the responsibility for an application’s time in the system. The model illustrates a temporal sequence of events where, ideally, the movement of an application is unidirectional, left to right. Time is lost when applications stall or backflow in the process.
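The paper presents no code, but the stall-and-backflow idea can be made concrete with a small sketch. The stage names and event history below are our own illustrative assumptions, loosely following the SUBMIT > REVIEW > DECISION sequence, and are not part of the authors’ model:

```python
# Illustrative workflow stages (assumed names, not from the paper),
# ordered left to right as in the model's ideal unidirectional flow.
STAGE_ORDER = ["submitted", "screening", "under_review", "board", "decision"]
RANK = {stage: i for i, stage in enumerate(STAGE_ORDER)}

def count_backflows(history):
    """Count transitions that move an application backward in the workflow."""
    return sum(
        1 for prev, nxt in zip(history, history[1:])
        if RANK[nxt] < RANK[prev]
    )

# A hypothetical application returned twice for revision:
history = ["submitted", "screening", "under_review", "screening",
           "under_review", "board", "under_review", "board", "decision"]
print(count_backflows(history))  # prints 2: each backward move is time lost
```

Counting backward transitions per application, and noting the stage at which each occurs, is one simple way an ethics office could quantify where applications stall.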

Stakeholders, accountabilities, and the research ethics review model

There are four main stakeholder groups in the research ethics review process: researchers/research teams, research ethics unit administrative staff, REB members, and the institution. Each plays a role in the transit of an application through the process, and how well they undertake their role responsibilities affects the time the application takes to move through. Table 1 presents a summary of recommendations for best practices.

Researchers

The researcher initiates the process of research ethics review by developing a proposal involving human participants and submitting an application. Across standards, the principal investigator is accountable for the conduct of the study, including adherence to research ethics requirements. Such standards are readily available both from the source (e.g. Panel on Research Ethics [Canada], National Institutes of Health [USA], Food and Drug Administration [USA]) and, typically, through institutional websites. Researchers have an obligation to be familiar with the rules for human participant research. Developing a sound proposal where ethics requirements are met at the outset places the application in a good position at the time of submission. Researchers are accountable for delays in review when ethical standards are not met and the application must be returned for revision. Tracking the reasons for return permits solutions, such as targeted educational activities, to be developed.

Core issues that investigators can address in the development of their applications include an ethical recruitment strategy, a sound consent process, and application of relevant privacy standards and legislation. Most research ethics units associated with institutions maintain websites where key information and resources may be found, such as consent templates, privacy standards, “frequently asked questions,” and application submission checklists [ 31 , 32 , 33 ]. Moreover, consulting with the REB in advance of submission may help researchers to prevent potentially challenging issues [ 15 ]. Investigators who are diligent in knowing about and applying required standards will experience fewer requests for revision and fewer stalls or backtracking once their applications are submitted. Some have suggested that researchers should be required, rather than merely expected, to have an understanding of legal and ethics standards before they are even permitted to submit an application [ 19 ].

The scholarly integrity of proposed research is an essential element of ethically acceptable human participant research. Researchers must be knowledgeable about the relevant scientific literature and present proposals that are justified based on what is known and where knowledge gaps exist. Research methods must be appropriate to the question and studies adequately powered. Novice or inexperienced researchers whose protocols have not undergone formal peer review (e.g. via supervisory committees, internal peer review committees, or competitive grant reviews) should seek consultation and informal peer review prior to ethics review to ensure the scientific validity of their proposals. While it is within the purview of REBs to question methods and design, it is not their primary mandate. Using REB resources for science review is an opportunity cost that can compromise efficient ethics review.

Finally, researchers are advised to review and proof their applications prior to submission to ensure that all required components have been addressed and the information in the application and supporting documents (e.g. consent forms, protocol) is consistent. Missing or discrepant information is causal to application return and therefore to time lost [ 7 ].

Administrators

Prior to submission, administrators may be the first point of contact for researchers seeking assistance with application requirements. Subsequently, they are often responsible for undertaking a preliminary, screening review of applications to make sure they are complete, with all required supporting documents and approvals in place. Once an application is complete, the administrative staff assign it to a reviewer. The reviewer may be a Board member or a subject-matter expert accountable to the Board.

Initial consultation and screening activities work best when staff have good knowledge of both institutional application requirements and ethics standards. Administrative checklists are useful tools to help ensure consistent application of standards in this preliminary application review. Poorly screened applications that reach reviewers may be delayed if the application must be returned to the administrator or the researcher for repair.

Reviewers typically send their completed reviews back to the administrators. In turn, the administrators either forward the applications to the Chair to consider (i.e. for delegated approval) or to a Board meeting agenda. In addition to ensuring that applications are complete, administrators may be accountable for monitoring how long a file is out for review. When reviews are delayed or incomplete for any reason, administrators may need to reassign the file to a different reviewer.

Administrators are therefore key players in the ethics review process, as they may be both initial resources for researchers and subsequently facilitate communication between researchers and Board members. Moreover, given past experience with both research teams and reviewers, they may be aware of areas where applicants struggle and when applications or reviews are likely to be deficient or delinquent. Actively tracking such patterns in the review process may reveal problems to which solutions can be developed. For example, applications consistently deficient in a specific area may signal the need for educational outreach and reviews that are consistently submitted late may provide impetus to recruit new Board members or reviewers.

REB members

The primary responsibility for evaluating the substantive ethics issues in applications and how they are managed rests with the REB members and the Chair. The Board may approve applications, approve pending modifications, or reject them based on their compliance with standards and regulations.

Like administrators, an REB member’s efficiency and review quality are enhanced by the use of standard tools, in this case standardized review templates, intended to guide reviewers and Board members to address a consistent set of criteria. Where possible, matching members’ expertise to the application to be reviewed also contributes to timely, good quality reviews.

REB functioning is enhanced by ongoing member training and education, yielding consistent, efficient application of ethics principles and regulatory standards [ 15 ]. This may be undertaken in a variety of ways, including Board member retreats, regular circulation of current articles, and attendance at presentations and conferences. REB Chairs are accountable for ensuring consistency in the decisions made by the Board (TCPS 2014, Article 6.8). This demands that Chairs thoroughly understand ethical principles and regulatory standards and that they maintain awareness of previous decisions; otherwise, much time can be spent at Board meetings covering old ground. The use of REB decision banks has been recommended as a means of systematizing a record of precedents, thus contributing to overall quality improvement [ 34 ].

Institution

Where research ethics review takes place under the auspices of an academic institution, the institution must take responsibility for adequately supporting the functioning of its Boards and for promoting a positive culture of research ethics [ 3 , 5 ]. Supporting the financial and human resource costs of ongoing education (e.g. retreats, speakers, workshops, conferences) is therefore the institution's responsibility.

Operating an REB is costly [ 35 ]. It is reasonable to assume a relationship between the adequacy of the resources allocated to the workload and the time to an REB decision. Studies have demonstrated wide variability in times to determination [ 8 , 9 , 10 , 22 ], although comparisons are difficult because of confounding factors such as application volume, number of staff, number of REB members, application quality, application type (e.g. paper vs. electronic), and protocol complexity. Despite these variables, setting a modal target turnaround time of 6 weeks (±2 weeks) appears reasonable and in line with the targets set in the European Union and the UK's National Health Service [ 36 , 37 ]. Tracking the time spent at each step in the model may reveal where applications are typically delayed for long periods, indicating areas where more resources need to be allocated or workflows redesigned.
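To make per-step tracking concrete, it can be sketched in a few lines of code. The stage names, dates, and the 4–8 week band below are hypothetical illustrations, not values prescribed by the model:

```python
from datetime import date

# Hypothetical milestones for one application: the date the file
# entered each step of the review workflow.
stages = [
    ("submitted", date(2017, 3, 1)),
    ("screened", date(2017, 3, 8)),
    ("out_for_review", date(2017, 3, 10)),
    ("reviews_returned", date(2017, 4, 14)),
    ("decision_issued", date(2017, 4, 21)),
]

# Modal target of 6 weeks, +/- 2 weeks, expressed in days.
LOWER, UPPER = 4 * 7, 8 * 7

def stage_durations(stages):
    """Days spent reaching each step from the preceding one."""
    return [
        (later[0], (later[1] - earlier[1]).days)
        for earlier, later in zip(stages, stages[1:])
    ]

durations = stage_durations(stages)
total = sum(days for _, days in durations)       # end-to-end days
bottleneck = max(durations, key=lambda d: d[1])  # longest single step
within_target = LOWER <= total <= UPPER          # inside the 6 +/- 2 week band

print(durations)
print(total, within_target)
print(bottleneck)
```

Aggregating such per-step durations across many files would show where applications typically stall; in this toy file, the external review step dominates the total.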

As institutions grow their volumes of research, workloads correspondingly increase for institutional REBs. To maintain service levels, institutions need to ensure that resources allocated to REBs match the volume and intensity of work. Benchmarking costs (primarily human resources) relative to the number of applications and time to a decision will help to inform the allocation of resources needed to maintain desired service levels.

Finally, most REB members volunteer their Board services to the institution. Despite their good-faith intent to serve, Board members occasionally find that researchers view them as obstacles to, or adversaries in, the research enterprise. Board members may believe that researchers do not value the time and effort they contribute to review, while researchers may believe the REB and its members are unreasonable, obstructive, and a "thorn in their side" [ 15 ]. Clearly, relationships can be improved. Nevertheless, improving the timeliness and efficiency of research ethics review should help to soothe fevered brows on both sides of the issue.

Upshur [ 12 ] has previously noted that the contributions to research ethics such as Board membership and application review need to be accorded the same academic prestige as serving on peer review grant panels and editorial boards and undertaking manuscript reviews. In doing so, institutions will help to facilitate a culture of respect for, and shared commitment to, research ethics review, which may only benefit the process.

The activities, roles, and responsibilities identified in the ethics review model illustrate that review is a complex activity and that "the REB" is not a single entity. Multiple stakeholders each bear a portion of the accountability for how smoothly a research ethics application moves through the process. Time is used most efficiently when forward momentum is maintained and the application advances. Delays occur when the artefact (i.e. the application or the application review) is not advanced because the accountable stakeholders fail to discharge their responsibilities, or when the artefact fails to meet a standard and is sent back. Ensuring that all stakeholders understand and are able to operationalize their responsibilities is essential. Success depends in part on the institutional context, where standards and expectations should be well communicated and resources such as education and administrative support provided, so that the capacity to execute responsibilities is assured.

Applying this model will assist in identifying activities, accountabilities, and baseline performance levels. This information will contribute to improving local practice when deficiencies are identified and solutions implemented, such as training opportunities or reduction in duplicate activities. It will also facilitate monitoring as operational improvements over baseline performance could be measured. Where activities and benchmarks are well defined and consistent, comparisons both within and across REBs can be made.

Finally, this paper focused primarily on administrative efficiency in the context of research ethics review time. However, the identified problems and their suggested solutions would contribute not only to enhanced timeliness of review but also to enhanced quality of review and therefore human participant protection.

Beecher HK. Ethics and clinical research. N Engl J Med. 1966;274(24):1354–60.


Kim WO. Institutional review board (IRB) and ethical issues in clinical research. Korean J Anesthesiol. 2012;62(1):3–12.

Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, December 2014. http://www.ethics.gc.ca/eng/index/ . Accessed 21 Jun 2017.

World Medical Association. Declaration of Helsinki: ethical principles for medical research involving human subjects, as amended by the 64th WMA General Assembly, Fortaleza, Brazil, October 2013. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/ . Accessed 21 Jun 2017.

U.S. Department of Health and Human Services, HHS.gov, Office for Human Research Protections. 45 CFR 46. Code of Federal Regulations. Title 45. Public Welfare. Department of Health and Human Services. Part 46. Protection of Human Subjects. Revised January 15, 2009. Effective July 14, 2009. Subpart A. Basic HHS Policy for Protection of Human Research Subjects. 2009. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/ . Accessed 21 Jun 2017.

Abbott L, Grady C. A systematic review of the empirical literature evaluating IRBs: what we know and what we still need to learn. J Empir Res Hum Res Ethics. 2011;6:3–19.

Egan-Lee E, Freitag S, Leblanc V, Baker L, Reeves S. Twelve tips for ethical approval for research in health professions education. Med Teach. 2011;33(4):268–72.

Hicks SC, James RE, Wong N, Tebbutt NC, Wilson K. A case study evaluation of ethics review systems for multicentre clinical trials. Med J Aust. 2009;191(5):3.


Larson E, Bratts T, Zwanziger J, Stone P. A survey of IRB process in 68 U.S. hospitals. J Nurs Scholarsh. 2004;36(3):260–4.

Silberman G, Kahn KL. Burdens on research imposed by institutional review boards: the state of the evidence and its implications for regulatory reform. Milbank Q. 2011;89(4):599–627.

Whitney SN, Schneider CE. Viewpoint: a method to estimate the cost in lives of ethics board review of biomedical research. J Intern Med. 2011;269(4):396–402.

Upshur REG. Ask not what your REB can do for you; ask what you can do for your REB. Can Fam Physician. 2011;57(10):1113–4.

Taylor H. Moving beyond compliance: measuring ethical quality to enhance the oversight of human subjects research. IRB. 2007;29(5):9–14.

De Vries RG, Forsberg CP. What do IRBs look like? What kind of support do they receive? Account Res. 2002;9(3-4):199–216.

Guillemin M, Gillam L, Rosenthal D, Bolitho A. Human research ethics committees: examining their roles and practices. J Empir Res Hum Res Ethics. 2012;7(3):38–49. doi: 10.1525/jer.2012.7.3.38 .

Grady C. Institutional review boards: purpose and challenges. Chest. 2015;148(5):1148–55.

Burman WJ, Reves RR, Cohn DL, Schooley RT. Breaking the camel’s back: multicenter clinical trials and local institutional review boards. Ann Intern Med. 2001;134(2):152–7.

Whittaker E. Adjudicating entitlements: the emerging discourses of research ethics boards. Health: An Interdisciplinary Journal for the Social Study of Health, Illness and Medicine. 2005;9(4):513–35. doi: 10.1177/1363459305056416 .

Turner L. Ethics board review of biomedical research: improving the process. Drug Discov Today. 2004;9(1):8–12.

Nicholls SG, Hayes TP, Brehaut JC, McDonald M, Weijer C, Saginur R, et al. A Scoping Review of Empirical Research Relating to Quality and Effectiveness of Research Ethics Review. PLoS ONE. 2015;10(7):e0133639. doi: 10.1371/journal.pone.0133639 .

Mansbach J, Acholonu U, Clark S, Camargo CA. Variation in institutional review board responses to a standard, observational, pediatric research protocol. Acad Emerg Med. 2007;14(4):377–80.

Christie DRH, Gabriel GS, Dear K. Adverse effects of a multicentre system for ethics approval on the progress of a prospective multicentre trial of cancer treatment: how many patients die waiting? Internal Med J. 2007;37(10):680–6.

Greene SM, Geiger AM. A review finds that multicenter studies face substantial challenges but strategies exist to achieve institutional review board approval. J Clin Epidemiol. 2006;59(8):784–90.

Jester PM, Tilden SJ, Li Y, Whitley RJ, Sullender WM. Regulatory challenges: lessons from recent West Nile virus trials in the United States. Contemp Clin Trials. 2006;27(3):254–9.

Flynn KE, Hahn CL, Kramer JM, Check DK, Dombeck CB, Bang S, et al. Using central IRBs for multicenter clinical trials in the United States. Plos One. 2013;8(1):e54999.

National Institutes of Health (NIH). Final NIH policy on the use of a single institutional review board for multi-site research. 2017. NOT-OD-16-094. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-094.html . Accessed 21 Jun 2017.

Check DK, Weinfurt KP, Dombeck CB, Kramer JM, Flynn KE. Use of central institutional review boards for multicenter clinical trials in the United States: a review of the literature. Clin Trials. 2013;10(4):560–7.

Dove ES, Townend D, Meslin EM, Bobrow M, Littler K, Nicol D, et al. Ethics review for international data-intensive research. Science. 2016;351(6280):1399–400.

Ishikawa K. Introduction to Quality Control. J. H. Loftus (trans.). Tokyo: 3A Corporation; 1990.

Nyeboer N. Early-stage requirements engineering to aid the development of a business process improvement strategy. Oxford: Kellogg College, University of Oxford; 2014.

University of Calgary. Researchers. Ethics and Compliance. CHREB. 2017. http://www.ucalgary.ca/research/researchers/ethics-compliance/chreb . Accessed 21 Jun 2017.

Harvard University. Committee on the Use of Human Subjects. University-area Institutional Review Board at Harvard. 2017. http://cuhs.harvard.edu . Accessed 21 Jun 2017.

Oxford University. UAS Home. Central University Research Ethics Committee (CUREC). 2016. https://www.admin.ox.ac.uk/curec/ . Accessed 21 Jun 2017.

Bean S, Henry B, Kinsey JM, McMurray K, Parry C, Tassopoulos T. Enhancing research ethics decision-making: an REB decision bank. IRB. 2010;32(6):9–12.

Sugarman J, Getz K, Speckman JL, Byrne MM, Gerson J, Emanuel EJ. The cost of institutional review boards in academic medical centers. N Engl J Med. 2005;352(17):1825–7.

National Health Service, Health Research Authority. Resources, Research legislation and governance, Standard Operating Procedures. 2017. http://www.hra.nhs.uk/resources/research-legislation-and-governance/standard-operating-procedures/ . Accessed 21 June 2017.

European Commission. Clinical Trials Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001. https://ec.europa.eu/health/sites/health/files/files/eudralex/vol-1/dir_2001_20/dir_2001_20_en.pdf . Accessed 21 Jun 2017.


Acknowledgements

The authors would like to thank Dr. Michael C. King for his review of the manuscript draft.

Availability of data and materials

Not applicable.

Authors’ contributions

The listed authors (SP, JN) have each undertaken the following: made substantial contributions to conception and design of the model; been involved in drafting the manuscript; have read and given final approval of the version to be published and participated sufficiently in the work to take public responsibility for appropriate portions of the content; and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Authors’ information

SP is the Chair of the Conjoint Health Research Ethics Board at the University of Calgary. She is also a member of the Human Research Ethics Board at Mount Royal University and a member of the Research Ethics Board at the Alberta College of Art and Design. She serves on the Board of Directors for the Canadian Association of Research Ethics Boards.

JN is an Executive Technology Consultant specializing in Enterprise and Business Architecture. He has worked on process improvement initiatives across multiple industries as well as on the delivery of technology-based solutions. He was the project manager for the delivery of the IRISS online system for the Province of Alberta’s Health Research Ethics Harmonization initiative.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Ethics approval and consent to participate

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Authors and affiliations

Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada

Stacey A. Page

Conjoint Health Research Board, University of Calgary, Calgary, Alberta, Canada

ITM Vocational University, Vadodara, Gujarat, India

Jeffrey Nyeboer


Corresponding author

Correspondence to Stacey A. Page .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Page, S.A., Nyeboer, J. Improving the process of research ethics review. Res Integr Peer Rev 2, 14 (2017). https://doi.org/10.1186/s41073-017-0038-7


Received : 31 March 2017

Accepted : 14 June 2017

Published : 18 August 2017



  • Research ethics
  • Research Ethics Boards
  • Research Ethics Committees
  • Medical research
  • Applied ethics
  • Institutional Review Boards

Research Integrity and Peer Review

ISSN: 2058-8615

Published: 14 December 2022

Advancing ethics review practices in AI research

  • Madhulika Srikumar   ORCID: orcid.org/0000-0002-6776-4684 1 ,
  • Rebecca Finlay 1 ,
  • Grace Abuhamad 2 ,
  • Carolyn Ashurst 3 ,
  • Rosie Campbell 4 ,
  • Emily Campbell-Ratcliffe 5 ,
  • Hudson Hongo 1 ,
  • Sara R. Jordan 6 ,
  • Joseph Lindley   ORCID: orcid.org/0000-0002-5527-3028 7 ,
  • Aviv Ovadya   ORCID: orcid.org/0000-0002-8766-0137 8 &
  • Joelle Pineau   ORCID: orcid.org/0000-0003-0747-7250 9 , 10  

Nature Machine Intelligence, volume 4, pages 1061–1064 (2022)


A Publisher Correction to this article was published on 11 January 2023


The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort, to support experimentation with different ethics review processes, to study their effect, and to provide opportunities for diverse voices from the community to share insights and foster norms.


As artificial intelligence (AI) and machine learning (ML) technologies continue to advance, awareness of the potential negative consequences on society of AI or ML research has grown. Anticipating and mitigating these consequences can only be accomplished with the help of the leading experts on this work: researchers themselves.

Several leading AI and ML organizations, conferences and journals have therefore started to implement governance mechanisms that require researchers to directly confront risks related to their work that can range from malicious use to unintended harms. Some have initiated new ethics review processes, integrated within peer review, which primarily facilitate a reflection on the potential risks and effects on society after the research is conducted (Box 1 ). This is distinct from other responsibilities that researchers undertake earlier in the research process, such as the protection of the welfare of human participants, which are governed by bodies such as institutional review boards (IRBs).

Box 1 Current ethics review practices

Current ethics review practices can be thought of as a sliding scale that varies according to how submitting authors must conduct an ethical analysis and document it in their contributions. Most conferences and journals are yet to initiate ethics review.

Key examples of different types of ethics review process are outlined below.

Impact statement

NeurIPS 2020 broader impact statements - all authors were required to include a statement of the potential broader impact of their work, such as its ethical aspects and future societal consequences of the research, including positive and negative effects. Organizers also specified additional evaluation criteria for paper reviewers to flag submissions with potential ethical issues.

Other examples include the NAACL 2021 and the EMNLP 2021 ethical considerations sections, which encourage authors and reviewers to consider ethical questions in their submitted papers.

Nature Machine Intelligence asks authors for ethical and societal impact statements in papers that involve the identification or detection of humans or groups of humans, including behavioural and socio-economic data.

NeurIPS 2021 paper checklist - a checklist to prompt authors to reflect on potential negative societal effects of their work during the paper writing process (as well as other criteria). Authors of accepted papers were encouraged to include the checklist as an appendix. Reviewers could flag papers that required additional ethics review by the appointed ethics committee.

Other examples include the ACL Rolling Review (ARR) Responsible NLP Research checklist, which is designed to encourage best practices for responsible research.

Code of ethics or guidelines

International Conference on Learning Representations (ICLR) code of ethics - ICLR required authors to review and acknowledge the conference’s code of ethics during the submission process. Authors were not expected to include discussion on ethical aspects in their submissions unless necessary. Reviewers were encouraged to flag papers that may violate the code of ethics.

Other examples include the ACM Code of Ethics and Professional Conduct, which considers ethical principles but through the wider lens of professional conduct.

Although these initiatives are commendable, they have yet to be widely adopted. They are being pursued largely without the benefit of community alignment. As researchers and practitioners from academia, industry and non-profit organizations in the field of AI and its governance, we believe that community coordination is needed to ensure that critical reflection is meaningfully integrated within AI research to mitigate its harmful downstream consequences. The pace of AI and ML research and its growing potential for misuse necessitates that this coordination happen today.

Writing in Nature Machine Intelligence , Prunkl et al. 1 argue that the AI research community needs to encourage public deliberation on the merits and future of impact statements and other self-governance mechanisms in conference submissions. We agree. Here, we build on this suggestion, and provide three recommendations to enable this effective community coordination, as more ethics review approaches begin to emerge across conferences and journals. We believe that a coordinated community effort will require: (1) more research on the effects of ethics review processes; (2) more experimentation with such processes themselves; and (3) the creation of venues in which diverse voices both within and beyond the AI or ML community can share insights and foster norms. Although many of the challenges we address have been previously highlighted 1 , 2 , 3 , 4 , 5 , 6 , this Comment takes a wider view, calling for collaboration between different conferences and journals by contextualizing this conversation against more recent studies 7 , 8 , 9 , 10 , 11 and developments.

Developments in AI research ethics

In the past, many applied scientific communities have contended with the potential harmful societal effects of their research. The infamous anthrax attacks in 2001, for example, catalysed the creation of the National Science Advisory Board for Biosecurity to prevent the misuse of biomedical research. Virology, in particular, has had long-running debates about the responsibility of individual researchers conducting gain-of-function research. Today, the field of AI research finds itself at a similar juncture 12 . Algorithmic systems are now being deployed for high-stakes applications such as law enforcement and automated decision-making, in which the tools have the potential to increase bias, injustice, misuse and other harms at scale. The recent adoption of ethics and impact statements and checklists at some AI conferences and journals signals a much-needed willingness to deal with these issues. However, these ethics review practices are still evolving and are experimental in nature. The developments acknowledge gaps in existing, well-established governance mechanisms, such as IRBs, which focus on risks to human participants rather than risks to society as a whole. This limited focus leaves ethical issues such as the welfare of data workers and non-participants, and the implications of data generated by or about people outside of their scope 6 . We acknowledge that such ethical reflection, beyond IRB mechanisms, may also be relevant to other academic disciplines, particularly those for whom large datasets created by or about people are increasingly common, but such a discussion is beyond the scope of this piece. The need to reflect on ethical concerns seems particularly pertinent within AI, because of its relative infancy as a field, the rapid development of its capabilities and outputs, and its increasing effects on society.

In 2020, the NeurIPS ML conference required all papers to carry a ‘broader impact’ statement examining the ethical and societal effects of the research. The conference updated its approach in 2021, asking authors to complete a checklist and to document potential downstream consequences of their work. In the same year, the Partnership on AI released a white paper calling for the field to expand peer review criteria to consider the potential effects of AI research on society, including accidents, unintended consequences, inappropriate applications and malicious uses 3 . In an editorial citing the white paper, Nature Machine Intelligence announced that it would ask submissions to carry an ethical statement when the research involves the identification of individuals and related sensitive data 13 , recognizing that mitigating downstream consequences of AI research cannot be completely disentangled from how the research itself is conducted. In another recent development, Stanford University’s Ethics and Society Review (ESR) requires AI researchers who apply for funding to identify if their research poses any risks to society and also explain how those risks will be mitigated through research design 14 .

Other developments include the rising popularity of interdisciplinary conferences examining the effects of AI, such as the ACM Conference on Fairness, Accountability, and Transparency (FAccT), and the emergence of ethical codes of conduct for professional associations in computer science, such as the Association for Computing Machinery (ACM). Other actors have focused on upstream initiatives such as the integration of ethics reflection into all levels of the computer science curriculum.

Reactions from the AI research community to the introduction of ethics review practices include fears that these processes could restrict open scientific inquiry 3 . Scholars also note the inherent difficulty of anticipating the consequences of research 1 , with some AI researchers expressing concern that they do not have the expertise to perform such evaluations 7 . Other challenges include concerns about the lack of transparency in review practices at corporate research labs (which increasingly contribute to the most highly cited papers at premier AI conferences such as NeurIPS and ICML 9 ) as well as academic research culture and incentives supporting the ‘publish or perish’ mentality that may not allow time for ethical reflection.

With the emergence of these new attempts to acknowledge and articulate unique ethical considerations in AI research and the resulting concerns from some researchers, the need for the AI research community to come together to experiment, share knowledge and establish shared best practices is all the more urgent. We recommend the following three steps.

Study community behaviour and share learnings

So far, there are limited studies that have explored the responses of ML researchers to the launch of experimental ethics review practices. To understand how behaviour is changing and how to align practice with intended effect, we need to study what is happening and share learnings iteratively to advance innovation. For example, in response to the NeurIPS 2020 requirement for broader impact statements, a paper found that most researchers surveyed spent fewer than two hours working on this process 7 , perhaps retroactively towards the end of their research, making it difficult to know whether this reflection influenced or shifted research directions or not. Surveyed researchers also expressed scepticism about the mandated reflection on societal impacts 7 . An analysis of preprints found that researchers assessed impact through the narrow lens of technical contributions (that is, describing their work in the context of how it contributes to the research space and not how it may affect society), thereby overlooking potential effects on vulnerable stakeholders 8 . A qualitative analysis of a larger sample 10 and a quantitative analysis of all submitted papers 11 found that engagement was highly variable, and that researchers tended to favour the discussion of positive effects over negative effects.

We need to understand what works. These findings, all drawn from studies examining the implementation of ethics review at NeurIPS 2020, point to a pressing need to review actual versus intended community behaviour more thoroughly and consistently to evaluate the effectiveness of ethics review practices. We recognize that other fields have considered ethics in research in different ways. To get started, we propose the following approach, building on and expanding the analysis of Prunkl et al. 1 .

First, clear articulation of the purposes behind impact statements and other ethics review requirements is needed to evaluate efficacy and motivate future iterations by the community. Publication venues that organize ethics review must communicate expectations of this process comprehensively both at the level of individual contribution and for the community at large. At the individual level, goals could include encouraging researchers to reflect on the anticipated effects on society. At the community level, goals could include creating a culture of shared responsibility among researchers and (in the longer run) identifying and mitigating harms.

Second, because the exercise of anticipating downstream effects can be abstract and risks being reduced to a box-ticking endeavour, we need more data to ascertain whether they effectively promote reflection. Similar to the studies above, conference organizers and journal editors must monitor community behaviour through surveys with researchers and reviewers, partner with information scientists to analyse the responses 15 , and share their findings with the larger community. Reviewing community attitudes more systematically can provide data both on the process and effect of reflecting on harms for individual researchers, the quality of exploration encountered by reviewers, and uncover systemic challenges to practicing thoughtful ethical reflection. Work to better understand how AI researchers view their responsibility about the effects of their work in light of changing social contexts is also crucial.

Evaluating whether AI or ML researchers are more explicit about the downsides of their research in their papers is a preliminary metric for measuring change in community behaviour at large 2 . An analysis of the potential negative consequences of AI research can consider the types of application the research can make possible, the potential uses of those applications, and the societal effects they can cause 4 .
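As a purely illustrative sketch of such a preliminary metric (the statements and keyword lexicons below are invented for the example, not drawn from the cited studies), one could count how often impact statements mention harms versus benefits:

```python
import re
from collections import Counter

# Toy impact statements; a real study would sample submitted papers,
# as the NeurIPS 2020 analyses did, and use validated coding schemes.
statements = [
    "Our method improves accuracy and may benefit healthcare triage.",
    "Misuse for surveillance is a risk; biased data could harm minorities.",
    "This work advances the state of the art in representation learning.",
]

# Crude keyword lexicons, for illustration only.
NEGATIVE = {"risk", "harm", "misuse", "bias", "biased", "unintended"}
POSITIVE = {"benefit", "improve", "improves", "advances"}

def tone_counts(text):
    """(negative, positive) keyword hits in one statement."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return (
        sum(counts[w] for w in NEGATIVE),
        sum(counts[w] for w in POSITIVE),
    )

results = [tone_counts(s) for s in statements]
share_discussing_harms = sum(1 for neg, _ in results if neg) / len(statements)

print(results)                 # per-statement (negative, positive) hits
print(share_discussing_harms)  # fraction of statements mentioning any harm
```

Tracked across successive conference years, even a crude share like this could indicate whether explicit discussion of negative effects is becoming more common, though it says nothing about the quality of that discussion.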

Building on the efforts at NeurIPS 16 and NAACL 17 , we can openly share our learnings as conference organizers and ethics committee members to gain a better understanding of what does and does not work.

Community behaviour in response to ethics review at the publication stage must also be studied to evaluate how structural and cultural forces throughout the research process can be reshaped towards more responsible research. The inclusion of diverse researchers and ethics reviewers, as well as people who face existing and potential harm, is a prerequisite to conduct research responsibly and improve our ability to anticipate harms.

Expand experimentation with ethics review

The low uptake of ethics review practices, and the lack of experimentation with such processes, limit our ability to evaluate the effectiveness of different approaches. Experimentation cannot be limited to a few conferences covering particular subdomains of ML and computing research, especially when other subdomains envision real-world applications in settings such as employment, policing and healthcare. For instance, NeurIPS, which is largely considered a methods and theory conference, began an ethics review process in 2020, whereas conferences closer to applications, such as top-tier conferences in computer vision, have yet to implement such practices.

Sustained experimentation across subfields of AI can help us to study actual community behaviour, including differences in researcher attitudes and the unique opportunities and challenges that come with each domain. In the absence of accepted best practices, implementing ethics review processes will require conference organizers and journal editors to act under uncertainty. For that reason, we recognize that it may be easier for publication venues to begin their ethics review process by making it voluntary for authors. This can provide researchers and reviewers with the opportunity to become familiar with ethical and societal reflection, remove incentives for researchers to ‘game’ the process, and help the organizers and wider community to get closer to identifying how they can best facilitate the reflection process.

Create venues for debate, alignment and collective action

This work requires considerable cultural and institutional change that goes beyond the submission of ethical statements or checklists at conferences.

Ethical codes in scientific research have proven to be insufficient in the absence of community-wide norms and discussion 1 . Venues for open exchange can provide opportunities for researchers to share their experiences and challenges with ethical reflection. Such venues can encourage reflection on values as they evolve in AI or ML research, such as which topics are chosen for research, how research is conducted, and what values best reflect societal needs.

It is crucial to establish venues for dialogue where conference organizers and journal editors can regularly share experiences, monitor trends in attitudes, and exchange insights on actual community behaviour across domains, while considering the evolving research landscape and the range of opinions within it. These venues would bring together an international group of actors involved throughout the research process, from funders, research leaders and publishers to interdisciplinary experts adopting a critical lens on AI impact, including social scientists, legal scholars, public interest advocates and policymakers.

In addition, reflection and dialogue can have a powerful role in influencing the future trajectory of a technology. Historically, gatherings convened by scientists have had far-reaching effects — setting the norms that guide research, and also creating practices and institutions to anticipate risks and inform downstream innovation. The Asilomar Conference on Recombinant DNA in 1975 and the Bermuda Meetings on genomic data sharing in the 1990s are instructive examples of scientists and funders, respectively, creating spaces for consensus-building 18 , 19 .

Proposing a global forum for gene editing, the scholars Jasanoff and Hurlbut argued that such a venue should promote reflection on “what questions should be asked, whose views must be heard, what imbalances of power should be made visible, and what diversity of views exist globally” 20 . A forum for global deliberation on ethical approaches to AI or ML research will need to do the same.

By focusing on building the AI research field’s capacity to measure behavioural change, exchange insights and act together, we can amplify emerging ethical review and oversight efforts. Doing this will require coordination across the entire research community and, accordingly, will come with challenges that conference organizers and others need to consider in their funding strategies. That said, we believe that important incremental steps can be taken today towards realizing this change: for example, hosting an annual workshop on ethics review at pre-eminent AI conferences, holding public panels on the subject 21 , hosting a workshop to review ethics statements 22 , and bringing conference organizers together 23 . Recent initiatives undertaken by AI research teams at companies to implement ethics review processes 24 , better understand societal impacts 25 and share learnings 26 , 27 also show how industry practitioners can have a positive effect. The AI community recognizes that more needs to be done to mitigate this technology’s potential harms. Recent developments in ethics review in AI research demonstrate that we must take action together.

Change history

11 January 2023

A Correction to this paper has been published: https://doi.org/10.1038/s42256-023-00608-6

References

1. Prunkl, C. E. A. et al. Nat. Mach. Intell. 3, 104–110 (2021).

2. Hecht, B. et al. Preprint at https://doi.org/10.48550/arXiv.2112.09544 (2021).

3. Partnership on AI. https://go.nature.com/3UUX0p3 (2021).

4. Ashurst, C. et al. https://go.nature.com/3gsQfvp (2020).

5. Hecht, B. https://go.nature.com/3AASZhf (2020).

6. Ashurst, C., Barocas, S., Campbell, R. & Raji, D. in FAccT ’22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2057–2068 (2022).

7. Abuhamad, G. et al. Preprint at https://arxiv.org/abs/2011.13032 (2020).

8. Boyarskaya, M. et al. Preprint at https://arxiv.org/abs/2011.13416 (2020).

9. Birhane, A. et al. in FAccT ’22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 173–184 (2022).

10. Nanayakkara, P. et al. in AIES ’21: Proc. 2021 AAAI/ACM Conf. on AI, Ethics, and Society 795–806 (2021).

11. Ashurst, C., Hine, E., Sedille, P. & Carlier, A. in FAccT ’22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2047–2056 (2022).

12. National Academies of Sciences, Engineering, and Medicine. https://go.nature.com/3UTKOEJ (accessed 16 September 2022).

13. Nat. Mach. Intell. 3, 367 (2021).

14. Bernstein, M. S. et al. Proc. Natl Acad. Sci. USA 118, e2117261118 (2021).

15. Pineau, J. et al. J. Mach. Learn. Res. 22, 7459–7478 (2021).

16. Bengio, S. et al. Neural Information Processing Systems. https://go.nature.com/3tQxGEO (2021).

17. Bender, E. M. & Fort, K. https://go.nature.com/3TWnbua (2021).

18. Gregorowius, D., Biller-Andorno, N. & Deplazes-Zemp, A. EMBO Rep. 18, 355–358 (2017).

19. Jones, K. M., Ankeny, R. A. & Cook-Deegan, R. J. Hist. Biol. 51, 693–805 (2018).

20. Jasanoff, S. & Hurlbut, J. B. Nature 555, 435–437 (2018).

21. Partnership on AI. https://go.nature.com/3EpQwY4 (2021).

22. Sturdee, M. et al. in CHI ’21: CHI Conf. on Human Factors in Computing Systems Extended Abstracts https://doi.org/10.1145/3411763.3441330 (2021).

23. Partnership on AI. https://go.nature.com/3AzdNFW (2022).

24. DeepMind. https://go.nature.com/3EQyUWT (2022).

25. Meta AI. https://go.nature.com/3i3PBVX (2022).

26. Munoz Ferrandis, C. OpenRAIL. https://huggingface.co/blog/open_rail (2022).

27. OpenAI. https://go.nature.com/3GyZPYk (2022).


Author information

Authors and affiliations

Partnership on AI, San Francisco, CA, USA

Madhulika Srikumar, Rebecca Finlay & Hudson Hongo

ServiceNow, Santa Clara, CA, USA

Grace Abuhamad

The Alan Turing Institute, London, UK

Carolyn Ashurst

OpenAI, San Francisco, CA, USA

Rosie Campbell

Centre for Data Ethics and Innovation, London, UK

Emily Campbell-Ratcliffe

Future of Privacy Forum, Washington, DC, USA

Sara R. Jordan

Design Research Works, Lancaster University, Lancaster, UK

Joseph Lindley

Belfer Center for Science and International Affairs, Harvard Kennedy School, Cambridge, MA, USA

Aviv Ovadya

Meta AI, Menlo Park, CA, USA

Joelle Pineau

McGill University, Montreal, Canada


Corresponding author

Correspondence to Madhulika Srikumar .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Carina Prunkl and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article

Cite this article

Srikumar, M., Finlay, R., Abuhamad, G. et al. Advancing ethics review practices in AI research. Nat Mach Intell 4, 1061–1064 (2022). https://doi.org/10.1038/s42256-022-00585-2


Published: 14 December 2022

Issue date: December 2022

DOI: https://doi.org/10.1038/s42256-022-00585-2

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

This article is cited by

How to design an AI ethics board

  • Jonas Schuett
  • Ann-Katrin Reuel
  • Alexis Carlier

AI and Ethics (2024)

Machine learning in precision diabetes care and cardiovascular risk prediction

  • Evangelos K. Oikonomou
  • Rohan Khera

Cardiovascular Diabetology (2023)

Generative AI entails a credit–blame asymmetry

  • Sebastian Porsdam Mann
  • Brian D. Earp
  • Julian Savulescu

Nature Machine Intelligence (2023)

Recommendations for the use of pediatric data in artificial intelligence and machine learning ACCEPT-AI

  • V. Muralidharan

npj Digital Medicine (2023)


Why you need ethical approval

Why ethical approval must be obtained for all research involving human participants and/or human tissue.

All research involving human participants and/or human tissues requires ethical approval by the University's Research Ethics Sub-Committee (RESC) or one of the Faculty Research Ethics Committees (FRECs).

Ethical research is honest, rigorous, transparent, respectful and protects participants.

Participants are a valuable part of the research process and not merely a means of accessing data. Ethical review provides protection for participants, and also helps to protect the researcher. By obtaining ethical approval the researcher is demonstrating that they have adhered to the accepted ethical standards of a genuine research study.

Participants have the right to know who has access to their data and what is being done with it. If ethical approval has not been obtained, the individual researcher bears personal responsibility for any claims that may be made.

Research funders will generally only fund research that has ethical approval, and many publishers will not accept for publication results of research that was not ethically approved.

Principles of good ethical research

  • Voluntary, and informed, participation – free from coercion or undue influence
  • Worthwhile, and providing value that outweighs any risk or harm
  • Respect for the rights and dignity of participants, including confidentiality and anonymity
  • Conducted with integrity and transparency
  • Independent (and any conflicts of interest or partiality made explicit)
  • Clear definition of lines of responsibility and accountability

All universities are required to have a research ethics framework with which all researchers must comply. Ethical approval is needed for all research undertaken by university staff and students (both undergraduate and postgraduate) wherever research and related activities involve human participants or raise ethical issues. Ethical issues should be considered early in the planning process, and approval must be obtained before any primary data collection for the project begins.

All academic staff engaged in research, whether externally grant-funded or undertaken as part of their internally allocated workload, are required to complete the University's online Research Ethics training module (requires a UWE Bristol staff login).

Applying for ethical approval

Ethical Considerations of Conducting Systematic Reviews in Educational Research

  • Open Access
  • First Online: 22 November 2019


  • Harsh Suri

133k Accesses

39 Citations

1 Altmetric

Ethical considerations of conducting systematic reviews in educational research are not typically discussed explicitly. However, systematic reviews are frequently read and cited in documents that influence educational policy and practice. Hence, ethical issues associated with what and how systematic reviews are produced and used have serious implications. It becomes imperative for systematic reviewers to reflexively engage with a variety of ethical issues associated with potential conflicts of interest and issues of voice and representation. This chapter discusses how systematic reviewers can draw upon the philosophical traditions of consequentialism, deontology or virtue ethics to situate their ethical decision-making.


Ethical considerations of conducting systematic reviews in educational research are not typically discussed explicitly. As an illustration, ‘ethics’ is not listed as a term in the index of the second edition of ‘An Introduction to Systematic Reviews’ (Gough et al. 2017 ). This chapter draws from my earlier in-depth discussion of this topic in the Qualitative Research Journal (Suri 2008 ) along with more recent publications by colleagues in the field of research ethics and methods of research synthesis.

Unlike primary researchers, systematic reviewers do not collect deeply personal, sensitive or confidential information from participants. Systematic reviewers use publicly accessible documents as evidence and are seldom required to seek institutional ethics approval before commencing a systematic review. Institutional Review Boards for the ethical conduct of research do not typically include guidelines for systematic reviews. Nonetheless, over the past four decades systematic reviews have evolved to become more methodologically inclusive and to play a powerful role in influencing policy, practice, further research and public perception. Hence, ethical considerations of how the interests of different stakeholders are represented in a research review have become critical (Franklin 1999 ; Hammersley 2003 ; Harlen and Crick 2004 ; Popkewitz 1999 ).

Educational researchers often draw upon the philosophical traditions of consequentialism, deontology or virtue ethics to situate their ethical decision-making. Consequentialism or utilitarianism focuses on maximising benefit and minimising harm by undertaking a cost-benefit analysis of potential positive and negative impacts of research on all stakeholders. Deontology or universalism stems from Immanuel Kant’s logic that certain actions are inherently right or wrong and hence ends cannot justify the means. A deontological viewpoint is underpinned by rights-based theories that emphasise universal adherence to the principles of beneficence (do good), non-maleficence (prevent harm), justice, honesty and gratitude. While both consequentialism and deontology focus on actions and behaviour, virtue ethics focuses on being virtuous, especially in relationships with various stakeholders. There are several overlaps, as well as tensions, between and across these philosophical traditions (Brooks et al. 2014 ; Cohen et al. 2018 ).

Recognising the inherently situated nature of ethical decision-making, I am selectively eclectic in drawing from each of these traditions. I discuss a variety of ethical considerations of conducting systematic reviews informed by rights-based theories, ethics of care and Foucauldian ethics. Rights-based theories underpin deontology and consequentialism. Most regulatory research ethics guidelines, such as those offered by the British Educational Research Association (BERA 2018 ) and the American Educational Research Association, are premised on rights-based theories that emphasise basic human rights, such as liberty, equality and dignity. Ethics of care prioritises attentiveness, responsibility, competence and responsiveness (Tronto 2005 ). Foucauldian ethics highlights the relationship of power and knowledge (Ball 2013 ).

In my earlier publications, I have identified the following three guiding principles for a quality research synthesis (Suri 2018 ; Suri and Clarke 2009 ):

Informed subjectivity and reflexivity

Purposefully informed selective inclusivity

Audience-appropriate transparency

In the rest of this chapter, I will discuss how these guiding principles can support ethical decision making in systematic reviews in each of the following six phases of systematic reviews as identified in my earlier publications (Suri 2014 ):

identifying an appropriate epistemological orientation

identifying an appropriate purpose

searching for relevant literature

evaluating, interpreting and distilling evidence from selected reports

constructing connected understandings

communicating with an audience

To promote ethical production and use of systematic reviews through this chapter, I have used questioning as a strategic tool, with the purpose of raising awareness of a variety of ethical considerations among systematic reviewers and their audience.

1 Identifying an Appropriate Epistemological Orientation

What philosophical traditions are amenable for guiding ethical decision - making in systematic reviews positioned along distinct epistemologies?

Practising informed subjectivity and reflexivity, all systematic reviewers must identify an appropriate epistemological orientation, such as post-positivist, interpretive, participatory and/or critical, that is aligned with their review purpose and research competence (Suri 2013 , 2018 ).

Deontological ethics is more relevant to post-positivist reviewers who focus on explaining, predicting or describing educational phenomena as generalisable laws expressed through relationships between measurable constructs and variables. The ethical focus of post-positivist systematic reviews tends to be on minimising threats to the internal validity, external validity, internal reliability and external reliability of review findings. This is typically achieved by using a priori synthesis protocols, defining all key constructs conceptually and operationally in behavioural terms, employing exhaustive sampling strategies and using variable-oriented statistical analyses (Matt and Cook 2009 ; Petticrew and Roberts 2006 ).

Teleological ethics is more relevant to interpretive systematic reviews aiming to construct a holistic understanding of the educational phenomena that takes into account subjective experiences of diverse groups in varied contexts. Ethical decision making in interpretive systematic reviews lays an emphasis on authentically representing experiences and perceptions of diverse groups, especially those whose viewpoints tend to be less represented in the literature, to the extent that is permissible from the published literature. Maintaining a questioning gaze and a genuine engagement with diverse viewpoints, interpretive systematic reviewers focus on how individual accounts of a phenomenon reinforce, refute or augment each other (Eisenhart 1998 ; Noblit and Hare 1988 ).

Ethics of care is amenable to participatory systematic reviews that are designed to improve participant reviewers’ local world experientially through critical engagement with the relevant research. Ethical decision making in participatory systematic reviews promotes building teams of practitioners with the purpose of co-reviewing research that can transform their own practices and representations of their lived experiences. Participant co-reviewers exercise greater control throughout the review process to ensure that the review remains relevant to generating actionable knowledge for transforming their practice (Bassett and McGibbon 2013 ).

Foucauldian ethics is aligned with critical systematic reviews that contest dominant discourse by problematizing the prevalent metanarratives. Ethical decision making in critical systematic reviews focuses on problematizing ‘what we might take for granted’ (Schwandt 1998 , p. 410) in a field of research by raising ‘important questions about how narratives get constructed, what they mean, how they regulate particular forms of moral and social experiences, and how they presuppose and embody particular epistemological and political views of the world’ (Aronowitz and Giroux 1991 , pp. 80–81).

2 Identifying an Appropriate Purpose

What are key ethical considerations associated with identifying an appropriate purpose for a systematic review?

In this age of information explosion, systematic reviews require substantial resources. Guided by teleological ethics, systematic reviewers must conduct a cost-benefit analysis with a critical consideration of the purpose and scope of the review and its potential benefits to various groups of stakeholders.

If we consider the number of views or downloads as a proxy measure of impact, then we can gain useful insights by examining the teleological underpinnings of some of the most highly read systematic reviews. Review of Educational Research (RER) is widely regarded as the premier educational research review journal internationally. Let us examine the scope and purpose of the three ‘most read’ articles in RER, as listed on 26 September 2018. Given the finite amount of resources available, an important question for educators is ‘what interventions are likely to be most effective, and under what circumstances?’. The power of feedback (Hattie and Timperley 2007 ), with 11,463 views and downloads, is a conceptual analysis primarily drawing from the findings of published systematic reviews (largely meta-analyses) conducted to address this question. In addition to effectively teaching what is deemed important, educators also have an important role in critiquing what is deemed important and why. The theory and practice of culturally relevant education: A synthesis of research across content areas (Aronson and Laughter 2016 ), with 8,958 views and downloads, is an example of such a systematic review. After highlighting the positive outcomes of culturally relevant education, the authors problematise the validity of standardised testing as an unbiased measure of a desirable educational outcome for all. As education is essentially a social phenomenon, understanding how different stakeholders perceive various configurations of an educational intervention is critical. Making sense of assessment feedback in higher education (Evans 2013 ), with 5,372 views and downloads, is an example of a systematic review that follows such a pursuit. Even though each of these reviews required significant resources and expertise, the cost is justified by the benefits evident from their high numbers of views and downloads. Each of these three reviews makes clear recommendations for practitioners and researchers by providing an overview of, as well as interrogating, current practices.

All educational researchers are expected to prevent, or disclose and manage, ethical dilemmas arising from any real or perceived conflicts of interest (AERA 2011 ; BERA 2018 ). Systematic reviewers should also carefully scrutinise how their personal, professional or financial interests may influence the review findings in a specific direction. As systematic reviews require significant effort and resources, it is logical for systematic reviewers to bid for funding. Recognising the influence of systematic reviews in shaping the perceptions of the wider community, many for-profit and not-for-profit organisations have become open to funding systematic reviews. Before accepting funding for a systematic review, educational researchers must carefully reflect on the following questions:

How does the agenda of the funding source intersect with the purpose of the review?

How might this potentially influence the review process and findings? How will this be managed ethically to ensure the integrity of the systematic review findings?

In the case of sponsored systematic reviews, it is important to consider at the outset how potential ethical issues will be managed if the interests of the funding agency conflict with the interests of relatively less influential or less represented groups. Systematic reviews funded by a single agency with a vested interest in the findings are particularly vulnerable to ethical dilemmas arising from a conflict of interest (The Methods Coordinating Group of the Campbell Collaboration 2017 ). One approach is to seek funding from a combination of agencies representing the interests of different stakeholder groups. Crowdfunding is another option that systematic reviewers could explore to represent the interests of marginalised groups, whose interests are typically overlooked in the agendas of powerful funding agencies. In participatory synthesis, it is critical that the purpose of the systematic review evolves organically in response to the emerging needs of the practitioner participant reviewers.

3 Searching for Relevant Literature

What are key ethical considerations associated with developing an appropriate strategy for sampling and searching relevant primary research reports to include in a systematic review?

A number of researchers in education and health sciences have found that studies with certain methodological orientations or types of findings are more likely to be funded, published, cited and retrieved through common search channels (Petticrew and Roberts 2006 ). Serious ethical implications arise when systematic reviews of biased research are drawn upon to make policy decisions with an assumption that review findings are representative of the larger population. In designing an appropriate sampling and search strategy, systematic reviewers should carefully consider the impact of potential publication biases and search biases.

Funding bias, methodological bias, outcome bias and confirmatory bias are common forms of publication bias in educational research. For instance, studies with large sample sizes are more likely to attract research funding, to be submitted for publication and to be published in reputable journals (Finfgeld-Connett and Johnson 2012 ). Research that reports significantly positive effects of an innovative intervention is more likely to be submitted for publication by primary researchers and accepted by journal editors (Dixon-Woods 2011 ; Rothstein et al. 2004 ). Rather than reporting on all the comparisons made in a study, authors often report only those comparisons that are significant (Sutton 2009 ). As a result, the effectiveness of innovative educational interventions gets spuriously inflated in the published literature. Often, when an educational intervention is piloted, additional resources are allocated for staff capacity building. However, when the same intervention is rolled out at scale, the same degree of support is not provided to the teachers whose practice is affected by the intervention (Schoenfeld 2006 ).

Even after getting published, certain types of studies are more likely to be cited and retrieved through common search channels, such as key databases and professional networks (Petticrew and Roberts 2006 ). Systematic reviewers must carefully consider common forms of search bias, such as database bias, citation bias, availability bias, language bias, country bias, familiarity bias and multiple-publication bias. The term ‘grey literature’ is sometimes used to refer to published and unpublished reports, such as government reports, that are not typically included in common research indexes and databases (Rothstein and Hopewell 2009 ). Several scholars recommend the inclusion of grey literature to minimise the potential impact of publication bias and search bias (Glass 2000 ) and to be inclusive of key policy documents and government reports (Godin et al. 2015 ). On the other hand, other scholars argue that systematic reviewers should include only published research that has undergone the peer-review process of the academic community, both to restrict the review to high-quality research and to minimise the potential impact of multiple publications based on the same dataset (La Paro and Pianta 2000 ).

With the ease of internet publishing and searching, the distinction between published and unpublished research has become blurred and the term grey literature has varied connotations. While most systematic reviews employ exhaustive sampling, in recent years there has been increasing uptake of purposeful sampling in systematic reviews, as evident from more than 1,055 Google Scholar citations of a publication on this topic: Purposeful sampling in qualitative research synthesis (Suri 2011 ).

Aligned with the review’s epistemological and teleological positioning, all systematic reviewers must prudently design a sampling strategy and search plan, with complementary sources, that give them access to the most relevant primary research from a variety of high-quality sources inclusive of diverse viewpoints. They must ethically consider the positioning of the research studies included in their sample in relation to the diverse contextual configurations and viewpoints commonly observed in practical settings.

4 Evaluating, Interpreting and Distilling Evidence from the Selected Research Reports

What are key ethical considerations associated with evaluating, interpreting and distilling evidence from the selected research reports in a systematic review?

Systematic reviewers typically do not have direct access to the participants of the primary research studies included in their review. The information they analyse is inevitably refracted through the subjective lens of the authors of individual studies. It is important for systematic reviewers to critically reflect upon the contextual position of the authors of the primary research studies included in the review, their methodological and pedagogical orientations, the assumptions they make, and how these might have influenced the findings of the original studies. This becomes particularly important with global access to information, where critical contextual information (practices that are common in one context but not in others) may be taken for granted by the authors of a primary research report and hence not explicitly mentioned.

Systematic reviewers must ethically consider the quality and relevance of evidence reported in primary research reports with respect to the review purpose (Major and Savin-Baden 2010 ). In evaluating the quality of evidence in individual reports, it is important to use evaluation criteria that are commensurate with the epistemological positioning of the author of the study. Cook and Campbell’s ( 1979 ) constructs of internal validity, construct validity, external validity and statistical conclusion validity are amenable for evaluating post-positivist research. Valentine ( 2009 ) provides a comprehensive discussion of criteria suitable for evaluating research employing a wide range of post-positivist methods. Lincoln and Guba’s ( 1985 ) constructs of credibility, transferability, dependability and confirmability are suitable for evaluating interpretive research. The Centre for Reviews and Dissemination (CRD 2009 ) provides a useful comparison of common qualitative research appraisal tools in Chapter 6 of its open access guidelines for systematic reviews. Heron and Reason’s ( 1997 ) constructs of critical subjectivity, epistemic participation and political participation, emphasising a congruence of experiential, presentational, propositional and practical knowings, are appropriate for evaluating participatory research studies. Validity of transgression, rather than correspondence, is suitable for evaluating critically oriented research reports, using Lather’s constructs of ironic validity, paralogical validity, rhizomatic validity and voluptuous validity (Lather 1993 ). Rather than seeking perfect studies, systematic reviewers must ethically evaluate the extent to which the findings reported in individual studies are grounded in the reported evidence.

While interpreting evidence from individual research reports, systematic reviewers should remain cognisant of the quality criteria commensurate with the epistemological positioning of the original study. It is also important to reflect ethically on plausible reasons why critical information may be missing from individual reports and on how that absence might influence the reported findings (Dunkin 1996). Through purposefully informed selective inclusivity, systematic reviewers must distil the information that is most relevant for addressing the synthesis purpose.

Often a two-stage approach is appropriate for evaluating, interpreting and distilling evidence from individual studies. For example, in their review that won the American Educational Research Association's Review of the Year Award, Wideen et al. (1998) first evaluated individual studies using criteria aligned with each study's methodological orientation, and then distilled the information most relevant to their review purpose. In this second stage, systematic reviewers must pay particular ethical attention to the quality criteria aligned with the overarching methodological orientation of their own review, which may include reducing potential biases, honouring the representations of the participants of primary research studies, enriching the praxis of participant reviewers, or constructing a critically reflexive account of how certain discourses of an educational phenomenon have become more powerful than others. The overarching orientation and purpose of the systematic review should determine the extent to which evidence from individual primary research studies is drawn upon to shape the review findings (Major and Savin-Baden 2010; Suri 2018).
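As an illustration only, the two-stage logic can be thought of as two successive filters: appraise each study against criteria matched to its own methodological orientation, then keep what is most relevant to the review purpose. The field names, orientations and thresholds below are invented for the sketch and are not drawn from Wideen et al. or any cited appraisal tool.

```python
# Illustrative sketch of a two-stage appraisal.
# All field names and thresholds are hypothetical.

# Stage 1 criteria, keyed by each study's own methodological orientation.
CRITERIA = {
    "postpositivist": lambda s: s["internal_validity"] >= 3,
    "interpretive":   lambda s: s["credibility"] >= 3,
}

def two_stage_appraisal(studies, relevance_threshold=2):
    # Stage 1: appraise each study against criteria commensurate
    # with its own epistemological positioning.
    appraised = [s for s in studies if CRITERIA[s["orientation"]](s)]
    # Stage 2: distil the studies most relevant to the review purpose.
    return [s for s in appraised if s["relevance"] >= relevance_threshold]

studies = [
    {"id": "A", "orientation": "postpositivist", "internal_validity": 4, "relevance": 3},
    {"id": "B", "orientation": "interpretive", "credibility": 2, "relevance": 3},
    {"id": "C", "orientation": "interpretive", "credibility": 4, "relevance": 1},
]
print([s["id"] for s in two_stage_appraisal(studies)])  # -> ['A']
```

Study B is excluded at stage 1 (it fails its own orientation's criterion) and study C at stage 2 (sound but not relevant), mirroring the separation of quality appraisal from purposeful distillation described above.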

5 Constructing Connected Understandings

What are key ethical considerations associated with constructing connected understandings in a systematic review?

Through informed subjectivity and reflexivity, systematic reviewers must ethically consider how their own contextual positioning influences the connected understandings they construct from the distilled evidence. A variety of systematic techniques can be used to minimise unacknowledged biases, including content analysis, statistical techniques, historical methods, visual displays, narrative methods, critical sensibilities and computer-based techniques. Common strategies for enhancing the quality of all systematic reviews include 'reflexivity; collaborative sense-making; eliciting feedback from key stakeholders; identifying disconfirming cases and exploring rival connections; sensitivity analyses and using multiple lenses' (Suri 2014, p. 144).
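Of the strategies quoted, sensitivity analysis is the most mechanical, so it lends itself to a minimal sketch. The leave-one-out version below recomputes a pooled estimate with each study omitted in turn; the effect sizes are invented, and a plain unweighted mean stands in for whatever pooling model a given review would actually use.

```python
# Minimal leave-one-out sensitivity analysis.
# Effect sizes are invented; the pooled estimate is an unweighted mean
# for simplicity, not any particular meta-analytic model.

def leave_one_out(effects):
    """For each study, recompute the pooled mean with that study omitted."""
    results = {}
    for name in effects:
        rest = [v for k, v in effects.items() if k != name]
        results[name] = sum(rest) / len(rest)
    return results

effects = {"study1": 0.40, "study2": 0.35, "study3": 0.90}
pooled = sum(effects.values()) / len(effects)  # 0.55
influence = leave_one_out(effects)
# Omitting study3 moves the pooled mean from 0.55 to 0.375,
# flagging that single study as driving the connected understanding.
print(round(influence["study3"], 3))  # -> 0.375
```

A large shift when one study is dropped signals that the synthesis finding rests heavily on that study, prompting exactly the kind of reflexive scrutiny the strategies above call for.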

In addition, systematic reviewers must pay specific attention to the ethical considerations most relevant to their review's epistemological orientation. Postpositivist systematic reviewers, for instance, should be wary of common errors such as unexplained selectivity, failing to discriminate between evidence of varying quality, inaccurate coding of contextual factors, overstating claims beyond what the evidence reported in primary studies can justify, and paying inadequate attention to findings at odds with the generalisations made in the review (Dunkin 1996). Interpretive systematic reviews should focus on authentically representing the viewpoints of the participants of the original studies as expressed through the interpretive lens of those studies' authors. Rather than aiming for generalisability, they should aim for transferability by attending to how the findings of individual studies intersect with their methodological and contextual configurations. Participatory systematic reviews should attend to the extent to which practitioner co-reviewers feel empowered to drive the agenda of the review to address their own questions, to change their own practices through the learning afforded by participating in the synthesis, and to have practitioner voices heard through the review (Suri 2014). Critically oriented systematic reviews should highlight how certain representations silence or privilege some discourses over others, and how these representations intersect with the interests of various stakeholder groups (Baker 1999; Lather 1999; Livingston 1999).

6 Communicating with an Audience

What are key ethical considerations associated with communicating findings of a systematic review to diverse audiences?

All educational researchers are expected to adhere to the highest standards of quality and rigour (AERA 2011; BERA 2018). The PRISMA-P group has identified a list of 'Preferred reporting items for systematic review and meta-analysis protocols' (Moher et al. 2015), which offers useful guidance for improving the transparency of the systematic review process. Like all educational researchers, systematic reviewers have an obligation to disclose any sources of funding and potential conflicts of interest that could have influenced their findings.

All researchers should reflexively engage with issues that may affect individuals participating in the research as well as the wider groups whose interests the research is intended to address (Greenwood 2016; Pullman and Wang 2001; Tolich and Fitzgerald 2006). Systematic reviewers should likewise critically consider the potential impact of the review findings on the participants of the original studies and on the wider groups whose practices or experiences the findings are likely to affect. They should carefully articulate the domain of applicability of a review to deter extrapolation of the findings beyond their intended use. The contextual configurations of typical primary research studies included in the review must be described comprehensively yet succinctly, in a way that makes visible the contextual configurations missing from the sample of studies.

Like primary researchers, systematic reviewers should reflexively engage with a variety of ethical issues associated with potential conflicts of interest and with questions of voice and representation. Systematic reviews are frequently read and cited in documents that influence educational policy and practice, so ethical issues associated with how systematic reviews are produced and used have serious implications. Systematic reviewers must pay careful attention to representing the perspectives of the authors and research participants of the original studies in a way that makes missing perspectives visible. The domain of applicability of a systematic review should be scrutinised to deter unintended extrapolation of its findings to contexts where they do not apply. This requires systematic reflection on how publication and search biases may influence the synthesis findings. Throughout the review process, reviewers must remain reflexive about how their own subjective positioning is influencing, and being influenced by, the review findings. Purposefully informed selective inclusivity should guide critical decisions throughout, and in communicating the insights gained through the review, reviewers must ensure audience-appropriate transparency to maximise the ethical impact of the findings.

AERA. (2011). Code of ethics (Approved by American Educational Research Association Council, February 2011). Educational Researcher, 40 (3), 145–156.


Aronowitz, S., & Giroux, H. A. (1991). Postmodern education: Politics, culture, and social criticism . Minneapolis: University of Minnesota Press.


Aronson, B., & Laughter, J. (2016). The theory and practice of culturally relevant education: A synthesis of research across content areas. Review of Educational Research, 86 (1), 163–206.

Baker, B. (1999). What is voice? Issues of identity and representation in the framing of reviews. Review of Educational Research, 69 (4), 365–383.

Ball, S. (2013). Foucault, power and education . NY: Routledge.


Bassett, R., & McGibbon, E. (2013). A critical participatory and collaborative method for scoping the literature. Quality and Quantity, 47 (6), 3249–3259.

BERA. (2018). British Educational Research Association's ethical guidelines for educational research . Retrieved 4 April 2018 from https://www.bera.ac.uk/wp-content/uploads/2018/06/BERA-Ethical-Guidelines-for-Educational-Research_4thEdn_2018.pdf?noredirect=1 .

Brooks, R., te Riele, K., & Maguire, M. (2014). Ethics and education research . London: Sage.

Cohen, L., Manion, L., & Morrison, K. (2018). Research Methods in Education (8th ed.). Abingdon: Routledge.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings . Chicago: Rand McNally.

CRD. (2009). Systematic reviews: CRD’s guidance for undertaking reviews in health care. Retrieved April 10, 2019 from https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf .

Dixon-Woods, M. (2011). Using framework-based synthesis for conducting reviews of qualitative studies. BMC Medicine, 9 (3), 39–40.

Dunkin, M. J. (1996). Types of errors in synthesizing research in education. Review of Educational Research, 66 (2), 87–97.

Eisenhart, M. (1998). On the subject of interpretive reviews. Review of Educational Research, 68 (4), 391–399.

Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83 (1), 70–120. https://doi.org/10.3102/0034654312474350 .

Finfgeld-Connett, D., & Johnson, E. D. (2012). Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing, 69 (1), 194–203. https://doi.org/10.1111/j.1365-2648.2012.06037.x .

Franklin, B. M. (1999). Discourse, rationality and educational research: A historical perspective of RER. Review of Educational Research, 69 (4), 347–363.

Glass, G. V. (2000, January). Meta-analysis at 25. Retrieved April 10, 2019, from http://www.gvglass.info/papers/meta25.html .

Godin, K., Stapleton, J., Kirkpatrick, S. I., Hanning, R. M., & Leatherdale, S. T. (2015). Applying systematic review search methods to the grey literature: A case study examining guidelines for school-based breakfast programs in Canada. Systematic Reviews, 4 (1), 138. https://doi.org/10.1186/s13643-015-0125 .

Gough, D., Oliver, S., & Thomas, J. (Eds.). (2017). An introduction to systematic reviews (2nd ed.). London: Sage.

Greenwood, M. (2016). Approving or improving research ethics in management journals. Journal of Business Ethics, 137 , 507–520.

Hammersley, M. (2003). Systematic or unsystematic, is that the question? Some reflections on the science, art, and politics of reviewing research. Paper presented at the Department of Epidemiology and Public Health, University of Leicester.

Harlen, W., & Crick, R. D. (2004). Opportunities and challenges of using systematic reviews of research for evidence-based policy in education. Evaluation and Research in Education, 18 (1–2), 54–71.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81–113.

Heron, J., & Reason, P. (1997). A participatory inquiry paradigm. Qualitative Inquiry, 3 (3), 274–294.

La Paro, K., & Pianta, R. (2000). Predicting children’s competence in the early school years: A meta-analytic review. Review of Educational Research, 70 (4), 443–484.

Lather, P. (1993). Fertile obsession: Validity after poststructuralism. The Sociological Quarterly, 34 (4), 673–693.

Lather, P. (1999). To be of use: The work of reviewing. Review of Educational Research, 69 (1), 2–7.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry . Beverly Hills, CA: Sage.

Livingston, G. (1999). Beyond watching over established ways: A review as recasting the literature, recasting the lived. Review of Educational Research, 69 (1), 9–19.

Major, C. H., & Savin-Baden, M. (2010). An introduction to qualitative research synthesis: Managing the information explosion in social science research . London: Routledge.

Matt, G. E., & Cook, T. D. (2009). Threats to the validity of generalized inferences. In H. M. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 537–560). New York: Sage.

Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M.,... Stewart, L. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4 (1–9).

Noblit, G. W., & Hare, R. D. (1988). Meta-ethnography: Synthesizing qualitative studies . Newbury Park: Sage.

Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide . Malden, MA: Blackwell.

Popkewitz, T. S. (1999). Reviewing reviews: RER , research and the politics of educational knowledge. Review of Educational Research, 69 (4), 397–404.

Pullman, D., & Wang, A. T. (2001). Adaptive designs, informed consent, and the ethics of research. Controlled Clinical Trials, 22 (3), 203–210.

Rothstein, H. R., & Hopewell, S. (2009). Grey literature. In H. M. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 103–125). New York: Sage.

Rothstein, H. R., Turner, H. M., & Lavenberg, J. G. (2004). The Campbell Collaboration information retrieval policy brief . Retrieved June 5, 2006, from http://www.campbellcollaboration.org/MG/IRMGPolicyBriefRevised.pdf .

Schoenfeld, A. H. (2006). What doesn’t work: The challenge and failure of the What Works Clearinghouse to conduct meaningful reviews of studies of mathematics curricula. Educational Researcher, 35 (2), 13–21.

Schwandt, T. A. (1998). The interpretive review of educational matters: Is there any other kind? Review of Educational Research, 68 (4), 409–412.

Suri, H. (2008). Ethical considerations in synthesising research: Whose representations? Qualitative Research Journal, 8 (1), 62–73.

Suri, H. (2011). Purposeful sampling in qualitative research synthesis. Qualitative Research Journal, 11 (2), 63–75.

Suri, H. (2013). Epistemological pluralism in qualitative research synthesis. International Journal of Qualitative Studies in Education, 26 (7), 889–911.

Suri, H. (2014). Towards methodologically inclusive research synthesis . UK: Routledge.

Suri, H. (2018). Meta-analysis, systematic reviews and research syntheses. In L. Cohen, L. Manion & K. R. B. Morrison, Research methods in education (8th ed., pp. 427–439). Abingdon: Routledge.

Suri, H., & Clarke, D. J. (2009). Advancements in research synthesis methods: From a methodologically inclusive perspective. Review of Educational Research, 79 (1), 395–430.

Sutton, A. J. (2009). Publication bias. In H. M. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 435–452). New York: Sage.

The Methods Coordinating Group of the Campbell Collaboration. (2017). Methodological expectations of Campbell Collaboration intervention reviews: Reporting standards. Retrieved April 21, 2019, from https://www.campbellcollaboration.org/library/campbell-methods-reporting-standards.html .

Tolich, M., & Fitzgerald, M. (2006). If ethics committees were designed for ethnography. Journal of Empirical Research on Human Research Ethics, 1 (2), 71–78.

Tronto, J. C. (2005). An ethic of care. In A. E. Cudd & R. O. Andreasen (Eds.), Feminist theory: A philosophical anthology (pp. 251–263). Oxford, UK; Malden, MA: Blackwell.

Valentine, J. C. (2009). Judging the quality of primary research. In H. M. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 129–146). New York: Sage.

Wideen, M., Mayer-Smith, J., & Moon, B. (1998). A critical analysis of the research on learning to teach: Making the case for an ecological perspective on inquiry. Review of Educational Research, 68 (2), 130–178.


Author information

Authors and Affiliations

Deakin University, Melbourne, Australia


Corresponding author

Correspondence to Harsh Suri .

Editor information

Editors and Affiliations

Oldenburg, Germany

Olaf Zawacki-Richter

Essen, Germany

Michael Kerres

Svenja Bedenlier

Melissa Bond

Katja Buntins

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2020 The Author(s)

About this chapter

Suri, H. (2020). Ethical Considerations of Conducting Systematic Reviews in Educational Research. In: Zawacki-Richter, O., Kerres, M., Bedenlier, S., Bond, M., Buntins, K. (eds) Systematic Reviews in Educational Research. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-27602-7_3


Published : 22 November 2019

Publisher Name : Springer VS, Wiesbaden

Print ISBN : 978-3-658-27601-0

Online ISBN : 978-3-658-27602-7




  • Open access
  • Published: 18 April 2024

Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research

  • James Shaw 1 , 13 ,
  • Joseph Ali 2 , 3 ,
  • Caesar A. Atuire 4 , 5 ,
  • Phaik Yeong Cheah 6 ,
  • Armando Guio Español 7 ,
  • Judy Wawira Gichoya 8 ,
  • Adrienne Hunt 9 ,
  • Daudi Jjingo 10 ,
  • Katherine Littler 9 ,
  • Daniela Paolotti 11 &
  • Effy Vayena 12  

BMC Medical Ethics, volume 25, Article number: 46 (2024)


The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022.

We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships.

Conclusions

The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.


Introduction

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice [ 1 , 2 , 3 ]. Beyond the growing number of AI applications being implemented in health care, capabilities of AI models such as Large Language Models (LLMs) expand the potential reach and significance of AI technologies across health-related fields [ 4 , 5 ]. Discussion about effective, ethical governance of AI technologies has spanned a range of governance approaches, including government regulation, organizational decision-making, professional self-regulation, and research ethics review [ 6 , 7 , 8 ]. In this paper, we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Although applications of AI for research, health care, and public health are diverse and advancing rapidly, the insights generated at the forum remain highly relevant from a global health perspective. After summarizing important context for work in this domain, we highlight categories of ethical issues emphasized at the forum for attention from a research ethics perspective internationally. We then outline strategies proposed for research, innovation, and governance to support more ethical AI for global health.

In this paper, we adopt the definition of AI systems provided by the Organization for Economic Cooperation and Development (OECD) as our starting point. Their definition states that an AI system is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” [ 9 ]. The conceptualization of an algorithm as helping to constitute an AI system, along with hardware, other elements of software, and a particular context of use, illustrates the wide variety of ways in which AI can be applied. We have found it useful to differentiate applications of AI in research as those classified as “AI systems for discovery” and “AI systems for intervention”. An AI system for discovery is one that is intended to generate new knowledge, for example in drug discovery or public health research in which researchers are seeking potential targets for intervention, innovation, or further research. An AI system for intervention is one that directly contributes to enacting an intervention in a particular context, for example informing decision-making at the point of care or assisting with accuracy in a surgical procedure.

The mandate of the GFBR is to take a broad view of what constitutes research and its regulation in global health, with special attention to bioethics in Low- and Middle- Income Countries. AI as a group of technologies demands such a broad view. AI development for health occurs in a variety of environments, including universities and academic health sciences centers where research ethics review remains an important element of the governance of science and innovation internationally [ 10 , 11 ]. In these settings, research ethics committees (RECs; also known by different names such as Institutional Review Boards or IRBs) make decisions about the ethical appropriateness of projects proposed by researchers and other institutional members, ultimately determining whether a given project is allowed to proceed on ethical grounds [ 12 ].

However, research involving AI for health also takes place in large corporations and smaller scale start-ups, which in some jurisdictions fall outside the scope of research ethics regulation. In the domain of AI, the question of what constitutes research also becomes blurred. For example, is the development of an algorithm itself considered a part of the research process? Or only when that algorithm is tested under the formal constraints of a systematic research methodology? In this paper we take an inclusive view, in which AI development is included in the definition of research activity and within scope for our inquiry, regardless of the setting in which it takes place. This broad perspective characterizes the approach to “research ethics” we take in this paper, extending beyond the work of RECs to include the ethical analysis of the wide range of activities that constitute research as the generation of new knowledge and intervention in the world.

Ethical governance of AI in global health

The ethical governance of AI for global health has been widely discussed in recent years. The World Health Organization (WHO) released its guidelines on ethics and governance of AI for health in 2021, endorsing a set of six ethical principles and exploring the relevance of those principles through a variety of use cases. The WHO guidelines also provided an overview of AI governance, defining governance as covering “a range of steering and rule-making functions of governments and other decision-makers, including international health agencies, for the achievement of national health policy objectives conducive to universal health coverage” (p. 81). The report usefully provided a series of recommendations related to governance of seven domains pertaining to AI for health: data, benefit sharing, the private sector, the public sector, regulation, policy observatories/model legislation, and global governance. The report acknowledges that much work is yet to be done to advance international cooperation on AI governance, especially related to prioritizing voices from Low- and Middle-Income Countries (LMICs) in global dialogue.

One important point emphasized in the WHO report that reinforces the broader literature on global governance of AI is the distribution of responsibility across a wide range of actors in the AI ecosystem. This is especially important to highlight when focused on research for global health, which is specifically about work that transcends national borders. Alami et al. (2020) discussed the unique risks raised by AI research in global health, ranging from the unavailability of data in many LMICs required to train locally relevant AI models to the capacity of health systems to absorb new AI technologies that demand the use of resources from elsewhere in the system. These observations illustrate the need to identify the unique issues posed by AI research for global health specifically, and the strategies that can be employed by all those implicated in AI governance to promote ethically responsible use of AI in global health research.

RECs and the regulation of research involving AI

RECs represent an important element of the governance of AI for global health research, and thus warrant further commentary as background to our paper. Despite the importance of RECs, foundational questions have been raised about their capabilities to accurately understand and address ethical issues raised by studies involving AI. Rahimzadeh et al. (2023) outlined how RECs in the United States are under-prepared to align with recent federal policy requiring that RECs review data sharing and management plans with attention to the unique ethical issues raised in AI research for health [ 13 ]. Similar research in South Africa identified variability in understanding of existing regulations and ethical issues associated with health-related big data sharing and management among research ethics committee members [ 14 , 15 ]. The effort to address harms accruing to groups or communities as opposed to individuals whose data are included in AI research has also been identified as a unique challenge for RECs [ 16 , 17 ]. Doerr and Meeder (2022) suggested that current regulatory frameworks for research ethics might actually prevent RECs from adequately addressing such issues, as they are deemed out of scope of REC review [ 16 ]. Furthermore, research in the United Kingdom and Canada has suggested that researchers using AI methods for health tend to distinguish between ethical issues and social impact of their research, adopting an overly narrow view of what constitutes ethical issues in their work [ 18 ].

The challenges for RECs in adequately addressing ethical issues in AI research for health care and public health exceed a straightforward survey of ethical considerations. As Ferretti et al. (2021) contend, some capabilities of RECs adequately cover certain issues in AI-based health research, such as the common occurrence of conflicts of interest where researchers who accept funds from commercial technology providers are implicitly incentivized to produce results that align with commercial interests [ 12 ]. However, some features of REC review require reform to adequately meet ethical needs. Ferretti et al. outlined weaknesses of RECs that are longstanding and those that are novel to AI-related projects, proposing a series of directions for development that are regulatory, procedural, and complementary to REC functionality. The work required on a global scale to update the REC function in response to the demands of research involving AI is substantial.

These issues take greater urgency in the context of global health [ 19 ]. Teixeira da Silva (2022) described the global practice of “ethics dumping”, where researchers from high income countries bring ethically contentious practices to RECs in low-income countries as a strategy to gain approval and move projects forward [ 20 ]. Although not yet systematically documented in AI research for health, risk of ethics dumping in AI research is high. Evidence is already emerging of practices of “health data colonialism”, in which AI researchers and developers from large organizations in high-income countries acquire data to build algorithms in LMICs to avoid stricter regulations [ 21 ]. This specific practice is part of a larger collection of practices that characterize health data colonialism, involving the broader exploitation of data and the populations they represent primarily for commercial gain [ 21 , 22 ]. As an additional complication, AI algorithms trained on data from high-income contexts are unlikely to apply in straightforward ways to LMIC settings [ 21 , 23 ]. In the context of global health, there is widespread acknowledgement about the need to not only enhance the knowledge base of REC members about AI-based methods internationally, but to acknowledge the broader shifts required to encourage their capabilities to more fully address these and other ethical issues associated with AI research for health [ 8 ].

Although RECs are an important part of the story of the ethical governance of AI for global health research, they are not the only part. The responsibilities of supra-national entities such as the World Health Organization, national governments, organizational leaders, commercial AI technology providers, health care professionals, and other groups continue to be worked out internationally. In this context of ongoing work, examining issues that demand attention and strategies to address them remains an urgent and valuable task.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, REC members and other actors to engage with challenges and opportunities specifically related to research ethics. Each year the GFBR meeting includes a series of case studies and keynotes presented in plenary format to an audience of approximately 100 people who have applied and been competitively selected to attend, along with small-group breakout discussions to advance thinking on related issues. The specific topic of the forum changes each year, with past topics including ethical issues in research with people living with mental health conditions (2021), genome editing (2019), and biobanking/data sharing (2018). The forum is intended to remain grounded in the practical challenges of engaging in research ethics, with special interest in low resource settings from a global health perspective. A post-meeting fellowship scheme is open to all LMIC participants, providing a unique opportunity to apply for funding to further explore and address the ethical challenges that are identified during the meeting.

In 2022, the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations (both short and long form) reporting on specific initiatives related to research ethics and AI for health, and 16 governance presentations (both short and long form) reporting on approaches to governing AI in different country settings. A keynote presentation from Professor Effy Vayena addressed the broader context for AI ethics in a rapidly evolving field. A total of 87 participants attended the forum from 31 countries around the world, representing the disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. The 2-day forum addressed a wide range of themes. The conference report provides a detailed overview of each of the specific topics addressed, while a policy paper outlines the cross-cutting themes (both documents are available at the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ ). Rather than provide a detailed summary here, we briefly highlight the central issues raised, the solutions proposed, and the challenges facing the research ethics community in the years to come.

Our primary aim in this paper, then, is to present a synthesis of the challenges and opportunities raised at the GFBR meeting and in the planning process, followed by our reflections, as a group of authors, on their significance for governance leaders in the coming years. We acknowledge that the views represented at the meeting and in our results are a partial representation of the universe of views on this topic; however, the GFBR leadership invested substantial resources in convening a deeply diverse and thoughtful group of researchers and practitioners working on themes of bioethics related to AI for global health, including those based in LMICs. It remains rare to convene such a strong group for an extended time, and we believe that many of the challenges and opportunities raised demand attention for more ethical futures of AI for health. Nonetheless, our results are primarily descriptive and thus not explicitly grounded in a normative argument. In the Discussion section we contextualize our results by describing their significance and connecting them to broader efforts to reform global health research and practice.

Uniquely important ethical issues for AI in global health research

Presentations and group dialogue over the course of the forum raised several issues for consideration, and here we describe four overarching themes for the ethical governance of AI in global health research. Brief descriptions of each issue can be found in Table  1 . Reports referred to throughout the paper are available at the GFBR website provided above.

The first overarching thematic issue relates to the appropriateness of building AI technologies in response to health-related challenges in the first place. Case study presentations described initiatives where AI technologies were highly appropriate, such as ear-shape biometric identification to more accurately link electronic health care records to individual patients in Zambia (Alinani Simukanga). Although important ethical issues were raised in this initiative with respect to privacy, trust, and community engagement, the AI-based solution was appropriately matched to the challenge of accurately linking electronic records to specific patient identities. In contrast, forum participants questioned the appropriateness of an initiative using AI to improve the quality of handwashing practices in an acute care hospital in India (Niyoshi Shah), which reportedly led staff to game the algorithm. Overall, participants acknowledged the dangers of techno-solutionism, in which AI researchers and developers treat AI technologies as the most obvious solutions to problems that in actuality demand much more complex strategies [ 24 ]. However, forum participants agreed that RECs in different contexts have differing degrees of power to question the appropriateness of an AI-based intervention.

The second overarching thematic issue relates to whether and how AI-based systems transfer from one national health context to another. A central issue raised by several case study presentations was the challenge of validating an algorithm with data collected in a local environment. For example, one case study described a project that would involve collecting personally identifiable data on sensitive group identities, such as tribe, clan, or religion, in the jurisdictions involved (South Africa, Nigeria, Tanzania, Uganda and the US; Gakii Masunga). Doing so would enable the team to ensure that those groups were adequately represented in the dataset and that the resulting algorithm was not biased against specific community groups when deployed in that context. However, some members of these communities might wish to be represented in the dataset whereas others might not, illustrating the need to balance autonomy and inclusivity. It was also widely recognized that collecting these data is an immense challenge, particularly where historically oppressive practices have led to a low-trust environment for international organizations and the technologies they produce. Notably, in some countries, such as South Africa and Rwanda, it is illegal to collect information such as race and tribal identities, re-emphasizing the importance of cultural awareness and of avoiding “one size fits all” solutions.

The third overarching thematic issue relates to accountability, both for the impacts of AI technologies and for governance decision-making regarding their use. Where global health research involving AI leads to longer-term harms that fall outside the usual scope of issues considered by a REC, who is to be held accountable, and how? This question was identified as requiring much further attention, as the law varies internationally regarding the mechanisms available to hold researchers, innovators, and their institutions accountable over the longer term. However, breakout group discussion recognized that many jurisdictions are developing strong data protection regimes related specifically to international collaboration for research involving health data. For example, Kenya’s Data Protection Act requires that any internationally funded project have a local principal investigator who is accountable for how data are shared and used [ 25 ]. Many participants raised the issue of research partnerships with commercial entities in the context of accountability, pointing to the urgent need for clear principles to guide engagement with commercial technology companies in global health research.

The fourth and final overarching thematic issue is consent. Discussion of consent was framed by the widely shared recognition that models of individual, explicit consent might not produce a supportive environment for AI innovation that relies on secondary uses of health-related datasets to build AI algorithms. Given this recognition, approaches such as community oversight of health data uses were suggested as a potential solution. However, the details of implementing such community oversight mechanisms require much further attention, particularly given the unique perspectives on health data in different country settings in global health research. Furthermore, some uses of health data do continue to require consent. One case study of South Africa, Nigeria, Kenya, Ethiopia and Uganda suggested that when health data are shared across borders, individual consent remains necessary when data are transferred from certain countries (Nezerith Cengiz). Broader clarity is needed to support the ethical governance of health data uses for AI in global health research.

Recommendations for ethical governance of AI in global health research

Dialogue at the forum led to a range of suggestions for promoting ethical conduct of AI research for global health, related to the various roles of actors involved in the governance of AI research broadly defined. The strategies are written for actors we refer to as “governance leaders”, those people distributed throughout the AI for global health research ecosystem who are responsible for ensuring the ethical and socially responsible conduct of global health research involving AI (including researchers themselves). These include RECs, government regulators, health care leaders, health professionals, corporate social accountability officers, and others. Enacting these strategies would bolster the ethical governance of AI for global health more generally, enabling multiple actors to fulfill their roles related to governing research and development activities carried out across multiple organizations, including universities, academic health sciences centers, start-ups, and technology corporations. Specific suggestions are summarized in Table  2 .

First, forum participants suggested that governance leaders, including RECs, should remain up to date on advances in the regulation of AI for health, which is evolving rapidly and takes different forms in jurisdictions around the world. RECs play an important, but only partial, role in governance; it was deemed important for RECs to understand how they fit within the broader governance ecosystem in order to address the issues within their scope more effectively. Not only RECs but also organizational leaders responsible for procurement, researchers, and commercial actors should commit to remaining up to date on the relevant approaches to regulating AI for health care and public health in jurisdictions internationally.

Second, forum participants suggested that governance leaders should focus on ethical governance of health data as a basis for ethical global health AI research. Health data are considered the foundation of AI development, being used to train AI algorithms for various uses [ 26 ]. By focusing on ethical governance of health data generation, sharing, and use, multiple actors will help to build an ethical foundation for AI development among global health researchers.

Third, forum participants believed that governance processes should incorporate AI impact assessments where appropriate. An AI impact assessment is the process of evaluating the potential effects, both positive and negative, of implementing an AI algorithm on individuals, society, and various stakeholders, generally over time frames specified in advance of implementation [ 27 ]. Although not all types of AI research in global health would warrant an AI impact assessment, this is especially relevant for those studies aiming to implement an AI system for intervention into health care or public health. Organizations such as RECs can use AI impact assessments to boost understanding of potential harms at the outset of a research project, encouraging researchers to more deeply consider potential harms in the development of their study.

Fourth, forum participants suggested that governance decisions should incorporate environmental impact assessments, or at least environmental values, when assessing the potential impact of an AI system. An environmental impact assessment involves evaluating and anticipating the potential environmental effects of a proposed project to inform ethical decision-making that supports sustainability [ 28 ]. Although a relatively new consideration in research ethics conversations [ 29 ], the environmental impact of building technologies is a crucial consideration for the public health commitment to environmental sustainability. Governance leaders can use environmental impact assessments to improve understanding of the potential environmental harms of AI research projects in global health over both the shorter and longer terms.

Fifth, forum participants suggested that governance leaders should require stronger transparency in the development of AI algorithms in global health research. Transparency was considered essential in the design and development of AI algorithms for global health to ensure ethical and accountable decision-making throughout the process. Furthermore, whether and how researchers have considered the unique contexts into which such algorithms may be deployed can be surfaced through stronger transparency, for example in describing what primary considerations were made at the outset of the project and which stakeholders were consulted along the way. Sharing information about data provenance and methods used in AI development will also enhance the trustworthiness of the AI-based research process.

Sixth, forum participants suggested that governance leaders can encourage or require community engagement at various points throughout an AI project. It was considered that engaging patients and communities is crucial in AI algorithm development to ensure that the technology aligns with community needs and values. However, participants acknowledged that this is not a straightforward process. Effective community engagement requires lengthy commitments to meeting with and hearing from diverse communities in a given setting, and demands a particular set of skills in communication and dialogue that are not possessed by all researchers. Encouraging AI researchers to begin this process early and build long-term partnerships with community members is a promising strategy to deepen community engagement in AI research for global health. One notable recommendation was that research funders have an opportunity to incentivize and enable community engagement with funds dedicated to these activities in AI research in global health.

Seventh, forum participants suggested that governance leaders can encourage researchers to build strong, fair partnerships between institutions and individuals across country settings. In a context of longstanding imbalances in geopolitical and economic power, fair partnerships in global health demand a priori commitments to share the benefits of advances in medical technologies, knowledge, and financial gains. Although enforcing this point might be beyond the remit of RECs, their commentary can encourage researchers to consider stronger, fairer partnerships in global health over the longer term.

Eighth, it became evident that new forms of regulatory experimentation need to be explored, given the complexity of regulating a technology of this nature. In addition, the health sector has a series of particularities that make it especially difficult to introduce rules that have not been previously tested. Several participants highlighted the desire to promote spaces for experimentation, such as regulatory sandboxes or innovation hubs in health. These spaces can offer several benefits for addressing issues surrounding the regulation of AI in the health sector: (i) increasing the capacities and knowledge of health authorities about this technology; (ii) identifying the major problems surrounding AI regulation in the health sector; (iii) establishing possibilities for exchange and learning with other authorities; (iv) promoting innovation and entrepreneurship in AI in health; and (v) identifying the need to regulate AI in this sector and to update other existing regulations.

Ninth and finally, forum participants believed that the capabilities of governance leaders need to evolve to better incorporate expertise related to AI in ways that make sense within a given jurisdiction. With respect to RECs, for example, it might not make sense for every REC to recruit a member with expertise in AI methods. Rather, it will make more sense in some jurisdictions to consult with members of the scientific community with expertise in AI when research protocols are submitted that demand such expertise. Furthermore, RECs and other approaches to research governance in jurisdictions around the world will need to evolve in order to adopt the suggestions outlined above, developing processes that apply specifically to the ethical governance of research using AI methods in global health.

Discussion

Research involving the development and implementation of AI technologies continues to grow in global health, posing important challenges for ethical governance of AI in global health research around the world. In this paper we have summarized insights from the 2022 GFBR, focused specifically on issues in research ethics related to AI for global health research. We summarized four thematic challenges for governance related to AI in global health research and nine suggestions arising from presentations and dialogue at the forum. In this brief discussion section, we present an overarching observation about power imbalances that frames efforts to evolve the role of governance in global health research, and then outline two important opportunity areas as the field develops to meet the challenges of AI in global health research.

Dialogue about power is not unfamiliar in global health, especially given recent contributions exploring what it would mean to de-colonize global health research, funding, and practice [ 30 , 31 ]. Discussions of research ethics applied to AI research in global health contexts are deeply infused with power imbalances. The existing context of global health is one in which high-income countries primarily located in the “Global North” charitably invest in projects taking place primarily in the “Global South” while recouping knowledge, financial, and reputational benefits [ 32 ]. With respect to AI development in particular, recent examples of digital colonialism frame dialogue about global partnerships, raising attention to the role of large commercial entities and global financial capitalism in global health research [ 21 , 22 ]. Furthermore, the power of governance organizations such as RECs to intervene in the process of AI research in global health varies widely around the world, depending on the authorities assigned to them by domestic research governance policies. These observations frame the challenges outlined in our paper, highlighting the difficulties associated with making meaningful change in this field.

Despite these overarching challenges of the global health research context, there are clear strategies for progress in this domain. First, AI innovation is evolving rapidly, which means approaches to the governance of AI for health are evolving rapidly too. This rapid evolution presents an important opportunity for governance leaders to clarify their vision and influence over AI innovation in global health research, building the expertise, structure, and functionality required to meet the demands of research involving AI. Second, the research ethics community has strong international ties, linked to a global scholarly community committed to sharing insights and best practices around the world. This global community can be leveraged to coordinate advances in the capabilities and authorities of governance leaders to meaningfully govern AI research for global health, given the challenges summarized in this paper.

Limitations

Our paper has two specific limitations that we address explicitly here. First, it is still early in the development of applications of AI for use in global health, and as such the global community has had limited opportunity to learn from experience. For example, far fewer case studies (which detail experiences with the actual implementation of an AI technology) were submitted to GFBR 2022 for consideration than expected, whereas many more governance reports (which detail the processes and outputs of governance efforts that anticipate the development and dissemination of AI technologies) were submitted. This observation represents both a success and a challenge. It is a success that so many groups are engaging in anticipatory governance of AI technologies, exploring evidence of their likely impacts and governing technologies in novel and well-designed ways. It is a challenge that there is little experience to build on of implementing AI technologies in ways that limit harms while promoting innovation. Further experience with AI technologies in global health will help refine the challenges and recommendations we have outlined in this paper.

Second, global trends in the politics and economics of AI technologies are evolving rapidly. Although some nations are advancing detailed policy approaches to regulating AI more generally, including for uses in health care and public health, the impacts of corporate investments in AI and political responses related to governance remain to be seen. The excitement around large language models (LLMs) and large multimodal models (LMMs) has drawn deeper attention to the challenges of regulating AI in any general sense, opening dialogue about health sector-specific regulations. The direction of this global dialogue, strongly linked to high-profile corporate actors and multi-national governance institutions, will strongly influence the development of boundaries around what is possible for the ethical governance of AI for global health. We have written this paper at a point when these developments are proceeding rapidly, and as such, we acknowledge that our recommendations will need updating as the broader field evolves.

Ultimately, coordination and collaboration between many stakeholders in the research ethics ecosystem will be necessary to strengthen the ethical governance of AI in global health research. The 2022 GFBR illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Data availability

All data and materials analyzed to produce this paper are available on the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ .

References

1. Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration clearance of artificial intelligence and machine learning enabled software in and as medical devices: a systematic review. JAMA Netw Open. 2023;6(7):e2321792.

2. Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial intelligence in breast cancer screening: evaluation of FDA device regulation and future recommendations. JAMA Intern Med. 2022;182(12):1306–12.

3. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. 2022;296:114782.

4. Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.

5. Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120.

6. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

7. Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023.

8. Ho CWL, Malpani R. Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am J Bioeth. 2022;22(5):36–8.

9. Yeung K. Recommendation of the council on artificial intelligence (OECD). Int Leg Mater. 2020;59(1):27–34.

10. Maddox TM, Rumsfeld JS, Payne PR. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31–2.

11. Dzau VJ, Balatbat CA, Ellaissi WF. Revisiting academic health sciences systems a decade later: discovery to health to population to society. Lancet. 2021;398(10318):2300–4.

12. Ferretti A, Ienca M, Sheehan M, Blasimme A, Dove ES, Farsides B, et al. Ethics review of big data research: what should stay and what should be reformed? BMC Med Ethics. 2021;22(1):1–13.

13. Rahimzadeh V, Serpico K, Gelinas L. Institutional review boards need new skills to review data sharing and management plans. Nat Med. 2023;1–3.

14. Kling S, Singh S, Burgess TL, Nair G. The role of an ethics advisory committee in data science research in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–3.

15. Cengiz N, Kabanda SM, Esterhuizen TM, Moodley K. Exploring perspectives of research ethics committee members on the governance of big data in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–9.

16. Doerr M, Meeder S. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 2022;44(4):34–8.

17. Ballantyne A, Stewart C. Big data and public-private partnerships in healthcare and research: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11(3):315–26.

18. Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. J Empir Res Hum Res Ethics. 2021;16(3):325–37.

19. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):1–17.

20. Teixeira da Silva JA. Handling ethics dumping and neo-colonial research: from the laboratory to the academic literature. J Bioethical Inq. 2022;19(3):433–43.

21. Ferryman K. The dangers of data colonialism in precision public health. Glob Policy. 2021;12:90–2.

22. Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media. 2019;20(4):336–49.

23. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 2021.

24. Metcalf J, Moss E. Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Soc Res Int Q. 2019;86(2):449–76.

25. Office of the Data Protection Commissioner, Kenya. Data Protection Act [Internet]. 2021 [cited 2023 Sep 30]. https://www.odpc.go.ke/dpa-act/ .

26. Sharon T, Lucivero F. Introduction to the special theme: the expansion of the health data ecosystem - rethinking data ethics and governance. Big Data Soc. 2019;6(2):2053951719852969.

27. Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute; 2018.

28. Morgan RK. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 2012;30(1):5–14.

29. Samuel G, Richie C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J Med Ethics. 2023;49(6):428–33.

30. Kwete X, Tang K, Chen L, Ren R, Chen Q, Wu Z, et al. Decolonizing global health: what should be the target of this movement and where does it lead us? Glob Health Res Policy. 2022;7(1):3.

31. Abimbola S, Asthana S, Montenegro C, Guinto RR, Jumbam DT, Louskieter L, et al. Addressing power asymmetries in global health: imperatives in the wake of the COVID-19 pandemic. PLoS Med. 2021;18(4):e1003604.

32. Benatar S. Politics, power, poverty and global health: systems and frames. Int J Health Policy Manag. 2016;5(10):599.


Acknowledgements

We would like to acknowledge the outstanding contributions of the attendees of GFBR 2022 in Cape Town, South Africa. This paper is authored by members of the GFBR 2022 Planning Committee. We would like to acknowledge additional members Tamra Lysaght, National University of Singapore, and Niresh Bhagwandin, South African Medical Research Council, for their input during the planning stages and as reviewers of the applications to attend the Forum.

Funding

This work was supported by Wellcome [222525/Z/21/Z], the US National Institutes of Health, the UK Medical Research Council (part of UK Research and Innovation), and the South African Medical Research Council through funding to the Global Forum on Bioethics in Research.

Author information

Authors and affiliations

Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada

Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Department of Philosophy and Classics, University of Ghana, Legon-Accra, Ghana

Caesar A. Atuire

Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK

Mahidol Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Phaik Yeong Cheah

Berkman Klein Center, Harvard University, Bogotá, Colombia

Armando Guio Español

Department of Radiology and Informatics, Emory University School of Medicine, Atlanta, GA, USA

Judy Wawira Gichoya

Health Ethics & Governance Unit, Research for Health Department, Science Division, World Health Organization, Geneva, Switzerland

Adrienne Hunt & Katherine Littler

African Center of Excellence in Bioinformatics and Data Intensive Science, Infectious Diseases Institute, Makerere University, Kampala, Uganda

Daudi Jjingo

ISI Foundation, Turin, Italy

Daniela Paolotti

Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland

Effy Vayena

Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada


Contributions

JS led the writing of this paper. All authors (JS, JA, CA, PYC, AE, JWG, AH, DJ, KL, DP, EV) contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of the paper, and provided final approval of the paper.

Corresponding author

Correspondence to James Shaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Shaw, J., Ali, J., Atuire, C.A. et al. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med Ethics 25, 46 (2024). https://doi.org/10.1186/s12910-024-01044-w


Received: 31 October 2023

Accepted: 01 April 2024

Published: 18 April 2024

DOI: https://doi.org/10.1186/s12910-024-01044-w


  • Artificial intelligence
  • Machine learning
  • Research ethics
  • Global health

BMC Medical Ethics

ISSN: 1472-6939


  • Open access
  • Published: 29 April 2024

What is context in knowledge translation? Results of a systematic scoping review

  • Tugce Schmitt (ORCID: orcid.org/0000-0001-6893-6428),
  • Katarzyna Czabanowska &
  • Peter Schröder-Bäck

Health Research Policy and Systems volume 22, Article number: 52 (2024)


Knowledge Translation (KT) aims to convey novel ideas to relevant stakeholders, motivating their response or action to improve people’s health. Initially, the KT literature focused on evidence-based medicine, applying findings from laboratory and clinical research to disease diagnosis and treatment. Since the early 2000s, the scope of KT has expanded to include decision-making with health policy implications.

This systematic scoping review aims to assess the evolving knowledge-to-policy concepts, that is, macro-level KT theories, models and frameworks (KT TMFs). While significant attention has been devoted to transferring knowledge to healthcare settings (i.e. implementing health policies, programmes or measures at the meso-level), the definition of 'context' in the realm of health policymaking at the macro-level remains underexplored in the KT literature. This study aims to close the gap.

A total of 32 macro-level KT TMFs were identified, with only a limited subset offering detailed insights into the contextual factors that matter in health policymaking. Notably, the majority of these studies aim to prompt policy change in low- and middle-income countries and were supported by international organisations, the European Union, development agencies or philanthropic entities.


Few concepts are used by health researchers as vaguely and yet as widely as Knowledge Translation (KT), a catch-all term that accommodates a broad spectrum of ambitions. Arguably, to truly understand the role of context in KT, we first need to clarify what KT means. The World Health Organization (WHO) defines KT as ‘the synthesis, exchange and application of knowledge by relevant stakeholders to accelerate the benefits of global and local innovation in strengthening health systems and improving people’s health’ [ 1 ]. Here, particular attention should be paid to ‘innovation’, given that without unpacking this term, the meaning of KT would still remain ambiguous. Rogers’ seminal work ‘Diffusion of Innovations’ [ 2 ] defines innovation as an idea, practice or object that is perceived as novel by the individuals or groups adopting it. In this context, he argues that the objective novelty of an idea, in terms of the amount of time passed since its discovery, holds little significance [ 2 ]. Rather, it is the subjective perception of newness by the individual that shapes their response [ 2 ]. In other words, if an idea seems novel to individuals, and thereby to the relevant stakeholders of the aforementioned WHO definition, it qualifies as an innovation. From this perspective, it can be stated that a fundamental activity of KT is to communicate ideas that could be perceived as original by the targeted stakeholders, with the aim of motivating their response to improve health outcomes. This leaves us with the question of who exactly these stakeholders might be and what kind of actions would be required from them.

The scope of stakeholders in KT has evolved over time, along with their prompted responses. Initially, during the early phases of KT, the focus primarily revolved around healthcare providers and their clinical decisions, emphasising evidence-based medicine. Nearly 50 years ago, the first scientific article on KT was published, introducing Tier 1 KT, which concentrated on applying laboratory discoveries to disease diagnosis or treatment, also known as bench-to-bedside KT [ 3 ]. The primary motivation behind this initial conceptualisation of KT was to engage healthcare providers as the end-users of specific forms of knowledge, primarily related to randomised controlled trials of pharmaceuticals and evidence-based medicine [ 4 ]. In the early 2000s, the second phase of KT (Tier 2) emerged under the term ‘campus-to-clinic KT’ [ 3 ]. This facet, also known as translational research, was concerned with using evidence from health services research in healthcare provision, both in practice and policy [ 4 ]. Consequently, by including decision-makers as relevant end-users, KT scholars expanded the realm of research-to-action from the clinical environment to policy-relevant decision-making [ 5 ]. Following this trajectory, additional KT schemes (Tier 3–Tier 5) have been introduced into academic discourse, encompassing the dissemination, implementation and broader integration of knowledge into public policies [ 6 , 7 ]. Notably, the latest scheme (Tier 5) is becoming increasingly popular and represents the broadest approach, which describes the translation of knowledge to global communities and aims to involve fundamental, universal change in attitudes, policies and social systems [ 7 ].

In other words, KT has shifted noticeably over time towards macro-level interventions, initially named evidence-based policymaking and later corrected to evidence-informed policymaking. In parallel with these significant developments, various alternative terms to KT have emerged, including ‘implementation science’, ‘knowledge transfer’, and ‘dissemination and research use’, often with considerable overlap [ 8 ]. Arguably, among the plethora of alternative terms proposed, implementation science stands out prominently. While initially centred on evidence-based medicine at the meso-level (e.g. implementing medical guidelines), it has since broadened its focus to ‘encompass all aspects of research relevant to the scientific study of methods to promote the uptake of research findings into routine settings in clinical, community and policy contexts’ [ 9 ], closely mirroring the definition of KT. Thus, KT, along with activities under different names that share the same objective, has evolved into an umbrella term over the years, encompassing a wide range of strategies aimed at enhancing the impact of research not only on clinical practice but also on public policies [ 10 ]. Following the adoption of such a comprehensive definition of KT, some researchers have asserted that using evidence in public policies is not merely commendable but essential [ 11 ].

In alignment with the evolution of KT from (bio-)medical sciences to public policies, an increasing number of scholars have offered explanations on how health policies should be developed [ 12 ], indicating a growing focus on exploring the mechanisms of health policymaking in the KT literature. However, unlike in the earlier phases of KT, which aimed to transfer knowledge from the laboratory to healthcare provision, decisions made for public policies may be less technical and more complex than those in clinical settings [ 3 , 13 , 14 ]. Indeed, social scientists point out that scholarly works on evidence use in health policies exhibit theoretical shortcomings as they lack engagement with political science and public administration theories and concepts [ 15 , 16 , 17 , 18 ]; only a few of these works employ policy theories and political concepts to guide data collection and make sense of their findings [ 19 ]. Similarly, contemporary literature that conceptualises KT as an umbrella term for both clinical and public policy decision-making, with calls for a generic ‘research-to-action’ [ 20 ], may fail to recognise the different types of actions required to change clinical practices and influence health policies. In many respects, such calls can even lead to a misconception that evidence-informed policymaking is simply a scaled-up version of evidence-based medicine [ 21 ].

In this study, we systematically review knowledge translation theories, models and frameworks (also known as KT TMFs) that were developed for health policies. Essentially, KT TMFs can be depicted as bridges that connect findings across diverse studies, as they establish a common language and standardise the measurement and assessment of desired policy changes [ 22 ]. This makes them essential for generalising implementation efforts and research findings [ 23 ]. While distinctions between a theory, a model or a framework are not always crystal-clear [ 24 ], the following definitions shed light on how they are interpreted in the context of KT. To start with, theory can be described as a set of analytical principles or statements crafted to structure our observations, enhance our understanding and explain the world [ 24 ]. Within implementation science, theories are encapsulated as either generalised models or frameworks. In other words, they are integrated into broader concepts, allowing researchers to form assumptions that help clarify phenomena and create hypotheses for testing [ 25 ].

Whereas theories in the KT literature are explanatory as well as descriptive, KT models are only descriptive with a more narrowly defined scope of explanation [ 24 ]; hence they have a more specific focus than theories [ 25 ]. KT models are created to facilitate the formulation of specific assumptions regarding a set of parameters or variables, which can subsequently be tested against outcomes using predetermined methods [ 25 ]. By offering simplified representations of complex situations, KT models can describe programme elements expected to produce desired results, or theoretical constructs believed to influence or moderate observed outcomes. In this way, they encompass theories related to change or explanation [ 22 ].

Lastly, frameworks in the KT language define a set of variables and the relations among them in a broad sense [ 25 ]. Frameworks, without the aim of providing explanations, solely describe empirical phenomena, representing a structure, overview, outline, system or plan consisting of various descriptive categories and the relations between them that are presumed to account for a phenomenon [ 24 ]. They portray loosely-structured constellations of theoretical constructs, without necessarily specifying their relationships; they can also offer practical methods for achieving implementation objectives [ 22 ]. Some scholars suggest sub-classifications and categorise a framework as ‘actionable’ if it has the potential to facilitate macro-level policy changes [ 11 ].

Context, which encompasses the entire environment in which policy decisions are made, is not peripheral but central to policymaking, playing a crucial role in its conceptualisation [ 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ]. In the KT literature, the term ‘context’ is frequently employed, albeit often with a lack of precision [ 35 ]. It tends to serve as a broad term including various elements within a situation that are relevant to KT in some way but have not been explicitly identified [ 36 ]. However, there is a growing interest in delving deeper into what context refers to, as evidenced by increasing research attention [ 31 , 32 , 37 , 38 , 39 , 40 , 41 ]. While the definition of context in the transfer of knowledge to healthcare settings (i.e. implementing health policies, programmes or measures at the meso-level) has been systematically studied [ 36 , 37 , 42 , 43 ], the question of how KT scholars detail context in health policymaking remains unanswered. With our systematic scoping review, we aim to close this gap.

While KT TMFs, which emerged from evidence-based medicine, have historically depicted the use of evidence from laboratories or healthcare organisations as the gold standard, we aimed to assess in this study whether and to what extent the evolving face of KT, addressing health policies, has succeeded in foregrounding ‘context’. Our objective was thus not to evaluate the quality of these KT TMFs but rather to explore how scholars have incorporated contextual influences into their reasoning. In line with the aim of this study, we conducted a systematic scoping review of KT TMFs that are relevant to agenda-setting, policy formulation or policy adoption. Therefore, publications related to policy implementation in healthcare organisations or at the provincial level, as well as those addressing policy evaluation, did not meet our inclusion criteria. Consequently, given our focus on macro-level interventions, we excluded all articles that concentrate on translating clinical research into practice (meso-level interventions) or health knowledge to patients or citizens (micro-level interventions).

Prior systematic scoping reviews in the area of KT TMFs serve as a valuable foundation upon which to build further studies [ 44 , 45 ]. Using established methodologies may ensure a validated approach, allowing for a more nuanced understanding of KT TMFs in the context of existing scholarly work. Our review methodology employed an approach similar to that followed by Strifler et al. in 2018, who conducted a systematic scoping review of KT TMFs in the field of cancer prevention and management, as well as other chronic diseases [ 44 ]. Their search strategy was preferred over others for two primary reasons. First, Strifler et al. investigated KT TMFs altogether, systematically and comprehensively. Second, unlike many other review studies on KT, they focused on macro-level KT and included in their Ovid/MEDLINE search query all the keywords relevant to the purpose of our study [ 44 ]. For our scoping review, we adapted their search query with the assistance of a specialist librarian. This process involved eliminating terms associated with cancer and chronic diseases, removing the time limitation on publication date, and adding German as a second search language, given the authors’ proficiency in it. We included articles published in peer-reviewed journals until November 2022, excluding opinion papers, conference abstracts and study protocols, without any restriction on publication date or place. Our search query is presented in Table 1.

Following a screening methodology similar to that employed by Votruba et al. [ 11 ], the first author conducted an initial screening of the titles and abstracts of 2918 unique citations. Full texts were selected and scrutinised if they appeared relevant to the topics of agenda-setting, policy formulation or policy adoption. Among these papers, the first author also identified those that conceptualised a KT TMF. Simultaneously, the last author independently screened a randomly selected 20% of the 2918 titles and abstracts to identify studies related to macro-level KT. All papers initially selected by the first author as conceptualising a KT TMF also underwent a thorough examination by the last author. In the papers reviewed, KT TMFs were typically presented as tables or figures. In cases where these visual representations did not contain sufficient information about ‘context’, both reviewers carefully scrutinised the main body of the study to ensure no relevant information was missed. Any unclear cases were discussed and resolved to achieve 100% inter-rater agreement between the two reviewers. This strategy resulted in the inclusion of 32 relevant studies. The flow chart outlining our review process is provided in Fig. 1.

Fig. 1 Flow chart of the review process
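The dual-screening arrangement described above can be sketched in a few lines of Python. Everything here is illustrative: the citation IDs, the include/exclude labels and the toy labelling rule are invented, and a real review would typically report a chance-corrected statistic such as Cohen's kappa alongside raw agreement. The sketch only shows the mechanics of sampling 20% of 2918 records for a second screener and measuring agreement after reconciliation.

```python
# Hypothetical sketch of the dual-screening check: the second reviewer
# re-screens a random 20% sample and agreement is computed on that overlap.
import random


def percent_agreement(labels_a, labels_b):
    """Share of items on which two screeners gave the same include/exclude decision."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)


random.seed(0)
n_citations = 2918
# Random 20% sample of citation IDs for independent double screening.
sample_ids = random.sample(range(n_citations), k=round(0.2 * n_citations))

# Invented decisions for illustration: 1 = relevant to macro-level KT, 0 = not.
reviewer_1 = {i: 1 if i % 90 == 0 else 0 for i in sample_ids}
# After discussing unclear cases, decisions were reconciled to full agreement.
reviewer_2 = dict(reviewer_1)

agreement = percent_agreement(
    [reviewer_1[i] for i in sample_ids],
    [reviewer_2[i] for i in sample_ids],
)
print(f"{len(sample_ids)} citations double-screened, agreement = {agreement:.0%}")
```

With the reconciled labels the script reports 100% agreement on the 584-citation sample, mirroring the inter-rater outcome described in the text.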

According to the results of our systematic scoping review (Table  2 ), the first KT TMF developed for health policies dates back to 2003, confirming the emergence of a trend that expanded the meaning of the term Knowledge Translation to include policymakers as end-users of evidence during approximately the same period. In their study, Jacobson et al. [ 46 ] present a framework derived from a literature review to enhance understanding of user groups by organising existing knowledge, identifying gaps and emphasising the importance of learning about new contexts. However, despite acknowledging the significance of the user group context, the paper lacks a thorough explanation of the authors’ understanding of this term. The second study in our scoping review provides some details. Recognising a shift from evidence-based medicine to evidence-based health policymaking in the KT literature, the article by Dobrow et al. from 2004 [ 30 ] emphasises the importance of considering contextual factors. They present a conceptual framework for evidence-based decision-making, highlighting the influence of context in KT. Illustrated through examples from colorectal cancer screening policy development, their conceptual framework emphasises the significance of context in the introduction, interpretation and application of evidence. Third, Lehoux et al. [ 47 ] examine the field of Health Technology Assessment (HTA) and its role in informing decision and policymaking in Canada. By developing a conceptual framework for HTA dissemination and use, they touch on the institutional environment and briefly describe contextual factors.

Notably, the first three publications in our scoping review are authored by scholars affiliated with Canada, which is hardly a coincidence, given the role of the Canadian Institutes of Health Research (CIHR), the federal funding agency for health research: the CIHR Act (Bill C-13) mandates CIHR to ensure that the translation of health knowledge permeates every aspect of its work [ 48 ]. Moreover, it was CIHR that coined the term Knowledge Translation, defining KT as ‘a dynamic and iterative process that includes the synthesis, dissemination, exchange and ethically sound application of knowledge to improve health, provide more effective health services and products, and strengthen the health care system’ [ 49 ]. This comprehensive definition has since been adapted by international organisations (IOs), including WHO. The first WHO document that utilised KT to influence health policies dates back to 2005, entitled ‘Bridging the “know-do” gap: Meeting on knowledge translation in global health’, an initiative supported by the Canadian Coalition for Global Health Research, the Canadian International Development Agency, the German Agency for Technical Cooperation and the WHO Special Programme on Research and Training in Tropical Diseases [ 1 ]. Following this official recognition by WHO, studies in our scoping review after 2005 indicate a noticeable expansion of KT, encompassing a wider geographical area than Canada.

The article by Ashford et al. from 2006 [ 50 ] discusses the challenge that health policy decisions in Kenya are disconnected from scientific evidence and presents a model for translating knowledge into policy actions through agenda-setting, coalition building and policy learning. However, the framework lacks explicit incorporation of the contextual factors influencing health policies. Bauman et al. [ 51 ] propose a six-step framework for successful dissemination of physical activity evidence, illustrated through four case studies from three countries (Canada, USA and Brazil) and a global perspective. They interpret contextual factors as barriers and facilitators to physical activity and public health innovations. Focusing on the USA, Gold [ 52 ] explains the factors, processes and actors that shape pathways between research and its use in a summary diagram, including a reference to ‘other influences in process’ for context. Green et al. [ 4 ] examine the gap between health research and its application in public health without focusing on a specific geographical area. Their study comprehensively reviews various concepts of diffusion, dissemination and implementation in public health, proposing ways to blend diffusion theory with other theories. Their ‘utilization-focused surveillance framework’ interprets context as social determinants such as structures, economics, politics and culture.

Further, the article by Dhonukshe-Rutten et al. from 2010 [ 53 ] presents a general framework that outlines the process of translating nutritional requirements into policy applications from a European perspective. The framework incorporates scientific evidence, stakeholder interests and the socio-political context. The description of this socio-political context is rather brief, encompassing political and social priorities, legal context, ethical issues and economic implications. Ir et al. [ 54 ] analyse the use of knowledge in shaping policy on health equity funds in Cambodia, with the objective of understanding how KT contributes to the development of health policies that promote equity. Yet no information on context is available in the framework that they suggest. A notable exception among these early KT TMFs up to 2010 is the conceptual framework for analysing the integration of targeted health interventions into health systems by Atun et al. [ 55 ], in which the authors provide details about the factors that influence the process of bringing evidence to health policies. Focusing on the adoption, diffusion and assimilation of health interventions, their conceptual framework provides a systematic approach for evaluating and informing policies in this field. Compared to the studies discussed above, their definition of context for this framework is comprehensive (Table 2). Overall, most of the studies containing macro-level KT TMFs published until 2010 either do not fully acknowledge contextual factors or describe them only briefly with generic terms such as cultural, political and economic (9 out of 10; 90%).

Studies published after 2010 demonstrate a notable geographical shift, with a greater emphasis on low- and middle-income countries (LMICs). Taking the adoption of the directly observed treatment, short-course (DOTS) strategy for tuberculosis control in Mexico as a case study, Bissell et al. [ 56 ] examine policy transfer to Mexico and its relevance to operational research efforts, and suggest a model for the analysis of health policy transfer. The model interprets context as the health system, including political, economic, social, cultural and technological features. Focusing on HIV/AIDS in India, Tran et al. [ 57 ] explore KT by considering various forms of evidence beyond scientific evidence, such as best practices derived from programme experience and disseminated through personal communication. Their proposed framework aims to offer an analytical tool for understanding how evidence-based influence is exerted. In their framework, no information is available on context. Next, Bertone et al. [ 58 ] report on the effectiveness of Communities of Practice (CoPs) in African countries and present a conceptual framework for analysing and assessing transnational CoPs in health policy. The framework organises the key elements of CoPs, linking available resources, knowledge management activities, policy and practice changes, and improvements in health outcomes. Context is only briefly included in this framework.

Some other studies include both European and global perspectives. The publication by Timotijevic et al. from 2013 [ 59 ] introduces an epistemological framework that examines the considerations influencing the policymaking process, with a specific focus on micronutrient requirements in Europe. They present case studies from several European countries, highlighting the relevance of the framework in understanding the policy context related to micronutrients. Context is interpreted in this framework as global trends, data, media, broader consumer beliefs, ethical considerations, and the wider social, legal, political and economic environment. Next, funded by the European Union, the study by Onwujekwe et al. [ 60 ] examines the role of different types of evidence in health policy development in Nigeria. Although they cover the factors related to policy actors in their framework for assessing the role of evidence in policy development, they provide no information on context. Moreover, Redman et al. [ 61 ] present the SPIRIT Action Framework, which aims to enhance the use of research in policymaking. Context is interpreted in this framework as policy influences, i.e. public opinion, media, economic climate, legislative/policy infrastructure, political ideology and priorities, stakeholder interests, expert advice, and resources. From a global perspective, Spicer et al. [ 62 ] explore the contextual factors that influenced the scale-up of donor-funded maternal and newborn health innovations in Ethiopia, India and Nigeria, highlighting the importance of context in assessing and adapting innovations. Their suggested contextual factors influencing government decisions to accept, adopt and finance innovations at scale are relatively comprehensive (Table 2).

In terms of publication frequency, the pinnacle of the reviewed KT studies was in 2017. Among the six studies published in 2017, four lack details about context in their KT conceptualisations and one touches on context only very briefly. Bragge et al. [ 5 ] brought together an international terminology working group to develop a simplified framework of interventions to integrate evidence into health practices, systems and policies, named the Aims, Ingredients, Mechanism, Delivery framework, albeit without providing details on contextual factors. Second, Mulvale et al. [ 63 ] present a conceptual framework that explores the impact of policy dialogues on policy development, illustrating how these dialogues can influence different stages of the policy cycle. Like the previous one, this study also lacks information on context. In a systematic review, Sarkies et al. [ 64 ] evaluate the effectiveness of research implementation strategies in promoting evidence-informed policy decisions in healthcare. The study explores the factors associated with effective strategies and their inter-relationship, yet without further information on context. Fourth, Houngbo et al. [ 65 ] focus on the development of a strategy to implement a good governance model for health technology management in the public health sector, drawing from their experience in Benin. They outline a six-phase model that includes preparatory analysis, stakeholder identification and problem analysis, shared analysis and visioning, development of policy instruments for pilot testing, policy development and validation, and policy implementation and evaluation. They provide no information about context in their model. Fifth, Mwendera et al. [ 66 ] present a framework for improving the use of malaria research in policy development in Malawi, developed from case studies exploring the policymaking process, the use of local malaria research, and the facilitators of and barriers to research utilisation. The contextual setting is considered as the Ministry of Health (MoH) with its political set-up, the leadership system within the MoH, government policies and the cultural set-up. In contrast to these five studies, Ellen et al. [ 67 ] present a relatively comprehensive framework to support evidence-informed policymaking in ageing and health. The framework includes thought-provoking questions to discover contextual factors (Table 2).

Continuing the trend, studies published after 2017 focus increasingly on LMICs. In their embedded case study, Ongolo-Zogo et al. [ 68 ] examine the influence of two Knowledge Translation Platforms (KTPs) on policy decisions to achieve the health millennium development goals in Cameroon and Uganda. The study explores how these KTPs influenced policy through interactions within policy issue networks, engagement with interest groups, and the promotion of evidence-supported ideas, ultimately shaping the overall policy climate for evidence-informed health system policymaking. Contextual factors are thereby interpreted as institutions (structures, legacies, policy networks), interests, ideas (values, research evidence) and external factors (reports, commitments). Focusing on the ‘Global South’, Plamondon et al. [ 69 ] suggest blending integrated knowledge translation with global health governance as an approach for strengthening leadership for health equity action. In terms of contextual factors, they include some information, such as adapting knowledge to the local context, considering the composition of non-traditional actors, such as civil society and the private sector, in governance bodies, and guidance for meaningful engagement between actors, particularly in shared governance models. Further, Vincenten et al. [ 70 ] propose a conceptual model to enhance understanding of the interlinking factors that influence the evidence implementation process. Their evidence implementation model for public health systems refers to ‘context setting’, albeit without providing further detail.

Similarly, the study by Motani et al. from 2019 [ 71 ] assesses the outcomes and lessons learned from the EVIDENT partnership, which focused on knowledge management for evidence-informed decision-making in nutrition and health in Africa. Although they mention ‘contextualising evidence’ in their conceptual framework, information about context is lacking. Focusing on Latin America and the Caribbean, Varallyay et al. [ 72 ] introduce a conceptual framework for evaluating embedded implementation research in various contexts. The framework outlines key stages of evidence-informed decision-making and provides guidance on assessing embeddedness and critical contextual factors. Compared to others, their conceptual framework provides a relatively comprehensive elaboration on contextual factors. In addition, among all the studies reviewed, Leonard et al. [ 73 ] present an exceptionally comprehensive analysis, in which they identify the facilitators of and barriers to the sustainable implementation of evidence-based health innovations in LMICs. Through a systematic literature review, they scrutinise 79 studies and categorise the identified barriers and facilitators into seven groups: context, innovation, relations and networks, institutions, knowledge, actors, and resources. The first group, context, contains rich information that can be seen in Table 2.

Staying with LMICs, Votruba et al. [ 74 ] present the EVITA (EVIdence To Agenda setting) conceptual framework for mental health research-policy interrelationships in LMICs, with some information about context, detailed as external influences and political context. In a follow-up study, they offer an updated framework for understanding evidence-based mental health policy agenda-setting [ 75 ]. In the revised framework, context is interpreted as external context and policy sphere, encompassing the policy agenda, windows of opportunity, political will and key individuals. Lastly, to develop a comprehensive monitoring and evaluation framework for evidence-to-policy networks, Kuchenmüller et al. [ 76 ] present the EVIPNet Europe Theory of Change and interpret contextual factors for evidence-informed policymaking as political, economic, logistic and administrative. Overall, it can be concluded that studies presenting macro-level KT TMFs from 2011 to 2022 focus mainly on LMICs (15 out of 22; close to 70%), and the majority of them were funded by international (development) organisations, the European Commission and global health donor agencies. An overwhelming majority of these studies (19 out of 22; close to 90%) either provide no information on contextual details or include them only partially, as generic terms, in their KT TMFs.

Our systematic scoping review suggests that the approach of KT, which has evolved from evidence-based medicine to evidence-informed policymaking, tends to remain closely tied to its clinical origins when developing TMFs. In other words, macro-level KT TMFs place greater emphasis on the (public) health issue at hand than on the broader decision-making context, a viewpoint shared by other scholars as well [ 30 ]. One reason could be that in the early stages of KT TMFs, the emphasis lay primarily on implementing evidence-based practices within clinical settings. At that time, the spotlight was mostly on content, including aspects like clinical studies, checklists and guidelines serving as the evidence base. In those meso-level KT TMFs, a detailed description of context, i.e. the overall environment in which these practices should be implemented, might have been deemed less necessary, given that healthcare organisations that implement medical guidelines or surgical safety checklists, such as hospitals, show similar characteristics globally.

However, as the scope of KT TMFs continues to expand to include influence on health policies, a deeper understanding of context-specific factors within different jurisdictions and of the dynamics of the policy process is becoming increasingly crucial. This is even more important for KT scholars aiming to conceptualise large-scale changes, as described in KT Tier 5, which necessitate a thorough understanding of targeted behaviours within societies. As the complexity of interventions increases with the growing number of stakeholders either affecting or being affected by them, the interventions are surrounded by a more intricate web of attitudes, incentives, relationships, rules of engagement and spheres of influence [ 7 ]. The persisting emphasis on content over context in the evolving field of KT may oversimplify the complex process of using evidence in policymaking and understanding society [ 77 ]. Some scholars argue that this common observation in public health can be attributed to the dominance of experts primarily from the medical sciences [ 78 , 79 , 80 ]. Our study confirms the potential limitation of not incorporating insights from political science and public policy studies, which can lead to what is often termed a ‘naïve’ conceptualisation of evidence-to-policy schemes [ 15 , 16 , 17 ]. We therefore strongly encourage emerging macro-level KT concepts to draw on political science and public administration if KT scholars intend to communicate new ideas to policymakers effectively, with the aim of prompting their action or response. We summarise our findings in three points.

Firstly, KT scholars may want to pinpoint exactly where a change should occur within the policy process. The main confusion that we observed in the KT literature arises from a lack of understanding of how public policies are made. Notably, the term ‘evidence-informed policymaking’ can refer to any stage of the policy cycle, spanning from agenda-setting to policy formulation, adoption, implementation and evaluation. Understanding these stages will allow researchers to refine their language when advocating for policy changes across various jurisdictions; for instance, the word ‘implementation’ is often used inappropriately in the KT literature. At the macro level, public policies take the form of legislation, law-making and regulation, thereby shaping the practices or policies to be implemented at the meso and micro levels [ 81 ]. In other words, the process of using specific knowledge to influence health policies, however evidence-based it might be, falls mostly under the responsibility and jurisdiction of sovereign states. For this reason, macro-level KT TMFs should reflect the importance of understanding the policy context and the complexities associated with policymaking, rather than suggesting flawed or unrealistic top-down ‘implementation’ strategies in countries by foregrounding the content, or the (public) health issue at hand.

Our second observation from this systematic scoping review points towards a selective perception among researchers when reporting on policy interventions. Research on KT exists not solely because of the perceived gap between scientific evidence and policy but also because of the pressures organisations and researchers face in being accountable to their funding sources, ensuring the continuity of financial support for their activities and claiming output legitimacy to change public policies [ 8 ]. This situation indirectly compels researchers working to influence health policies in the field to provide ‘evidence-based’ feedback to donors on the success of their projects [ 82 ]. In doing so, researchers may overemphasise the content of the policy intervention in their reporting to secure further funding, while underemphasising the contextual factors. These factors, often perceived as a given, might actually be the primary facilitators of their success. Such a lack of transparency regarding the definition of context is particularly visible in the field of global health, where LMICs often rely on external donors. It is important to note that this statement is not intended as a negative critique of their missions or an evaluation of health outcomes in countries following such missions. Rather, it seeks to explain why researchers, particularly those reliant on donors in LMICs, prioritise promoting the concept of KT from a technical standpoint, giving less attention to contextual factors in their reasoning.

Lastly, and connected to the previous point, it is our observation that the majority of macro-level KT TMFs fail to give adequate consideration to both power dynamics in countries (internal vs. external influences) and the actual role that government plays in public policies. Notably, although good policymaking entails an honest effort to use the best available evidence, the belief that this will completely negate the role of power and politics in decision-making is a technocratic illusion [ 83 ]. Among the studies reviewed, the framework put forth by Leonard et al. [ 73 ] offers the most comprehensive understanding of context and includes a broad range of factors (such as political, social and economic) also identified in other reviewed studies. Moreover, the framework, developed through an extensive systematic review, offers a more in-depth exploration of these contextual factors than merely listing them as a set of keywords. Indeed, within the domains of political science and public policy, such factors shaping health policies have received considerable scholarly attention for decades. To define what context entails, Walt refers in her book ‘Health Policy: An Introduction to Process and Power’ [ 84 ] to the work of Leichter from 1979 [ 85 ], who provides a scheme for analysing public policy. This includes i) situational factors, which are transient, impermanent, or idiosyncratic; ii) structural factors, which are relatively unchanging elements of the society and polity; iii) cultural factors, which are value commitments of groups; and iv) environmental factors, which are events, structures and values that exist outside the boundaries of a political system and influence decisions within it. His detailed sub-categories for context can be found in Table  3 . This flexible public policy framework may offer KT researchers a valuable approach to understanding contextual factors and provide some guidance in defining the keywords to focus on. Scholars can adapt this framework to suit a wide range of KT topics, creating more context-sensitive and comprehensive KT TMFs.

Admittedly, our study has certain limitations. First, despite choosing one of the most comprehensive bibliographic databases for our systematic scoping review, covering materials from biomedicine, the allied health fields, the biological and physical sciences, the humanities, and information science as they relate to medicine and healthcare, we may have missed relevant articles indexed in other databases. Relying exclusively on Ovid/MEDLINE due to resource constraints may thus have narrowed the scope and diversity of the scholarly literature examined in this study. Second, our review was limited to peer-reviewed publications in English and German. Future studies could extend our findings by examining the extent to which contextual factors are detailed in macro-level KT TMFs published in the grey literature and in other languages. Given the abundance of KT reports, working papers and policy briefs published by IOs and development agencies, such an endeavour could enrich our findings and either support or challenge our conclusions. Nonetheless, to our knowledge, this study represents the first systematic review and critical appraisal of emerging knowledge-to-policy concepts, also known as macro-level KT TMFs. It successfully blends insights from both the biomedical and public policy disciplines, and could serve as a roadmap for future research.

The translation of knowledge to policymakers involves more than the technical skills commonly associated with the (bio-)medical sciences, such as creating evidence-based guidelines or clinical checklists. Instead, evidence-informed policymaking reflects an ambition to engage in the political dimensions of states. Therefore, the evolving KT concepts addressing health policies should be seen as part of a political decision-making process, rather than a purely analytical one, as is the case with evidence-based medicine. To better understand the influence of power dynamics and governance structures in policymaking, we suggest that future macro-level KT TMFs draw on insights from political science and public administration. Collaborative, interdisciplinary research initiatives could be undertaken to bridge the gap between these fields. Technocratic KT TMFs that overlook contextual factors risk propagating misconceptions in academic circles about how health policies are made, as they become increasingly influential over time. Research, the systematic pursuit of knowledge, is neither inherently good nor bad; it can be sought after, used or misused, like any other tool in policymaking. What is needed in the KT discourse is not another generic call for ‘research-to-action’ but rather an understanding of the dividing line between research-to-clinical-action and research-to-political-action.

Availability of data and materials

Available upon reasonable request.

WHO. Bridging the ‘Know-Do’ gap: meeting on knowledge translation in global health, 10–12 October 2005, Geneva, Switzerland [Internet]. Geneva: World Health Organization; 2005. https://www.measureevaluation.org/resources/training/capacity-building-resources/high-impact-research-training-curricula/bridging-the-know-do-gap.pdf

Rogers EM. Diffusion of innovations. 3rd ed. New York: Free Press; 1983.

Greenhalgh T, Wieringa S. Is it time to drop the ‘knowledge translation’ metaphor? A critical literature review. J R Soc Med. 2011;104(12):501–9.

Green LW, Ottoson JM, García C, Hiatt RA. Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annu Rev Public Health. 2009;30(1):151–74.

Bragge P, Grimshaw JM, Lokker C, Colquhoun H, Albrecht L, Baron J, et al. AIMD—a validated, simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. BMC Med Res Methodol. 2017;17(1):38.

Zarbin M. What constitutes translational research? Implications for the scope of translational vision science and technology. Transl Vis Sci Technol. 2020;9(8).

Hassmiller Lich K, Frerichs L, Fishbein D, Bobashev G, Pentz MA. Translating research into prevention of high-risk behaviors in the presence of complex systems: definitions and systems frameworks. Transl Behav Med. 2016;6(1):17–31.

Tetroe JM, Graham ID, Foy R, Robinson N, Eccles MP, Wensing M, et al. Health research funding agencies’ support and promotion of knowledge translation: an international study. Milbank Q. 2008;86(1):125–55.

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1(1):1.

Rychetnik L, Bauman A, Laws R, King L, Rissel C, Nutbeam D, et al. Translating research for evidence-based public health: key concepts and future directions. J Epidemiol Community Health. 2012;66(12):1187–92.

Votruba N, Ziemann A, Grant J, Thornicroft G. A systematic review of frameworks for the interrelationships of mental health evidence and policy in low- and middle-income countries. Health Res Policy Syst. 2018;16(1):85.

Delnord M, Tille F, Abboud LA, Ivankovic D, Van Oyen H. How can we monitor the impact of national health information systems? Results from a scoping review. Eur J Public Health. 2020;30(4):648–59.

Malterud K, Bjelland AK, Elvbakken KT. Evidence-based medicine—an appropriate tool for evidence-based health policy? A case study from Norway. Health Res Policy Syst. 2016;14(1):15.

Borst RAJ, Kok MO, O’Shea AJ, Pokhrel S, Jones TH, Boaz A. Envisioning and shaping translation of knowledge into action: a comparative case-study of stakeholder engagement in the development of a European tobacco control tool. Health Policy. 2019;123(10):917–23.

Liverani M, Hawkins B, Parkhurst JO. Political and institutional influences on the use of evidence in public health policy: a systematic review. PLoS ONE. 2013;8(10): e77404.

Cairney P. The politics of evidence-based policy making, 1st ed. London: Palgrave Macmillan UK: Imprint: Palgrave Pivot, Palgrave Macmillan; 2016.

Parkhurst J. The Politics of Evidence: From evidence-based policy to the good governance of evidence [Internet]. Routledge; 2016. https://www.taylorfrancis.com/books/9781315675008

Cairney P, Oliver K. Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Res Policy Syst. 2017;15(1):35.

Verboom B, Baumann A. Mapping the qualitative evidence base on the use of research evidence in health policy-making: a systematic review. Int J Health Policy Manag. 2020;16.

Ward V, House A, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy. 2009;14(3):156–64.

Swinburn B, Gill T, Kumanyika S. Obesity prevention: a proposed framework for translating evidence into action. Obes Rev. 2005;6(1):23–33.

Damschroder LJ. Clarity out of chaos: Use of theory in implementation research. Psychiatry Res. 2020;283: 112461.

Birken SA, Rohweder CL, Powell BJ, Shea CM, Scott J, Leeman J, et al. T-CaST: an implementation theory comparison and selection tool. Implement Sci. 2018;13(1):143.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.

Rapport F, Clay-Williams R, Churruca K, Shih P, Hogden A, Braithwaite J. The struggle of translating science into action: foundational concepts of implementation science. J Eval Clin Pract. 2018;24(1):117–26.

Hagenaars LL, Jeurissen PPT, Klazinga NS. The taxation of unhealthy energy-dense foods (EDFs) and sugar-sweetened beverages (SSBs): An overview of patterns observed in the policy content and policy context of 13 case studies. Health Policy. 2017;121(8):887–94.

Sheikh K, Gilson L, Agyepong IA, Hanson K, Ssengooba F, Bennett S. Building the field of health policy and systems research: framing the questions. PLOS Med. 2011;8(8): e1001073.

Tran NT, Hyder AA, Kulanthayan S, Singh S, Umar RSR. Engaging policy makers in road safety research in Malaysia: a theoretical and contextual analysis. Health Policy. 2009;90(1):58–65.

Walt G, Gilson L. Reforming the health sector in developing countries: the central role of policy analysis. Health Policy Plan. 1994;9(4):353–70.

Dobrow MJ, Goel V, Upshur REG. Evidence-based health policy: context and utilisation. Soc Sci Med. 2004;58(1):207–17.

Barnfield A, Savolainen N, Lounamaa A. Health Promotion Interventions: Lessons from the Transfer of Good Practices in CHRODIS-PLUS. Int J Environ Res Public Health. 2020;17(4).

van de Goor I, Hämäläinen RM, Syed A, Juel Lau C, Sandu P, Spitters H, et al. Determinants of evidence use in public health policy making: results from a study across six EU countries. Health Policy Amst Neth. 2017;121(3):273–81.

Ornstein JT, Hammond RA, Padek M, Mazzucca S, Brownson RC. Rugged landscapes: complexity and implementation science. Implement Sci. 2020;15(1):85.

Seward N, Hanlon C, Hinrichs-Kraples S, Lund C, Murdoch J, Taylor Salisbury T, et al. A guide to systems-level, participatory, theory-informed implementation research in global health. BMJ Glob Health. 2021;6(12): e005365.

Pfadenhauer LM, Gerhardus A, Mozygemba K, Lysdahl KB, Booth A, Hofmann B, et al. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implement Sci. 2017;12(1):21.

Rogers L, De Brún A, McAuliffe E. Defining and assessing context in healthcare implementation studies: a systematic review. BMC Health Serv Res. 2020;20(1):591.

Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189.

Arksey H, O’Malley L, Baldwin S, Harris J, Mason A, Golder S. Literature review report: services to support carers of people with mental health problems. 2002;182.

Tabak RG, Khoong EC, Chambers D, Brownson RC. Bridging research and practice. Am J Prev Med. 2012;43(3):337–50.

O’Donovan MA, McCallion P, McCarron M, Lynch L, Mannan H, Byrne E. A narrative synthesis scoping review of life course domains within health service utilisation frameworks. HRB Open Res. 2019.

Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, et al. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

Bate P, Robert G, Fulop N, Øvretviet J, Dixon-Woods M. Perspectives on context: a collection of essays considering the role of context in successful quality improvement [Internet]. 2014. https://www.health.org.uk/sites/default/files/PerspectivesOnContext_fullversion.pdf

Ziemann A, Brown L, Sadler E, Ocloo J, Boaz A, Sandall J. Influence of external contextual factors on the implementation of health and social care interventions into practice within or across countries—a protocol for a ‘best fit’ framework synthesis. Syst Rev. 2019. https://doi.org/10.1186/s13643-019-1180-8 .

Strifler L, Cardoso R, McGowan J, Cogo E, Nincic V, Khan PA, et al. Scoping review identifies significant number of knowledge translation theories, models, and frameworks with limited use. J Clin Epidemiol. 2018;100:92–102.

Esmail R, Hanson HM, Holroyd-Leduc J, Brown S, Strifler L, Straus SE, et al. A scoping review of full-spectrum knowledge translation theories, models, and frameworks. Implement Sci. 2020;15(1):11.

Jacobson N, Butterill D, Goering P. Development of a framework for knowledge translation: understanding user context. J Health Serv Res Policy. 2003;8(2):94–9.

Lehoux P, Denis JL, Tailliez S, Hivon M. Dissemination of health technology assessments: identifying the visions guiding an evolving policy innovation in Canada. J Health Polit Policy Law. 2005;30(4):603–42.

Parliament of Canada. Government Bill (House of Commons) C-13 (36–2) - Royal Assent - Canadian Institutes of Health Research Act [Internet]. https://parl.ca/DocumentViewer/en/36-2/bill/C-13/royal-assent/page-31 . Accessed 1 Apr 2023.

Straus SE, Tetroe J, Graham I. Defining knowledge translation. CMAJ Can Med Assoc J. 2009;181(3–4):165–8.

Ashford L. Creating windows of opportunity for policy change: incorporating evidence into decentralized planning in Kenya. Bull World Health Organ. 2006;84(8):669–72.

Bauman AE, Nelson DE, Pratt M, Matsudo V, Schoeppe S. Dissemination of physical activity evidence, programs, policies, and surveillance in the international public health arena. Am J Prev Med. 2006;31(4):57–65.

Gold M. Pathways to the use of health services research in policy. Health Serv Res. 2009;44(4):1111–36.

Dhonukshe-Rutten RAM, Timotijevic L, Cavelaars AEJM, Raats MM, de Wit LS, Doets EL, et al. European micronutrient recommendations aligned: a general framework developed by EURRECA. Eur J Clin Nutr. 2010;64(2):S2-10.

Ir P, Bigdeli M, Meessen B, Van Damme W. Translating knowledge into policy and action to promote health equity: The Health Equity Fund policy process in Cambodia 2000–2008. Health Policy. 2010;96(3):200–9.

Atun R, de Jongh T, Secci F, Ohiri K, Adeyi O. Integration of targeted health interventions into health systems: a conceptual framework for analysis. Health Policy Plan. 2010;25(2):104–11.

Bissell K, Lee K, Freeman R. Analysing policy transfer: perspectives for operational research. Int J Tuberc Lung Dis. 2011;15(9).

Tran NT, Bennett SC, Bishnu R, Singh S. Analyzing the sources and nature of influence: how the Avahan program used evidence to influence HIV/AIDS prevention policy in India. Implement Sci. 2013;8(1):44.

Bertone MP, Meessen B, Clarysse G, Hercot D, Kelley A, Kafando Y, et al. Assessing communities of practice in health policy: a conceptual framework as a first step towards empirical research. Health Res Policy Syst. 2013;11(1):39.

Timotijevic L, Brown KA, Lähteenmäki L, de Wit L, Sonne AM, Ruprich J, et al. EURRECA—a framework for considering evidence in public health nutrition policy development. Crit Rev Food Sci Nutr. 2013;53(10):1124–34.

Onwujekwe O, Uguru N, Russo G, Etiaba E, Mbachu C, Mirzoev T, et al. Role and use of evidence in policymaking: an analysis of case studies from the health sector in Nigeria. Health Res Policy Syst. 2015;13(1):46.

Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT action framework: a structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015;136–137:147–55.

Spicer N, Berhanu D, Bhattacharya D, Tilley-Gyado RD, Gautham M, Schellenberg J, et al. ‘The stars seem aligned’: a qualitative study to understand the effects of context on scale-up of maternal and newborn health innovations in Ethiopia, India and Nigeria. Glob Health. 2016;12(1):75.

Mulvale G, McRae SA, Milicic S. Teasing apart “the tangled web” of influence of policy dialogues: lessons from a case study of dialogues about healthcare reform options for Canada. Implement Sci IS. 2017;12.

Sarkies MN, Bowles KA, Skinner EH, Haas R, Lane H, Haines TP. The effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare: a systematic review. Implement Sci. 2017;12(1):132.

Houngbo PTh, Coleman HLS, Zweekhorst M, De Cock Buning TJ, Medenou D, Bunders JFG. A Model for Good Governance of Healthcare Technology Management in the Public Sector: Learning from Evidence-Informed Policy Development and Implementation in Benin. PLoS ONE. 2017;12(1):e0168842.

Mwendera C, de Jager C, Longwe H, Hongoro C, Phiri K, Mutero CM. Development of a framework to improve the utilisation of malaria research for policy development in Malawi. Health Res Policy Syst. 2017;15(1):97.

Ellen ME, Panisset U, de Araujo Carvalho I, Goodwin J, Beard J. A knowledge translation framework on ageing and health. Health Policy. 2017;121(3):282–91.

Ongolo-Zogo P, Lavis JN, Tomson G, Sewankambo NK. Assessing the influence of knowledge translation platforms on health system policy processes to achieve the health millennium development goals in Cameroon and Uganda: a comparative case study. Health Policy Plan. 2018;33(4):539–54.

Plamondon KM, Pemberton J. Blending integrated knowledge translation with global health governance: an approach for advancing action on a wicked problem. Health Res Policy Syst. 2019;17(1):24.

Vincenten J, MacKay JM, Schröder-Bäck P, Schloemer T, Brand H. Factors influencing implementation of evidence-based interventions in public health systems—a model. Cent Eur J Public Health. 2019;27(3):198–203.

Motani P, Van de Walle A, Aryeetey R, Verstraeten R. Lessons learned from Evidence-Informed Decision-Making in Nutrition & Health (EVIDENT) in Africa: a project evaluation. Health Res Policy Syst. 2019;17(1):12.

Varallyay NI, Langlois EV, Tran N, Elias V, Reveiz L. Health system decision-makers at the helm of implementation research: development of a framework to evaluate the processes and effectiveness of embedded approaches. Health Res Policy Syst. 2020;18(1):64.

Leonard E, de Kock I, Bam W. Barriers and facilitators to implementing evidence-based health innovations in low- and middle-income countries: a systematic literature review. Eval Program Plann. 2020;82: 101832.

Votruba N, Grant J, Thornicroft G. The EVITA framework for evidence-based mental health policy agenda setting in low- and middle-income countries. Health Policy Plan. 2020;35(4):424–39.

Votruba N, Grant J, Thornicroft G. EVITA 2.0, an updated framework for understanding evidence-based mental health policy agenda-setting: tested and informed by key informant interviews in a multilevel comparative case study. Health Res Policy Syst. 2021;19(1):35.

Kuchenmüller T, Chapman E, Takahashi R, Lester L, Reinap M, Ellen M, et al. A comprehensive monitoring and evaluation framework for evidence to policy networks. Eval Program Plann. 2022;91: 102053.

Ettelt S. The politics of evidence use in health policy making in Germany—the case of regulating hospital minimum volumes. J Health Polit Policy Law. 2017;42(3):513–38.

Greer SL, Bekker M, de Leeuw E, Wismar M, Helderman JK, Ribeiro S, et al. Policy, politics and public health. Eur J Public Health. 2017;27(suppl 4):40–3.

Fafard P, Cassola A. Public health and political science: challenges and opportunities for a productive partnership. Public Health. 2020;186:107–9.

Löblová O. Epistemic communities and experts in health policy-making. Eur J Public Health. 2018;28(suppl 3):7–10.

Maddalena V. Evidence-Based Decision-Making 8: Health Policy, a Primer for Researchers. In: Parfrey PS, Barrett BJ, editors. Clinical Epidemiology: Practice and Methods. New York, NY: Springer; 2015. (Methods in Molecular Biology).

Louis M, Maertens L. Why international organizations hate politics - Depoliticizing the world [Internet]. London and New York: Routledge; 2021. (Global Institutions). https://library.oapen.org/bitstream/handle/20.500.12657/47578/1/9780429883279.pdf

Hassel A, Wegrich K. How to do public policy. 1st ed. Oxford: Oxford University Press; 2022.

Walt G. Health policy: an introduction to process and power. 7th ed. Johannesburg: Witwatersrand University Press; 2004.

Leichter HM. A comparative approach to policy analysis: health care policy in four nations. Cambridge: Cambridge University Press; 1979.

Acknowledgements

Not applicable.

Author information

Authors and affiliations

Department of International Health, Care and Public Health Research Institute – CAPHRI, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands

Tugce Schmitt, Katarzyna Czabanowska & Peter Schröder-Bäck

Contributions

TS: Conceptualization, Methodology, Formal analysis, Investigation, Writing—Original Draft; KC: Writing—Review & Editing; PSB: Validation, Formal analysis, Writing—Review & Editing, Supervision.

Corresponding author

Correspondence to Tugce Schmitt .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Schmitt, T., Czabanowska, K. & Schröder-Bäck, P. What is context in knowledge translation? Results of a systematic scoping review. Health Res Policy Sys 22, 52 (2024). https://doi.org/10.1186/s12961-024-01143-5

Received: 26 June 2023

Accepted: 11 April 2024

Published: 29 April 2024

DOI: https://doi.org/10.1186/s12961-024-01143-5

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Knowledge Translation
  • Evidence-informed policymaking
  • Health systems

Health Research Policy and Systems

ISSN: 1478-4505

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

what is ethical review in research

COMMENTS

  1. What Is Ethics in Research and Why Is It Important?

    Education in research ethics is can help people get a better understanding of ethical standards, policies, and issues and improve ethical judgment and decision making. Many of the deviations that occur in research may occur because researchers simply do not know or have never thought seriously about some of the ethical norms of research.

  2. Ethical Considerations in Research

    Getting ethical approval for your study. Before you start any study involving data collection with people, you'll submit your research proposal to an institutional review board (IRB).. An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution's code of conduct. They check that your research materials and procedures ...

  3. Ethics review of big data research: What should stay and what should be

    Ethics review is the process of assessing the ethics of research involving humans. The Ethics Review Committee (ERC) is the key oversight mechanism designated to ensure ethics review. Whether or not this governance mechanism is still fit for purpose in the data-driven research context remains a debated issue among research ethics experts. In this article, we seek to address this issue in a ...

Application form

Obtaining ethical approval is divided into national and local stages. The first task is to complete an application form. This has recently changed from the National Research Ethics Service form to a new Integrated Research Application System.3 This is much more than just a form; it is an integrated dataset designed to fulfil the requirements of a number of review bodies.

  5. Ethical review and qualitative research competence: Guidance for

    The role of ethical review is to ensure that ethical standards in research are met. In Australia this process is governed by the National Statement on the Ethical Conduct of Research Involving Humans (National Health and Medical Research Council, 2007 (revised 2015)).The National Statement (as it is called) provides both guidelines on ethical research conduct for those designing and conducting ...

  6. Improving the process of research ethics review

    The research ethics review process may appear to some like the proverbial black box. An application is submitted and considered and a decision is made: SUBMIT > REVIEW > DECISION. In reality, the first step to understanding and improving the process is recognizing that research ethics review involves more than just the REB. Contributing to the ...

  7. Guiding Principles for Ethical Research

    NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research: Social and clinical value. Scientific validity. Fair subject selection. Favorable risk-benefit ratio. Independent review. Informed consent. Respect for potential and enrolled subjects.

  8. Standards and Operational Guidance for Ethics Review of Health-Related

    The new WHO publication "Standards and operational guidance for ethics review of health-related research with human participants", is a compilation of 10 standards that are applicable to the ethics review of health related research with human participants. This document is intended to provide guidance on the research ethics review process, not to take a substantive position on how ...

  9. Improving the process of research ethics review

    Research Ethics Boards, or Institutional Review Boards, protect the safety and welfare of human research participants. These bodies are responsible for providing an independent evaluation of proposed research studies, ultimately ensuring that the research does not proceed unless standards and regulations are met. Concurrent with the growing volume of human participant research, the workload ...

  10. Standards and guidance for members of the research ethics committees

    The primary task of an REC is the ethical review of research protocols and their supporting documents. Approval or disapproval is based on the ethical acceptability of the research, including its social value and scientific validity, an acceptable ratio of potential benefits to risks of harm, the minimization of risks, adequate informed consent procedures (including cultural appropriateness ...

  11. Understanding Scientific and Research Ethics

    Reputable journals screen for ethics at submission—and inability to pass ethics checks is one of the most common reasons for rejection. Unfortunately, once a study has begun, it's often too late to secure the requisite ethical reviews and clearances. Learn how to prepare for publication success by ensuring your study meets all ethical requirements before work begins.

  12. Advancing ethics review practices in AI research

    The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated ...

  13. PDF Research Ethics Review

    research regarding research ethics review. In so doing, this chapter also offers a critique of existing work and suggests some future directions for both the regulatory design of research ethics review and also researching the field itself. 18.2 research ethics review as a regulatory process

  14. Research Ethics Review (Chapter 18)

    In some jurisdictions, this review, known as research ethics review, is mandated by law. In these cases, the law may be general 1 or it may apply to specific kinds of health research, such as clinical trials of an investigational medicinal product 2 or health research involving adults lacking capacity. 3 In other jurisdictions, and depending on ...

  15. Research Ethics: Sage Journals

    Research Ethics is aimed at all readers and authors interested in ethical issues in the conduct of research, the regulation of research, the procedures and process of ethical review as well as broader ethical issues related to research such as scientific integrity and the end uses of research. This journal is a member of the Committee on ...

  16. Ethical Dilemmas in Qualitative Research: A Critical Literature Review

    To summarize research ethics review experiences in a study about the research ethics review process: Online feedback form of researcher's data: It is necessary to increase transparency of the review process, consistent application of federal guidelines, and a more collaborative review approach to improve the trust of qualitative researchers ...

  17. Why you need ethical approval

    Ethical review provides protection for participants, and also helps to protect the researcher. By obtaining ethical approval the researcher is demonstrating that they have adhered to the accepted ethical standards of a genuine research study. Participants have the right to know who has access to their data and what is being done with it.

  18. Ethical Considerations of Conducting Systematic Reviews in ...

    Ethical considerations of conducting systematic reviews in educational research are not typically discussed explicitly. As an illustration, 'ethics' is not listed as a term in the index of the second edition of 'An Introduction to Systematic Reviews' (Gough et al. 2017).This chapter draws from my earlier in-depth discussion of this topic in the Qualitative Research Journal (Suri 2008 ...

  19. Back to Basics: Scientific, Conflict of Interest, and Ethical Review of

    Research Ethics Review Boards should ensure that the focus of the informed consent process and the consent form is on informing and protecting participants, NOT on protecting institutions. The ethical foundations of research protections in the United States can be found in the three tenets identified in the Belmont Report: ...

  20. Ensuring ethical standards and procedures for research with human beings

    Research ethics govern the standards of conduct for scientific researchers. It is important to adhere to ethical principles in order to protect the dignity, rights and welfare of research participants. As such, all research involving human beings should be reviewed by an ethics committee to ensure that the appropriate ethical standards are ...

  21. Research ethics and artificial intelligence for global health

    The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town ...

  22. (PDF) Ethical Considerations of Conducting Systematic Reviews in

    Whilst this review is a synthesis of data and therefore exempt from requiring a formal ethical approval as per the human research ethic's governing policy for our setting (University of the ...

  23. Ethics in educational research: Review boards, ethical issues and

    The paper concludes that the ethical conduct of educational research is more complex than adhering to a set of strict 'rules' but is an issue of resolving ethical dilemmas, which is beyond the scope of a single event review process (see, for example, the Economic and Social Research Council's Research Ethics Framework ). Ethics in ...

  24. Ethics in scientific research: a lens into its importance, history, and

    Institutional Review Boards play critical roles in upholding ethical standards in research. An IRB is a committee established by an institution conducting research to review, approve, and monitor research involving human subjects 7,8. Their primary role is to ensure that the rights and welfare of participants are protected.

  25. What is context in knowledge translation? Results of a systematic

    According to the results of our systematic scoping review (Table 2), the first KT TMF developed for health policies dates back to 2003, confirming the emergence of a trend that expanded the meaning of the term Knowledge Translation to include policymakers as end-users of evidence during approximately the same period.In their study, Jacobson et al. [] present a framework derived from a ...

  26. When Artists Go to Work: On the Ethics of Engaging the Arts in Public

    Patrick T. Smith and Jill K. Sonke, " When Artists Go to Work: On the Ethics of Engaging the Arts in Public Health," in "Time to Rebuild: Essays on Trust in Health Care and Science," ed. Lauren A. Taylor, Gregory E. Kaebnick, and Mildred Z. Solomon, special report, Hastings Center Report 53, no. 5

  27. ePROS

    The purpose of ePROS is to contribute to the VA Research Enterprise mission of improving Veterans lives through research by: . Ensuring the protection of the public, research staff, human participants, and animals in VA conducted research through policy, education, risk assessment, and mitigation.

  28. Ethics review of big data research: What should stay and what should be

    Ethics review is the process of assessing the ethics of research involving humans. The Ethics Review Committee (ERC) is the key oversight mechanism designated to ensure ethics review. Whether or not this governance mechanism is still fit for purpose in the data-driven research context remains a debated issue among research ethics experts.

  29. GPSolo eReport

    GPSolo eReport is a member benefit of the ABA Solo, Small Firm and General Practice Division. It is a monthly electronic newsletter that includes valuable practice tips, news, technology trends, and featured articles on substantive practice areas.