
Conducting a Literature Review: Why Do a Literature Review? (UC Berkeley Library)


Besides the obvious reason for students -- because it is assigned! -- a literature review helps you explore the research that has come before you, to see how your research question has (or has not) already been addressed.

You identify:

  • core research in the field
  • experts in the subject area
  • methodology you may want to use (or avoid)
  • gaps in knowledge -- or where your research would fit in

It Also Helps You:

  • Publish and share your findings
  • Justify requests for grants and other funding
  • Identify best practices to inform practice
  • Set wider context for a program evaluation
  • Compile information to support community organizing

NCSU Libraries offers a great brief overview of the literature review process.


  • URL: https://guides.lib.berkeley.edu/litreview
Literature Review: The What, Why and How-to Guide — Introduction (UConn Library)


What are Literature Reviews?

So, what is a literature review? "A literature review is an account of what has been published on a topic by accredited scholars and researchers. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. As a piece of writing, the literature review must be defined by a guiding concept (e.g., your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries." Taylor, D. The literature review: A few tips on conducting it. University of Toronto Health Sciences Writing Centre.

Goals of Literature Reviews

What are the goals of creating a Literature Review? A literature review could be written to accomplish different aims:

  • To develop a theory or evaluate an existing theory
  • To summarize the historical or existing state of a research topic
  • To identify a problem in a field of research

Baumeister, R. F., & Leary, M. R. (1997). Writing narrative literature reviews. Review of General Psychology, 1(3), 311–320.

What kinds of work require a Literature Review?

  • A research paper assigned in a course
  • A thesis or dissertation
  • A grant proposal
  • An article intended for publication in a journal

All these instances require you to collect what has been written about your research topic so that you can demonstrate how your own research sheds new light on the topic.

Types of Literature Reviews

What kinds of literature reviews are written?

Narrative review: The purpose of this type of review is to describe the current state of the research on a specific topic and to offer a critical analysis of the literature reviewed. Studies are grouped by research/theoretical categories, and themes and trends, strengths and weaknesses, and gaps are identified. The review ends with a conclusion section that summarizes the findings regarding the state of the research on the topic, identifies the remaining gaps, and, if applicable, explains how the author's own research will address those gaps and expand knowledge on the topic reviewed.

  • Example: Predictors and Outcomes of U.S. Quality Maternity Leave: A Review and Conceptual Framework. doi:10.1177/08948453211037398

Systematic review : "The authors of a systematic review use a specific procedure to search the research literature, select the studies to include in their review, and critically evaluate the studies they find." (p. 139). Nelson, L. K. (2013). Research in Communication Sciences and Disorders . Plural Publishing.

  • Example: The effect of leave policies on increasing fertility: a systematic review. doi:10.1057/s41599-022-01270-w

Meta-analysis : "Meta-analysis is a method of reviewing research findings in a quantitative fashion by transforming the data from individual studies into what is called an effect size and then pooling and analyzing this information. The basic goal in meta-analysis is to explain why different outcomes have occurred in different studies." (p. 197). Roberts, M. C., & Ilardi, S. S. (2003). Handbook of Research Methods in Clinical Psychology . Blackwell Publishing.

  • Example: Employment Instability and Fertility in Europe: A Meta-Analysis. doi:10.1215/00703370-9164737
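To make the idea of converting each study's result into an effect size and then pooling those effect sizes concrete, here is a minimal, purely illustrative Python sketch of fixed-effect, inverse-variance pooling. It is not part of the UConn guide, the study values are invented, and real meta-analyses are normally run with dedicated tools (often with random-effects models) rather than hand-rolled code.

```python
import math

# Hypothetical per-study results: (standardized effect size, standard error).
studies = [
    (0.30, 0.10),
    (0.45, 0.15),
    (0.10, 0.20),
]

# Fixed-effect, inverse-variance pooling: each study is weighted by 1 / SE^2,
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled:.3f}")
print(f"Approximate 95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```

The point of the sketch is simply that the synthesis operates on quantitative effect sizes and their precision, which is what distinguishes a meta-analysis from a narrative summary of the same studies.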

Meta-synthesis : "Qualitative meta-synthesis is a type of qualitative study that uses as data the findings from other qualitative studies linked by the same or related topic." (p.312). Zimmer, L. (2006). Qualitative meta-synthesis: A question of dialoguing with texts .  Journal of Advanced Nursing , 53 (3), 311-318.

  • Example: Women’s perspectives on career successes and barriers: A qualitative meta-synthesis. doi:10.1177/05390184221113735

Literature Reviews in the Health Sciences

  • UConn Health subject guide on systematic reviews: an explanation of the different review types used in health sciences literature, as well as tools to help you find the right review type.
  • URL: https://guides.lib.uconn.edu/literaturereview


Literature Reviews (University of Texas Libraries)


What is a Literature Review?

A literature or narrative review is a comprehensive review and analysis of the published literature on a specific topic or research question. The literature reviewed may include books, scholarly articles, conference proceedings, association papers, and dissertations. The review draws together the most pertinent studies, points to important past and current research and practices, provides background and context, and shows how your research will contribute to the field.

A literature review should: 

  • Provide a comprehensive and updated review of the literature;
  • Explain why this review has taken place;
  • Articulate a position or hypothesis;
  • Acknowledge and account for conflicting and corroborating points of view.

From Sage Research Methods

Purpose of a Literature Review

A literature review can be written as an introduction to a study to:

  • Demonstrate how a study fills a gap in research
  • Compare a study with other research that's been done

Or it can be a separate work (a research article on its own) which:

  • Organizes or describes a topic
  • Describes variables within a particular issue/problem

Limitations of a Literature Review

Some of the limitations of a literature review are:

  • It's a snapshot in time. Unlike other reviews, this one has a beginning, a middle and an end. There may be future developments that could make your work less relevant.
  • It may be too focused. Some niche studies may miss the bigger picture.
  • It can be difficult to be comprehensive. There is no way to make sure all the literature on a topic was considered.
  • It is easy to be biased if you stick to top tier journals. There may be other places where people are publishing exemplary research. Look to open access publications and conferences to reflect a more inclusive collection. Also, make sure to include opposing views (and not just supporting evidence).

Source: Grant, Maria J., and Andrew Booth. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal, vol. 26, no. 2, June 2009, pp. 91–108. Wiley Online Library, doi:10.1111/j.1471-1842.2009.00848.x.

Librarian Support

Meryl Brodsky: Communication and Information Studies

Hannah Chapman Tripp: Biology, Neuroscience

Carolyn Cunningham: Human Development & Family Sciences, Psychology, Sociology

Larayne Dallas: Engineering

Janelle Hedstrom: Special Education, Curriculum & Instruction, Ed Leadership & Policy

Susan Macicak: Linguistics

Imelda Vetter: Dell Medical School

For help in other subject areas, please see the guide to library specialists by subject.

Periodically, UT Libraries runs a workshop covering the basics and library support for literature reviews. While we try to offer these once per academic year, we find providing the recording to be helpful to community members who have missed the session. Following is the most recent recording of the workshop, Conducting a Literature Review. To view the recording, a UT login is required.

  • October 26, 2022 recording
  • URL: https://guides.lib.utexas.edu/literaturereviews




Neal Haddaway

October 19th, 2020

8 common problems with literature reviews and how to fix them


Literature reviews are an integral part of the process and communication of scientific research. Whilst systematic reviews have become regarded as the highest standard of evidence synthesis, many literature reviews fall short of these standards and may end up presenting biased or incorrect conclusions. In this post, Neal Haddaway highlights 8 common problems with literature review methods, provides examples of each, and offers practical ways to mitigate them.


Researchers regularly review the literature – it’s an integral part of day-to-day research: finding relevant research, reading and digesting the main findings, summarising across papers, and making conclusions about the evidence base as a whole. However, there is a fundamental difference between brief, narrative approaches to summarising a selection of studies and attempting to reliably and comprehensively summarise an evidence base to support decision-making in policy and practice.

So-called ‘evidence-informed decision-making’ (EIDM) relies on rigorous systematic approaches to synthesising the evidence. Systematic review has become the highest standard of evidence synthesis and is well established in the pipeline from research to practice in the field of health. Systematic reviews must include a suite of specifically designed methods for the conduct and reporting of all synthesis activities (planning, searching, screening, appraising, extracting data, qualitative/quantitative/mixed methods synthesis, writing; e.g. see the Cochrane Handbook). The method has been widely adapted into other fields, including environment (the Collaboration for Environmental Evidence) and social policy (the Campbell Collaboration).


Despite the growing interest in systematic reviews, traditional approaches to reviewing the literature continue to persist in contemporary publications across disciplines. These reviews, some of which are incorrectly referred to as ‘systematic’ reviews, may be susceptible to bias and, as a result, may end up providing incorrect conclusions. This is of particular concern when reviews address key policy- and practice-relevant questions, such as the ongoing COVID-19 pandemic or climate change.

These limitations of traditional literature review approaches could be addressed relatively easily with a few key procedures, some of them not prohibitively costly in terms of skill, time or resources.

In our recent paper in Nature Ecology and Evolution, we highlight 8 common problems with traditional literature review methods, provide examples of each from the field of environmental management and ecology, and suggest practical ways to mitigate them.

There is a lack of awareness and appreciation of the methods needed to ensure systematic reviews are as free from bias and as reliable as possible, as demonstrated by recent, flawed, high-profile reviews. We call on review authors to conduct more rigorous reviews, on editors and peer reviewers to gate-keep more strictly, and on the community of methodologists to better support the broader research community. Only by working together can we build and maintain a strong system of rigorous, evidence-informed decision-making in conservation and environmental management.



About the author


Neal Haddaway is a Senior Research Fellow at the Stockholm Environment Institute, a Humboldt Research Fellow at the Mercator Research Institute on Global Commons and Climate Change, and a Research Associate at the Africa Centre for Evidence. He researches evidence synthesis methodology and conducts systematic reviews and maps in the field of sustainability and environmental science. His main research interests focus on improving the transparency, efficiency and reliability of evidence synthesis as a methodology and supporting evidence synthesis in resource constrained contexts. He co-founded and coordinates the Evidence Synthesis Hackathon (www.eshackathon.org) and is the leader of the Collaboration for Environmental Evidence centre at SEI. @nealhaddaway

Comment: Why is mission creep a problem and not a legitimate response to an unexpected finding in the literature? Surely the crucial points are that the review’s scope is stated clearly and implemented rigorously, not when the scope was finalised.


Comment: #9. Most of them are terribly boring. Which is why I teach students how to make them engaging…and useful.




  • Perspective
  • Published: 12 October 2020

Eight problems with literature reviews and how to fix them

  • Neal R. Haddaway (ORCID: 0000-0003-3902-2234) 1,2,3
  • Alison Bethel 4
  • Lynn V. Dicks 5,6
  • Julia Koricheva (ORCID: 0000-0002-9033-0171) 7
  • Biljana Macura (ORCID: 0000-0002-4253-1390) 2
  • Gillian Petrokofsky 8
  • Andrew S. Pullin 9
  • Sini Savilaakso (ORCID: 0000-0002-8514-8105) 10,11
  • Gavin B. Stewart (ORCID: 0000-0001-5684-1544) 12

Nature Ecology & Evolution, volume 4, pages 1582–1589 (2020)


  • Conservation biology
  • Environmental impact

An Author Correction to this article was published on 19 October 2020


Traditional approaches to reviewing literature may be susceptible to bias and result in incorrect decisions. This is of particular concern when reviews address policy- and practice-relevant questions. Systematic reviews have been introduced as a more rigorous approach to synthesizing evidence across studies; they rely on a suite of evidence-based methods aimed at maximizing rigour and minimizing susceptibility to bias. Despite the increasing popularity of systematic reviews in the environmental field, evidence synthesis methods continue to be poorly applied in practice, resulting in the publication of syntheses that are highly susceptible to bias. Recognizing the constraints that researchers can sometimes feel when attempting to plan, conduct and publish rigorous and comprehensive evidence syntheses, we aim here to identify major pitfalls in the conduct and reporting of systematic reviews, making use of recent examples from across the field. Adopting a ‘critical friend’ role in supporting would-be systematic reviews and avoiding individual responses to police use of the ‘systematic review’ label, we go on to identify methodological solutions to mitigate these pitfalls. We then highlight existing support available to avoid these issues and call on the entire community, including systematic review specialists, to work towards better evidence syntheses for better evidence and better decisions.





Acknowledgements

We thank C. Shortall from Rothamstead Research for useful discussions on the topic.

Author information

Authors and Affiliations

Mercator Research Institute on Climate Change and Global Commons, Berlin, Germany

Neal R. Haddaway

Stockholm Environment Institute, Stockholm, Sweden

Neal R. Haddaway & Biljana Macura

Africa Centre for Evidence, University of Johannesburg, Johannesburg, South Africa

College of Medicine and Health, Exeter University, Exeter, UK

Alison Bethel

Department of Zoology, University of Cambridge, Cambridge, UK

Lynn V. Dicks

School of Biological Sciences, University of East Anglia, Norwich, UK

Department of Biological Sciences, Royal Holloway University of London, Egham, UK

Julia Koricheva

Department of Zoology, University of Oxford, Oxford, UK

Gillian Petrokofsky

Collaboration for Environmental Evidence, UK Centre, School of Natural Sciences, Bangor University, Bangor, UK

Andrew S. Pullin

Liljus ltd, London, UK

Sini Savilaakso

Department of Forest Sciences, University of Helsinki, Helsinki, Finland

Evidence Synthesis Lab, School of Natural and Environmental Sciences, University of Newcastle, Newcastle-upon-Tyne, UK

Gavin B. Stewart


Contributions

N.R.H. developed the manuscript idea and a first draft. All authors contributed to examples and edited the text. All authors have read and approve of the final submission.

Corresponding author

Correspondence to Neal R. Haddaway.

Ethics declarations

Competing Interests

S.S. is a co-founder of Liljus ltd, a firm that provides research services in sustainable finance as well as forest conservation and management. The other authors declare no competing interests.


Supplementary information

Supplementary Table

Examples of literature reviews and common problems identified.


About this article

Cite this article

Haddaway, N.R., Bethel, A., Dicks, L.V. et al. Eight problems with literature reviews and how to fix them. Nat Ecol Evol 4 , 1582–1589 (2020). https://doi.org/10.1038/s41559-020-01295-x


Received: 24 March 2020

Accepted: 31 July 2020

Published: 12 October 2020

Issue Date: December 2020

DOI: https://doi.org/10.1038/s41559-020-01295-x




Eight common problems with science literature reviews and how to fix them


Neal Haddaway, Research Fellow, Africa Centre for Evidence, University of Johannesburg

Disclosure statement

Neal Robert Haddaway works for the Stockholm Environment Institute and the Mercator Research Institute on Global Commons and Climate Change. He receives funding from the Alexander von Humboldt Foundation, Mistra, Formas, and Vinnova. He is also an honorary Research Associate at the Africa Centre for Evidence at the University of Johannesburg.

University of Johannesburg provides support as an endorsing partner of The Conversation AFRICA.



Researchers regularly review the literature that’s generated by others in their field. This is an integral part of day-to-day research: finding relevant research, reading and digesting the main findings, summarising across papers, and making conclusions about the evidence base as a whole.

However, there is a fundamental difference between brief, narrative approaches to summarising a selection of studies and attempting to reliably, comprehensively summarise an evidence base to support decision-making in policy and practice.

So-called “evidence-informed decision-making” relies on rigorous systematic approaches to synthesising the evidence. Systematic review has become the highest standard of evidence synthesis. It is well established in the pipeline from research to practice in several fields including health, the environment and social policy. Rigorous systematic reviews are vital for decision-making because they help to provide the strongest evidence that a policy is likely to work (or not). They also help to avoid expensive or dangerous mistakes in the choice of policies.

But systematic review has not yet entirely replaced traditional methods of literature review. These traditional reviews may be susceptible to bias and so may end up providing incorrect conclusions. This is especially worrying when reviews address key policy and practice questions.

The good news is that the limitations of traditional literature review approaches could be improved relatively easily with a few key procedures. Some of these are not prohibitively costly in terms of skill, time or resources. That’s particularly important in African contexts, where resource constraints are a daily reality, but should not compromise the continent’s need for rigorous, systematic and transparent evidence to inform policy.

In our recent paper in Nature Ecology and Evolution, we highlighted eight common problems with traditional literature review methods. We gave examples for each problem, drawing from the field of environmental management and ecology. Finally, we outlined practical solutions.

These are the eight problems we identified in our paper.

First, traditional literature reviews can lack relevance. This is because limited stakeholder engagement can lead to a review that is of limited practical use to decision-makers.

Second, reviews that don’t publish their methods in an a priori (meaning that it is published before the review work begins) protocol may suffer from mission creep. In our paper we give the example of a 2019 review that initially stated it was looking at all population trends among insects. Instead, it ended up focusing only on studies that showed insect population declines. This could have been prevented by publishing and sticking to methods outlined in a protocol.

Third, a lack of transparency and replicability in the review methods may mean that the review cannot be replicated. Replicability is a central tenet of the scientific method.

Selection bias is another common problem. Here, the studies that are included in a literature review are not representative of the evidence base. A lack of comprehensiveness, stemming from an inappropriate search method, can also mean that reviews end up with the wrong evidence for the question at hand.

Traditional reviews may also exclude grey literature. This is defined as any document

produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers, i.e., where publishing is not the primary activity of the producing body.

It includes organisational reports and unpublished theses or other studies. Traditional reviews may also fail to test for evidence of publication bias; both these issues can result in incorrect or misleading conclusions. Another common error is to treat all evidence as equally valid. The reality is that some research studies are more valid than others. This needs to be accounted for in the synthesis.

Inappropriate synthesis is another common issue. This involves methods like vote-counting, which refers to tallying studies based on their statistical significance. Finally, a lack of consistency and error checking (as would happen when a reviewer works alone) can introduce errors and biases if a single reviewer makes decisions without consensus.

All of these common problems can be solved, though. Here’s how.

Stakeholders can be identified, mapped and contacted for feedback and inclusion without the need for extensive budgets. Best-practice guidelines for this process already exist.

Researchers can carefully design and publish an a priori protocol that outlines planned methods for searching, screening, data extraction, critical appraisal and synthesis in detail. Organisations like the Collaboration for Environmental Evidence have existing protocols from which people can draw.

Researchers also need to be explicit and use high-quality guidance and standards for review conduct and reporting. Several such standards already exist.

Another useful approach is to carefully design a search strategy with an information specialist; to trial the search strategy against a benchmark list; and to use multiple bibliographic databases, languages and sources of grey literature. Researchers should then publish their search methods in an a priori protocol for peer review.

Researchers should consider carefully planning and trialling a critical appraisal tool before starting the process in full, learning from existing robust critical appraisal tools. Critical appraisal is the carefully planned assessment of all possible risks of bias and possible confounders in a research study. Researchers should select their synthesis method carefully, based on the data analysed. Vote-counting should never be used instead of meta-analysis. Formal methods for narrative synthesis should be used to summarise and describe the evidence base.

Finally, at least two reviewers should screen a subset of the evidence base to ensure consistency and shared understanding of the methods before proceeding. Ideally, reviewers should conduct all decisions separately and then consolidate.

Collaboration

Collaboration is crucial to address the problems with traditional review processes. Authors need to conduct more rigorous reviews. Editors and peer reviewers need to gate-keep more strictly. The community of methodologists needs to better support the broader research community.

Working together, the academic and research community can build and maintain a strong system of rigorous, evidence-informed decision-making in conservation and environmental management – and, ultimately, in other disciplines.

  • Systematic reviews
  • Evidence based policy
  • Academic research


University of North Florida, Thomas G. Carpenter Library

Conducting a Literature Review

Benefits of Conducting a Literature Review

  • Literature Review Tutorial by American University Library
  • The Literature Review: A Few Tips On Conducting It by University of Toronto
  • Write a Literature Review by UC Santa Cruz University Library

While there might be many reasons for conducting a literature review, the following are four key outcomes of doing the review.

Assessment of the current state of research on a topic. This is probably the most obvious value of the literature review. Once a researcher has determined an area to work with for a research project, a search of relevant information sources will help determine what is already known about the topic and how extensively the topic has already been researched.

Identification of the experts on a particular topic. One of the additional benefits derived from doing the literature review is that it will quickly reveal which researchers have written the most on a particular topic and are, therefore, probably the experts on the topic. Someone who has written twenty articles on a topic or on related topics is more than likely more knowledgeable than someone who has written a single article. This same writer will likely turn up as a reference in most of the other articles written on the same topic. From the number of articles written by the author and the number of times the writer has been cited by other authors, a researcher will be able to assume that the particular author is an expert in the area and, thus, a key resource for consultation in the current research to be undertaken.

Identification of key questions about a topic that need further research. In many cases a researcher may discover new angles that need further exploration by reviewing what has already been written on a topic. For example, research may suggest that listening to music while studying might lead to better retention of ideas, but the research might not have assessed whether a particular style of music is more beneficial than another. A researcher who is interested in pursuing this topic would then do well to follow up existing studies with a new study, based on previous research, that tries to identify which styles of music are most beneficial to retention.

Determination of methodologies used in past studies of the same or similar topics.  It is often useful to review the types of studies that previous researchers have launched as a means of determining what approaches might be of most benefit in further developing a topic. By the same token, a review of previously conducted studies might lend itself to researchers determining a new angle for approaching research.

Upon completion of the literature review, a researcher should have a solid foundation of knowledge in the area and a good feel for the direction any new research should take. Should any additional questions arise during the course of the research, the researcher will know which experts to consult in order to quickly clear up those questions.

  • URL: https://libguides.unf.edu/litreview


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 9: Methods for Literature Reviews

Guy Paré and Spyros Kitsiou.

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour (vom Brocke et al., 2009). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation (Paré, Trudel, Jaana, & Kitsiou, 2015).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses (Sylvester, Tate, & Johnstone, 2013). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study (Hart, 1998; Levy & Ellis, 2006).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself (Paré et al., 2015). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic (Mulrow, 1987). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data (Green, Johnson, & Adams, 2006).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the-art evidence to guide their decision-making and work practices (Paré et al., 2015). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies (Cooper, 1988; Rowe, 2014). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article (Cronin, Ryan, & Coughlan, 2008; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003; Patsopoulos, Analatos, & Ioannidis, 2005). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources (Cronin et al., 2008). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community (Paré et al., 2015; Petticrew & Roberts, 2006). Most, if not all, peer-reviewed journals in the field of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015), there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases (Finfgeld-Connett & Johnson, 2013; Kitchenham & Charters, 2007).

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself (Petticrew & Roberts, 2006), identify the review’s main objective(s) (Okoli & Schabram, 2010), and define the concepts or variables at the heart of their synthesis (Cooper & Hedges, 2009; Webster & Watson, 2002). Importantly, they also need to articulate the research question(s) they propose to investigate (Kitchenham & Charters, 2007). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review (Cooper, 1988). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field (Paré et al., 2015). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate (Cooper, 1988).

Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step (Levy & Ellis, 2006; vom Brocke et al., 2009). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance (Petticrew & Roberts, 2006). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process and a procedure to resolve disagreements must also be in place (Liberati et al., 2009; Shea et al., 2009).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings (Petticrew & Roberts, 2006). Ascribing quality scores to each primary study, or considering through domain-based evaluations which study components have or have not been designed and executed appropriately, makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity (Shea et al., 2009).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest (Cooper & Hedges, 2009). Indeed, the type of data that should be recorded mainly depends on the initial research questions (Okoli & Schabram, 2010). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results (Cooper & Hedges, 2009).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature (Jesson et al., 2011). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence (Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005; Thomas & Harden, 2008).
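The screening and quality-appraisal steps above call for at least two independent coders and a procedure for resolving disagreements. As a small, purely illustrative sketch of one way a review team might check screening consistency (this is not a procedure prescribed by the chapter, and the decisions shown are invented example data), the following Python snippet computes raw agreement and Cohen's kappa for two reviewers' include/exclude decisions:

```python
from collections import Counter

# Invented include/exclude decisions for the same six candidate studies.
reviewer_a = ["include", "include", "exclude", "exclude", "include", "exclude"]
reviewer_b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Chance agreement: sum over labels of the product of each reviewer's marginal proportions.
count_a, count_b = Counter(reviewer_a), Counter(reviewer_b)
expected = sum((count_a[label] / n) * (count_b[label] / n)
               for label in set(reviewer_a) | set(reviewer_b))

kappa = (observed - expected) / (1 - expected)
print(f"Raw agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```

Low agreement at this stage would normally prompt the team to revisit the inclusion and exclusion criteria before screening the full set of records; dedicated review-management software typically provides this kind of check.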

9.3. Types of Review Articles and Brief Illustrations

EHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired by Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion and can lead to biased interpretations or inferences ( Green et al., 2006 ). Several narrative reviews in the eHealth domain, as in other fields, follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).

Recently, there have been several efforts to introduce more rigour into narrative reviews by elucidating common pitfalls and improving their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows a systematic data processing approach comprising three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health (m-health) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how the development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar. The search was performed using several terms and free-text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.
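Because the exact terms used by Darlow and Wen (2015) are not reproduced here, the short Python sketch below only illustrates the general pattern of combining search terms and free-text words: synonyms are OR-ed within a concept block and the blocks are AND-ed together. All concept names and terms are hypothetical.

```python
# A minimal sketch of building a Boolean query from concept blocks.
# The terms below are invented and are NOT the strategy used by
# Darlow and Wen (2015).
concepts = {
    "mhealth":   ["mHealth", "mobile health", "smartphone app*"],
    "cancer":    ["cancer", "oncolog*", "neoplasm*"],
    "self_mgmt": ["self-management", "self care", "patient engagement"],
}

def or_block(terms):
    # quote multi-word phrases and join alternatives with OR
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = " AND ".join(or_block(terms) for terms in concepts.values())
print(query)
# e.g., (mHealth OR "mobile health" OR "smartphone app*") AND (cancer OR ...) AND (...)
```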

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
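As an illustration of the frequency analysis described above, the following minimal Python sketch tallies a few coded characteristics (publication year, method, outcome direction) across a handful of invented study records; it is a toy example under assumed codes, not a reproduction of any published review.

```python
# A minimal sketch of the frequency analysis typical of descriptive reviews:
# each included study is coded on a few characteristics, then tallied.
# The coded studies below are invented placeholders.
from collections import Counter

coded_studies = [
    {"year": 2012, "method": "survey",     "outcome": "positive"},
    {"year": 2013, "method": "case study", "outcome": "non-significant"},
    {"year": 2013, "method": "RCT",        "outcome": "positive"},
    {"year": 2014, "method": "survey",     "outcome": "negative"},
]

for attribute in ("year", "method", "outcome"):
    counts = Counter(study[attribute] for study in coded_studies)
    print(attribute, dict(counts))
```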

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).
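The dual, independent screening recommended above is often accompanied by a check of inter-rater agreement. The Python sketch below computes Cohen's kappa for two hypothetical coders' include/exclude decisions; the decision lists are invented, and in practice disagreements would still be resolved by discussion or a third reviewer.

```python
# A minimal sketch of checking agreement between two independent coders who
# screened the same abstracts (include/exclude), using Cohen's kappa.
def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Invented screening decisions for six abstracts
coder_1 = ["include", "exclude", "include", "exclude", "exclude", "include"]
coder_2 = ["include", "exclude", "exclude", "exclude", "exclude", "include"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.67 for these toy data
```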

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and satisfaction for patients, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other rigorous studies on the use of PHRs. Hence, they suggested that more research is needed that addresses the current lack of understanding of optimal functionality and usability of these systems, and how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; Deshazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meets a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.
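The following minimal Python sketch illustrates the inverse-variance logic described above for a fixed-effect model: each study's effect estimate is weighted by the inverse of its variance, a pooled estimate and 95% confidence interval are computed, and Cochran's Q and I² give a rough indication of heterogeneity. The effect sizes are invented, and a real meta-analysis would rely on dedicated software and also consider random-effects models, as noted above.

```python
# A minimal sketch of inverse-variance (fixed-effect) pooling.
# Numbers are illustrative only.
import math

studies = [  # (effect size, variance) per study, e.g., log odds ratios
    (0.30, 0.04),
    (0.10, 0.02),
    (0.45, 0.09),
]

weights = [1.0 / v for _, v in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

# Cochran's Q and I^2 give a rough sense of between-study heterogeneity.
q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(studies, weights))
i_squared = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0

print(f"summary effect = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
print(f"Q = {q:.2f}, I^2 = {i_squared:.1f}%")
```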

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs that are eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods such as vote counting, content analysis, classification schemes and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as qualitative systematic review.
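As a simple illustration of one of the qualitative synthesis options mentioned above, the Python sketch below performs vote counting: it tallies how many included studies report positive, negative, or non-significant results for each outcome of interest. The outcomes and directions are invented placeholders.

```python
# A minimal sketch of vote counting across included studies.
# Outcomes and reported directions are invented.
from collections import defaultdict

study_results = [  # (outcome of interest, reported direction)
    ("access to information", "positive"),
    ("access to information", "positive"),
    ("access to information", "non-significant"),
    ("decision-making",       "positive"),
    ("decision-making",       "negative"),
]

votes = defaultdict(lambda: defaultdict(int))
for outcome, direction in study_results:
    votes[outcome][direction] += 1

for outcome, tally in votes.items():
    print(outcome, dict(tally))
```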

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO (www.crd.york.ac.uk/prospero/) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. Consequently, the authors resorted to narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).

The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search in four databases between 2003 and 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized information to bring forward those mechanisms where patient portals contribute to outcomes and the variation in outcomes across different contexts.

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. To this end, they provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.
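For readers unfamiliar with checklist-based appraisal, the following minimal Python sketch tallies yes/no judgements against a short list of abridged quality criteria for two hypothetical reviews. It does not reproduce the AMSTAR instrument or the actual appraisal reported by Kitsiou, Paré, and Jaana (2013); items and answers are illustrative only.

```python
# A minimal sketch of a checklist-based quality appraisal across several
# systematic reviews. Item wording is abridged and answers are invented.
items = [
    "a priori design reported",
    "duplicate study selection and data extraction",
    "comprehensive literature search",
    "study quality assessed and documented",
    "methods used to combine findings appropriate",
]

appraisals = {
    "Review A": [True, True, True, False, True],
    "Review B": [True, False, True, False, False],
}

for review, answers in appraisals.items():
    score = sum(answers)
    print(f"{review}: {score}/{len(items)} criteria met")
    for item, ok in zip(items, answers):
        if not ok:
            print(f"  missing: {item}")
```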

9.4. Summary

Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).


As shown in Table 9.1, each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles ( Green et al., 2006 ). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an “a priori” review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research ( Paré et al., 2015 ). Hence, in order to select the most appropriate type of review, it is critical to know, before embarking on a review project, why the research synthesis is conducted and which methods are best aligned with the pursued goals.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and research generating stronger evidence ( Grady et al., 2011 ; Lyden et al., 2013 ), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another ( Paré et al., 2015 ). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. For one thing, reliability is related to the reproducibility of the review process and steps, which is facilitated by a comprehensive documentation of the literature search process, extraction, coding and analysis performed in the review. Whether the search is comprehensive or not, whether it involves a methodical approach for data extraction and synthesis or not, it is important that the review documents in an explicit and transparent manner the steps and approach that were used in the process of its development. Next, validity characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches ( vom Brocke et al., 2009 ). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015) which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • Bandara W., Miskon S., Fielt E. A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, nj : John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, nj : John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. bmc Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, nj : John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in mesh . bmc Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards ( rameses ). bmc Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013;(12):CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, nj : Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. ebse Technical Report Version 2.3. Keele & Durham. uk : Keele University & University of Durham; 2007. Guidelines for performing systematic literature reviews in software engineering.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. bmc Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. bmc Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Oates B. J. Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems; 2011. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1221&context=ecis2011
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. ssrn Electronic Journal. 2010
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, ma : Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. bmc Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. bmc Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • vom Brocke J., Simons A., Niehaves B., Riemer K., Plattfaut R., Cleven A. Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

Cite this page: Paré G, Kitsiou S. Chapter 9 Methods for Literature Reviews. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Harvey Cushing/John Hay Whitney Medical Library

YSN Doctoral Programs: Steps in Conducting a Literature Review

  • Biomedical Databases
  • Global (Public Health) Databases
  • Soc. Sci., History, and Law Databases
  • Grey Literature
  • Trials Registers
  • Data and Statistics
  • Public Policy
  • Google Tips
  • Recommended Books
  • Steps in Conducting a Literature Review

What is a literature review?

A literature review is an integrated analysis -- not just a summary -- of scholarly writings and other relevant evidence related directly to your research question.  That is, it represents a synthesis of the evidence that provides background information on your topic and shows an association between the evidence and your research question.

A literature review may be a stand-alone work or the introduction to a larger research paper, depending on the assignment.  Rely heavily on the guidelines your instructor has given you.

Why is it important?

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Discovers relationships between research studies/ideas.
  • Identifies major themes, concepts, and researchers on a topic.
  • Identifies critical gaps and points of disagreement.
  • Discusses further research questions that logically come out of the previous studies.

APA7 Style resources

Cover Art

APA Style Blog - for those harder to find answers

1. Choose a topic. Define your research question.

Your literature review should be guided by your central research question.  The literature represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.

  • Make sure your research question is not too broad or too narrow.  Is it manageable?
  • Begin writing down terms that are related to your question. These will be useful for searches later.
  • If you have the opportunity, discuss your topic with your professor and your classmates.

2. Decide on the scope of your review

How many studies do you need to look at? How comprehensive should it be? How many years should it cover? 

  • This may depend on your assignment.  How many sources does the assignment require?

3. Select the databases you will use to conduct your searches.

Make a list of the databases you will search. 

Where to find databases:

  • use the tabs on this guide
  • Find other databases in the Nursing Information Resources web page
  • More on the Medical Library web page
  • ... and more on the Yale University Library web page

4. Conduct your searches to find the evidence. Keep track of your searches.

  • Use the key words in your question, as well as synonyms for those words, as terms in your search. Use the database tutorials for help.
  • Save the searches in the databases. This saves time when you want to redo, or modify, the searches. It is also helpful to use them as a guide if the searches are not finding any useful results.
  • Review the abstracts of research studies carefully. This will save you time.
  • Use the bibliographies and references of research studies you find to locate others.
  • Check with your professor, or a subject expert in the field, if you are missing any key works in the field.
  • Ask your librarian for help at any time.
  • Use a citation manager, such as EndNote as the repository for your citations. See the EndNote tutorials for help.

Review the literature

Some questions to help you analyze the research:

  • What was the research question of the study you are reviewing? What were the authors trying to discover?
  • Was the research funded by a source that could influence the findings?
  • What were the research methodologies? Analyze its literature review, the samples and variables used, the results, and the conclusions.
  • Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
  • If there are conflicting studies, why do you think that is?
  • How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?

Tips: 

  • Review the abstracts carefully.  
  • Keep careful notes so that you may track your thought processes during the research process.
  • Create a matrix of the studies for easy analysis and synthesis across all of them (a minimal sketch follows below).
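A minimal sketch of such a study matrix, assuming the pandas library, is shown below; the column names and entries are placeholders to adapt to your own research question.

```python
# A minimal sketch of a study matrix for cross-study analysis and synthesis.
# Column names and entries are placeholders.
import pandas as pd

matrix = pd.DataFrame([
    {"study": "Author A (2019)", "design": "RCT",    "sample": 120,
     "key finding": "Improved adherence", "gap noted": "Short follow-up"},
    {"study": "Author B (2021)", "design": "Cohort", "sample": 310,
     "key finding": "No effect on readmissions", "gap noted": "Single site"},
])

print(matrix.to_string(index=False))
print(matrix["design"].value_counts())   # quick look at methods used
```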
  • << Previous: Recommended Books
  • Last Updated: Jan 4, 2024 10:52 AM
  • URL: https://guides.library.yale.edu/YSNDoctoral

