23 Program Evaluation
Paul R. Brandon, University of Hawai‘i at Mānoa
Anna L. Ah Sam, University of Hawai‘i at Mānoa
- Published: 01 July 2014
The profession of educational and social program evaluation has expanded exponentially around the globe since the mid-1960s and continues to receive the considerable attention of theorists, methodologists, and practitioners. The literature on it is wide and deep, reflecting an array of definitions and conceptions of purpose and social role. The chapter discusses these topics and several others, including opinions about the choice of methods, some of which are used primarily within evaluation approaches to conducting evaluation; the aspects of programs that evaluators typically address; the concept of value; the differences between evaluation and social science research; research on evaluation topics; and the major evaluation issues and concerns that have dominated discussion in the literature over the years.
Qualitative Research: Case study evaluation
- Justin Keen, research fellow, health economics research group, Brunel University, Uxbridge, Middlesex UB8 3PH
- Tim Packwood, Brunel University, Uxbridge, Middlesex UB8 3PH
- Correspondence to: Dr Keen
Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well designed case study and gives examples showing how qualitative methods are used in evaluations of health services and health policy.
This is the last in a series of seven articles describing non-quantitative techniques and showing their value in health research.
Introduction
The medical approach to understanding disease has traditionally drawn heavily on qualitative data, and in particular on case studies to illustrate important or interesting phenomena. The tradition continues today, not least in regular case reports in this and other medical journals. Moreover, much of the everyday work of doctors and other health professionals still involves decisions that are qualitative rather than quantitative in nature.
This paper discusses the use of qualitative research methods, not in clinical care but in case study evaluations of health service interventions. It is useful for doctors to understand the principles guiding the design and conduct of these evaluations, because they are frequently used by both researchers and inspectorial agencies (such as the Audit Commission in the United Kingdom and the Office of Technology Assessment in the United States) to investigate the work of doctors and other health professionals.
We briefly discuss the circumstances in which case study research can usefully be undertaken in health service settings and the ways in which qualitative methods are used within case studies. Examples show how qualitative methods are applied, both in purely qualitative studies and alongside quantitative methods.
Case study evaluations
Doctors often find themselves asking important practical questions, such as should we be involved in the management of hospitals and, if so, how? how will new government policies affect the lives of our patients? and how can we cope with changes …
- Open access
- Published: 26 May 2016
A qualitative case study of evaluation use in the context of a collaborative program evaluation strategy in Burkina Faso
- Léna D’Ostie-Racine
- Christian Dagenais
- Valéry Ridde

Health Research Policy and Systems, volume 14, Article number: 37 (2016)
Background

Program evaluation is widely recognized in the international humanitarian sector as a means to make interventions and policies more evidence based, equitable, and accountable. Yet, little is known about the way humanitarian non-governmental organizations (NGOs) actually use evaluations.

Methods

The current qualitative evaluation employed an instrumental case study design to examine evaluation use (EU) by a humanitarian NGO based in Burkina Faso. This organization developed an evaluation strategy in 2008 to document the implementation and effects of its maternal and child healthcare user fee exemption program. Program evaluations have been undertaken ever since, and the present study examined the discourses of evaluation partners in 2009 (n = 15) and 2011 (n = 17). Semi-structured individual interviews and one group interview were conducted to identify instances of EU over time. Alkin and Taut’s (Stud Educ Eval 29:1–12, 2003) conceptualization of EU was used as the basis for thematic qualitative analyses of the different forms of EU identified by stakeholders of the exemption program in the two data collection periods.

Results

Results demonstrated that stakeholders began to understand and value the utility of program evaluations once they were exposed to evaluation findings and then progressively used evaluations over time. EU was manifested in a variety of ways, including instrumental and conceptual use of evaluation processes and findings, as well as the persuasive use of findings. Such EU supported planning, decision-making, program practices, evaluation capacity, and advocacy.
Conclusions
The study sheds light on the many ways evaluations can be used by different actors in the humanitarian sector. Conceptualizations of EU are also critically discussed.
Humanitarian assistance organizations are increasingly investing in program evaluation to enhance performance, practice and accountability [ 1 – 5 ]. Yet, ensuring knowledge derived from evaluation of humanitarian action, defined as the “ systematic and impartial examination of humanitarian action intended to draw lessons to improve policy and practice and enhance accountability ” [ 6 ], is actually used remains an important challenge [ 2 , 4 , 5 , 7 – 9 ]. A common difficulty highlighted by Hallam [ 4 ] is that “ too often, humanitarian evaluations exist as a disconnected process, rather than becoming embedded as part of the culture and mindset of humanitarian organisations ”. The literature offers few examples of evaluation strategies that have been integrated into a humanitarian aid program, used effectively, and documented over time [ 10 ]. Rare also are studies that document the perspectives of both knowledge producers (e.g. evaluators) and intended users [ 10 ].
The present article examines evaluation use (EU) by HELP ( Hilfe zur Selbsthilfe e.V. ), a German humanitarian non-governmental organization (NGO) based in Burkina Faso that has developed an evaluation strategy now embedded into the country’s healthcare user fee exemption program [ 11 – 14 ]. The exemption program was implemented in Burkina Faso in part because of the country’s high rates of mortality and morbidity and its context of economic poverty, in which user fees undermine the accessibility of health services for many [ 13 – 16 ]. Especially in the Sahel region, where HELP implemented its user fee exemption program, maternal and infant rates of malnutrition, morbidity and mortality are exceedingly high, as shown in WHO’s 2014 statistical report [ 13 , 14 , 17 ]. HELP’s program is aimed at exempting indigents, pregnant and breastfeeding women, as well as children under five, from user fees [ 13 ]. Similar user fee subsidies or exemption programs had been attempted in different West African countries [ 18 ], but planning, implementation, and evaluation were frequently insufficient and often only partial [ 19 , 20 ], and in general the measured impacts were smaller than expected [ 21 ]. Hence, while such exemption programs innovated upon previous practices in West Africa [ 22 ] and in some instances seemed promising [ 21 ], for a complex array of reasons, health sector deficiencies persisted and health indicators remained worrisome [ 21 , 23 , 24 ]. Thus, documenting and evaluating the implementation of innovative health financing programs has become increasingly necessary. West African decision-makers and practitioners have required empirical documentation on the processes and effects of user fee exemptions to ground their reflections, decisions and actions [ 18 , 22 , 23 , 25 , 26 ].
HELP had previously implemented an exemption program in Niger, which had been evaluated in 2007 at the request of its funding agency, the European Commission’s Humanitarian Aid and Civil Protection department (ECHO). The external evaluators were impressed by the HELP managers’ interest in evaluation findings and by their proactivity in implementing evaluation recommendations. Conscious that empirical evidence can support improvements in the humanitarian sector [ 23 , 26 ], HELP managers consulted those same external evaluators while planning the Burkina Faso user fee exemption program, hoping to render it more evidence based. Together, the external evaluators and HELP managers developed an actual strategy for the evaluation, to be embedded within the user fee exemption program, and requested and were granted a specific budget for that evaluation strategy. Upon budget approval in 2008, HELP staff and the evaluators simultaneously developed both the Burkina Faso exemption program and the evaluation strategy aimed at documenting its implementation and effectiveness for purposes of accountability, program learning and improvement, and advocacy [ 8 , 11 ]. Indeed, evaluating HELP’s exemption program as it evolved in Burkina Faso would provide opportunities for HELP and its partners to learn from and improve the exemption program. Resulting documentation could also be used to enhance HELP’s transparency and accountability and to facilitate its advocacy for equitable access to healthcare. Advocating for equitable access to healthcare was also one of ECHO’s objectives and hence was in line with its own mission. These were the main motives driving HELP decision-makers and their partners, including a principal evaluator, to develop the evaluation strategy.
Ridde et al. [ 12 ] have described in detail 12 of the studies undertaken by HELP as part of the evaluation strategy (Box 1). Stakeholders of the strategy, referred to in this article as evaluation partners (EPs), were primarily HELP’s exemption program staff and the external evaluators, but also included the Sahel regional health director ( directeur régional de la santé , DRS), the district chief physicians ( médecins chefs de district , MCDs), and representatives from ECHO, as well as advocacy partners, including a journalist and a representative of Amnesty International.
Box 1 HELP evaluation studies from 2007 to 2011
Studies on evaluation of effects

1. Assessment of effects on the population through a survey of a representative panel of households
2. Assessment of effects on health facilities using an interrupted time-series analysis
3. Assessment of the community context and of health centres (Centre de santé et de promotion sociale: CSPS)
4. Accounting study assessing the financial capacities of the community-based health centre management committees (comité de gestion: COGES) in the two districts by comparing data 12 months before and 6 months after the experiment
5. Appropriateness of prescriptions for children under the age of 5 years
6. Effectiveness of an indigent selection process assessed using a quantitative methodology
7. Assessment of effects on childbirth costs (n = 849) and particularly the estimation of excessive expenses for households
8. Effects on community participation and the empowerment of COGES members and women

Studies on assessment of processes and relevance

9. A process evaluation of user fees abolition for pregnant women and children under 5 years in two districts in Niger (West Africa)
10. User fees abolition policy in Niger: comparing the under 5 years exemption implementation in two districts [ ]
11. A case study into the times taken to reimburse procedures performed without payment, in a sample of ten CSPSs
12. A study on the costs of reimbursed procedures for children under the age of 5 years
13. A process assessment of an intervention’s progress, strengths and weaknesses, chances of continuing, merits and relevance
14. Analysis of relevance of an indigent selection process, performed during the same data collection for effects on community participation (see above)
15. Action-Research guided by Réseaux d’Accès aux Médicaments Essentiels (RAME):
    - Dori team: quality of health services
    - Sebba team: maternal morbidity in the context of cost sharing, Soins obstétricaux néonataux d’urgence (SONU), and HELP’s exemption
    - RAME team: treatment coverage at the Yalgado Ouedraogo Hospital in the context of the prepaid emergency kits
16. Assessment of health centre staff workload
17. Evaluation of HELP’s knowledge transfer strategy

Adapted from Ridde et al. [ ]
Following an evaluability assessment of EU in Burkina Faso as part of the evaluation strategy described by Ridde et al. [ 12 ], it was clear the experiences of its EPs presented a rich opportunity to examine progressive EU over time [ 28 ]. More specifically, the present study is innovative in examining the different forms of EU in depth, using a diachronic approach to observe any variations in EU between 2009 and 2011 from the varied perspectives of the different EPs. EPs who had collaborated both on the Niger 2007 evaluation and on the evaluation strategy in Burkina Faso were able to discuss variations in EU between 2007 and 2011.
Evaluation use
Traditionally, EU has been viewed solely as the use of evaluation findings, referring, for example, to the application of evaluation recommendations [ 29 , 30 ]. In this view, after reading an evaluation report, staff in a humanitarian program aimed at alleviating malnutrition could, for example, strive to implement a recommendation to increase the supply of a given nutrient to toddlers of a given community. Current definitions of EU, however, include not only findings use but also process use, a term originally coined by Patton [ 31 ] to refer to the “ individual changes in thinking, attitudes, and behaviour, and program or organizational changes in procedures and culture that occur among those involved in evaluation as a result of the learning that occurs during the evaluation process ”. Patton [ 32 ] explained that process use could, for instance, manifest as “ infusing evaluative thinking into an organization’s culture ” [ 32 ], which might be seen in attempts to use more clear, specific, concrete and observable logic [ 31 ]. Humanitarian staff for the same nutritional program could, for example, learn during an evaluation process to specify clearer program objectives, beneficiary selection criteria, program actions and success indicators. Such process use could enhance shared understanding among them and potentially lead to program improvements and ultimately to lower rates of malnourishment. In the present study, we have attempted to attend to a broad spectrum of EUs by according no primacy to findings use over process use and by documenting unintended uses as well as uses that occurred over time in a cumulative or gradual manner.
The principal objective of the present study was to examine the diverse uses of evaluation findings and processes engendered by the evaluation strategy. A related objective was to examine whether any changes in EU occurred between 2009 and 2011. Hence, the focus was not on the use of a particular evaluation study, but more generally on how EU evolved over time, as the evaluation strategy was developed and more than 15 evaluation studies (Box 1) were conducted. For the present study, we employed an adapted version of Alkin and Taut’s [ 33 ] conceptualization of EU to ensure its diverse manifestations were identified. In their model, EU is either findings use (instrumental, conceptual, legitimative) or process use (instrumental, conceptual, symbolic). ‘Instrumental use’ involves direct use of evaluation-based knowledge for decision-making or for changing program practices [ 33 ]. ‘Conceptual use’ refers to indirect use of knowledge that leads to changes in the intended user’s understanding of program-related issues. ‘Symbolic use’ relates to situations in which those requesting the evaluation simply seek to demonstrate their willingness to undergo evaluation for the sake of reputation or status [ 29 , 33 ]. Lastly, ‘legitimative use’ occurs when evaluation findings are used to justify previously undertaken actions or decisions [ 33 ]. We adapted Alkin and Taut’s [ 33 ] conceptualization by integrating its symbolic and legitimative uses under the broader concept of ‘persuasive use’ to also account for what Estabrooks [ 34 ] described as using evaluation as a persuasive or political means to legitimize a position or practice. Leviton and Hughes [ 35 ] further clarify the interpersonal influence that is integral to persuasive use, explaining that it involves using evaluation-based knowledge as a means to convince others to subscribe to the implications of an evaluation and hence to support a particular position by promoting or defending it. We added this term to stress the point made by previous authors that persuasive forms of EU can also serve constructive purposes [ 35 , 36 ]. For instance, empirical evidence can be used persuasively to advocate for equity in global health. Symbolic and legitimative EU are terms that commonly carry negative connotations and are not easily applied to such constructive purposes. Persuasive use is included to draw attention to the different and concurrent ways in which evaluations can be used to influence reputations, judgment of actions or political positions.
Some examples may help clarify these different forms of EU. For instance, discussions during the evaluation process about the lack of potable water in a given village could lead intended users to think about strategies to bring water to the village; they might also recognize how helpful evaluations are in highlighting water needs for that village and how hard village locals have been working to fetch their water. These are forms of ‘conceptual process use’, in that intended users’ conceptions changed as a result of discussions during the evaluation process. Had such conceptual changes occurred as they learned of evaluation findings, this would have been ‘conceptual findings use’. Had intended users come to meet with locals and/or decided to dig a well, this would illustrate ‘instrumental process use’. It would have been ‘instrumental findings use’, had this decision to build a well been taken based on findings showing, for example, high morbidity rates associated with dehydration. Having already taken the decision to build the well, stakeholders could ask for an evaluation solely to empirically demonstrate the need for a well; this would be ‘legitimative use’. Or, they could have their well-building intervention evaluated without any intent or effort to use evaluations, but simply for ‘symbolic use’, to demonstrate their willingness to be evaluated. Then again, the well-building intervention could also undergo evaluation to provide convincing data that could be used in political claims advocating for human rights to potable water policies, thereby constituting ‘persuasive use’.
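As a purely illustrative aside, not drawn from the study itself, the adapted typology can be read as a small two-dimensional coding scheme: what is used (findings or process) crossed with how it is used (instrumental, conceptual, or persuasive). The Python sketch below, with hypothetical names throughout, shows one minimal way such codes could be represented; it is not the authors' instrument.

```python
# Illustrative sketch only; all names are hypothetical and not part of the study.
from dataclasses import dataclass
from enum import Enum


class Source(Enum):
    FINDINGS = "findings"  # use prompted by evaluation results
    PROCESS = "process"    # use prompted by taking part in the evaluation


class Mode(Enum):
    INSTRUMENTAL = "instrumental"  # direct changes to decisions or practices
    CONCEPTUAL = "conceptual"      # changes in understanding of program issues
    PERSUASIVE = "persuasive"      # symbolic, legitimative, or advocacy uses


@dataclass
class EvaluationUse:
    source: Source
    mode: Mode
    excerpt: str  # interview excerpt supporting the code
    year: int     # data collection wave, e.g. 2009 or 2011


# Example: the hypothetical well-digging decision reached during evaluation
# discussions would be coded as instrumental process use.
well_decision = EvaluationUse(Source.PROCESS, Mode.INSTRUMENTAL,
                              "decided to dig a well after evaluation meetings",
                              2009)
print(well_decision.source.value, well_decision.mode.value)
```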
Research design
This evaluation used a qualitative single case study design and a descriptive approach to examine EPs’ discourses about EU over time [ 37 , 38 ]. This was an instrumental case study, in that HELP’s evaluation strategy was chosen for its ability to provide insight into EU [ 39 ]. To document the evolution of EU over time, two waves of data collection were conducted by the first author in Burkina Faso using a diachronic approach with an interval of 29 months (July 2009 and November 2011). The 2009 data collection lasted 5 weeks and employed individual interviews. The 1-month 2011 data collection involved individual interviews as well as one group interview. Documentation and non-participatory observation provided contextual complementary information.
Recruitment procedures
Objectives and procedures of the present study were explained to EPs upon soliciting their participation. When EPs responded positively, interviews were scheduled at a time and place of their convenience. Recruitment for individual interviews in 2009 and 2011 followed two purposeful sampling strategies [ 40 ]. The intensity sampling strategy (targeting persons intensely affected by the studied phenomenon) led us to recruit the principal evaluator and the NGO’s head of mission as the first participants [ 40 ]. Thereafter, the snowball sampling strategy was used, in which participants were asked to suggest other information-rich respondents. A conscious effort was made to limit the risks of ‘ enclicage ’ (a French term describing the risk that the researcher would be assimilated into a given clique and estranged from other groups and/or the larger group as a whole), as cautioned by Olivier de Sardan [ 41 ]. The extensive experience in the study context of one of the authors helped avoid such potential sampling biases. Data triangulation was also achieved by recruiting multiple participants with diverse relationships to HELP’s evaluation strategy as a means of obtaining varied perspectives and enhancing the study’s validity [ 42 ]. Such intra-group diversification was a conscious attempt to collect multiple viewpoints for a comprehensive appreciation of EPs’ individual and collective experiences [ 43 , 44 ].
Participants, data collection instrument and protocol
Semi-structured individual interviews were conducted in 2009 (n = 32; 15 respondents, 17 interviews) and in 2011 (n = 36; 17 respondents, 19 interviews) in Ouagadougou, Dori and Sebba. In each round of data collection, an extra interview was conducted with two EPs who had been particularly active and involved in the evaluation strategy and had more to say after a single interview; hence, the number of interviews exceeded the number of respondents by two in both collections. Table 1 presents the distribution of respondents for both data collections. Six EPs were interviewed in both 2009 and 2011. All EPs from HELP involved in the evaluation strategy were interviewed at least once, either in 2009 or 2011. EPs interviewed only in one data collection were either not working with HELP or out of the country during the other collection. Length of collaboration in the evaluation strategy ranged from three to 52 consecutive months for 16 EPs and was intermittent for the others. Eighteen EPs were locals from Burkina Faso, three were from West Africa, and five were international expats. Five were women, three held management positions, one was an evaluator, and another was a community outreach worker.
Individual interviews lasted an average of 60 minutes. Interviews (individual and group) were semi-structured and followed an interview guide flexibly enough to allow it to evolve as the study progressed [ 40 ]. Questions were open-ended and solicited descriptions of EPs’ experiences and perceptions, as they had evolved over the course of the evaluation strategy, of (1) the evaluation strategy; (2) evaluation use; (3) collaboration with other EPs; and (4) the influence of evaluation upon them, other partners and their work environment. For most EPs, questions focused on the years 2009 to 2011, but those who had collaborated in the Niger evaluation were also free to recall their experiences starting in 2007. Specific examples of interview questions are presented in Box 2.
Box 2 Interview guide: examples of questions
2009 and 2011:

What are your perceptions and experiences concerning:

1) The evaluation strategy
- How did HELP’s evaluation strategy begin?
- What activities were planned, realized? What were the effects observed?
- When and how did you begin to collaborate in the evaluation strategy?
- In which evaluation did you participate? How were you involved?
- How do you feel about the way the evaluations went? Are there things you appreciated or things you did not like about the way the evaluations went?

2) Using evaluation
- Among the evaluations in which you participated, which ones struck you as having something of interest? How so?
- Were some of the evaluations useful? How so? Were some not useful? How so? Examples?
- Were some of the evaluations used? How so?
- Did you or other evaluation partners (EPs) gain something from participating in an evaluation activity?

3) Collaborating with other EPs
- How would you describe the collaboration among evaluation partners?

4) Observed influences of evaluation upon yourself, other EPs and your work environment
- Did you or your partners learn anything during the evaluations or from the evaluators? How so?
- How have evaluations influenced you, your work?
- What are the pros and cons of conducting evaluations at HELP?
- What place does evaluation have at HELP? What place do you think it should have at HELP?

2011:
- Since 2009, have you noticed changes in the evaluation strategy? How so?
- How would you describe the state of the evaluation strategy now?
- Have you noticed changes over time in the way evaluations were used? How so?
- How would you describe the way evaluation partners have collaborated over time?
- What challenges and successes have you noted about the evaluation strategy and the collaboration?
- Over time, have you noticed different ways in which evaluation influenced you and/or the work and dynamics at HELP?
The group interview was conducted at the start of the 2011 data collection period before the individual interviews, as a means of discerning interpersonal dynamics and spurring collective brainstorming on the general questions of the present study; it lasted 90 minutes. This was a small group (n = 3; a manager and two coordinators) of HELP personnel who had been responsible for evaluation-related activities. Inspired by Kitzinger’s [ 45 , 46 ] suggestions for focus groups, we used open-ended questions to foster interactions among them as a means of exploring emerging themes, norms and differences in perceptions regarding the evaluation strategy, EU and interpersonal dynamics among EPs. They were encouraged to explore different viewpoints and reasoning. Significant themes were later discussed in the individual interviews.
Interviews were conducted in French (Box 2), recorded digitally, transcribed and anonymized to preserve confidentiality. Transcripts were the primary data source for analyses.
Two additional sources of information provided insight into the study context, although not formal study data. Non-participant observation shed light upon EPs’ interpersonal dynamics and HELP’s functioning, as the first author spent 4 weeks during each of the two data collections in HELP’s offices interacting with HELP staff and with visiting partners. In 2011, she also accompanied HELP staff from all three sites on a 5-day team trip, during which a team meeting was held. Documents relevant to the evaluation strategy (e.g. evaluation plans and reports, scientific articles, policy briefs, meeting summaries, emails between EPs, advocacy documentation) were also collected to deepen understanding of the study’s context. These data provided opportunities for triangulating data sources, thereby strengthening the validity of EPs’ discourses.
Analyses

Qualitative thematic content analyses were performed on the interview transcripts [ 47 ] using a mixed (inductive and deductive) approach and codebook. Coding and analysis were facilitated by the use of QDA Miner data analysis software. An adapted version of Alkin and Taut’s [ 33 ] model was used to identify and code different forms of EU. We used their conceptualizations of instrumental and conceptual EU but adapted the model, as mentioned earlier, by adding persuasive EU as a broad term encompassing the concepts of symbolic, legitimative and advocacy forms of EU. A specific code entitled ‘change’ was also created to capture any observations of changes related to EU mentioned and discussed by respondents in the 2011 interviews. For example, if a respondent in 2011 noticed that more evaluations had been conducted and disseminated and that this had led to more instances of EU, the code ‘change’ was applied to this sentence and integrated into the 2011 analyses and results (described below). Special attention was paid to ensuring that a broad range of EUs would be detected. After coding, we retrieved each type of EU and examined the coded excerpts for 2009 and for 2011 separately to identify and describe any apparent differences emerging from the respondents’ discourses on EUs between 2009 and 2011. In this manner, a thematic conceptual matrix was created, facilitating the organization and analysis of specific instrumental, conceptual and persuasive (including symbolic/legitimative) uses of evaluations in both 2009 and 2011. A summary of this matrix is presented in Table 2 [ 47 ].

The first author performed all the coding and analyses but met twice with a qualitative research consultant, six times with a co-author, and 10 times with a research colleague to discuss and verify the codebook and to ensure coding consistency and rigour over time (coding conferences). The iterative analysis process allowed for review of coded excerpts and hence continuity of the coding and interpretations. Attention was paid to capturing EPs’ interpersonal dynamics, as well as their individual and collective experiences over time [ 45 , 46 ]. As mentioned, both non-participant observation and documentation helped the first author gain a deeper understanding of HELP’s context, but neither was analyzed systematically, due to lack of time and because interview data were already abundant. Analyses were not systematically validated by a second researcher, but two EPs active in the evaluation strategy commented on and validated a draft of the present article. The research was approved by the Ministry of Health of Burkina Faso. Ethical approval for the study was granted by the Research Ethics Committee of the University of Montreal’s Faculty of Arts and Sciences and by the Health Research Ethics Committee of the Ministry of Health of Burkina Faso.
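As a concrete illustration only, and not the authors' actual QDA Miner workflow, the thematic conceptual matrix described above can be thought of as a simple tally of coded excerpts by form of evaluation use and data collection year. The short sketch below uses invented codes and counts.

```python
# Hypothetical sketch of a thematic conceptual matrix: coded interview
# excerpts tallied by form of evaluation use and data collection year.
# The codes and counts here are invented for illustration.
from collections import Counter

# (use_type, year) pairs as they might emerge from coding interview transcripts
coded_excerpts = [
    ("instrumental_findings", 2009), ("conceptual_process", 2009),
    ("instrumental_findings", 2011), ("instrumental_process", 2011),
    ("persuasive_findings", 2011),
]

matrix = Counter(coded_excerpts)
for use_type in sorted({use for use, _ in coded_excerpts}):
    counts = {year: matrix[(use_type, year)] for year in (2009, 2011)}
    print(f"{use_type:25s} 2009: {counts[2009]}  2011: {counts[2011]}")
```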
Verification
Member checking was undertaken at various times and with different EPs to strengthen the validity of the findings [ 44 ]. For example, during data collections, the first author frequently verified her comprehension of the issues raised by EPs either during the interviews or after. The different themes emerging from analyses were discussed with several respondents to see whether they reflected EPs’ experiences and whether additional themes should be included. Drafts of the articles were sent by email to four participants who were thought to be most likely to have the time to read and comment on the drafts; two were able to respond to these member checking calls. Their feedback was always integrated into the iterative analysis process and usually also into the article drafts. Such member checking took place in informal discussions, during interviews and even in email correspondence. Other strategies were used to ensure responsiveness, sensitivity and reflexivity in the researcher’s approach and to support the validity of the present study [ 48 ]; these included co-coding and code discussions with a peer, using an iterative process in the analyses, peer debriefing (discussing the research methodology and analyses with academic peers), and keeping a log book of questions, ideas, challenges and decisions related to the study [ 49 , 50 ].
We first present results on use of evaluation findings for 2009 and 2011, followed by results on use of evaluation processes for 2009 and 2011. In the 2011 interviews, respondents frequently mentioned EU examples similar to those presented in 2009. For the sake of brevity, we present only the examples from 2011 that cover new ground. Results are summarized in Table 2; it should be noted that the column on the left lists respondents speaking about use by intended users; hence, when external evaluators (EE) are indicated, it refers to themes discussed by evaluators about intended users’ EU, and not their own.
Use of evaluation findings in 2009 and 2011
Instrumental use of evaluation findings
In 2009, participants described various ways in which evaluation findings were used instrumentally. An evaluator was pleasantly surprised by HELP’s interest and proactivity in implementing recommendations from a previous evaluation in Niger in 2007 (Box 1: study 9): “They took our recommendations into consideration and completely changed their practice and the way they intervened” (EE3). A HELP staff member corroborated this affirmation and described how they had used evaluation findings to plan the exemption in Burkina Faso, paying specific attention to avoiding mistakes underscored in the previous evaluation report [ 51 ]. For example, as recommended by evaluators, HELP sought the collaboration of the DRS and MCDs, as representatives of the Ministry of Health (MoH), right from the start of the user fee exemption program in Burkina Faso, instead of setting up its intervention in parallel to the State’s health system, as had unwisely been done in Niger. EPs also noted that evaluation findings had helped them identify and resolve problems in their program and its implementation. For example, a HELP staff member recalled learning about preliminary evaluation findings (Box 1: study 7) indicating that some intended beneficiaries did not know they could be exempted from user fees. In response, HELP increased its awareness-raising efforts through radio information sessions and pamphlets. EPs also spoke about how evaluation findings had been used to identify solutions that were concrete, locally meaningful and applicable. According to a HELP staff member and MoH representatives, some findings were not used immediately but guided planning and decision-making. For example, following the presentation of an action research report (Box 1: study 15, Dori), MoH representatives decided to incorporate the recommendations into the district’s annual plan, setting improved health services quality and greater awareness of the exemption as priorities.
The 2011 interviews revealed that findings were being used for similar purposes as in 2009, including to improve practices and to guide decisions. For example, three HELP staff members referred to evaluation findings that had helped them better identify, select and recruit eligible beneficiaries (Box 1: studies 6 and 14). Those findings highlighted that, while indigents were a target group of the exemption, little had been done to reach out to them. This led HELP staff to test and use an effective selection strategy for indigents. Additionally, findings showing that the cost to exempt indigents was lower than expected led to a decision to increase the number of indigent beneficiaries for each health centre. Another use noted by an EP was that evaluation findings validated their decision to advocate for free healthcare, which enabled HELP to pursue its actions in this direction. Participants noted that evaluation findings were also used to identify, explain and resolve certain challenges they encountered. For instance, HELP staff recalled findings from study 7 (Box 1) showing that some intended beneficiaries were being deceived by health centre staff into paying user fees. This valuable information was used to resolve the problem by investing in efforts to raise awareness about the exemption program, its services, target beneficiaries and criteria. Another example concerned findings demonstrating that medical staff were complying with and respecting norms for medical prescriptions, contrary to rumours that, since the exemption began, they had been issuing excessive and inappropriate prescriptions for personal gain. This information guided the responses of the medical supervisors in the field, who were reassured to learn they did not need to worry much about this issue. Findings from another evaluation on workload (Box 1: study 16) suggested that, while the exemption program did increase the medical staff’s workload, it did not correspond to WHO’s definition of work overload [ 52 ]. An MoH representative noted that these findings had helped him to organize and manage his health centre’s resources, motivate his healthcare staff, and better adapt to the increase in consultations. An MoH representative also said evaluation findings were used to acknowledge accomplishments, review objectives, and correct practices when necessary. A HELP staff member aptly noted that changes in their practices (instrumental use) were preceded by changes in awareness (conceptual use).
Conceptual use of evaluation findings
In 2009, respondents described a few instances of conceptual use of findings. One useful aspect of evaluation findings was that they provided the HELP staff with another, more external perspective. For example, one staff member observed that, at HELP, “we have an internal vision because we work inside it” and that evaluation findings (Box 1: study 12) could shed light on their partners’ views on various issues, such as when reimbursements for medical fees arrived late. HELP staff knew the reasons for this delay were outside their control, but “it was interesting to see how the others [partners] perceived and sometimes criticized this; some even said it was because HELP was too late with reimbursements” (HELP Staff (HS) 4). Similarly, a funding agency representative suggested that evaluation findings gave the agency a better understanding of people’s reactions to the exemption and, hence, of the field reality. Another EP suggested that findings pointed to deficiencies in the exemption program and were helpful in reflecting upon potential solutions: “In my opinion, evaluations gave us a lot of experience and lessons to learn from” (HS10).
In 2011, various EPs described how learning of the evaluation findings gave them a better understanding of the impacts of their work and of the exemption program. A HELP staff member recalled findings (Box 1: study 7) demonstrating that user fees were the primary barrier to healthcare accessibility, above and beyond geographical and cultural factors. Such findings validated the exemption program’s mission and counteracted previous arguments against user fee exemptions. Many of the findings also revealed positive effects of the exemption program on, for example, health service use. Consequently, another benefit of evaluation findings was that they boosted EPs’ motivation for their work:
“I think this study [Box 1: study 3] was really useful and it had pretty important impacts on us. Speaking of the effects on the community, that was a motivating factor for us, it enabled us to see that by going in and out of the community all the time, we were actually bringing something” (HS22).
An MoH representative noted that, after hearing about the different findings presented in evaluation reports, he felt more capable when examining the health centre’s clinical data or even when dealing with his patients. One EP explained how some findings had changed his conception of the exemption and of program evaluation. He realized evaluations could detect the multiple effects of interventions, including some unexpected ones. For example, findings revealed that mothers had felt empowered since the exemption’s implementation, as they could consult without their husbands’ approval or money [ 53 ]. Another participant also observed that hearing about evaluation findings changed many EPs’ receptivity to program evaluation. EPs were more forthcoming and followed evaluation activities better after attending report-presentation workshops (French: ateliers de restitution) and hearing about the different evaluation findings. He recalled health workers saying, “…the evaluators ‘come take our data and leave!’ but after attending report-release workshops, they understood the findings and their utility; it encourages them to collaborate” (HS2). Participants also believed evaluation findings enhanced their capacities and their understanding of the field reality.
Persuasive use of evaluation findings
In 2009, persuasive use of evaluation was alluded to by EPs describing how evaluations supported their advocacy work. One HELP staff member said HELP’s major challenge was to disseminate evidence and convince its partners. Another explained their advocacy strategy, which involved partnering with the regional MoH (DRS and MCDs) and having them disseminate evaluation findings at national MoH meetings. One participant observed that Burkina Faso’s political decentralization facilitated the participation of the regional and district level MoH representatives, since they did not need consent from their national counterparts. The overarching goal was to convince policymakers of the benefits of user fee exemptions. HELP staff and MoH EPs suggested that the evaluation strategy validated their exemption work and bolstered their advocacy: “We hope that maybe, with the expected results, a funding agency […] perhaps even the State, can participate [in the exemption]”. Hence, HELP used findings persuasively to try to convince regional and national politicians to support and scale up the exemption in Burkina Faso. One EP noted that findings were used in project proposals and reports as a means to convince others of the worthiness of pursuing HELP’s exemption program.
In the 2011 interviews, EPs also spoke of using evaluation findings to influence partners and policymakers. HELP staff recalled partnering with University of Montreal researchers to produce and compile evidence on HELP’s exemption program. Their studies demonstrated the value of the exemption, thereby establishing the pillars of HELP’s advocacy work. Evidence suggested that lifting the financial barriers to health access was commendable and logical. HELP staff recalled presenting findings to the MoH at national and international conferences to promote adoption of a national exemption program. Some also spoke about partnering with Amnesty International to advocate for evidence-based policymaking by the State [ 24 ]. HELP frequently shared scientific documentation with its funding agency, advocating for a national exemption program. An evaluator acknowledged HELP’s limited success in convincing politicians to adopt and scale up the exemption program, which sometimes led HELP and its partners to question “…the use of all our work?” (EE8). He explained how HELP and the evaluation strategy’s decision-makers had opted to end the evaluation strategy activities gradually, as it had already produced sufficient knowledge on essential questions, and to focus instead on HELP’s advocacy to find ways to increase politicians’ use of scientific evidence. Funding agency representatives criticized HELP’s persuasive use, suggesting that HELP needed to be more proactive in its advocacy strategy to seek and seize every diffusion opportunity:
“I have the impression that HELP doesn’t really know how to show the value of its research […] Diffusion activities were good but I think they could have done even better. One example is the last diffusion activity; they weren’t able to meet with the Ministry of Health, even though this is a key stakeholder” (ECHO representative).
Meanwhile, HELP staff suggested that further targeting diffusion efforts to community members would benefit the exemption program’s activities. One difficulty with this, alluded to by an MoH representative, was the necessity of translating many of the presentations into local languages, as many in the community did not speak French. An evaluator explained how financial constraints led to the prioritization of knowledge transfer (KT) activities targeting political leaders, in hopes this would produce greater impacts. Nevertheless, he explained how evaluators with HELP had sought creative means, such as policy briefs and short films, to reach a diverse audience, focusing particularly on policymakers.
In both 2009 and 2011, one challenge underscored by EPs was that of interesting policymakers in these evidence-based findings and in the exemption itself. In 2009, the discourse was hopeful, while the 2011 interviews expressed more disappointment and doubt regarding the feasibility of advocacy objectives. From the 2011 interviews, it was clear that HELP had used evaluation findings to try to persuade others of the value of the exemption program. Whether they succeeded in their persuasive attempts is another interesting question, distinct from the present article’s focus specifically on EPs’ own use.
Overall, EPs described instances of instrumental, conceptual and persuasive use of findings in both 2009 and 2011. However, they discussed using more evaluations in 2011 than in 2009. One evaluator asserted that there was so much more EU by EPs in 2011 that it was not comparable to 2009. An evaluator also suggested this was because only one study, along with the action research project, had been finalized by the time of our first data collection in 2009. EUs were also described in greater detail by EPs in 2011 than in 2009.
Use of evaluation processes in 2009 and 2011
Instrumental use of evaluation processes
Recommendations are often associated with findings, as they are frequently presented in the final evaluation report. However, in 2009, EPs recalled various lessons already learned during the evaluation process itself. For example, HELP staff recalled having discussions with evaluators and pointing out that the eligibility criterion for HELP’s user fee exemption for breastfeeding mothers was too vague, because breastfeeding duration varies widely across mother/baby pairs (Box 1: study 13). Based on discussions during the evaluation process, HELP stakeholders operationalized mothers’ eligibility as the 2 years following a baby’s birth, and this information was then shared via guidelines disseminated to all health centres. Further, EPs who had been involved in the 2007 evaluation in Niger (Box 1: study 9) recalled learning that, because the evaluation had only been organized near the end of the project, it was not possible to use a pre–post design, which would have been the most meaningful methodologically. Having learned from this experience, HELP coordinators consulted the evaluator while planning their Burkina Faso exemption program to ensure pre–post designs could be used in the evaluations to measure the program’s efficacy more reliably. The coordinators had worked first in Niger and then in Burkina Faso and, hence, carried over such lessons. An evaluator recalled how his being consulted at the beginning of the Burkina Faso program led HELP stakeholders to delay implementing the exemption there in order to collect baseline data, despite the ethical dilemma that delaying the exemption meant delaying saving lives. Process discussions clarified that, irrespective of when the exemption would be implemented, the duration of the program was fixed and therefore the number of lives saved in the given time frame would be identical. Moreover, if careful planning led to more convincing evidence of the exemption’s beneficial effects, HELP’s advocacy would have greater persuasive power. It was also made clear that funding a series of evaluations could produce useful knowledge for advocacy. Stakeholders made use of these discussions and decided (instrumental process use) to seek funds from a funding agency. They received funding to develop the evaluation strategy, which evolved over time into an extensive series of evaluations. New collaborations and networks with different African institutions were also born out of this initial evaluation partnership.
In 2011, an evaluator suggested that the initial collaboration process between HELP and evaluators had stimulated a proliferation of partnerships and networks among EPs, which developed further into their own respective documentation and advocacy projects. An MoH representative reported having learned a great deal about writing research protocols while collaborating with the external evaluators, which subsequently led him to write his own internal research protocol. Another MoH representative also recalled an evaluation of obstetric service use in which community members were, to his surprise, stakeholders in the research process even though they had little education (Box 1: study 8). He quickly realized the added value of their participation, as they gradually understood and supported the findings, became more proactive than usual, and identified sensible means of increasing obstetrical service use. Another instrumental use described by an evaluator and an MoH representative was that their collaboration may have sparked some EPs’ interest and motivation to develop their capacities further, as several subsequently chose to pursue graduate studies in health research. The evaluator believed that, for some EPs, the experience of networking with researchers and developing new contacts with local and international supervisors may have facilitated admissions to graduate schools and scholarships.
Conceptual use of evaluation processes
In the 2009 interviews, HELP staff described experiencing capacity building during evaluations and said their methodological, conceptual and technical understanding of the different research phases had been reinforced or updated. A HELP coordinator suggested his comprehension of public health had also improved during evaluations, which aided his management of the NGO. Other conceptual changes were noted. As another HELP staff member explained, “What was good was that we were participating and engaging [in the evaluations] so it was not something external that just fell upon us… the fact that we had to ask questions meant we had to think about it” (HS2). Through this process, they realized they could ask pertinent questions, which strengthened their confidence. One HELP staff member said that participating in evaluations sparked a “spirit of curiosity” necessary to ask research questions and stimulated a sense of agency in pursuing answers. He believed more needed to be done to maintain such capacities and make the staff more autonomous. Another HELP staff member described how EPs’ interactions facilitated discussions and fostered the development of a common vocabulary infused with values such as scientific rigour and evaluation use. An evaluator believed evaluation processes had also led to the harmonization of EPs’ perceptions of the exemption and its impacts.
In 2011, EPs conveyed numerous examples of conceptual process use, including capacity building in evaluation (conceptualization, application and practice). An evaluator reported improvements over time in many of the HELP staff’s research, professional and management skills. One HELP staff member said working closely with evaluators was a source of inspiration, guidance and feedback that made him feel stronger and supported. Some reported that participating in evaluations helped their thinking become more rigorous, gave them another perspective on the program, highlighted the importance of measuring program effects and heightened their receptivity to evaluation. Another HELP staff member noted that it was when EPs really got involved in evaluations that they began to understand the findings and the value of evaluation, which in turn facilitated the integration of EU into the HELP organization. A HELP staff member said that participating in the evaluation dissemination process had many benefits, because the preparation and interactions involved required them to reflect more actively on the findings, which, in turn, enhanced their assimilation of the findings, making them more applicable. In his opinion, evaluation processes deepened and harmonized partners’ understanding of the exemption program, helping them find a common direction. A HELP coordinator also said, “By rubbing shoulders with the evaluation culture, we were won over!” (HS7). He described staff as being more prudent in their communications, using language that was measured, succinct, goal-oriented, scientific and evidence-based: “It prevents us from arguing over facts that are not backed up” (HS7). Another HELP staff member learned that precise communication with evaluators was helpful in obtaining results in tune with his information needs. An EP explained how the evaluation strategy expanded their professional networks, which facilitated information sharing and knowledge transfer. For all these reasons, various respondents believed other humanitarian NGOs involved in emergency action would also benefit from documenting the effects of their work.
Descriptions of conceptual process use changed between 2009 and 2011, as EPs suggested they had learned a great deal about evaluation, which changed their attitudes and behaviour with regard to evaluation activities. In 2011, respondents had more to say and were more enthusiastic about sharing the changes in their work, attitudes and understanding brought on by evaluation. Conceptual use appeared to have increased over time. Looking back over the evolution of the strategy, an evaluator highlighted the fact that the first evaluation activities, which proved useful for HELP, opened the way for the progressive development of the evaluation strategy as new funding was granted for each successive phase of the exemption project. In 2009, EPs were impatient to hear about the evaluation findings; once the evaluations were completed and the results shared, EPs became much more receptive to evaluators and convinced that program evaluation was pertinent for HELP. The evaluator pointed out that, as evaluation questions were answered, more were raised, and the evaluation strategy team progressively developed more evaluation activities. This was corroborated by documentation produced and shared by the evaluation strategy team. Thereafter, EPs used evaluation findings more frequently and EU became progressively mainstreamed into HELP’s exemption program.
Persuasive use of evaluation processes
In both 2009 and 2011, no respondent described any form of persuasive process use. In no instance did EPs describe having engaged in the evaluation process simply to satisfy the wish of their funding agency, to promote their own reputation or to convince others. As noted earlier, some spoke about engaging in the evaluation process, but their focus was more on using the findings than on the evaluation process itself.
The 2011 interviews shed light on the dynamics between some HELP staff and evaluators that inevitably influenced evaluation processes and perhaps EU. While these conditions influencing EU are a topic of their own to be covered in a future article, a few details provide valuable insight into the present study findings. For example, participants suggested that some HELP staff were reluctant to participate in the evaluation process partly because they did not completely trust the motives of evaluators who, according to them, may have been more concerned about furthering their research careers than about HELP’s actual mission. They expressed their discomfort to colleagues and to evaluators, but did not object to the conduct of evaluations and, in the end, found them useful.
As described in the methods section, non-participant observation and documentation provided valuable contextual information on the evaluation strategy and EPs. While systematic analysis of these data was not feasible due to time constraints, both sources provided relevant information. Non-participant observation enabled the first author to become immersed in the study context, to detect welcoming, collaborative and friendly dynamics between most EPs, and to observe that EPs were generally at ease in communicating with each other about questions and concerns. Certain other dynamics were also apparent, such as the relatively peaceful and friendly interactions between HELP staff and EPs. HELP staff tended to joke, tease one another, and laugh together. They had social gatherings on evenings and weekends. It was also apparent that some HELP staff tended to have more affinity than others with evaluators. All evaluators were warmly welcomed by HELP staff. While reluctance to trust evaluators’ motives was discussed only in individual interviews, informal discussions revealed that these issues had been discussed explicitly in team meetings. Team meetings appeared to foster frank and direct communication. Even so, various participants mentioned that, in Burkina Faso, anyone dealing with politics learns to communicate using a “langue de bois”, a diplomatic way of avoiding discussing sensitive issues directly, and this was indeed observed in interviews and interpersonal dynamics.
Collected documentation relating to the evaluation strategy and to collaborations among EPs also helped the first author become immersed in the working dynamics of EPs. It corroborated EPs’ discourses about increasing efforts over time to formalize agreements together by documenting contracts, report presentations and collaboration plans. Documents relating to evaluation activities and results (e.g. reports, scientific articles, policy briefs) proliferated between 2009 and 2011, supporting EPs’ descriptions of an increase in evaluation activities and EU over time. Emails between the principal evaluators and HELP coordinators were frequent from 2009 and too numerous to examine systematically, but generally their content demonstrated frank and transparent problem-solving, brainstorming and sharing of information about activities, events and scientific articles. As noted earlier, these forms of data were collected by the first author to complement the individual and group interview data and as a means of becoming better acquainted with the EPs’ working environment.
The present study enabled us to identify and provide rich descriptions of the different forms of EU in which EPs engaged between 2009 and 2011, as HELP’s evaluation strategy was rolled out. Descriptions of EU, including instrumental, conceptual and persuasive use of findings and/or processes, were generally more elaborate and specific in 2011, and EPs emphasized that EU had increased since 2009. EPs described all the forms of EU found in Alkin and Taut’s [ 33 ] categories, with the exception of persuasive (and symbolic) process use. Indeed, evaluation findings were used instrumentally by EPs for numerous purposes, including to identify program malfunctions and come up with solutions, to guide decisions and actions, and to manage and motivate colleagues. EPs also used findings conceptually in many ways, such as learning to see their program and work from an external perspective, recognizing the value of the exemption program and of their own work, communicating and motivating staff, and gaining an appreciation for the field reality and for program evaluation. EPs also used findings in a persuasive manner to convince others to support and scale up the exemption program. Persuading political decision-makers proved challenging, which corroborates Dagenais et al.’s [ 8 ] findings in the same national context and points to the common difficulty of making policymaking more evidence-based [ 54 , 55 ]. It became clear by 2011 that scientific knowledge was abundant and accessible to anyone interested, and therefore the evaluators felt they had done their work. It had also become clear that, to conserve the scientific rigour and neutrality expected of university researchers, the principal evaluators had to rethink their involvement in advocacy activities. Negotiating where KT ended and advocacy began presented an interesting challenge for external evaluators, HELP coordinators and other EPs. Financial limitations also led to difficult decisions regarding what KT activities could be undertaken, by whom, and for whom.
Participating in evaluations also prompted many instances of process use. Overall, the evaluation process provided countless opportunities for EPs to reflect upon their program and how they worked together and interacted. It provided opportunities to develop partnerships, communicate problems, and identify and implement potential solutions. It was clear, however, that issues of mistrust regarding evaluators’ motives and the allocation of evaluation resources were still taboo for several participants and not discussed openly among EPs. This may have negatively influenced their collaboration. Finding ways to overcome such challenges might result in more successful collaboration, evaluation participation and EU. Nevertheless, evaluation activities led EPs to learn about their program, evaluation processes and research methodology. By engaging in evaluations and interacting with evaluators, EPs learned to think in a different way about programs and scientific rigour. Since Patton’s original work [ 56 ] on utilization-focused evaluations, which described the benefits of participatory approaches and process use, many authors have documented the importance of engaging participants in the evaluation process [ 5 , 57 – 62 ]. The literature suggests that participation should ideally begin at conceptualization of an evaluation study [ 31 ]. While this may be ideal, the limited time and financial resources common to humanitarian practitioners, including in HELP’s organizational context, led some EPs to disinvest or invest only partially in the evaluation strategy. This was a source of frustration for evaluators and those more invested in the evaluation strategy. Yet, some EPs described how participating principally in the dissemination phase was helpful to them as a creative way of dealing with this issue of limited time, as it led them to invest in and reflect upon all the previous phases of evaluation that had led to the results they were mandated to present. This is an interesting option to consider when participating in all stages of all the evaluations is impossible, as it was for some EPs.
The reason for the absence of persuasive (symbolic) process use was not explained by our respondents, but Højlund’s [ 63 ] thoughts on an organization’s internal propensity and its external pressures to engage in evaluations provide interesting insights. More specifically, from the individual and group interview data, it was clear that, while HELP’s funders had requested the first evaluation, EPs felt little external pressure to undertake evaluations. The propensity to evaluate came from the inside, primarily from HELP’s coordinator, and the overall motives for evaluation were clear: to have credible findings to inform advocacy for accessible health services, and to learn about and improve the exemption program. Engaging in an evaluation process for symbolic reasons simply did not seem to be a concern for EPs. Respondents intended to use the evaluation findings, but not the process, for persuasive purposes.
A frequent challenge during the present study was to determine what exactly sparked EU. For instance, in the section above on instrumental process use in 2009, we discussed how evaluation discussions led participants to reconsider their approach and to seek more evaluation resources, develop the evaluation strategy, and form new collaborative networks and partnerships. It is difficult to pinpoint exactly when and why such attitude changes and decisions occurred. Were they prompted directly by discussions during an evaluation activity, which would clearly fall under process use, or did they arise simply from EPs being immersed in an evaluation strategy and thus in frequent interaction and communication with evaluators? This points to a limitation of the present study associated with respondents’ difficulty in recalling specifically what triggered a given decision or action. This issue was discussed by Leviton and Hughes [ 35 ], who described how, under such conditions, it is difficult to decipher where conceptual use ends and instrumental use begins and, in turn, to categorize use according to a specific EU taxonomy such as that of Alkin and Taut [ 33 ].
In the real-world setting of the present study, instrumental, conceptual and persuasive uses often overlapped and were not easily teased apart. Unsurprisingly, the current EU taxonomy has received its share of criticism for its operationalization challenges and for constraining the scope of evaluation consequences [ 64 – 66 ]. We encountered this challenge of limited scope when, for example, EPs discussed long-lasting effects the evaluation process had on them (e.g. expanded professional networks, increased funding for the evaluation strategy). While we were sufficiently able to decipher the source of such effects to categorize them using Alkin and Taut’s [ 33 ] EU taxonomy, Kirkhart’s [ 66 ] integrated theory of evaluation influence is admittedly better adapted to such situations. Kirkhart urged researchers to expand the scope of EU by acknowledging the full range of evaluation influences and suggested that existing conceptualizations of EU tend to overlook the value of process use and of uses that occur unintentionally or incrementally over time [ 66 ]. However, that model would also have presented its share of challenges, as our respondents were frequently unable to provide specific information about the source, intentionality or timeframe of influence, the three principal dimensions of the model. Providing such information was difficult for them, possibly because of the sheer number of evaluation activities undertaken as part of the evaluation strategy. We therefore concur with other authors that Alkin and Taut’s [ 33 ] taxonomy of EU remains relevant [ 10 ], as we found that it facilitated our in-depth examination of the multiple facets and specific forms (instrumental, conceptual, persuasive) of EU processes and findings over time. We agree with Mark [ 67 ] that, rather than reinventing the wheel, a reasonable solution is to see the concept of evaluation use not as competing with that of evaluation influence but rather as being complementary to it. This may help researchers, evaluators and intended users attend to an evaluation’s broad array of potential consequences when planning for, conducting or studying evaluations [ 67 ].
Another potential limitation of the study stems from the high mobility and turnover among participants, such that we were able to capture the evolving perspectives of only six EPs over the two data collections. Clarke and Ramalingam [ 68 ] discussed the fact that high turnover is common in humanitarian NGOs and presents both challenges (e.g. loss of organizational memory) and opportunities (e.g. bringing on new staff in line with evolving program objectives). Interviewing the same participants in both phases of the study might have produced different results, but the present findings reflect change processes that are common to the humanitarian sector reality. Patton [ 69 ] described turnover as the Achilles’ heel of utilization-focused evaluation and discussed the importance of working with multiple intended users so that the departure of one is not necessarily detrimental to EU. Such a challenge and solution apply to the present study, in which our aim was to follow multiple intended users who were present for either part or all of the study period. In fact, those interviewed in both data collections were four of the primary intended users (from HELP), an external evaluator, and an MoH representative. Hence, the study enabled us to examine the evolution of EU and how it was influenced by interpersonal dynamics and changing realities, such as turnover, that are common to many humanitarian NGOs, through the perspectives of EPs who had experienced the evaluation strategy in a variety of ways.
A third potential limitation of the study is that all three authors have, over time and to different degrees, developed professional and friendly relationships with various EPs, the second and third authors having acted as consultants for HELP. In a collaboration that evolves over time, this is not surprising and perhaps sometimes even desirable, but it may make it difficult to maintain the neutrality required of an external evaluator. Mitigating these human dimensions while navigating the numerous potential evaluator roles, as described by Patton and LaBossière [ 70 ], may have led to forms of normative discourse. Nevertheless, it is worth noting that the first author completed the research in total independence and without interference from HELP in the data. She undertook the study without payment and received only periodic material or logistical support from HELP when necessary to conduct the data collection. Also, only the first author, who never worked as a consultant for HELP, conducted the interviews and analyzed and interpreted the data. While most evaluation studies have examined a single evaluation study or a specific evaluation program at one point in time [for examples see 10], the present study examined EU over time, with data collections separated by 29 months, during an ongoing series of evaluation studies that were part of an evaluation strategy originating from a single evaluation study in Niger in 2007. This was challenging because the literature provided few examples to guide the conceptualization and conduct of the present study. Yet this was also the strength of the study, as it presented an innovative standpoint from which to examine EU. Future research may provide further guidance for the study of EU following a single evaluation or multiple evaluations embedded within an organization’s routine operations. Clearly, in our study context, evaluation partners’ EU evolved over time, and the study’s design enabled us to decipher the multiple forms in which EU occurred, including not only instrumental and conceptual forms of process and findings use, but also persuasive findings use. The study’s methodology was bolstered by our ability to seek out multiple groups of participants and thereby to triangulate perspectives. An important new contribution of the present study is, in fact, that it presents the views of both evaluators and intended users.
In 2004, a report by WHO emphasized the need to enhance the use of empirical knowledge in the health sector [ 23 ]. The following year, WHO members pledged to achieve universal healthcare and again highlighted the importance of using empirical evidence to guide global health policymaking and practices [ 26 ]. Nevertheless, how exactly are evaluations performed and used in global health and humanitarian contexts? Henry [ 65 ] pointed out that most of the EU literature is theoretical or conceptual and that very little of it examines EU systematically. Sandison [ 9 ] and Oliver [ 71 ] described how empirical research on EU within humanitarian organizations is particularly rare. HELP’s user fee exemption program presented an opportunity to include an evaluation strategy to study and document the processes, challenges, successes and impacts of the program. Simultaneously, this evaluation strategy itself presented an exceptional occasion to study and understand how evaluations can be both useful and actually used in the humanitarian context. In examining EU resulting from HELP’s evaluation strategy, the present case study helps bridge the knowledge-to-action gap by shedding light on the different ways HELP and its partners used evaluations. By studying how they collaborated to infuse EU into their practice and by examining how their discourses on EU evolved between 2009 and 2011, we determined that they increasingly used evaluation processes and findings instrumentally and conceptually, and used evaluation findings persuasively. Such uses served the mission of HELP’s exemption program in numerous ways by, among other things, supporting its members’ ability to think critically, improving their collaboration, identifying problems in the program and potential solutions, facilitating decision-making, and supporting HELP’s advocacy activities. In March 2016, we learned that Burkina Faso’s Ministerial Council [ 72 ] announced that, by April 2016, a national policy would be implemented to provide free healthcare for children under five and pregnant women, and to give women free access to caesarean sections and deliveries as well as to breast and cervical cancer screenings. While numerous barriers remain between empirical knowledge and its uptake in the political arena, and while it seems particularly difficult to use pilot studies to inform public policymaking [ 21 ], there is little doubt that HELP’s pilot exemption program and its associated evaluation strategy and advocacy activities, along with the work of partner organizations, played an important role in inspiring Burkina Faso’s recent policies. In a subsequent paper, we will discuss our analyses of the conditions that appear to have influenced EU among HELP’s evaluation partners.
Abbreviations
DRS: Directeur régional de la santé (regional health director)
ECHO: European Commission’s Humanitarian Aid and Civil Protection department
EP: Evaluation partner
HELP: Non-governmental organization Hilfe zur Selbsthilfe e.V.
KT: Knowledge transfer
MCD: Médecin chef de district (district chief physician)
MoH: Ministry of Health
NGO: Non-governmental organization
Darcy J, Knox Clarke P. Evidence & knowledge in humanitarian action. Background paper, 28th ALNAP meeting, Washington, DC, 5–7 March 2013. London: ALNAP; 2013.
Beck T. Evaluating humanitarian action: an ALNAP guidance booklet. London: ALNAP; 2003.
Crisp J. Thinking outside the box: evaluation and humanitarian action. Forced Migration Review. 2004;8:4–7.
Hallam A. Harnessing the power of evaluation in humanitarian action: An initiative to improve understanding and use of evaluation. ALNAP working paper. London: ALNAP/Overseas Development Institute; 2011.
Hallam A, Bonino F. Using evaluation for a change: insights from humanitarian practitioners. London: ALNAP/Overseas Development Institute; 2013.
ALNAP. Evaluating humanitarian action using the OECD-DAC criteria: an ALNAP guide for humanitarian agencies. London: ALNAP/Overseas Development Institute; 2006. http://www.alnap.org/pool/files/eha_2006.pdf . Accessed 11 January 2016.
Harvey P, Stoddard A, Harmer A, Taylor G, DiDomenico V, Brander L. The state of the humanitarian system: Assessing performance and progress. A pilot study. ALNAP working paper. London: ALNAP/Overseas Development Institute; 2010.
Dagenais C, Queuille L, Ridde V. Evaluation of a knowledge transfer strategy from a user fee exemption program for vulnerable populations in Burkina Faso. Global Health Promotion. 2013;20 Suppl 1:70–9. doi: 10.1177/1757975912462416 .
Sandison P. The utilisation of evaluations. ALNAP Review of Humanitarian Action in 2005: Evaluation utilisation. London: ALNAP/Overseas Development Institute; 2006. http://www.livestock-emergency.net/userfiles/file/common-standards/ALNAP-2006.pdf . Accessed 11 January 2016.
Cousins JB, Shulha LM. A comparative analysis of evaluation utilization and its cognate fields of enquiry. In: Shaw I, Greene JC, Mark M, editors. Handbook of evaluation: policies, programs and practices. Thousand Oaks: Sage Publications; 2006. p. 233–54.
Ridde V, Heinmüller R, Queuille L, Rauland K. Améliorer l’accessibilité financière des soins de santé au Burkina Faso. Glob Health Promot. 2011;18(1):110–3. doi: 10.1177/1757975910393193 .
Ridde V, Queuille L, Atchessi N, Samb O, Heinmüller R, Haddad S. The evaluation of an experiment in healthcare user fees exemption for vulnerable groups in Burkina Faso. Field ACTions Science Reports. 2012;Special issue 7:1–8.
Ridde V, Queuille L. User fees exemption: One step on the path toward universal access to healthcare. 2010. http://www.usi.umontreal.ca/pdffile/2010/exemption/exemption_va.pdf . Accessed 11 January 2016.
HELP. Annual Report 2008. Bonn: HELP-Hilfe zur Selbsthilfe e.V.; 2008. http://www.help-ev.de/fileadmin/media/pdf/Downloads/HELP_Annual_Report_engl_web.pdf . Accessed 22 November 2009.
INSD. La région du Sahel en chiffres. Ouagadougou: Ministère de l’Économie et des Finances; 2010.
World Health Organization. World health statistics 2007. Geneva: WHO; 2007.
World Health Organization. World Health Statistics 2014. Geneva: WHO; 2014.
Traoré C. Préface. In: Ridde V, Queuille L, Kafando Y, editors. Capitalisation de politiques publiques d'exemption du paiement des soins en Afrique de l'Ouest. Ouagadougou: CRCHUM/HELP/ECHO; 2012. p. 5–8.
Ridde V, Robert E, Meessen B. A literature review of the disruptive effects of user fee exemption policies on health systems. BMC Public Health. 2012;12:289.
Olivier de Sardan JP, Ridde V. Public policies and health systems in Sahelian Africa: theoretical context and empirical specificity. BMC Health Serv Res. 2015;15 Suppl 3:S3.
Ridde V. From institutionalization of user fees to their abolition in West Africa: a story of pilot projects and public policies. BMC Health Serv Res. 2015;15 Suppl 3:S6.
Ridde V, Queuille L. Capitaliser pour apprendre et changer les politiques publiques d'exemption du paiement des soins en Afrique de l'Ouest: une (r)évolution en cours? In: Ridde V, Queuille L, Kafando Y, editors. Capitalisation de politiques publiques d'exemption du paiement des soins en Afrique de l'Ouest. Ouagadougou: CRCHUM/HELP/ECHO; 2012. p. 9–14.
World Health Organization. World Report on Knowledge for Better Health: Strengthening Health Systems. Geneva: WHO; 2004.
Amnesty International. Burkina Faso: Giving life, risking death. Time for action to reduce maternal mortality in Burkina Faso. Index number: AFR 60/001/2010. London: Amnesty International; 2010.
World Conference on Science. Excerpts from the declaration on science and the use of scientific knowledge. Sci Commun. 1999;21(2):183–6.
World Health Organization. The World Health Report: Research for Universal Health Coverage. Geneva: WHO; 2013.
Ridde V, Diarra A, Moha M. User fees abolition policy in Niger. Comparing the under five years exemption implementation in two districts. Health Policy. 2011;99:219–25.
D’Ostie-Racine L, Dagenais C, Ridde V. An evaluability assessment of a West Africa based non-governmental organization's (NGO) progressive evaluation strategy. Eval Program Plann. 2013;36(1):71–9.
Shulha LM, Cousins JB. Evaluation use: theory, research, and practice since 1986. Eval Pract. 1997;18(3):195–208.
Herbert JL. Researching evaluation influence: a review of the literature. Eval Rev. 2014;38(5):388–419.
Patton MQ. Utilization-focused evaluation. 4th ed. Los Angeles: Sage Publications; 2008.
Patton MQ. Process use as a usefulism. N Dir Eval. 2007;116:99–112.
Alkin MC, Taut SM. Unbundling evaluation use. Stud Educ Eval. 2003;29:1–12.
Estabrooks C. The conceptual structure of research utilization. Res Nurs Health. 1999;22:203–16.
Leviton LC, Hughes EFX. Research on the utilization of evaluations. Eval Rev. 1981;5(4):525–48.
Weiss C. Introduction. In: Weiss C, editor. Using Social Research in Public Policy Making. Lanham: Lexington Books; 1977.
Yin RK. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Pt 2):1209.
Yin RK. Case study research: design and methods. Thousand Oaks: Sage publications; 2014.
Stake RE. Case studies. In: Denzin NK, Lincoln YS, editors. Strategies of qualitative inquiry. 2nd ed. Thousand Oaks: Sage; 2003.
Patton MQ. Qualitative evaluation and research methods. 2nd ed. New York: Sage; 1990.
Olivier de Sardan JP. L’enquête socio-anthropologique de terrain : synthèse méthodologique et recommandations à usage des étudiants. Niamey, Niger: LASDEL: Laboratoire d’études et recherches sur les dynamiques sociales et le développement local; 2003.
Creswell JW, Plano CV. Designing and conducting mixed methods research. Thousand Oaks: Sage Publications; 2006.
Pires AP. Échantillonnage et recherche qualitative: essai théorique et méthodologique. In: Poupart J, Deslauriers J-P, Groulx L-H, Laperrière A, Mayer R, Pires AP, editors. La recherche qualitative: Enjeux épistémologiques et méthodologiques. Montréal: Gaëtan Morin; 1997. p. 113–67.
Stake RE. Qualitative research: Studying how things work. New York: The Guilford Press; 2010.
Kitzinger J. The methodology of Focus Groups: the importance of interaction between research participants. Sociol Health Illness. 1994;16(1):103–21.
Kitzinger J. Qualitative research: introducing focus groups. BMJ. 1995;311(7000):299–302.
Miles MB, Huberman M. Qualitative data analysis: an expanded sourcebook. 2nd ed. Newbury Park: Sage Publications; 1994.
Morse JM, Barrett M, Mayan M, Olson K, Spiers J. Verification strategies for establishing reliability and validity in qualitative research. Int J Qualitative Methods. 2002;1(2):1–19.
Patton MQ. Qualitative research. Wiley Online Library. 2005. doi: 10.1002/0470013192.bsa514 .
Ritchie J, Lewis J, Nicholls CM, Ormston R. Qualitative research practice: a guide for social science students and researchers. New York: Sage; 2013.
Ridde V, Diarra A. A process evaluation of user fees abolition for pregnant women and children under five years in two districts in Niger (West Africa). BMC Health Serv Res. 2009;9:89.
Antarou L, Ridde V, Kouanda S, Queuille L. La charge de travail des agents de santé dans un contexte de gratuité des soins au Burkina Faso et au Niger [Health staff workload in a context of user fees exemption policy for health care in Burkina Faso and Niger]. Bull Soc Pathol Exot. 2013;106(4):264–71.
Samb O, Belaid L, Ridde V. Burkina Faso: la gratuité des soins aux dépens de la relation entre les femmes et les soignants? Humanitaire: Enjeux, pratiques, débats. 2013;35:4–43.
Knox Clarke P, Darcy J. Insufficient evidence? The quality and use of evaluation in humanitarian action. London: ALNAP/Overseas Development Institute; 2014.
Crewe E, Young J. Bridging research and policy: Context, evidence and links. Working Paper 173. London: Overseas Development Institute; 2002. http://www.odi.org.uk/publications/working_papers/wp173.pdf . Accessed 11 January 2016.
Patton MQ. Utilization-focused evaluation. 1st ed. Thousand Oaks: Sage; 1978.
Buchanan-Smith M, Cosgrave J. Evaluation of humanitarian action: Pilot guide. London: ALNAP/Overseas Development Institute; 2013.
Cousins JB. Organizational consequences of participatory evaluation: School district case study. In: Leithwood K, Louis KS, editors. Organizational learning in schools. New York: Taylor & Francis; 1998. p. 127–48.
Cousins JB. Utilization effects of participatory evaluation. In: Kellaghan T, Stufflebeam DL, Wingate LA, editors. International handbook of educational evaluation: Part two: Practice. Boston: Kluwer; 2003. p. 245–66.
Cousins JB, Earl LM. The case for participatory evaluation. Educ Eval Policy Analysis. 1992;14(4):397–418.
King JA. Developing evaluation capacity through process use. N Dir Eval. 2007;2007(116):45–59.
Patton MQ. Future trends in evaluation. In: Segone M, editor. From policies to results: Developing capacities for country monitoring and evaluation systems. Paris: UNICEF and IPEN; 2008. p. 44–56.
Højlund S. Evaluation use in the organizational context – changing focus to improve theory. Evaluation. 2014;20(1):26–43.
Henry G. Influential evaluations. Am J Eval. 2003;24(4):515–24.
Henry G. Beyond use: understanding evaluation's influence on attitudes and actions. Am J Eval. 2003;24(3):293–314.
Kirkhart KE. Reconceptualizing evaluation use: an integrated theory of influence. N Dir Eval. 2000;88:5–23.
Mark MM. Toward better research on—and thinking about—evaluation influence, especially in multisite evaluations. N Dir Eval. 2011;2011(129):107–19.
Clarke P, Ramalingam B. Organisational change in the humanitarian sector. London: ALNAP/Overseas Development Institute; 2008.
Patton MQ. Utilization-focused evaluation. 3rd ed. Thousand Oaks: Sage; 1997.
Patton MQ, LaBossière F. évaluation axée sur l'utilisation. In: Ridde V, Dagenais C, editors. Approches et pratiques en évaluation de programme. Montréal: Les Presses de l'Université de Montréal; 2009.
Oliver ML. Evaluation of emergency response: humanitarian aid agencies and evaluation influence. Dissertation, Georgia State University, 2008. http://scholarworks.gsu.edu/pmap_diss/23 . Accessed 11 Jan 2016.
Le Ministère du Burkina Faso. Compte-rendu du Conseil des ministres du mercredi 2 mars 2016. Portail officiel du gouvernement du Burkina Faso. Ouagadougou: Le Ministre de la Communication et des Relations avec le Parlement; 2016.
Acknowledgments
The authors wish to thank the two peer reviewers, whose feedback was especially helpful in improving the manuscript. Over the course of this study, Léna D’Ostie-Racine received funding from the Strategic Training Program in Global Health Research, a partnership of the Canadian Institutes of Health Research and the Québec Population Health Research Network. She was later also funded by the Fonds de recherche du Québec - Société et culture. The authors wish to express their utmost gratitude for the kind assistance and proactive participation of HELP managers and staff, the external evaluators, the district health management teams of Dori and Sebba in Burkina Faso, and the ECHO representatives, who together made this study possible. The authors also wish to thank Ludovic Queuille for his support throughout the study and for his insightful comments on previous drafts of the present article. The authors are also thankful to Didier Dupont for his consultations on qualitative analyses and to Karine Racicot for her remarkable help in reviewing and clarifying the application of the codebook. We also wish to thank all those, including Zoé Ouangré and Xavier Barsalou-Verge, who helped transcribe the interviews, which contained a vast array of African, Canadian and European accents. Our gratitude also goes out to all colleagues who provided support and insights throughout the study and/or commented on drafts of this article.
Authors’ contributions
All three authors conceptualized and designed the research project. Throughout the research project, LDR worked under the supervision, guidance and support of CD and VR. She developed the interview questions, collected the data, developed the thematic codebook, transcribed some interviews, and analyzed and interpreted the data independently. She also produced the manuscript. CD and VR reviewed and commented on drafts of the manuscript, providing input and guidance. All authors read and approved the final manuscript.
Authors’ information
Léna D’Ostie-Racine is a PhD student at the University of Montreal in research/clinical psychology. Her research thesis focuses on the use of program evaluation and conditions that influence the use of program evaluation processes and results, as well as on the development of an evaluation culture within the context of a humanitarian NGO promoting health equity.
Christian Dagenais, PhD, is associate professor at the University of Montreal. His research interests are centred around program evaluation and knowledge transfer. He coordinated a thematic segment of the Canadian Journal of Program Evaluation in 2009 and is a co-author of the book Approches et pratiques en évaluation de programme published in 2012. Since 2009, he has led the RENARD team ( www.equiperenard.ca ), which is funded by the Fonds de recherche du Quebec – Société et culture and is the first cross-disciplinary group in Quebec devoted to studying knowledge transfer in social interventions, including educational, health and community services.
Valéry Ridde, PhD, is associate professor of global health in the Department of Social and Preventive Medicine and the Research Institute (IRSPUM) of the University of Montreal School of Public Health. His research interests are centred around program evaluation, global health and healthcare accessibility ( www.equitesante.org ). VR holds a Canadian Institutes of Health Research (CIHR) funded Research Chair in Applied Public Health [CPP 137901].
Sources of support
The first author received financial support from the Fonds de recherche du Québec – Société et culture (FRQSC) and support from Équipe RENARD.
Author information
Authors and Affiliations
Department of Psychology, University of Montreal, Pavillon Marie-Victorin, Room C355, P.O. Box 6128, Centre-ville Station, Montreal, Quebec, H3C 3J7, Canada
Léna D’Ostie-Racine & Christian Dagenais
Department of Social and Preventive Medicine, University of Montreal School of Public Health (ESPUM), Montreal, Canada
Valéry Ridde
University of Montreal Public Health Research Institute (IRSPUM), Montreal, Canada
Corresponding author
Correspondence to Léna D’Ostie-Racine.
Ethics declarations
Competing interests
The first author has benefited from HELP’s logistical assistance. The second and third authors have both worked as consultants for HELP. The funders and the NGO HELP did not take part in decisions on the study design, data collection or analysis, nor in the preparation and publication of the manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.
Reprints and permissions
About this article
Cite this article
D’Ostie-Racine, L., Dagenais, C. & Ridde, V. A qualitative case study of evaluation use in the context of a collaborative program evaluation strategy in Burkina Faso. Health Res Policy Sys 14, 37 (2016). https://doi.org/10.1186/s12961-016-0109-0
Received : 27 February 2015
Accepted : 29 April 2016
Published : 26 May 2016
DOI : https://doi.org/10.1186/s12961-016-0109-0
- Utilization
- Program evaluation
- Burkina Faso (West Africa)
- User fee exemption
Case Pack: Program Evaluation
If you are like me, one of the key barriers in adopting cases is finding the right cases for your course. With this in mind, the HKS Case Program created this guide to simplify the process of identifying and selecting a case in the realm of program evaluation.
Cases help bring some of the course material to life and can be a great complement to other pedagogic approaches. For example, while the students in my course learn about different methods to evaluate the impact of the program, using the Jamaica PATH case has helped them more clearly see the advantages and disadvantages of these methods and, perhaps just as importantly, the tradeoffs that are often made in trying to select an evaluation design in the real world.
I hope this guide is as useful to you as it has been to me.
These short cases are designed to improve learning on a host of topics, from theory of change to impact evaluations, while underscoring the everyday tradeoffs organizations face when trying to do scientifically rigorous work in the practical constraints of the real world.
Theme I: Using Evidence
A. Theory of Change
Case: Women as Leaders: Lessons from Political Quotas in India. Length: 7 pages. Learning Objective: Students learn how to assess the theory of change of an intervention aimed at improving political representation of women in India and also learn how to (and how not to) assess the causal effect of an intervention.
Case: Scared Straight: Freeport City Council Takes on Juvenile Delinquency. Length: 4 pages. Learning Objective: Students learn what is meant by "evidence" and what different forms of evidence they can draw on.
B. The Role of Evidence in Social Programs
Case: Providing Pensions for the Poor: Targeting Cash Transfers for the Elderly in Mexico. Length: 5 pages. Learning Objective: Introduces students to the use of evidence in social programs in the context of having to decide between three possible targeting mechanisms that the government of Mexico considered when launching a pension program for the elderly. Students are put in a position to confront some challenging tradeoffs between targeting accuracy and political, financial, and operational feasibility.
Theme II: Impact Evaluations
A. Importance of Evaluation Design
Video Case: New York City’s Teen ACTION Program: An Evaluation Gone Awry. Length: 2 videos (total 10 minutes); 4-page background note. Learning Objective: Highlights the importance of good evaluation design, and raises key issues around statistical power, measurement error, internal validity, etc.
B. How to Design an Impact Evaluation (Non-Experimental Methods)
Case: Designing Impact Evaluations: Assessing Jamaica’s PATH Program. Length: 4 pages. Learning Objective: Students are asked to assess the strengths and weaknesses of three possible evaluation designs using as criteria the scientific rigor, political feasibility, logistical implications, and financial feasibility of each design. They are put in a position to confront the inevitable tradeoffs when trying to balance the scientific rigor of an evaluation design with the practical constraints of implementing it.
C. How to Design an Impact Evaluation (Experimental Methods)
Case: Devil in the Details: Designing a Social Impact Bond Agreement in Medellin. Length: 15 pages. Learning Objective: Under the challenge of crafting a Social Impact Bond to help reduce the teenage pregnancy rate in Medellin (Colombia), students are asked to grapple with key issues in designing an impact evaluation, including selecting treatment and control groups and identifying outcome measures that are both rigorous and practical to collect under the constraints the protagonists of the case face.
Theme III: Qualitative Analysis
Case: The Geography of Poverty: Exploring the Role of Neighborhoods in the Lives of Urban, Adolescent Poor. Length: 5 pages. Learning Objective: Students learn about the functions of qualitative research and closely examine how qualitative research can enhance the interpretation of quantitative data in the context of a mixed methods evaluation of Moving To Opportunity, a very ambitious anti-poverty program that consisted of offering low-income families in five U.S. cities (Baltimore, Boston, Chicago, Los Angeles and New York) the opportunity to move to neighborhoods with low levels of poverty.
Faculty Profile
Dan Levy , Senior Lecturer in Public Policy and Faculty Chair of the Kennedy School's SLATE Initiative , teaches courses in quantitative methods and program evaluation. He has served as a senior researcher at Mathematica Policy Research, a faculty affiliate at the Poverty Action Lab (MIT), and as consultant to several organizations including the World Bank.
Recent Cases by Dan
- Betting Private Capital on Fixing Public Ills: Instiglio Brings Social Impact Bonds to Colombia
- Providing Pensions for the Poor: Targeting Cash Transfers for the Elderly in Mexico
- Devil in the Details: Designing a Social Impact Bond Agreement in Medellin
A qualitative case study of evaluation use in the context of a collaborative program evaluation strategy in Burkina Faso
- PMID: 27230298
- PMCID: PMC4880829
- DOI: 10.1186/s12961-016-0109-0
Background: Program evaluation is widely recognized in the international humanitarian sector as a means to make interventions and policies more evidence based, equitable, and accountable. Yet, little is known about the way humanitarian non-governmental organizations (NGOs) actually use evaluations.
Methods: The current qualitative evaluation employed an instrumental case study design to examine evaluation use (EU) by a humanitarian NGO based in Burkina Faso. This organization developed an evaluation strategy in 2008 to document the implementation and effects of its maternal and child healthcare user fee exemption program. Program evaluations have been undertaken ever since, and the present study examined the discourses of evaluation partners in 2009 (n = 15) and 2011 (n = 17). Semi-structured individual interviews and one group interview were conducted to identify instances of EU over time. Alkin and Taut's (Stud Educ Eval 29:1-12, 2003) conceptualization of EU was used as the basis for thematic qualitative analyses of the different forms of EU identified by stakeholders of the exemption program in the two data collection periods.
Results: Results demonstrated that stakeholders began to understand and value the utility of program evaluations once they were exposed to evaluation findings and then progressively used evaluations over time. EU was manifested in a variety of ways, including instrumental and conceptual use of evaluation processes and findings, as well as the persuasive use of findings. Such EU supported planning, decision-making, program practices, evaluation capacity, and advocacy.
Conclusions: The study sheds light on the many ways evaluations can be used by different actors in the humanitarian sector. Conceptualizations of EU are also critically discussed.
Keywords: Burkina Faso (West Africa); Evaluation use; Healthcare; Knowledge transfer; Program evaluation; User fee exemption; Utilization.
Similar articles
- Evaluation of a knowledge transfer strategy from a user fee exemption program for vulnerable populations in Burkina Faso. Dagenais C, Queuille L, Ridde V. Dagenais C, et al. Glob Health Promot. 2013 Mar;20(1 Suppl):70-9. doi: 10.1177/1757975912462416. Glob Health Promot. 2013. PMID: 23549706
- An evaluability assessment of a West Africa based Non-Governmental Organization's (NGO) progressive evaluation strategy. D'Ostie-Racine L, Dagenais C, Ridde V. D'Ostie-Racine L, et al. Eval Program Plann. 2013 Feb;36(1):71-9. doi: 10.1016/j.evalprogplan.2012.07.002. Epub 2012 Jul 21. Eval Program Plann. 2013. PMID: 22885653
- Free versus subsidised healthcare: options for fee exemptions, access to care for vulnerable groups and effects on the health system in Burkina Faso. Yaogo M. Yaogo M. Health Res Policy Syst. 2017 Jul 12;15(Suppl 1):58. doi: 10.1186/s12961-017-0210-z. Health Res Policy Syst. 2017. PMID: 28722559 Free PMC article.
- Stakeholder perceptions and experiences from the implementation of the Gratuité user fee exemption policy in Burkina Faso: a qualitative study. Banke-Thomas A, Offosse MJ, Yameogo P, Manli AR, Goumbri A, Avoka C, Boxshall M, Eboreime E. Banke-Thomas A, et al. Health Res Policy Syst. 2023 Jun 6;21(1):46. doi: 10.1186/s12961-023-01008-3. Health Res Policy Syst. 2023. PMID: 37280694 Free PMC article.
- Strengthening equitable health systems in West Africa: The regional project on governance research for equity in health systems. Keita N, Uzochukwu B, Ky-Zerbo O, Sombié I, Lokossou V, Johnson E, Okeke C, Godt S. Keita N, et al. Afr J Reprod Health. 2022 May;26(5):81-89. doi: 10.29063/ajrh2022/v26i5.9. Afr J Reprod Health. 2022. PMID: 37585100 Review.
- Validating the evaluation capacity scale among practitioners in non-governmental organizations. Ngai SS, Cheung CK, Li Y, Zhao L, Wang L, Jiang S, Tang HY, Yu EN. Ngai SS, et al. Front Psychol. 2022 Dec 23;13:1082313. doi: 10.3389/fpsyg.2022.1082313. eCollection 2022. Front Psychol. 2022. PMID: 36619086 Free PMC article.
A case study focuses on a particular unit - a person, a site, a project. It often uses a combination of quantitative and qualitative data.
Case studies can be particularly useful for understanding how different elements fit together and how different elements (implementation, context and other factors) have produced the observed impacts.
There are different types of case studies, which can be used for different purposes in evaluation. The GAO (Government Accountability Office) has described six different types of case study:
1. Illustrative: This is descriptive in character and intended to add realism and in-depth examples to other information about a program or policy. (These are often used to complement quantitative data by providing examples of the overall findings).
2. Exploratory: This is also descriptive but is aimed at generating hypotheses for later investigation rather than simply providing illustration.
3. Critical instance: This examines a single instance of unique interest, or serves as a critical test of an assertion about a program, problem or strategy.
4. Program implementation: This investigates operations, often at several sites, and often with reference to a set of norms or standards about implementation processes.
5. Program effects: This examines the causal links between the program and observed effects (outputs, outcomes or impacts, depending on the timing of the evaluation) and usually involves multisite, multimethod evaluations.
6. Cumulative: This brings together findings from many case studies to answer evaluative questions.
The following guides are particularly recommended because they distinguish between the research design (case study) and the type of data (qualitative or quantitative), and provide guidance on selecting cases, addressing causal inference, and generalizing from cases.
This guide from the US General Accounting Office outlines good practice in case study evaluation and establishes a set of principles for applying case studies to evaluations.
This paper, authored by Edith D. Balbach for the California Department of Health Services, is designed to help evaluators decide whether to use a case study evaluation approach.
This guide, written by Linda G. Morra and Amy C. Friedlander for the World Bank, provides guidance and advice on the use of case studies.
Resources related to 'Case study':
- Broadening the range of designs and methods for impact evaluations
- Case studies in action
- Case study evaluations - US General Accounting Office
- Case study evaluations - World Bank
- Comparative case studies
- Compasss: Comparative methods for systematic cross-case analysis
- Dealing with paradox – Stories and lessons from the first three years of consortium-building
- Designing and facilitating creative conversations & learning activities
- Estudo de caso: a avaliação externa de um programa
- Evaluation tools
- Evaluations that make a difference
- Introduction to qualitative research methodology
- Methods for monitoring and evaluation
- Qualitative research & evaluation methods: Integrating theory and practice
- Reflections on innovation, assessment and social change processes: A SPARC case study, India
- Toward a listening bank: A review of best practices and the efficacy of beneficiary assessment
- UNICEF webinar: Comparative case studies
- Using case studies to do program evaluation
Case Study Evaluation Approach
A case study evaluation approach can be an incredibly powerful tool for monitoring and evaluating complex programs and policies. By identifying common themes and patterns, this approach allows us to better understand the successes and challenges faced by the program. In this article, we’ll explore the benefits of using a case study evaluation approach in the monitoring and evaluation of projects, programs, and public policies.
Table of Contents
- Introduction to Case Study Evaluation Approach
- The Advantages of a Case Study Evaluation Approach
- Types of Case Studies
- Potential Challenges with a Case Study Evaluation Approach
- Guiding Principles for Successful Implementation of a Case Study Evaluation Approach
- Benefits of Incorporating the Case Study Evaluation Approach in the Monitoring and Evaluation of Projects and Programs
A case study evaluation approach is a great way to gain an in-depth understanding of a particular issue or situation. This type of approach allows the researcher to observe, analyze, and assess the effects of a particular situation on individuals or groups.
An individual, a location, or a project may serve as the focal point of a case study’s attention. Quantitative and qualitative data are frequently used in conjunction with one another.
It also allows the researcher to gain insights into how people react to external influences. By using a case study evaluation approach, researchers can see how certain factors, such as a policy change or a new technology, have affected individuals and communities. The data gathered through this approach can be used to formulate effective strategies for responding to changes and challenges. Ultimately, this monitoring and evaluation approach helps organizations make better decisions about the implementation of their plans.
This approach can be used to assess the effectiveness of a policy, program, or initiative by considering specific elements such as implementation processes, outcomes, and impact. A case study evaluation approach can provide an in-depth understanding of the effectiveness of a program by closely examining the processes involved in its implementation. This includes understanding the context, stakeholders, and resources to gain insight into how well a program is functioning or has been executed. By evaluating these elements, it can help to identify areas for improvement and suggest potential solutions. The findings from this approach can then be used to inform decisions about policies, programs, and initiatives for improved outcomes.
It is also useful for determining whether comparable policies, programs, or initiatives could be applied in similar situations to achieve comparable or improved outcomes. In short, the case study monitoring and evaluation approach is an effective method for judging the effectiveness of specific policies, programs, or initiatives, and for identifying lessons from previous cases that could be adapted to new contexts.
A case study evaluation approach offers the advantage of providing in-depth insight into a particular program or policy. This can be accomplished by analyzing data and observations collected from a range of stakeholders such as program participants, service providers, and community members. The monitoring and evaluation approach is used to assess the impact of programs and inform the decision-making process to ensure successful implementation. The case study monitoring and evaluation approach can help identify any underlying issues that need to be addressed in order to improve program effectiveness. It also provides a reality check on how well programs are actually working, allowing organizations to make adjustments as needed. Overall, a case study monitoring and evaluation approach helps to ensure that policies and programs are achieving their objectives while providing valuable insight into how they are performing.
By taking a qualitative approach to data collection and analysis, case study evaluations are able to capture nuances in the context of a particular program or policy that can be overlooked when relying solely on quantitative methods. Using this approach, insights can be gleaned from looking at the individual experiences and perspectives of actors involved, providing a more detailed understanding of the impact of the program or policy than is possible with other evaluation methodologies. As such, case study monitoring evaluation is an invaluable tool in assessing the effectiveness of a particular initiative, enabling more informed decision-making as well as more effective implementation of programs and policies.
Furthermore, this approach is an effective way to uncover experiential information that can inform the ongoing improvement of policy and programming over time. By analyzing the data gathered through this systematic approach, stakeholders can gain deeper insight into how best to make meaningful, long-term changes in their organizations.
Case studies come in a variety of forms, each of which can be put to a unique set of evaluation tasks. Evaluators have come to a consensus on describing six distinct sorts of case studies, which are as follows: illustrative, exploratory, critical instance, program implementation, program effects, and cumulative.
Illustrative Case Study
An illustrative case study is a type of case study that is used to provide a detailed and descriptive account of a particular event, situation, or phenomenon. It is often used in research to provide a clear understanding of a complex issue, and to illustrate the practical application of theories or concepts.
An illustrative case study typically uses qualitative data, such as interviews, surveys, or observations, to provide a detailed account of the unit being studied. The case study may also include quantitative data, such as statistics or numerical measurements, to provide additional context or to support the qualitative data.
The goal of an illustrative case study is to provide a rich and detailed description of the unit being studied, and to use this information to illustrate broader themes or concepts. For example, an illustrative case study of a successful community development project may be used to illustrate the importance of community engagement and collaboration in achieving development goals.
One of the strengths of an illustrative case study is its ability to provide a detailed and nuanced understanding of a particular issue or phenomenon. By focusing on a single case, the researcher is able to provide a detailed and in-depth analysis that may not be possible through other research methods.
However, one limitation of an illustrative case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a single unit, it may not be representative of other similar units or situations.
A well-executed case study can shed light on wider research topics or concepts through its thorough and descriptive analysis of a specific event or phenomenon.
Exploratory Case Study
An exploratory case study is a type of case study that is used to investigate a new or previously unexplored phenomenon or issue. It is often used in research when the topic is relatively unknown or when there is little existing literature on the topic.
Exploratory case studies are typically qualitative in nature and use a variety of methods to collect data, such as interviews, observations, and document analysis. The focus of the study is to gather as much information as possible about the phenomenon being studied and to identify new and emerging themes or patterns.
The goal of an exploratory case study is to provide a foundation for further research and to generate hypotheses about the phenomenon being studied. By exploring the topic in-depth, the researcher can identify new areas of research and generate new questions to guide future research.
One of the strengths of an exploratory case study is its ability to provide a rich and detailed understanding of a new or emerging phenomenon. By using a variety of data collection methods, the researcher can gather a broad range of data and perspectives to gain a more comprehensive understanding of the phenomenon being studied.
However, one limitation of an exploratory case study is that the findings may not be generalizable to other contexts or populations. Because the study is focused on a new or previously unexplored phenomenon, the findings may not be applicable to other situations or populations.
Exploratory case studies are an effective research strategy for learning about novel occurrences, developing research hypotheses, and gaining a deep familiarity with a topic of study.
Critical Instance Case Study
A critical instance case study is a type of case study that focuses on a specific event or situation that is critical to understanding a broader issue or phenomenon. The goal of a critical instance case study is to analyze the event in depth and to draw conclusions about the broader issue or phenomenon based on the analysis.
A critical instance case study typically uses qualitative data, such as interviews, observations, or document analysis, to provide a detailed and nuanced understanding of the event being studied. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.
The critical instance case study is often used in research when a particular event or situation is critical to understanding a broader issue or phenomenon. For example, a critical instance case study of a successful disaster response effort may be used to identify key factors that contributed to the success of the response, and to draw conclusions about effective disaster response strategies more broadly.
One of the strengths of a critical instance case study is its ability to provide a detailed and in-depth analysis of a particular event or situation. By focusing on a critical instance, the researcher is able to provide a rich and nuanced understanding of the event, and to draw conclusions about broader issues or phenomena based on the analysis.
However, one limitation of a critical instance case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific event or situation, the findings may not be applicable to other similar events or situations.
A critical instance case study is a valuable research method that can provide a detailed and nuanced understanding of a particular event or situation and can be used to draw conclusions about broader issues or phenomena based on the analysis.
Program Implementation Case Study
A program implementation case study is a type of case study that focuses on the implementation of a particular program or intervention. The goal of the case study is to provide a detailed and comprehensive account of the program implementation process, and to identify factors that contributed to the success or failure of the program.
Program implementation case studies typically use qualitative data, such as interviews, observations, and document analysis, to provide a detailed and nuanced understanding of the program implementation process. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.
The program implementation case study is often used in research to evaluate the effectiveness of a particular program or intervention, and to identify strategies for improving program implementation in the future. For example, a program implementation case study of a school-based health program may be used to identify key factors that contributed to the success or failure of the program, and to make recommendations for improving program implementation in similar settings.
One of the strengths of a program implementation case study is its ability to provide a detailed and comprehensive account of the program implementation process. By using qualitative data, the researcher is able to capture the complexity and nuance of the implementation process, and to identify factors that may not be captured by quantitative data alone.
However, one limitation of a program implementation case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific program or intervention, the findings may not be applicable to other programs or interventions in different settings.
An effective research tool, a case study of program implementation may illuminate the intricacies of the implementation process and point the way towards future enhancements.
Program Effects Case Study
A program effects case study is a research method that evaluates the effectiveness of a particular program or intervention by examining its outcomes or effects. The purpose of this type of case study is to provide a detailed and comprehensive account of the program’s impact on its intended participants or target population.
A program effects case study typically employs both quantitative and qualitative data collection methods, such as surveys, interviews, and observations, to evaluate the program’s impact on the target population. The data is then analyzed using statistical and thematic analysis to identify patterns and themes that emerge from the data.
The program effects case study is often used to evaluate the success of a program and identify areas for improvement. For example, a program effects case study of a community-based HIV prevention program may evaluate the program’s effectiveness in reducing HIV transmission rates among high-risk populations and identify factors that contributed to the program’s success.
One of the strengths of a program effects case study is its ability to provide a detailed and nuanced understanding of a program’s impact on its intended participants or target population. By using both quantitative and qualitative data, the researcher can capture both the objective and subjective outcomes of the program and identify factors that may have contributed to the outcomes.
However, a limitation of the program effects case study is that it may not be generalizable to other populations or contexts. Since the case study focuses on a particular program and population, the findings may not be applicable to other programs or populations in different settings.
A program effects case study is therefore a useful research method because it can provide a detailed picture of how a program affects the people it is intended to serve, and it can be used to identify what needs to change in order to make programs more effective.
Cumulative Case Study
A cumulative case study is a type of case study that involves the collection and analysis of multiple cases to draw broader conclusions. Unlike a single-case study, which focuses on one specific case, a cumulative case study combines multiple cases to provide a more comprehensive understanding of a phenomenon.
The purpose of a cumulative case study is to build up a body of evidence through the examination of multiple cases. The cases are typically selected to represent a range of variations or perspectives on the phenomenon of interest. Data is collected from each case using a range of methods, such as interviews, surveys, and observations.
The data is then analyzed across cases to identify common themes, patterns, and trends. The analysis may involve both qualitative and quantitative methods, such as thematic analysis and statistical analysis.
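Where coded excerpts from multiple cases are stored electronically, even a very simple cross-case tally can make recurring themes visible. The sketch below is a minimal illustration of that idea only; the site names, theme labels, and coded excerpts are hypothetical stand-ins for whatever codebook and data a given evaluation actually uses.

```python
from collections import Counter, defaultdict

# Hypothetical coded excerpts: (case, theme) pairs produced by applying a
# shared codebook to interview transcripts from several program sites.
coded_excerpts = [
    ("site_A", "community_engagement"),
    ("site_A", "staff_turnover"),
    ("site_B", "community_engagement"),
    ("site_B", "funding_gaps"),
    ("site_C", "community_engagement"),
    ("site_C", "staff_turnover"),
]

# Build a case-by-theme matrix: how often each theme was coded in each case.
matrix = defaultdict(Counter)
for case, theme in coded_excerpts:
    matrix[case][theme] += 1

# Count, for each theme, the number of cases in which it appears at all:
# a rough cue for cross-case patterns worth closer qualitative attention.
theme_reach = Counter()
for case, themes in matrix.items():
    for theme in themes:
        theme_reach[theme] += 1

for theme, n_cases in theme_reach.most_common():
    print(f"{theme}: appears in {n_cases} of {len(matrix)} cases")
```

A tally like this does not replace interpretive analysis; it simply flags which themes recur across cases and therefore merit deeper examination.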
The cumulative case study is often used in research to develop and test theories about a phenomenon. For example, a cumulative case study of successful community-based health programs may be used to identify common factors that contribute to program success, and to develop a theory about effective community-based health program design.
One of the strengths of the cumulative case study is its ability to draw on a range of cases to build a more comprehensive understanding of a phenomenon. By examining multiple cases, the researcher can identify patterns and trends that may not be evident in a single case study. This allows for a more nuanced understanding of the phenomenon and helps to develop more robust theories.
However, one limitation of the cumulative case study is that it can be time-consuming and resource-intensive to collect and analyze data from multiple cases. Additionally, the selection of cases may introduce bias if the cases are not representative of the population of interest.
In summary, a cumulative case study is a valuable research method that can provide a more comprehensive understanding of a phenomenon by examining multiple cases. This type of case study is particularly useful for developing and testing theories and identifying common themes and patterns across cases.
When conducting a case study evaluation approach, one of the main challenges is the need to establish a contextually relevant research design that accounts for the unique factors of the case being studied. This requires close monitoring of the case, its environment, and relevant stakeholders. In addition, the researcher must build a framework for the collection and analysis of data that is able to draw meaningful conclusions and provide valid insights into the dynamics of the case. Ultimately, an effective case study monitoring evaluation approach will allow researchers to form an accurate understanding of their research subject.
Additionally, depending on the size and scope of the case, there may be concerns regarding the availability of resources and personnel that could be allocated to data collection and analysis. To address these issues, a case study monitoring and evaluation approach can draw on a mix of methods such as interviews, surveys, focus groups and document reviews. Such an approach can provide valuable insights into the effectiveness and implementation of the case in question, and it can be tailored to the specific needs of the case study to ensure that all relevant data are collected and handled appropriately.
When dealing with highly sensitive or confidential subject matter, researchers must take extra measures to prevent bias during data collection and to protect participant anonymity, while still collecting valid data so that the results remain reliable. Maintaining confidentiality and deploying ethical research practices are essential to an unbiased and accurate evaluation.
When planning and implementing a case study evaluation approach, it is important to ensure the guiding principles of research quality, data collection, and analysis are met. To ensure these principles are upheld, it is essential to develop a comprehensive monitoring and evaluation plan. This plan should clearly outline the steps to be taken during the data collection and analysis process. Furthermore, the plan should provide detailed descriptions of the project objectives, target population, key indicators, and timeline. It is also important to include metrics or benchmarks to monitor progress and identify any potential areas for improvement. By implementing such an approach, it will be possible to ensure that the case study evaluation approach yields valid and reliable results.
To ensure successful implementation, it is essential to establish a reliable data collection process that documents the scope of the study, the participants involved, and the methods used to collect data. It is equally important to be clear from the outset about what the evaluation will examine and how the results will be used. Ultimately, effective planning is key to ensuring that the evaluation process yields meaningful insights.
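As one concrete, purely illustrative way of keeping such a plan honest, the key indicators, baselines, and benchmarks it names can be recorded in a simple structure and checked against progress as data come in. The indicator names and values below are invented for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str          # what is being measured
    baseline: float    # value at the start of the study period
    target: float      # benchmark the plan aims to reach
    current: float     # latest observed value

# Hypothetical indicators for a case study monitoring and evaluation plan.
indicators = [
    Indicator("facility_visits_per_month", baseline=120, target=200, current=165),
    Indicator("stakeholder_interviews_completed", baseline=0, target=30, current=22),
]

for ind in indicators:
    span = ind.target - ind.baseline
    progress = (ind.current - ind.baseline) / span if span else 1.0
    print(f"{ind.name}: {progress:.0%} of the way from baseline to target")
```

Tracking progress in this way makes it easier to spot, early on, the potential areas for improvement that the monitoring and evaluation plan is meant to surface.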
Benefits of Incorporating the Case Study Evaluation Approach in the Monitoring and Evaluation of Projects and Programmes
Using a case study approach in monitoring and evaluation allows for a more detailed and in-depth exploration of a project’s success, helping to identify key areas of improvement and successes that may have been overlooked through traditional evaluation. Through this case study method, specific data can be collected and analyzed to identify trends and different perspectives that can support the evaluation process. This data can allow stakeholders to gain a better understanding of the project’s successes and failures, helping them make informed decisions on how to strengthen current activities or shape future initiatives. From a monitoring and evaluation standpoint, this approach can improve the accuracy with which the effectiveness of a project is assessed.
This can provide valuable insights into what works, and what does not, when it comes to implementing projects and programs, aiding decision-makers in making future plans that better meet their objectives. The case study is, however, only one way of assessing success: other research methods, such as surveys and interviews, can also help to evaluate a project or program.
In conclusion, a case study evaluation approach can be incredibly useful in monitoring and evaluating complex programs and policies. By exploring key themes, patterns and relationships, organizations can gain a detailed understanding of the successes, challenges and limitations of their program or policy. This understanding can then be used to inform decision-making and improve outcomes for those involved. With its ability to provide an in-depth understanding of a program or policy, the case study evaluation approach has become an invaluable tool for monitoring and evaluation professionals.