Research Methods for Counseling: An Introduction
- Robert J. Wright
Supplements
Password-protected Instructor Resources include the following:
- A Microsoft® Word® test bank containing multiple-choice, true/false, short-answer, and essay questions for each chapter. The test bank provides a diverse range of pre-written items, and every question can be edited or supplemented with your own personalized questions to effectively assess students’ progress and understanding.
Well organized. Covered all required content, and presented concepts in a manner that is relevant and meaningful to mental health counselors.
Worked great, a great, helpful, thoughtful text.
Covers each area in a comprehensive and concise manner. This is a great book to encourage research for future counselors.
KEY FEATURES
- Shows students how to prepare research proposals and presentations.
- Offers techniques for designing valid research and evaluating published research.
- Provides insight into research areas identified by CACREP as essential for counselors.
- Includes over 150 illustrative excerpts from counseling research articles, offering context for understanding key research concepts.
- Features Case in Point boxes to illustrate research ideas in practice.
- Begins each chapter with Introductions and Themes and Objectives to prepare students to better understand the material that follows.
- Contains discussion questions in each chapter to help students further analyze the content.
9 Methodologies in Counseling Psychology
Nancy E. Betz, Department of Psychology, The Ohio State University, Columbus, Ohio
Ruth E. Fassinger, College of Graduate and Professional Studies, John F. Kennedy University, Pleasant Hill, CA
- Published: 18 September 2012
This chapter reviews quantitative and qualitative methodologies most frequently used in counseling psychology research. We begin with a review of the paradigmatic bases and epistemological stances of quantitative and qualitative research, followed by overviews of both approaches to empirical research in counseling psychology. In these overviews, our goal is to provide a broad conceptual understanding of the “why” of these methods. Among the quantitative methods receiving attention are analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA), factor analysis, structural equation modeling, and discriminant analysis. We include discussion of such qualitative methods as grounded theory, narratology, phenomenology, ethnography, and participatory action research. Important general issues in designing qualitative studies are also discussed. The chapter concludes with a discussion of mixed methods in research and the importance of knowledge of both major approaches in order to maximally utilize findings from our rich and diverse counseling psychology literature.
All research in counseling psychology is based on the fundamental principle that we learn about people—their thoughts, feelings, and behaviors—by “observing” them in some systematic manner. Certainly, we learn much about people from the processes of informal observation, as might be the case when we watch someone interact at a party and make inferences about his or her social skills. But to advance science, our observations must be done under conditions that include some sort of control over the process of gathering those observations. The observations can be gained in a variety of ways: via formalized assessments or measures, from interviews or other kinds of narratives, through structured viewing and tracking of behavior, or even from cultural artifacts that provide information about the phenomenon of interest.
Broadly speaking, observations can be divided into those yielding numeric representations and those yielding linguistic representations, and each offers a unique and heuristically valuable approach to explaining the phenomenon of interest. The manner in which we examine the observations to make sense of them is determined by the nature of those observations, and science provides us with a wide variety of methods from which to choose. These methods are the subject of this chapter.
In this chapter, we begin with a review of the paradigmatic bases and epistemological stances of quantitative and qualitative research, followed by overviews of both approaches to empirical research in counseling psychology. In these reviews, our goal is to provide a broad conceptual understanding of the “why” of these methods rather than a detailed technical “how-to” description of any method. We note that our somewhat differential coverage of quantitative and qualitative approaches in the chapter reflects the disproportionate attention to quantitative methods currently seen in our field, an empirical imbalance that we hope will be rectified over the next decade. We also point out that, although we cover quantitative and qualitative approaches separately in this chapter for ease in presentation, combining these approaches can yield the most informative programs of research over time. Thus, we conclude the chapter with a brief discussion of mixed methods research and of future directions in research methodology.
Paradigmatic Bases and Epistemological Stances of Quantitative and Qualitative Research
Ponterotto (2002, 2005) has written cogently about the paradigmatic bases and epistemological stances underlying quantitative and qualitative research in counseling psychology, using a four-category system that distinguishes among positivist, post-positivist, constructivist-interpretivist, and critical-ideological paradigms. Beginning with several concepts integral to the philosophy of science, Ponterotto (2005) outlines the major premises of the four paradigms based on their assumptions regarding ontology (nature of reality), epistemology (acquisition of knowledge), axiology (role of values), rhetorical structure (language of research), and methodology (research procedures). Generally, it can be said that positivist and post-positivist paradigms most often undergird quantitative research, whereas constructivist-interpretivist and critical-ideological paradigms most often form the foundation for qualitative research. However, these distinctions are not completely orthogonal; the post-positivist position, for example, can typify the work of some qualitative as well as quantitative researchers, and some of the specific “qualitative” approaches (e.g., participatory action research, ethnography) may incorporate different forms of quantitative data if appropriate to the goals of the study.
Positivist and Post-positivist Paradigms
Positivism is often called the “received view” in psychology (Guba & Lincoln, 1994, cited in Ponterotto, 2005). It is based on the theory-driven, hypothesis-testing, deductive methods of the natural sciences, and involves a controlled approach to generating hypotheses about a phenomenon of interest, collecting carefully measured observations, testing hypotheses for verification using descriptive and inferential statistics, and producing general theories and cause-and-effect models that seek to predict and control the phenomenon under investigation. Ontologically, positivism assumes an objective reality that can be apprehended and measured, and epistemologically, it posits separation between the researcher and the participant, so that the researcher can maintain objectivity in knowing. Axiologically, it presumes the absence of values in research, and methodologically, it focuses on discovering reality as accurately and dispassionately as possible (hence the emphasis on experimentation and quasi-experimentation, as well as psychometric rigor). Finally, its rhetorical structure of detachment, neutrality, and third-person voice is aimed at capturing the objectivity that has characterized the entire research endeavor.
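The hypothesis-testing workflow just described — carefully measured observations analyzed with inferential statistics — can be made concrete with one-way ANOVA, one of the quantitative methods the chapter covers. The sketch below is not from the chapter: the function name and the three small groups of outcome scores are invented for illustration, and a real analysis would use a vetted statistical library rather than hand-rolled sums.

```python
# Minimal one-way ANOVA sketch: does mean outcome differ across three
# hypothetical treatment groups? (Illustrative data, not from the chapter.)

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    all_scores = [x for g in groups for x in g]
    n_total = len(all_scores)
    k = len(groups)
    grand_mean = sum(all_scores) / n_total

    # Between-group sum of squares: variation of group means around the grand mean.
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: variation of scores around their own group mean.
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )

    df_between = k - 1
    df_within = n_total - k
    # F = mean square between / mean square within; large F suggests the
    # group means differ more than within-group noise would predict.
    return (ss_between / df_between) / (ss_within / df_within)

if __name__ == "__main__":
    groups = [[4, 5, 6, 5], [7, 8, 6, 7], [5, 6, 5, 6]]
    print(round(one_way_anova_f(groups), 3))  # prints 7.8
```

In positivist terms, the F statistic (compared against an F distribution with the given degrees of freedom) is what turns controlled observation into a verdict on the hypothesis that the group means are equal.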
Post-positivism shares much in common with positivism, the main distinction being that post-positivism assumes human fallibility and, ontologically, accepts a true reality that can be apprehended and measured only imperfectly. Epistemologically, axiologically, methodologically, and rhetorically, the ideal of objectivity, the assumption of researcher–participant independence, the prevention of values from entering research, the controlled use of the hypothetico-deductive method, and the detached scientific voice, respectively, are acknowledged as goals that may be realized only imperfectly in actual practice. For this reason, the post-positivist paradigm embraces theory falsification rather than verification, while maintaining the remainder of the positivist assumptive structure. Both the positivist and post-positivist paradigms stand in contrast to the two broad classes of qualitative research paradigms described below.
Constructivist-Interpretivist Paradigms
In contrast to the ontological assumption of a fixed, external, measurable reality that can be apprehended, the constructivist-interpretivist paradigm assumes a relativist notion of multiple, equally valid realities that are constructed in the minds of actors and observers; that is, there is no objective reality that exists apart from the person who is either experiencing or processing the reality, or both. Thus, any account of a phenomenon is necessarily an experientially driven, co-constructed account influenced by the narrator/actor/participant and the listener/observer/researcher, both of whom bring their unique interpretive lenses (shaped by context, history, individual differences, and other forces) to the co-constructed account, which itself is created within a particular experiential context. Epistemologically speaking, then, there cannot be separation between the participant and the researcher, as the process of coming to know and understand is a transactional process that relies on the relationship between them and their mutual construction and interpretation of a lived experience. Imperative to building the kind of relationship that facilitates a full and detailed sharing of lived experience is connection to and entry into the participant’s world on the part of the researcher; thus, methodologically, the research process often involves intense and prolonged contact between researchers and participants, utilizing narrative, observational, contextual, historical, and other kinds of data that reveal deep or hidden aspects of the lived experiences under investigation.
It should be obvious that the role of the researcher differs markedly in this paradigm from the positivist or post-positivist researcher position. Researcher subjectivity in a constructivist-interpretivist paradigm is not only acknowledged but becomes an integral part of the research process. Axiologically, values are acknowledged as important, but “bracketed” so that they do not unduly influence the lived experience or the perception of that experience being shared and documented, and researcher reflexivity (in the form of deep reflection, self-monitoring, and immersion in the participant’s world) is vitally important to maintaining the integrity of the researcher–participant relationship. Rhetorically, the intense involvement of both the researcher and the participant in the constructivist-interpretivist research process is captured in first-person language, detailed descriptions of how interpretations were generated, direct quotations from the primary data sources (e.g., narratives), and personal reflections of the researcher, including statements regarding values and expectations that likely influenced the work.
Critical-Ideological Paradigms
The critical-ideological paradigm shares a great deal of its assumptive structure with the constructivist-interpretivist paradigm, upon which it is built. However, it is more radical in its goals, which include disruption of the power inequalities of the societal status quo and the liberation and transformation of individual lives. Ontologically, it assumes relativist, constructed realities, but it focuses on historically and societally situated power relations that permeate those realities, and it seeks to dismantle the power structures that have been socially constructed to oppress particular groups of people (e.g., women, people of color, sexual minorities). Epistemologically, the mutual, transactional relationship between participant and researcher embedded in the constructivist-interpretivist paradigm is presumed but expanded to a more dialectical aim of “inciting transformation in the participants that leads to group empowerment and emancipation from oppression” (Ponterotto, 2005 , p. 131). Thus, axiologically speaking, values on the part of the researcher not only are acknowledged and described, but become the driving force behind the ultimate goal of social change.
As might be expected in an approach in which research constitutes a social intervention, the constructivist-interpretivist methodological practice of deeply connecting to individuals to document their lived experiences becomes liberationist in the critical-ideological paradigm. As such, the researcher joins with participants not only as a reflective, empathic chronicler of their lived experiences, but as a passionate advocate for the social change that would empower and emancipate them. This kind of approach necessitates much more prolonged engagement between researchers and participants than is typical in behavioral research and often results in deeply forged alliances that are maintained long after the formal research project has ended. The advocacy stance on the part of the researcher also is reflected in the rhetorical structures used in critical-ideological research, which include description of the societal (interpersonal and intergroup relationships, institutional, community, and policy) changes that resulted or are expected to result from the research.
The constructivist-interpretivist and critical-ideological paradigms give rise to and subsume most of the specific qualitative approaches being used in psychology currently. Although qualitative approaches share some basic philosophical and epistemological premises, each of the extant approaches has features that distinguish it from other approaches. However, some approaches are more fully developed and articulated than others, leading qualitative researchers to borrow and combine aspects of other approaches (e.g., coding procedures, interviewing techniques) in the implementation of studies that may emanate from very different philosophical and epistemological foundations and goals. In addition, a host of contextual factors shape the practical application of qualitative research aims, such as academic publish-or-perish institutional structures, lack of qualitative expertise and training in most graduate programs, and lack of resources or support for the kind of sustained effort considered ideal in these approaches. Thus, the variability among approaches (as well as the specific philosophical underpinnings of a study) may be masked by compromises in conducting the actual study.
Because qualitative approaches differ substantially based on their particular paradigms and epistemological assumptions throughout all phases of the research (and the closer the adherence to core tenets of the approaches, the greater the differences), every qualitative research project must begin with a thoughtful consideration of these issues (Denzin & Lincoln, 2000 ; Patton, 2002 ; Ponterotto, 2002, 2005 ). Quantitative approaches, on the other hand, because they tend to share in common a positivist or post-positivist paradigm involving hypothetico-deductive theory verification or theory falsification methods, find their variability in approach primarily in the kinds of measurement (e.g., interval or categorical) and statistical analyses (e.g., correlation or analysis of variance) utilized. In the following section, we provide an overview of quantitative methods.
Quantitative Methods
As noted above, quantitative methods are based on positivist or post-positivist epistemologies in which theories are used to guide hypothesis generation and hypothesis testing regarding phenomena of interest. Hypotheses are examined using carefully defined and obtained empirical observations, which are assumed to represent important abstract constructs, and these hypotheses are tested using descriptive and inferential statistics. The usefulness of quantitative methods depends on the quality of the observational data, which in this case refers to the quality of measurement. Measurement is the process by which we assign numbers to observations, usually of human characteristics or behaviors. Measures (scores) could be derived from a vocational interest inventory, a measure of depressive symptoms, a measure of a client’s liking for the therapist, or an index of problematic eating behaviors. What is essential is the quality of measurement, usually grounded in the concepts of reliability and validity; without reliable and valid measures, further data analysis is futile. Detailed discussion of the quality of measures is outside the scope of this chapter, but see the chapter by Swanson (2011, Chapter 8, this volume).
The next sections will summarize different types of data analysis using quantitative methods. It is important to note that complete coverage of statistical methods would require several textbooks, obviously beyond the scope of this chapter. We cover the most commonly used methods in counseling psychology research and refer readers to well-known introductory statistics books and to advanced volumes such as Tabachnick and Fidell’s ( 2007 ) excellent text Using Multivariate Statistics.
Describing Observations
Two basic areas of introductory statistics are scales of measurement (Stevens, 1951) and descriptive statistics, and they will be mentioned here only briefly. Scales of measurement—nominal, ordinal, interval, and ratio—are important because our use of quantitative methods often depends on the kind of data we have. Nominal, or categorical, scales do not actually represent measurement but rather category membership, for example, gender or marital status. Next in level of measurement is the ordinal scale, or rank order. League standings in baseball and class rank are ordinal scales. These numbers have a “greater than” or “less than” quality, but the intervals between numbers are not necessarily (or even usually) equal. Next in level is the interval scale, in which not only order but also interval size is presumed to be meaningful. Most test data are interval scale data. Finally, ratio scales have a true zero point as well as ordinal meaning and the assumption of equal intervals. Ratio scales are found most often in the physical or psychophysical sciences.
Basic descriptive statistics are also important—in their eagerness to move to inferential statistics, many researchers underemphasize the basic importance of descriptive statistics, or how the numbers “look.” If we recall that numbers reflect the observations of people, basic summary descriptions of those numbers become interesting. Observations can be described using frequency distributions (numbers of observations at each score point or score interval, often called histograms [bar diagrams] or frequency polygons). The most important frequency distribution is the normal distribution , or bell curve , which has certain very useful properties, most especially predicted percentages of cases that fall above and below each z score point on the normal curve. These points are used in determining such critical statistics as standard error of measurement and standard error of estimate.
The best known descriptive statistics are measures of central tendency and variability . Measures of central tendency include the arithmetic mean, median, and mode, and indices of variability include the range, the variance, and the standard deviation (the square root of the variance). Descriptive statistics are important in and of themselves. For example, we may want to answer the question, “How depressed are college students at our university?” Not only the mean score on our measure of depression but the range and variance of scores are critical information—for example, how many of our students are at risk of suicidal thoughts or behavior? We usually compare score means across gender or race/ethnicity, for which we need group standard deviations as well as means. We may want to compare the scores of a new sample with the original normative sample. We may want to compare the scores of those receiving an intervention to those in a control group. Thus, the mean, variance, and standard deviation are fundamental to many types of analysis.
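As a concrete illustration of these descriptive statistics, the following sketch computes the mean, median, range, variance, and standard deviation for a small set of hypothetical depression scores (the numbers are invented purely for demonstration):

```python
# Hypothetical depression scores for seven students (illustrative only).
scores = [4, 7, 7, 8, 10, 12, 15]

n = len(scores)
mean = sum(scores) / n                      # arithmetic mean
median = sorted(scores)[n // 2]             # middle value (n is odd here)
score_range = max(scores) - min(scores)     # range: highest minus lowest

# Sample variance: sum of squared deviations from the mean, divided by n - 1.
variance = sum((x - mean) ** 2 for x in scores) / (n - 1)
sd = variance ** 0.5                        # standard deviation

print(mean, median, score_range, round(variance, 2), round(sd, 2))
```

The variance here uses n − 1 degrees of freedom, anticipating the sum-of-squares discussion later in the chapter.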
Inferential Statistics—Group Differences
When we want to compare means—such as those of women and men, of normative versus new samples, or of treatment versus control groups—we usually begin with the assumption that we are examining a sample or samples drawn from one or more populations. Only in very rare instances would we assume that we have assessed the entire population, and we will not deal with that possibility here. Because we are assessing samples from populations, we are using inferential statistics, making educated guesses about population parameters from sample statistics. In doing this, we assume that a sample statistic (such as the mean) has a sampling distribution with its own mean and standard deviation: sample values drawn repeatedly from the population will array themselves around the population mean, and this array of values is known as the sampling distribution . Generally speaking, the larger the standard deviation of the sampling distribution, the more variation in means we will observe when sampling from that population. The key question for our statistical methods— t -tests, analysis of variance (ANOVA), and multivariate analysis of variance (MANOVA)—is to estimate the probability that the observed means have come from the same versus different populations (we do not refer to z tests here because they assume that we have sampled the entire population).
Sum of Squares
Before proceeding, it may be helpful to review the concept of sum of squares (SS), which is critical to the variance calculations required by all these tests. Recall that a sample has a mean that is an estimate of the population parameter. And it has variability of individual scores around that mean. The wider the variability, the greater will be the variance, or its positive square root, the standard deviation. The variance and standard deviation are based on the sum of squared deviations of individual values from the sample mean, divided by the degrees of freedom, generally n -1 to reflect that the mean used is the sample rather than the population mean. The concept of SS is very important in ANOVA and regression as well, as it is a basis of the general linear model (GLM) fundamental to many of our statistical methods.
Given the above, we begin with a null hypothesis, which usually is that the group means do not differ from each other. The test statistic, in this case t , is the difference between our two means divided by the standard error of the difference; this latter value is computed from the combined (pooled) variances of the two samples and their degrees of freedom. Most computer programs will give the value of t for both pooled and separate variances—pooled variance estimates are appropriate if it can be assumed that the two population variances are equal. Normally, the statistical software will provide as an option a test of homogeneity of variance, often Levene’s test, Hartley’s F max test (Kanji, 1993), or Bartlett’s test (see Glass & Hopkins, 1984; Kanji, 1993). If the hypothesis of homogeneity of variance is rejected, then the separate variance estimates must be used.
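The pooled-variance t statistic just described can be sketched in a few lines. The two samples below are hypothetical, and the standard error of the difference uses the standard pooled-variance formula:

```python
import math

def pooled_t(sample1, sample2):
    """Two-sample t statistic assuming equal population variances."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)   # sum of squares, group 1
    ss2 = sum((x - m2) ** 2 for x in sample2)   # sum of squares, group 2
    df = n1 + n2 - 2                            # combined degrees of freedom
    pooled_var = (ss1 + ss2) / df               # pooled variance estimate
    # Standard error of the difference between the two means.
    se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se_diff, df

# Hypothetical treatment and control scores, invented for illustration.
t, df = pooled_t([10, 12, 14, 16, 18], [8, 9, 11, 12, 15])
print(round(t, 3), df)
```

The obtained t would then be compared with the critical value for the computed degrees of freedom, as described below.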
The obtained value of t is taken to the table of percentile points of the t distribution, sometimes called the critical values of the t distribution. Using the degrees of freedom, we determine the critical value for rejection of the null hypothesis (H0) at the prescribed value of α—for example, if our two groups each had an N of 21, then our df would be 40, and the critical values of t would be 2.021 ( p < .05), 2.704 ( p < .01), and 3.551 ( p < .001). As sample sizes increase, t approaches the value of z . At df = infinity, the critical t at .05 is 1.96, which is also the .05 critical value for the z statistic.
Confidence Intervals and Effect Sizes
In addition to acceptance or rejection of the null hypothesis, we should compute both confidence intervals and effect sizes. There has been much criticism of null hypothesis significance testing (NHST) based on the fact that it allows us only to reject a null hypothesis (which usually is not what we actually want to know), at an arbitrary and necessarily dichotomous level, instead of giving us a probability that our desired hypothesis is true. Cohen ( 1994 ) argues that what we want to know is, “Given these data, what is the probability that Ho is true?” but that what NHST actually tells us is, “Given that Ho is true, what is the probability of these (or more extreme) data?” (p. 997).
Confidence intervals improve upon hypothesis testing by providing an estimate of the range of values that likely includes the true population value. When based on a t test, the confidence interval is the mean difference plus or minus the critical value of t multiplied by the standard error of the difference. Thus, we derive a confidence interval that contains the true (population) difference with a probability of 1 − α; if that interval includes zero, then we have not rejected the null hypothesis of no difference between the means.
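The confidence-interval arithmetic is a single line of algebra. The mean difference and standard error below are hypothetical values chosen only to illustrate the calculation; the critical t of 2.306 is the tabled value for df = 8 at α = .05, two-tailed:

```python
# 95% confidence interval for a mean difference:
#   mean difference +/- (critical t) * (standard error of the difference).
# All input values here are hypothetical, for illustration only.
mean_diff = 3.0     # observed difference between the two group means
se_diff = 1.871     # standard error of the difference
t_crit = 2.306      # critical t for df = 8, alpha = .05, two-tailed

lower = mean_diff - t_crit * se_diff
upper = mean_diff + t_crit * se_diff
print(round(lower, 2), round(upper, 2))
# Because this interval includes zero, the null hypothesis of no
# difference between the means is not rejected.
```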
Also, because the statistical significance of t tests is dependent on sample size, we could find that a trivial difference in means was statistically significant but practically unimportant. Likewise, we might overlook a potentially important difference if small sample sizes prevented it from reaching statistical significance. Because of this, it is always advisable to also calculate effect size, an index of the importance of the difference or, as Kirk ( 1996 ) termed it, “practical significance.” There are several indices of effect size in terms of standardized mean differences, most commonly Cohen’s ( 1969 ) d and Glass’s ( 1976 ) δ. Cohen’s d is the mean difference divided by the combined (pooled) standard deviation. It essentially can be interpreted as the size of the difference in standard deviation units, and it is directly related to the amount of overlap between the two score distributions: the greater the effect size, the less the overlap between the distributions. With caution, in that all interpretations must be based on the context and purposes of the research, Cohen ( 1988 ) recommended the following ranges of d : .20 to .50 is a small to medium effect, .50 to .80 is a medium to large effect, and over .80 is a large effect. There is not a one-to-one correspondence between statistical significance and effect size—a difference can be statistically significant but not practically important, and vice versa.
Effect size is especially important since a difference of a given magnitude can be more or less important based on the overall spread of the score distributions. Assume we have a mean difference of 5 score points between men and women; if the standard deviation of the combined distributions is 10, we have an effect size ( d ) of .5, what Cohen would describe as a moderately large effect. But if the standard deviation were 30, d would be .17, less than what Cohen would require for a small effect. In the first case, the difference is one-half a standard deviation, but in the second case it is only one-sixth a standard deviation.
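Cohen’s d as defined here (the mean difference divided by the pooled standard deviation) can be sketched directly from raw scores; the two samples below are hypothetical:

```python
import math

def cohens_d(sample1, sample2):
    """Cohen's d: mean difference in pooled standard deviation units."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    # Pooled standard deviation across the two groups.
    pooled_sd = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical scores for two groups, invented for illustration.
d = cohens_d([10, 12, 14, 16, 18], [8, 9, 11, 12, 15])
print(round(d, 2))
```

By Cohen’s benchmarks, a d above .80 would be described as a large effect regardless of whether the corresponding t reached statistical significance in a small sample.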
One major use of effect size data is in the technique of meta-analysis , a term coined by Glass ( 1976 ). Meta-analysis is used to quantitatively summarize the findings of many studies of the same general topic. In the case of mean differences, we often use it to summarize a number of studies comparing a treatment group with a control group. In the classic study of Smith and Glass ( 1977 ), 833 comparisons of psychotherapy treatment versus control groups were aggregated, yielding a definitive conclusion about the effectiveness of psychotherapy. Meta-analysis involves searching the literature for all relevant studies, extracting the statistic of interest (in this case the effect size of a comparison of means), and then averaging the effect sizes across studies, often weighted by sample size or, in some cases, by judgments of the quality of the studies. Meta-analyses can also be done within subgroups, such as gender or race/ethnicity; in the case of Smith and Glass ( 1977 ), analyses were done for each theoretical approach to therapy. Readers may consult Glass ( 2006 ) for a comprehensive discussion of how to conduct a meta-analysis.
Analysis of Variance
ANOVA is used to simultaneously test the differences between two or more means. When there are only two means, its results are identical to those of a t test, but it is more versatile in that three or more means can be tested for differences simultaneously. The null hypothesis is that the population means (μ) are equal: μ 1 = μ 2 = μ 3, etc. This is also called an omnibus test of the equality of means .
The method of ANOVA utilizes the decomposition of variance components. In a simple one-way analysis of variance, when there is only one independent variable, the total variance is the sum of the variance between treatment and the variance within treatments. We estimate these using the concept of sums of squares mentioned previously. The total SS is the squared deviation of each score in the entire set from the grand mean of all the scores. The SS Between is the SS from the squared deviation of each group mean from the grand mean, and the SS Within is the sum of squares of each individual’s score from the mean of his or her group.
To convert these into variances, we obtain the mean squares (MS) by dividing each SS by its degrees of freedom, usually shown as the Greek letter ν. For the between-group SS, ν is J − 1, where J is the number of groups, and for the within-group SS, ν is N − J, where N is the total sample size and J is again the number of groups. The test statistic, F , is MSB/MSW, or the between-group variance divided by the within-group variance. We take this value to the F table using the between-group and within-group degrees of freedom. The closer the value of F is to 1.0, the less likely it is to be statistically significant—if MSB/MSW is close to 1, then we can assume that both are estimating the same variance, the population variance, and that the groups do not differ from each other. If the value of F is statistically significant, we can reject the null hypothesis of group mean equality. Note that, if there are three or more groups, a significant result does not tell us specifically which groups are different from each other—that is examined using post hoc tests of means, such as the Tukey, Tukey-B, and Scheffé tests.
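The one-way decomposition just described can be sketched directly from the sums of squares; the three groups below are hypothetical:

```python
def one_way_anova(groups):
    """One-way ANOVA via sums of squares: F = MSB / MSW."""
    all_scores = [x for g in groups for x in g]
    n_total = len(all_scores)
    grand_mean = sum(all_scores) / n_total
    # SS Between: squared deviation of each group mean from the grand mean,
    # weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # SS Within: squared deviation of each score from its own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1        # J - 1
    df_within = n_total - len(groups)   # N - J
    msb = ss_between / df_between
    msw = ss_within / df_within
    return msb / msw, df_between, df_within

# Three hypothetical treatment groups, invented for illustration.
f, df1, df2 = one_way_anova([[2, 3, 4], [4, 5, 6], [7, 8, 9]])
print(round(f, 2), df1, df2)
```

A large F such as this one would be compared with the tabled critical value for (2, 6) degrees of freedom, followed by post hoc tests to locate the specific group differences.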
Next in complexity to one-way ANOVA is two- or more-way ANOVA, used with factorial designs. A factorial design involves two or more independent variables, for example, type of treatment and gender. The IVs can be manipulated or can be “natural groups” (such as gender or ethnicity, groups that exist a priori and are not manipulated). When each level of one IV is paired with each level of the other, we call it a completely crossed design . This is probably the most commonly used factorial design, although there are many others—see research design texts (e.g., Heppner, Wampold, & Kivlighan, 2008 ) for possibilities. A two-way ANOVA can yield one or more main effects and/or an interaction effect.
In a two-way ANOVA, we calculate the MS for each independent variable and the MS for the interaction between the independent variables. Each MS is divided by MSW(within) to get the F for that effect. These are taken to the appropriate cell in the F table to determine whether we accept or reject the null hypothesis for that effect. Again, with an interaction or any main effect involving more than two levels, we must do post hoc tests.
Effect Sizes
In the same way that we use effect sizes to evaluate the practical importance of the differences when we have done t tests, effect sizes also should be provided for ANOVA and MANOVA. The effect sizes we use here are in terms of variance accounted for, which is also known as strength of association. η 2 is the proportion of sample total variance that is attributable to an effect in an ANOVA design. It is the ratio of effect variance to total variance; Cohen ( 1988 ) suggests that η 2 = .01, .09, and .25 correspond to small, medium, and large effect sizes, respectively. ω 2 estimates the effect size in the population (Tabachnick & Fidell, 2007 ). The intraclass correlation also can be used as an index of effect size in ANOVA models.
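Given an ANOVA’s sums of squares, η 2 is simply the ratio of effect variance to total variance; the values below are hypothetical:

```python
# Eta squared: the proportion of total sample variance attributable to
# the effect in an ANOVA design. The sums of squares are hypothetical.
ss_between = 38.0   # SS for the effect (between groups)
ss_within = 6.0     # SS for error (within groups)

eta_sq = ss_between / (ss_between + ss_within)
print(round(eta_sq, 3))
```

A value this far above Cohen’s .25 benchmark would be described as a large effect.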
Vacha-Haase and Thompson ( 2004 ) provide an excellent table summarizing strategies for obtaining effect sizes for different analyses using IBM’s SPSS software suite (p. 477).
Confidence Intervals For Effect Sizes
There are two final important trends to mention in hypothesis testing. First, the American Psychological Association (APA) Task Force on Statistical Inference recommended that confidence intervals be provided for effect sizes themselves (see Tabachnick & Fidell, 2007 , and Vacha-Haase & Thompson, 2004 , for further details). Second, some journal editors are now recommending that the statistic called p rep (probability of replication) be reported instead of p itself. The p rep is the probability of replicating an effect and is itself a function of the effect size and the sample size (Killeen, 2005 ). The value of p rep is inversely related to p —for example, in a given study, p values of .05, .01, and .001 might correspond, respectively, to p rep values of .88, .95, and .99 (Killeen, 2005 ). Note that p rep is a positive way of attaching a probability to the likelihood that we will find the same effect again, instead of a statistical sign that we should reject a null hypothesis.
Multivariate Analysis of Variance
MANOVA is appropriate when we have several dependent variables (DVs). Using the logic of type 1 error, rejecting the null hypothesis at p < .05 means that there is a .05 chance of error, that is, of falsely rejecting the null hypothesis. When we do several tests at the .05 level, we compound that probability of error—we have what is called “experiment-wise error” or “family-wise error.” With two DVs, the error rate is approximately .10, and with five DVs it is .23 (see Haase & Ellis, 1987 , for the formula). Clearly, these levels of experiment-wise error will lead to excessive type 1 error.
One method of correction for this error is known as the Bonferroni correction , which sets the per-comparison level of α at approximately α/p, where p equals the number of dependent variables (Tabachnick & Fidell, 2007 ). This correction is adequate if the variables are uncorrelated or highly correlated, but it is not preferable if the variables are mildly correlated, which will be true in most cases. In these cases, MANOVA can be used. It controls the experiment-wise error at the original α—.05, .01, or .001, whatever probability had been determined as the critical value. Like ANOVA, MANOVA can involve only one factor, or it can involve a factorial design with two or more IVs. The analysis yields a multivariate F based on an “omnibus,” or simultaneous, test of means, which describes the probability of type 1 error over all the tests made. If the multivariate F is statistically significant, we may proceed to examine the univariate F statistics, the same as those we would receive in a univariate ANOVA, to determine which dependent variables are contributing to the significant overall F . The F statistics in MANOVA are provided by one or more of four statistical tests—Wilks’s λ, Pillai’s trace, Hotelling’s trace, and Roy’s criterion (see Haase & Ellis, 1987 ).
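The experiment-wise error rates cited above (about .10 for two DVs and .23 for five) follow from the formula 1 − (1 − α)^k for k independent tests, and the Bonferroni correction divides α by the number of tests; a quick sketch, assuming independent tests:

```python
def familywise_error(k, alpha=0.05):
    """P(at least one type 1 error) across k independent tests at alpha."""
    return 1 - (1 - alpha) ** k

def bonferroni_alpha(k, alpha=0.05):
    """Per-comparison alpha under the Bonferroni correction."""
    return alpha / k

print(round(familywise_error(2), 2))   # roughly .10 with two DVs
print(round(familywise_error(5), 2))   # roughly .23 with five DVs
print(bonferroni_alpha(5))             # each test then run at .01
```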
Describing Relationships
Methods for studying relationships between and among variables have at least two important uses within psychology. The most basic use is to further the understanding of human behavior by helping to elucidate the interrelationships of behavior, personality, and functioning. Another important use is that of prediction; when we understand relationships, we can use them to predict future behavior. The study of relationships, including those used in prediction, begins with the topics of correlation and regression. The ideas of association, or covariation, are fundamental in science, including psychology. Many of our quantitative methods are based on the study of relationships.
The index of correlation we use depends on the nature of our variables, but the best known and most frequently used is the Pearson product moment correlation r. This statistic describes the relationship between two interval scale variables, and its calculation is based on the cross-products of the deviations of each score from its own mean. The correlation can be positive, indicating that, as one variable becomes larger, the other one does as well; negative, indicating that larger values of one are associated with smaller values of the other; or zero, meaning that there is no association between the two variables. Pearson correlations range from –1 to +1, and it is the absolute value of the correlation that indicates its strength—a correlation of –.5 is as strong as one of .5.
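The cross-products definition of r can be sketched as follows, with hypothetical paired scores; the final line squares r to give the coefficient of determination:

```python
import math

def pearson_r(xs, ys):
    """Pearson r from cross-products of deviations about each mean."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Sum of cross-products of deviations, and each variable's sum of squares.
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired scores, invented for illustration.
r = pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
print(round(r, 3))
print(round(r ** 2, 2))   # coefficient of determination: shared variance
```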
Correlations can be interpreted in terms of statistical significance, percentage of variance accounted for, and effect size. Statistical significance when applied to a correlation means that the correlation is statistically different from zero. The null hypothesis for a correlation is that the population parameter ρ is equal to 0. All statistics texts contain a table that presents this as a function of sample size—the larger the sample size, the smaller an r must be to be statistically different from zero. For example with an N of 100, a correlation of .20 is significant at p < .05, whereas if the N were 1,000, a correlation of .06 would be significant at that level. As with all hypothesis testing, we specify the level of type 1 error we are willing to tolerate, and test our hypothesis at the .05, .01, or .001 levels. If the N is large enough, very small correlations can be statistically significant. For example, the critical value of r ( p < .01) in a sample of 10,000 is .02.
The latter is an example of a case in which a statistically significant value may be practically insignificant. The square of r is the coefficient of determination, which is the percentage of variance shared by the two variables. Thus, a correlation of .10, when squared, is .01, meaning that only 1% of the variance is shared between the two variables and 99% of their variance is not shared. Under most circumstances, this would be a trivial association. In the example given above, the statistically significant correlation of .02 when N = 10,000 accounts for only an infinitesimal .04% (4/100 of 1%) of the variance shared between the two variables.
In addition to the percent of shared variance, practical importance also is reflected by effect size, as it is with the description of mean differences in t tests and ANOVA. The value of r is itself an established index of effect size, and its interpretation is based on its relationship to the most common measures of effect size, such as Cohen’s d ( 1988 ). Cohen ( 1992 ) attaches values of r of .10, .30, and .50 to small, medium, and large effect sizes, respectively, but it should be recalled that these describe, respectively, only 1%, 9%, and 25% of the shared variance.
Table 9.1 contains a matrix of correlations among eight personality variables measured by the Healthy Personality Inventory (Borgen & Betz, 2008 ), an inventory designed to reflect the emphasis on positive psychology and the healthy personality. The table provides the bivariate correlations among the eight variables. As is standard practice, no values are shown in the diagonal, as they are all 1.0 (the correlation of a variable with itself), although in some cases the value of coefficient α is shown there instead. The matrix is symmetric, so it is necessary to show only the upper or lower triangle. In some cases, values for one gender are shown above the diagonal and those for the other gender below the diagonal. Values of r that are statistically significant for an N of 206 are provided in a note below the table.
Note: Values are for 206 college students. For an N of 206, values of r of .14, .18, and .23 are significant at the .05, .01, and .001 levels, respectively, for a two-tailed test.
It is important to note four cautions. First, the value of r reflects only linear relationships; if there is curvilinearity in the relationship, then the relationship is better described by the statistic η. Second, the value of the correlation coefficient will be restricted if there is restriction in range in either or both of the two variables being studied. If there is little variability in the scores, there is less chance for changes in one to be reflected by changes in the other. There are formulas to correct for restriction in range (see Hogan, 2007 , p. 122), and these are often used in predictive validity studies (see Sireci & Talento-Miller, 2006 ). But unless there is some reasonable expectation that score ranges can be increased, this correction is unduly optimistic and not reflective of the actual relationships that will be found in the data.
A third caution is that we must resist the temptation to conclude that a statistically significant correlation is “significantly larger” than a nonsignificant correlation. For example, if we had an N of 50, a correlation of .28 would be significant (at p < .05), whereas one of .25 would not be statistically significant, yet they are not statistically different from each other. This must be tested by the z test for the significance of the difference between two values of ρ (see Glass & Hopkins, 1996 ).
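The z test for the difference between two independent correlations is commonly computed via Fisher’s r-to-z transformation; the following is a sketch of that standard version, applied to the .28 versus .25 example (it assumes the two correlations come from independent samples):

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_diff(r1, n1, r2, n2):
    """z test for the difference between two independent correlations."""
    # Standard error of the difference between the two transformed values.
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# The text's example: r = .28 vs. r = .25, each with N = 50.
z = z_diff(0.28, 50, 0.25, 50)
print(round(z, 2))
# z is far below 1.96, so the two correlations do not differ significantly,
# even though only the first is significantly different from zero.
```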
Finally, correlation does not imply causation. Correlation reflects the covariation among two variables, but does not allow any assumptions about whether or not one causes the other or whether both are caused by a third (or more) variable. For example, we might find that depression and loneliness are correlated. We could postulate that depression leads to loneliness, that loneliness makes people depressed, or that some third variable like low self-esteem or perceived social inadequacy causes both depression and loneliness. Other research designs (e.g., experimental examinations of treatments for low self-esteem, depression, or loneliness, or structural equation modeling) are required to address questions of causality.
Other Correlation Coefficients
Although r is the most commonly used index of correlation, a number of others are suitable when one or both variables are not interval in nature. φ is often used with two dichotomous variables, whereas the contingency coefficient is used with two polychotomous variables (categorical variables with more than two categories, such as marital status or race/ethnicity). The relationship between a dichotomous variable (such as right–wrong or true–false answers) and a continuous variable, such as total test score, is indexed by the point-biserial coefficient if we assume a true dichotomy, or by the biserial coefficient, which assumes that the dichotomous answer actually reflects an underlying continuum that has been artificially dichotomized. The biserial is not a Pearson product moment r , so its absolute value can exceed 1.0. When both variables are ordinal (rank orders), the correlation can be computed using the Spearman rank correlation (see Glass & Hopkins, 1996 ).
Although the most basic reason for studying covariation is to understand the myriad relationships in human behavior, characteristics, and functioning, in many settings these relationships also are used to predict behavior, and in these cases we use the method of regression. One of the oldest uses of regression is based on the relationship between high school grades and performance in college. Using a scatter plot, we could place the predictor variable, high school grades, on the horizontal axis and the criterion, college grade point average (GPA), on the vertical axis. The relationship between these two sets of scores would be described by Pearson’s r . A regression equation is an equation for a line Y ′ = bX + a , where X is the value of the predictor variable and Y ′ is the predicted value of the criterion. In this equation, the a is the Y intercept, the value of Y where the regression line crosses the y axis or, in other words, the value of y corresponding to an X of 0. The b is the slope of the line and is a direct function of the correlation r between X and Y. The slope of the line is the rate of change in Y as a function of changes in X. Given the formula Y ′ = bX + a , we can estimate a person’s score on the criterion variable, given his or her score on the predictor variable. The regression line is known as the “line of best fit” and is determined mathematically as that equation which minimizes the errors of prediction of the criterion from the predictor. In new samples, we then could use this equation to make predictions of collegiate performance from high school GPA.
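The slope and intercept of the line of best fit can be computed directly from the data. The sketch below uses hypothetical high school and college GPAs; the slope b equals the correlation r rescaled by the ratio of the standard deviations.

```python
def regression_line(x, y):
    """Least-squares slope and intercept for Y' = bX + a."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope: rate of change in Y per unit of X
    a = my - b * mx        # intercept: predicted Y when X = 0
    return b, a

# Hypothetical data: high school GPA (X) predicting college GPA (Y)
hs = [2.0, 2.5, 3.0, 3.5, 4.0]
col = [1.8, 2.4, 2.6, 3.2, 3.4]
b, a = regression_line(hs, col)
pred = b * 3.2 + a   # predicted college GPA for a high school GPA of 3.2
```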
Multiple Regression
In many cases, we wish to use multiple predictor variables to predict a criterion—the simplest example of this is the use of both scholastic aptitude test scores and high school GPA to predict college GPA. In this case, the formula for a line is generalized to multiple predictors and takes the form Y ′ = b 1 X 1 + b 2 X 2 +… b n X n + a. The quality of prediction is based on the strength of the multiple correlation coefficient R , describing the relationship between a linear composite or summary of the predictor variables and the criterion variable Y. And R 2 , like r 2 , is referred to as the coefficient of determination .
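With numpy, the weights b1, b2, and a and the resulting R² can be estimated by least squares; the aptitude scores and GPAs below are hypothetical values chosen only to illustrate the computation.

```python
import numpy as np

# Hypothetical predictors: aptitude test score (X1), high school GPA (X2)
X1 = np.array([500., 550., 600., 650., 700., 620.])
X2 = np.array([2.5, 2.8, 3.0, 3.4, 3.9, 3.1])
Y = np.array([2.4, 2.75, 2.85, 3.3, 3.75, 3.05])  # college GPA (criterion)

# Design matrix with a column of 1s for the intercept a:
# Y' = b1*X1 + b2*X2 + a
A = np.column_stack([X1, X2, np.ones_like(X1)])
(b1, b2, a), *_ = np.linalg.lstsq(A, Y, rcond=None)

Y_pred = A @ np.array([b1, b2, a])
# R^2, the coefficient of determination for the linear composite:
ss_res = np.sum((Y - Y_pred) ** 2)
ss_tot = np.sum((Y - Y.mean()) ** 2)
R2 = 1 - ss_res / ss_tot
```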
Variables can be entered into a regression analysis in several different ways. In simultaneous entry, all variables are entered together, and each is evaluated according to what it adds after the other variables have been accounted for. In sequential or hierarchical regression, variables are entered in a specific order as determined by the researcher. In stepwise regression, variables are entered one at a time according to statistical criteria—forward, backward, and stepwise entry can be used (see Tabachnick & Fidell, 2007 ).
Regardless of the method of variable entry utilized, the weights in the equation should be cross-validated. Because multiple regression is a maximization procedure, meaning that it selects the weights that will maximize predictive efficacy in that particular sample, it is subject to shrinkage in subsequent samples. Cross-validation is done by dividing the original sample in two, with the development sample being larger (see Tabachnick & Fidell, 2007 ). In the cross-validation step, the predictive weights derived for the first sample are applied to the second sample, and the resulting R 2 is determined. This second R 2 is probably a more realistic estimate of the predictive power of the set of variables. The method of double cross-validation involves separately obtaining a set of weights on each half of the sample and then applying each half’s weights to the other half. The average of the two R 2 values is probably a good estimate of the predictive efficacy of the variable set.
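A minimal cross-validation sketch (simulated data, numpy assumed): the weights derived in the larger development sample are applied unchanged to the holdout sample, and the holdout R² is the more realistic estimate of predictive power.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_weights(X, y):
    # Least-squares weights (including an intercept) for a sample
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def r_squared(X, y, w):
    # Apply previously derived weights and compute R^2 in this sample
    y_pred = np.column_stack([X, np.ones(len(X))]) @ w
    return 1 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Simulated data: 5 predictors, only weakly related to the criterion
n = 200
X = rng.normal(size=(n, 5))
y = 0.3 * X[:, 0] + rng.normal(size=n)

dev, hold = (X[:120], y[:120]), (X[120:], y[120:])  # development sample larger
w = fit_weights(*dev)
r2_dev = r_squared(*dev, w)     # maximized in the development sample
r2_hold = r_squared(*hold, w)   # typically smaller: "shrinkage"
```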
Meta-analysis
Meta-analysis, described previously in the discussion of group differences, is used frequently in the study of predictive validity. We use meta-analysis in this context to summarize across many predictive validity studies—when evidence for predictive validity is found across studies, the summary finding is often called “validity generalization.” For example, DeNeve and Cooper ( 1998 ) published a meta-analysis of 1,538 correlation coefficients from 148 studies of the relationships of 137 personality variables to measures of subjective well-being. In brief, they found that Big Five Neuroticism was the strongest (negative) predictor of life satisfaction and happiness and the strongest predictor of negative affect. The strongest predictors of positive affect were Big Five Extraversion and Agreeableness.
Moderators and Mediators
Two other types of variables often are used in predictive and other correlational studies—moderator variables and mediator variables. Perhaps because the two terms are similar and/or perhaps because they both involve a “third variable” that influences the interpretation and meaning of a bivariate correlation, these terms are often confused or are assumed to be equivalent. This is not the case.
As mentioned, both moderators and mediators are third variables that can be involved in examining the relationship between two other variables. A moderator (see Frazier, Tix, & Barron, 2004 ) is a third variable that influences the strength of the relationship between two other variables but is not itself related to either one. That third variable can be categorical, such as gender, or interval, such as job satisfaction. For example, two variables may be more strongly related in women than in men, or more strongly related in more highly versus less highly satisfied workers. In ANOVA terms, a moderator is an interaction, in which the effect of one variable depends on the level of the other. A moderator can thus be said to produce “differential predictability” of the criterion from the predictor variables.
The analytic methods used to identify moderators include the z test comparison of two correlations after conversion to Fisher’s Z (see Glass & Hopkins, 1996 ) and moderated multiple regression (see Tabachnick & Fidell, 2007 ). Shown in Figure 9.1A is a hypothetical example wherein social support moderates the relationship between stress and distress. It is postulated that, for people high in social support, the correlation between stress and distress is lower than for those low in social support. The moderator effect is shown by an arrow leading down to the arrow showing the relationship between stress and distress. Suppose that, for high support individuals, the correlation is only .10, whereas for low support individuals it is higher, .50. If these are shown to differ significantly using the z test following transformation to Fisher’s Z , then we can conclude a moderator effect for social support in this study. Moderator effects should always be replicated and the search for them should be based on theoretical considerations rather than “data snooping.”
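Moderated multiple regression can be sketched by adding a product (interaction) term to the equation. In this simulated example the data are generated so that social support moderates the stress–distress relationship, and a nonzero weight on the product term recovers that moderator effect; all values are artificial.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

stress = rng.normal(size=n)
support = rng.normal(size=n)
# Simulate a moderator effect: stress hurts more when support is low
# (true interaction weight is -0.3)
distress = 0.5 * stress - 0.2 * support - 0.3 * stress * support \
           + rng.normal(size=n)

# Moderated multiple regression: predictors, their product, and an intercept.
A = np.column_stack([stress, support, stress * support, np.ones(n)])
w, *_ = np.linalg.lstsq(A, distress, rcond=None)
b_interaction = w[2]   # nonzero => support moderates the stress-distress link
```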
A mediator is a variable that represents the generative mechanism by which one variable affects another (Baron & Kenny, 1986 ). In this case, we have a relationship between two variables, but we postulate that the intervening mechanism is the relationship of each with a third variable, the mediator. Figure 9.1B shows a postulated mediator relationship for the same variables shown in Figure 9.1A . Say, for example, that we postulate that stress causes people to avoid social support, which causes them distress. If the relationship between stress and distress is significantly reduced when the path to social support is considered, we may have a mediator. Baron and Kenny postulate that finding a mediator requires four steps (shown in the figure): (1) that the predictor is related to the criterion (c); (2) that the predictor is related to the mediator (a); (3) that the mediator is related to the criterion or outcome (b); and (4) that the strength of the relationship between predictor and criterion is significantly reduced (c') after the variance due to the mediator is removed. This can be tested using multiple regression, structural equation modeling, or the Sobel ( 1982 ) test, a handy and easy-to-use test available online (e.g., www.quantpsy.org ), or as a subroutine of SPSS and other software.
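The Sobel test itself is a simple ratio of the indirect effect (the product of paths a and b) to its standard error; the path estimates and standard errors below are hypothetical, chosen only to show the arithmetic.

```python
import math

def sobel_z(a, s_a, b, s_b):
    """Sobel (1982) test for the indirect (mediated) effect a*b.
    a, s_a: path from predictor to mediator and its standard error;
    b, s_b: path from mediator to criterion and its standard error."""
    se = math.sqrt(b ** 2 * s_a ** 2 + a ** 2 * s_b ** 2)
    return (a * b) / se

# Hypothetical path estimates from the two mediation regressions:
z = sobel_z(a=0.40, s_a=0.10, b=0.35, s_b=0.12)
# |z| > 1.96 indicates a significant indirect effect at p < .05
```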
Discriminant Analysis
Discriminant analysis is a topic that could be covered either in the section on MANOVA or in this section on multiple regression in that it has purposes and procedures similar to both (see Sherry, 2006 , for an extended description of discriminant analysis). Probably the most common use of discriminant analysis is the use of multiple predictor variables to predict a categorical criterion—thus, it is like multiple regression except that the criterion variable is categorical rather than continuous. It is like MANOVA in that it tells us which of a set of variables differs significantly as a function of group membership. Probably its most frequent use would be to use a set of predictor variables to predict success versus failure, for example, in a job training program or in completion of a college degree. Like regression, it yields a set of weights that are applied to the predictors to yield the maximally predictive composite of scores to predict group membership.
As another possibility, discriminant analysis could be used as a follow-up to a significant MANOVA. MANOVA tells us whether or not a set of variables significantly differentiates two or more groups, controlling for the experiment-wise error by giving us a multivariate F . Post hoc univariate tests tell us for which variables significant group differences exist, but they do not tell us which variables contribute most strongly to the overall group separation. Discriminant analysis will give us discriminant weights, analogous to regression weights, which will tell us the strongest contributors to the group differences. Effect sizes could be used with the MANOVA to determine the variables leading to the largest differences between the groups, but that method would not control for the intercorrelations among the predictors.
( A ) Hypothetical example of social support as moderator variable. ( B ) Hypothetical example of social support as mediator variable.
Like MANOVA, discriminant analysis requires a set of two or more variables for two or more groups. The method of analysis involves a search for the linear equation that best differentiates the groups, so it is (like multiple regression) a maximization procedure and must be cross-validated. The analysis yields at least one discriminant function, analogous to a regression equation, which contains a set of β weights that are applied to the variables. Like β weights in regression, the weights indicate the importance of the variables in separating, or differentiating, the groups. The maximum number of discriminant functions is the number of groups minus 1 or the number of predictors, whichever is smaller. Of the discriminant functions, none, one, or more can be statistically significant. A function that is not significant is not making a meaningful contribution to our understanding of group differences.
For predictive purposes, the weights are applied to each individual’s scores and compared with what are termed group centroids to estimate the probabilities of group membership. If the discriminant weights are applied to the mean scores within each group, the results, two or more centroids, depending on the number of groups differentiated, will be maximally separate from each other. We assign each individual’s score composite to the closest centroid. The proportion of correct assignments is known as the “hit rate” and is compared to the probability of correct assignment by chance. For example, if we have three groups of equal size, the probability of making correct assignments by chance is .333. To the extent that the discriminant weights can improve on that, which we examine using the z test for the difference between proportions (Glass & Hopkins, 1996 ), the discriminant function is enhancing prediction. Cross-validation can be done using a holdout sample, double cross-validation, or the “jackknife” method. In the last of these, one case is held out at a time, and the discriminant function is calculated based on the remaining cases. The weights are applied to the case held out to make a group assignment. This is done for each case, and the probability of correct classification is based on the cumulative number of correct classifications across all cases.
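Comparing a hit rate to chance can be sketched with a z test on proportions; the version below is the simple one-sample form, and the classification counts are hypothetical.

```python
import math

def z_hit_rate_vs_chance(hits, n, p_chance):
    """z test comparing an observed classification hit rate to the
    chance rate (a one-sample test of a proportion)."""
    p_obs = hits / n
    se = math.sqrt(p_chance * (1 - p_chance) / n)
    return (p_obs - p_chance) / se

# Three equal groups => chance hit rate = 1/3. Suppose the discriminant
# function classifies 70 of 120 cases correctly:
z = z_hit_rate_vs_chance(70, 120, 1 / 3)
# z well above 1.96 => classification beats chance at p < .05
```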
An excellent example of the use of discriminant analysis in counseling psychology research is the study of Larson, Wei, Wu, Borgen, and Bailey ( 2007 ) of the degree to which personality and confidence measures differentiated four college major groups in 312 Taiwanese college students. Personality and confidence measures each differentiated the college major groups well, but the combination of both significantly improved prediction beyond either set used alone.
Other Related Analyses
Less often used in counseling psychology research but worth knowing about are logistic regression and multiway frequency analysis (MFA; or its extension, log-linear analysis). Both are used with data in which some or all of the variables are categorical. Logistic regression (see Tabachnick & Fidell, 2007 , for a full description) is used to predict a categorical dependent variable (criterion) from a set of interval and/or categorical variables. It is used extensively in the medical field. For example, gender and smoking status (both categorical) and body mass index and amount of exercise per week (both interval scale) could be used to predict whether or not someone has a heart attack before age 50. Logistic regression is similar to discriminant analysis except that the latter uses only continuous predictor variables (unless categorical variables are dummy coded, e.g., assigning 0 to male and 1 to female).
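Once a logistic regression has been fitted, a prediction takes the form p = 1 / (1 + e^-(a + b1x1 + … + bnxn)). The coefficients below are hypothetical, invented only to illustrate the computation of a predicted probability.

```python
import math

def logistic_p(coefs, intercept, values):
    """Predicted probability from a (hypothetical, already-fitted)
    logistic regression: p = 1 / (1 + exp(-(a + b1*x1 + ... + bn*xn)))."""
    logit = intercept + sum(b * x for b, x in zip(coefs, values))
    return 1 / (1 + math.exp(-logit))

# Hypothetical fitted coefficients for predicting early heart attack:
# smoker (0/1), female (0/1), body mass index, exercise hours/week
coefs, intercept = [1.2, -0.5, 0.08, -0.15], -4.0
p = logistic_p(coefs, intercept, [1, 0, 31.0, 1.0])
# p is the model's predicted probability for this particular person
```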
Multiway frequency analysis, or an extension called log-linear analysis , is used to examine the relationships among multiple categorical variables. If we have only two categorical variables, we use the chi-square (χ²) test of independence to investigate the relationship (vs. independence) between the two variables. For example, we could examine the relationship between gender and whether or not a student dropped out of college before finishing. If we have three or more categorical variables (for example, race/ethnicity and whether or not the student is a first-generation college student in addition to gender and completion of college), we would use MFA. For more information on all of these methods see Tabachnick and Fidell ( 2007 ).
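The chi-square test of independence compares the observed counts in a two-way table to the counts expected if the two variables were independent; the gender-by-completion counts below are hypothetical.

```python
def chi_square_independence(table):
    """Chi-square test of independence for a two-way frequency table
    (rows x columns of observed counts)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # expected count
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Gender (rows) by dropped-out vs. finished (columns):
chi2 = chi_square_independence([[30, 70],
                                [50, 50]])
# df = (rows-1)(cols-1) = 1; chi2 > 3.84 => variables related at p < .05
```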
Finally, one problem common to all quantitative data analyses is missing data. Several papers (Schafer & Graham, 2002 ; Sterner, 2011 ) have detailed the types of missing data and methods for handling each type. Missing data that can be assumed to occur at random are usually less problematic; more serious problems arise when the missingness is nonrandom, or systematic (for example, when data are significantly more likely to be missing for one gender or ethnic group than for another). These articles provide excellent suggestions for handling missing data in each of these cases, along with recommendations for useful statistical software.
Examining Structure and Dimensionality
Factor Analysis
Factor analysis has been one of the most widely used analytical procedures in psychological research. It began with the work of Charles Spearman ( 1904 ) on the structure of mental abilities. He developed a mathematical model specifying that ability tests were composed of two factors—a general ability factor ( g ) and a specific factor ( s ). Factor analysis has grown into a family of methods that enable us to study the structure and dimensionality of measures and of sets of variables. For example, we could use factor analysis to determine dimensions underlying several indices of social behavior or to ask how many underlying dimensions of personality there are in a new measure we have constructed. In recent years, factor analytic methods have been differentiated as exploratory factor analyses (EFA) and confirmatory factor analyses (CFA).
Exploratory Factor Analysis
As defined by Fabrigar, Wegener, MacCallum, and Strahan ( 1999 ): “The primary purpose of EFA is to arrive at a more parsimonious conceptual understanding of a set of measured variables by determining the number and nature of common factors needed to account for the pattern of correlations among the measured variables” (p. 275). Exploratory factor analysis is used when the researcher has no a priori theories about the structure of the measure or construct, or when a priori theories have not been supported by confirmatory factor analyses. The method utilizes a matrix of either correlations or covariances describing the relationships among the variables to be analyzed. The variables can be measures or items, for each of which there is a matrix of scores for a sample of people. The correlation or covariance matrix is a symmetrical matrix showing the relationships of each variable with every other variable (or item).
Given such a matrix, software packages such as SPSS, SAS, CEFA, BMDP, Systat, or RAMONA can be used to do the analyses. However, any EFA involves a series of decisions that will determine the results of the analysis. These decisions concern the nature of the variables and sample, the appropriate method of analysis, the method of factor extraction, the number of factors to extract, and the method of rotation. In addition, the interpretation or naming of the factors and the decision as to whether or not to compute factor scores follow the analyses themselves.
Assumptions Regarding The Data
In both EFA and CFA, certain assumptions about the data are necessary. First, quality solutions result only from quality data—measures (or items, if it is to be a factor analysis of an item set) must be carefully selected to represent a defined domain of interest (Fabrigar et al., 1999 ). Just as in scale construction itself, the quality of the scale depends on the care put into defining the construct or domain of interest. There should be evidence for item or scale reliability. MacCallum, Widaman, Zhang, and Hong ( 1999 ) suggest that, if one has an idea of the common factors to be represented, three to five measured variables (MVs) per factor will provide stable and interpretable results. If the researcher does not have hypotheses about the number of common factors, then the domain of variables should be delineated carefully and as many of those variables as possible included in the study. The data should be interval or quasi-interval in nature and be normally distributed, although the latter criterion depends on the method of factor extraction used. Some researchers have found that both EFA and CFA are relatively robust in the face of non-normality. However, less biased fit indices and more interpretable and more replicable solutions may follow when data are normally distributed.
Sample Size and Variable Independence
Although there has been much discussion of necessary sample sizes for factor analysis, a generally accepted guideline is five to ten participants per variable or item, if the analysis is at the item level (Joreskog & Sorbom, 2008 ). If sample sizes are larger than that, they may be divided into subsamples, so that the solution can be replicated. However, other authors have demonstrated that when common factors are overdetermined (three or four variables per factor) and communalities are high (averaging at least .70), smaller sample sizes (e.g., N = 100) are often sufficient (MacCallum et al., 1999 ). When the reverse is true—that is, factors are less well determined or communalities are lower—even very large sample sizes (up to N = 800) may not be sufficient. It is clear there are no simple answers to the question of sample size.
Methods of Factor Extraction
It is necessary at the outset to differentiate two different types of analysis: principal components analysis (PCA) and factor analysis (EFA or CFA). The major difference between them is that PCA analyzes all the variance among the variables, both common and unique, where unique variance includes that specific to the variable and also error variance. It is designed to rescale the original variables into a new set of components that can be equal in number to the original set but that are now uncorrelated with each other. It is not designed to elucidate underlying structure or latent variables (LVs) but to rescale or reassign the variables in the analysis. Generally speaking, it is not considered a method of factor analysis (Fabrigar et al., 1999 ), but if the researcher’s goal is to determine the linear composite of variables that retains as much information as possible from the original set of variables, then PCA is appropriate. An example of an appropriate use would be analysis of a large set of vocational interest items, where the purpose was to assign them to interest scales, retaining as much variance as possible from the original set.
If the purpose of the analysis is to more parsimoniously describe the underlying dimensions common to a set of variables, also known as the underlying LVs , then common variance analysis is much more appropriate. The common factor model is implemented by model-fitting methods, also known as factor extraction techniques . The major ones are maximum likelihood (ML) and principal axis factoring (PAF). All use only common variance in the estimation of communalities. The advantage of ML procedures is that they are accompanied by a large number of fit indices that can be used to evaluate the goodness of fit of the factor model to the data (see Browne, Cudeck, Tateneni, & Mels, 2004 ). However, they also require the assumption of multivariate normality. Principal axis factoring does not require such distributional assumptions but also provides fewer fit indices.
Whereas PCA places 1’s in the diagonal of the correlation matrix, common FA uses a communality estimate in the diagonal, where communality refers to the variance a variable shares with the other variables in the set. Several communality estimates are typically used, including the largest correlation of a variable with any other variable in the set, the squared multiple correlation (SMC) of the variable with the remaining variables, and iterated estimates based on preliminary SMCs.
Number of Factors To Extract
The decision regarding the number of factors to extract should be based on a balance of parsimony with theoretical meaningfulness. In theory, we want to arrive at a smaller number of fundamental LVs, but we also want those LVs to be important and to accurately define the domain of interest. Especially if our goal is to explore a reduced number of factors that reflect underlying LVs, it is pointless to extract minor or trivial factors. However, researchers generally agree that it is more problematic to underfactor (to select too few factors) than to overfactor—in the former case, we may overlook important aspects of the behavioral domain, whereas in the latter case, we may simply end up focusing on an unimportant or trivial aspect of behavior.
There are several approaches to determining how many factors to extract, all of them in some way attempting to operationalize factor importance, as we only want to extract important factors. One basis for decisions is how much variance a factor accounts for. A variable’s contribution to a factor is represented by the square of the factor loading (factor loadings are analogous to correlations, which, when squared, represent the proportion of variance accounted for). In an unrotated solution, the factor contribution is known as the eigenvalue . The best known and most commonly used criterion is the Kaiser-Guttman criterion (Gorsuch, 1983 ), in which factors having eigenvalues greater than 1 are extracted. This method is appropriate only for PCA or for other methods where 1’s are in the diagonal (such as α or image FA) and should not be used in common factor analyses where communality estimates are in the diagonal. This criterion is the default in some statistical packages, although it tends to lead to overfactoring (extracting more than an optimal number of components or factors) (Zwick & Velicer, 1986 ).
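Eigenvalues and the Kaiser-Guttman rule can be illustrated with numpy on a small hypothetical correlation matrix; note that the rule is applied here in the PCA case, with 1s in the diagonal.

```python
import numpy as np

# Hypothetical correlation matrix for four variables forming two clusters
# (variables 1-2 and variables 3-4 correlate strongly within clusters):
R = np.array([[1.0, 0.7, 0.2, 0.2],
              [0.7, 1.0, 0.2, 0.2],
              [0.2, 0.2, 1.0, 0.6],
              [0.2, 0.2, 0.6, 1.0]])

eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]  # largest first
n_components = int(np.sum(eigenvalues > 1.0))       # Kaiser-Guttman rule
# Each eigenvalue is the variance accounted for by that (unrotated)
# component; plotted in order, the eigenvalues form the scree plot.
```

With this matrix, two eigenvalues exceed 1.0, matching the two-cluster structure built into the hypothetical correlations.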
A frequently used method is the scree plot (Zwick & Velicer, 1986 ), in which the values of the eigenvalues are plotted, in order of factor extraction, on the vertical axis. The point at which the plot levels out (or the slope of the line approaches zero) is where factoring should stop. Common sense should be used, however, as there may be cases in which the scree plot would lead to the inclusion of factors with eigenvalues below 1.0. In some cases, there is no clear leveling off, or there is more than one leveling off point. A logical criterion for number of factors to extract is to include only factors having at least two or three variables loading highly on them. If only one variable loads on a factor, then it is questionable whether that variable reflects an underlying latent dimension. Other methods include parallel analysis (Hayton, Allen, & Scarpello, 2004 ) and root mean square error of approximation (RMSEA; Steiger, 1990 ), in which maximum likelihood estimation is used to extract factors (see also Browne & Cudeck, 1993 ).
The results of a factor analysis yield solutions based on mathematical maximization procedures, rather than solutions that are psychologically or intuitively satisfying. Factor rotation is designed to lead to a more interpretable set of factors. Methods of rotation are generally classified as orthogonal or oblique. An orthogonal rotation yields factors that are uncorrelated, whereas an oblique rotation allows factors to be correlated.
To understand rotation methods, it is necessary to understand Thurstone’s ( 1947 ) concept of simple structure . Simple structure defines maximal interpretability and simplicity of a factor structure, such that each factor is described by fewer than the total number of variables, and each variable is described by only one factor. Ideally, each factor should be loaded on by at least two but fewer than a majority of the variables.
Orthogonal rotational methods include varimax and quartimax; varimax (Kaiser, 1958) is regarded as the best orthogonal rotation (Fabrigar et al., 1999 ) and is often the default in computer packages. Comparing the two, varimax is more likely to “spread out” the variance across factors, reducing the predominance of the general factor or of specific factors and increasing the number of common factors (factors on which a few variables load strongly). Quartimax has the opposite effect—emphasizing general and specific factors and de-emphasizing common factors.
Oblique rotations are generally considered preferable because they allow correlated factors, and the reality is that most psychological variables are naturally at least partially correlated. If the factors are truly uncorrelated, oblique rotations will yield an orthogonal set of factors. Oblique rotations also provide the correlations among the factors and therefore permit second-order factor analyses, in which the factor intercorrelations are themselves factor analyzed to examine higher-order structures. Oblique rotations include direct oblimin and promax . Most quantitative researchers view direct oblimin as preferable because the mathematical functions minimized (and maximized) in factor rotation are made explicit (Browne, 2001 ).
Regardless of which method of rotation is used, a matrix of factor structure coefficients will result that is different from the coefficients generated before rotation. The structure coefficients represent the correlations between the variables and the factors. For clarity of interpretation, it is best if the coefficients are either large or very small (near zero). The rule of thumb is to retain on a factor any variable with a loading of .40 or greater, although there may be instances where loadings as small as .30 or as large as .50 are determined as the minimum (Floyd & Widaman, 1995). A loading of .40 indicates a reasonably strong contribution of the variable to determining the “nature” of that factor.
Another feature of the results will be the percentage of variance accounted for by each factor and the total percentage of variance accounted for by the solution. A general rule is that, to meaningfully explain the interrelationships in the data, a factor structure should account for from 50% to 80% of the common variance. If factors 1 and 2 account for 40% and 20% of the variance and factor 3 accounts for only 2% of the variance, we may decide that factor 3 is too trivial to divert attention from the two more significant factors.
Table 9.2 shows the factor matrix resulting when principal axis factor analysis is applied to the correlation matrix shown in Table 9.1 , the eight variables from the Healthy Personality Inventory. Direct oblimin (an oblique) rotation was used. Using the decision criterion that factors with eigenvalues over 1.0 should be retained led to the retention of two factors with eigenvalues of 4.1 and 1.8, respectively; two factors were also indicated by the scree plot. Table 9.2 shows the resulting factor structure matrix. The most important factor in terms of variance accounted for is shown first—this factor accounts for 47% of the common variance. The second factor accounts for an additional 18% of the common variance. For the factor loadings shown for each variable on the two factors, larger loadings mean that the variable is more important to the definition of the factor. As we used oblique rotation, we allowed the factors to be correlated (they correlate r = .37). It is most useful to name the factors based on the variables that load highly on them. Thus, in the example shown, Factor 1 was named Productivity Styles, as it had strong loadings from the variables of Confident, Organized, Detail Oriented, and Goal Directed; and Factor 2 was named Interpersonal Styles, as it included Outgoing, Energetic, Adventurous, and Assertive.
N = 206; Highest loading of each scale on a factor is shown. Factor 1 accounts for 47% of the common variance, whereas factor 2 accounts for an additional 18% of the common variance. Factor 1 was named Productivity Styles, whereas Factor 2 was named Interpersonal styles. From Borgen, F. H., & Betz, N. E. ( 2008 ). Career self-efficacy and personality: Linking career confidence and the healthy personality. Journal of Career Assessment , 16 , 22–43.
Scores on the factors themselves also can be computed. Factor scores are useful if we wish to predict some type of criterion behavior from a concise set of factor scores. For example, assume that we have a battery of 15 ability tests measuring verbal, math, and spatial abilities that we wish to use to predict job performance. When calculated in the same sample, the factor score can be computed as the sum of the score on each variable multiplied by its weight on the factor(s) on which it loads significantly. However, since factor analysis is a maximization procedure, if factor scores are to be used in subsequent samples, it has been shown that simple unit weighting of all the variables loading on a factor provides more stable results (Gorsuch, 1983 ).
Confirmatory Factor Analysis
Confirmatory factor analysis is used when we have an a priori hypothesis about the structure or dimensionality of the data or domain of behavior. There are several different types of such uses. One use is to examine the factor structure and/or construct validity of a scale or a set of measures: If a measure is postulated to have three underlying factors, we could use CFA to verify (or not) that structure.
The first step in CFA is to specify the model to be tested; that is, we specify which measures should load on which factor. In many cases, the CFA was preceded by an EFA to get an idea of the factor structure of the domain in question, and this structure is then tested using CFA. Hypotheses about relationships (of items or measures to factors or among factors) are operationalized in the model, usually with Thurstone’s simple structure in mind. We typically desire high loadings of items on one and only one factor, and specification of a factor by a few strongly loading items. In some cases, we postulate that one or more factors may be correlated.
Statistical software is used to compare the estimated covariance matrix of the specified model to the actual matrix of covariances found in the data. A number of software programs are available, all of which are updated periodically. They include the SPSS subroutine AMOS (which must be purchased separately from the standard package), LISREL (Jöreskog & Sörbom, 2008), EQS (Bentler, 1995), and Mplus (Muthén & Muthén, 1998). According to Kahn (2006), the latter three yield comparable results for a CFA, although Mplus may have more user-friendly syntax and also provides other multivariate analyses not available in the other packages.
A number of fit indices are available to evaluate the fit of the model to the data. Fit indices indicate how well the actual covariances (relationships) in the data correspond to those in the hypothesized model (see Kahn, 2006, for a full explanation). The traditional chi-square test of goodness of fit is best known and indicates the differences between the model-hypothesized covariances and those found in the data; the larger the value of the chi-square statistic, the greater the discrepancy between the hypothesized and actual models. Thus, a statistically significant chi-square indicates a lack of fit of the hypothesized model to the data. However, the chi-square statistic is highly sensitive to large sample sizes and/or a large number of observed variables and often leads to the rejection of models that are good, if not perfect. One solution to this problem is the chi-square test of close (not perfect) fit developed by Browne and Cudeck (1993). This index seems to perform better across a range of sample sizes and models.
Other fit indices are not adversely affected by sample size. These include the Bentler-Bonett non-normed fit index (NNFI; also known as the Tucker-Lewis index, or TLI) and the Comparative Fit Index (CFI). Using criteria suggested by Browne and Cudeck (1993) and Hu and Bentler (1999), models with CFI and NNFI (TLI) values at or above .95 indicate an excellent fit, whereas those between .90 and .94 indicate an adequate fit. The standardized root mean-squared residual (SRMR) and the RMSEA (Bentler, 1995) are other fit indices; for these indices, values at or below .05 indicate an excellent fit, while those between .06 and .10 indicate an adequate fit. Confidence intervals also are provided for the RMSEA. Most authors (e.g., MacCallum & Austin, 2000) recommend using multiple fit indices, paying particular attention to the RMSEA due to its sensitivity and its provision of a confidence interval.
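The cutoffs above translate directly into a small decision rule. The sketch below encodes them for a CFI-type index and an RMSEA/SRMR-type index; the function name and "excellent/adequate/poor" labels are ours, and values falling in the gap between the published cutoffs (e.g., RMSEA between .05 and .06) are treated here as adequate by assumption:

```python
def rate_fit(cfi=None, rmsea=None):
    """Classify model fit using the Browne & Cudeck (1993) / Hu & Bentler (1999)
    style cutoffs described in the text."""
    ratings = {}
    if cfi is not None:  # also applies to NNFI/TLI
        ratings["cfi"] = ("excellent" if cfi >= 0.95
                          else "adequate" if cfi >= 0.90
                          else "poor")
    if rmsea is not None:  # also applies to SRMR
        ratings["rmsea"] = ("excellent" if rmsea <= 0.05
                            else "adequate" if rmsea <= 0.10
                            else "poor")
    return ratings

fit = rate_fit(cfi=0.96, rmsea=0.058)  # {'cfi': 'excellent', 'rmsea': 'adequate'}
```

As the text notes, indices do not always agree—this example is precisely such a case, which is why multiple indices should be reported rather than a single verdict.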
Although it makes intuitive sense that CFA would be used most often to confirm structures (tentatively) established using EFA, there are instances in which the reverse sequence can be put to good use. One example of the latter can be found in Forester, Kahn, and Hesson-McInnis ( 2004 ), who reported the results of confirmatory and exploratory factor analyses of three previously published inventories of research self-efficacy. Using a sample of 1,004 graduate students in applied psychology programs, Forester et al. began with a CFA of each of the inventories separately, finding poor fit of each to its postulated factor structure. They then used EFA to evaluate the structure of the combined total of 107 items from the three inventories, arriving at a four-factor structure in which 58 items loaded at least .50 on one and only one factor. Of course, the next logical step in this research effort would be to return to CFA to examine whether the four-factor structure holds in new samples.
Confirmatory factor analysis also is well-suited to comparing factor structures in new demographic groups or across groups (such as gender or race/ethnicity). Often, EFA will have been used to derive a factor structure in original normative samples dominated by white males (particularly in older instruments), so it is crucial to demonstrations of construct validity that the factor structure be validated, or explored anew, in other groups with which we wish to use the measure(s). Kashubeck-West, Coker, Awad, Stinson, Bledman, and Mintz ( 2008 ), for example, found, using CFA, that factor structures in three inventories of body image and eating behavior that had been derived from white samples demonstrated poor fit in samples of African American women. They subsequently used EFA to explore the factor structure in the African American samples.
Multidimensional Scaling
Although used less frequently than factor analysis, multidimensional scaling (MDS) also can describe the structure of a set of variables or items. Multidimensional scaling locates variables in multidimensional (usually two-dimensional) space. Analysis of proximity or similarity data (which can be represented as correlations between variables) yields a series of points in two-dimensional space, where each point represents a variable or item and the closeness of the points represents the variables’ similarity to each other. A good example of the use of MDS in counseling psychology research is the work of Hansen, Dik, and Zhou (2008). They analyzed 20 leisure interest scales using both EFA and nonmetric MDS and found two dimensions of leisure interests in college students and retirees—expressive–instrumental (e.g., arts and crafts vs. individual sports) and affiliative–nonaffiliative (e.g., shopping vs. gardening). With MDS, each leisure interest can be described on these two dimensions. For example, shopping would be an expressive and more affiliative activity, whereas arts and crafts would be expressive but less affiliative; team sports would be instrumental and affiliative, whereas building and repairing would be instrumental but less affiliative. For more information about MDS, readers may consult Fitzgerald and Hubert (1987).
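Because MDS operates on dissimilarities rather than similarities, a common preliminary step is to convert correlations between variables into distances before the scaling algorithm places points in space. The sketch below shows one widely used conversion, d = sqrt(2(1 − r)); the three-scale correlation matrix is hypothetical:

```python
import math

def corr_to_distance(r):
    """Convert a correlation to a dissimilarity: r = 1 -> d = 0, r = -1 -> d = 2."""
    return math.sqrt(2.0 * (1.0 - r))

# Correlations among three hypothetical leisure interest scales
corr = [[1.0, 0.7, -0.2],
        [0.7, 1.0, 0.1],
        [-0.2, 0.1, 1.0]]

dist = [[corr_to_distance(r) for r in row] for row in corr]
```

Highly correlated scales (r = .70) end up close together (d ≈ 0.77), while negatively correlated scales are placed far apart—which is exactly the property the MDS solution preserves when it arranges the points in two dimensions.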
Examining Causal Models
Structural equation models.
Structural equation modeling is actually a family of methods that subsumes many of the methods we have discussed so far. In the general case, it is a method of statistically testing a network of interrelationships among variables. It subsumes multiple regression analysis and confirmatory factor analysis but also includes path analysis and testing of full structural equation models. To understand the distinctions among these methods, it is useful to define two possible components of a structural model.
Elements and Path Diagrams
The elements of a structural model are measured variables (MVs), usually shown as squares or rectangles, and latent variables (LVs), usually shown as ellipses or circles. Measured variables are (as they sound) those that are measured directly, whereas LVs are unobservable constructs. In addition to measured and latent variables, the model must postulate relationships among variables, including error terms. These relationships are represented as unidirectional (one-way) and bidirectional (two-way) arrows. The values assigned to or resulting from directional relationships are regression coefficients, whereas those for nondirectional relationships are covariances (or correlations if variables are standardized). Variables in the model can be endogenous or exogenous. Endogenous (dependent) variables are those in which there is a directional influence to the variable from one or more other variables in the system. Exogenous (independent) variables are those that do not have directional influence from within the system; their influences may be unknown or may not be of interest in the current model.
( A ) Path model of predictors of loneliness in college students. ( B ) Path model predicted 45% of the variance in loneliness. From Hermann, K. ( 2005 ). Path models of the relationships of instrumentality and expressiveness, social self-efficacy, and self-esteem to depressive symptoms in college students. Unpublished Ph.D. Dissertation, Department of Psychology, Ohio State University.
The simplest model is a path model, for which each LV is directly measured—thus, this actually models relationships among a series of measures (and directional relationships are shown using arrows). A sample path model is shown in Figure 9.2A , where the researcher (Hermann, 2005 ) was examining variables related to loneliness in college students. Note that all variables are shown as rectangles, as the model does not incorporate latent variables; that is, each variable is assumed to be measured fully by one scale. Note also that only instrumentality in this model is exogenous—all other variables are endogenous; that is, they are postulated to be predicted by variables earlier in the model.
A full structural model consists of two parts: a measurement model, which represents the relationships of LVs and their indicators; and a structural model, which represents the interrelationships between LVs, both independent and dependent. In a full structural equation model (SEM), the measurement model is tested first to evaluate the fidelity with which the measures serve as valid indicators of the constructs; following that, the full structural model is tested (see Figure 9.3 for an example of a full structural model).
Measurement and structural models of intuitive eating.
The steps in SEM are (1) model specification, (2) identification, (3) estimation, and (4) modification (Schumaker & Lomax, 2004 ). In the first step, the researcher hypothesizes the relationships (including lack of relationship) among all variables. As in CFA, relationships between variables, also known as parameters or paths , must be either specified in advance or determined from the analysis of the correlation or covariance matrix. A free parameter is one whose value is unknown and must be estimated, whereas a fixed parameter is one we determine in advance; the latter also is known as a constrained parameter (Weston & Gore, 2006 ). Three types of parameters are necessary in a structural model. First, direct effects parameters specify relationships between a LV and its postulated MVs (known as factor loadings ) and between LVs (known as path coefficients ). To scale the MVs, it is common to set one of the factor loadings for each LV at 1.0, which has the effect of standardizing the set. Parameters other than those set at 1.0 need to be estimated (shown as asterisks). Error terms for dependent measured and latent variables also must be either fixed or estimated, and covariances among exogenous variables are specified as parameters as well.
Model Identification
This refers to the relationship between the number of parameters to be estimated and the number of data points in the correlation or covariance matrix. The number of elements in a correlation matrix is determined by the number of variables k according to the formula [k(k + 1)]/2; if there are six variables, there are [6(7)]/2 = 21 elements in the matrix. Subtracting the number of parameters to be estimated from the number of elements yields the degrees of freedom for the analysis—if it is positive, the model is said to be overidentified, which is the optimal situation.
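The identification arithmetic above can be written as a tiny sketch (the function names are ours):

```python
def n_elements(k):
    """Unique elements in a k x k correlation/covariance matrix: k(k + 1)/2."""
    return k * (k + 1) // 2

def degrees_of_freedom(k, n_free_parameters):
    """df = data points minus free parameters; positive => overidentified."""
    return n_elements(k) - n_free_parameters

elements = n_elements(6)             # 21, as in the six-variable example
df = degrees_of_freedom(6, 13)       # 21 - 13 = 8: overidentified
```

A model with exactly as many free parameters as data points (df = 0) is just-identified and will fit perfectly but trivially; a negative df means the model is underidentified and cannot be estimated.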
Structural equation modeling software is needed to estimate the free parameters and provide fit indices for the fit of the postulated model to the data. This software includes the same software programs used with CFA, including LISREL, AMOS (SPSS), Mplus, and EQS. Most programs report path coefficients as standardized (β) or unstandardized (b) weights, including standard errors, analogous to the results of a regression analysis. The statistical significance of weights can be computed (or may be provided by the software), and the sizes of standardized weights may be compared directly as indicators of relative importance.
The worth of the model tested can be evaluated by the significance and size of the path coefficients (indicating the strength of relationships among the variables), the amount of variance accounted for in endogenous variables, and indices of model fit. As in CFA, fit indices include the chi-square goodness of fit, for which a nonsignificant value is indicative of fit, and the chi-square test of close fit, postulated to be a more realistic examination of fit. Other indices are the NNFI (Tucker-Lewis index), CFI, RMSEA, and SRMR (see Weston & Gore, 2006, p. 742, for full descriptions). Criteria for good and adequate fit were described previously for CFA, but it is important to recognize that fit indices do not always agree with one another, so the use of multiple indicators of fit is recommended (MacCallum & Austin, 2000).
Figure 9.2B shows the results of simple path analysis of the model presented in Figure 9.2A using a sample of 696 college students (Hermann, 2005). The path coefficients are regression coefficients showing relationships ranging from .58 (between instrumentality and social self-efficacy) and –.44 (between self-efficacy and loneliness) to as small as .03 (between instrumentality and self-esteem). All paths except the latter were statistically significant in testing this model. Results concerning the fit of the model were mixed, with acceptable values of RMSEA, NNFI, and CFI, but a statistically significant chi-square (which, it should be recalled, is sensitive to large sample sizes). Further testing indicated that the model demonstrated a good fit in males (N = 346) but an inadequate fit in females (N = 350) (see Hermann & Betz, 2006, for the final published findings).
Figure 9.3 presents an example of a full structural equation model of intuitive eating developed and tested by Avalos ( 2005 ). Three indicators (or parcels of items) were constructed from the scales measuring each LV following the recommendations of Russell, Kahn, Spoth, and Altmaier ( 1998 ). Parcels were constructed by using EFA to derive the loadings of scale items on a single factor—items were successively assigned to the three parcels from highest to lowest loadings, so that the quality of the parcels as measures of the LVs is roughly comparable.
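The parceling procedure just described can be sketched as follows. The items and loadings are hypothetical, and the round-robin assignment shown here is one plausible reading of "successively assigned from highest to lowest loadings"; some researchers use serpentine assignment instead:

```python
def build_parcels(loadings, n_parcels=3):
    """loadings: dict mapping item -> its EFA loading on the single factor.
    Deals items out to parcels in descending order of loading so parcel
    quality is roughly comparable."""
    ranked = sorted(loadings, key=loadings.get, reverse=True)
    parcels = [[] for _ in range(n_parcels)]
    for i, item in enumerate(ranked):
        parcels[i % n_parcels].append(item)  # highest loadings spread across parcels
    return parcels

# Six hypothetical scale items and their single-factor loadings
items = {"i1": .81, "i2": .76, "i3": .72, "i4": .66, "i5": .60, "i6": .55}
parcels = build_parcels(items)
# -> [['i1', 'i4'], ['i2', 'i5'], ['i3', 'i6']]
```

Each parcel score would then be the sum (or mean) of its items, and the three parcels serve as the MVs/indicators for that latent variable in the measurement model.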
The measurement and structural components of the model were tested in 461 college women. Testing of the measurement model using CFA indicated fit indices ranging from adequate fit (RMSEA = .060) to excellent fit (CFI = .982, TLI = .975, and SRMR = .041). All indicators/item parcels loaded significantly on their latent factors, suggesting that all latent factors were adequately measured. The paths from the LVs to the parcels indicate the parcel loadings—in essence the factor saturation of each parcel; as can be seen, all parcels loaded highly on their respective LVs. Fit indices for the structural model also were adequate (RMSEA = .058) to excellent (CFI = .982; TLI = .977, SRMR = .046). All paths between LVs were statistically significant and ranged from .28 to .63. Thus, this is a plausible model (although not the only plausible model that could be hypothesized), and it shows a possible causal pathway by which variables related to acceptance can facilitate intuitive eating.
Modification Indices
When the model does not fit optimally (one or more of the fit indices indicates poor or inadequate fit), some researchers use modification indices to attempt to improve it through what is known as a specification search. Two major modification indices are the Wald test, which uses a chi-square difference test to indicate any (non-zero) paths that might profitably be eliminated, and the Lagrange multiplier (LM) test, which uses a chi-square difference test to indicate any new paths that would significantly improve the model if added (Bentler, 1995).
MacCallum, Roznowski, and Necowitz ( 1992 ) suggested that, to avoid a data-driven model that capitalizes too much on sample specificity, only changes that are theoretically meaningful, based on prior evidence, should be made. And because modification of structural equation models based on statistical indices has been challenged as data-driven and often unstable across samples, it is important to cross-validate the modified model (MacCallum & Austin, 2000 ). This is done using calibration and validation samples, of which the first should be about two-thirds of the entire sample to provide stable initial parameter estimates.
Structural equation modeling can be used to compare models, for example, by testing models across populations (e.g., Fassinger, 1990; Lent et al., 2005). SEM also can be used to examine longitudinal designs (e.g., Tracey, 2008) and to explore experimental designs more generally (see Russell, Kahn, Spoth, & Altmaier, 1998).
It should be clear to readers that many analytic methods are appropriate for use with a variety of kinds of quantitative data. Careful consideration of the purposes of the research and the type of data at hand or accessible will facilitate meaningful and useful analyses. In the following section, we turn to qualitative research methods.
Qualitative Methods
Qualitative approaches to research increasingly are being used in counseling psychology, resulting in what Ponterotto ( 2005 ) described as “a gradual paradigm shift from a primary reliance on quantitative methods to a more balanced reliance on quantitative and qualitative methods” (p. 126). In contrast to the nomothetic perspective of quantitative approaches, which seeks to identify large-scale normative patterns and universal laws, qualitative approaches take an idiographic perspective, focusing instead on in-depth understanding of the lived experiences of individuals or small groups. As outlined in the first half of this chapter, quantitative methods rely on quantifying carefully measured observations amassed from large samples (with some measure of control over the variables as the ideal) and statistically analyzing data to produce models of relationship and prediction thought to apply to the general population. Qualitative methods, on the other hand, rely on detailed or “thick” description of context-specific phenomena, most typically as narratives voiced by relatively small numbers of persons, with transparent interpretation by the researcher into descriptions, summaries, stories, or theories thought to capture the complexity of the phenomena under investigation. In quantitative research, the researcher seeks to remain distant and objective to avoid contaminating the data gathering process, such that the data stand as accurately as possible as a representation of an assumed reality apart from the researcher. In qualitative research, however, data collection occurs through the relationship between the researcher and the participant(s) in a co-creative process, and consideration of the subjectivity of the researcher is woven deliberately into every phase of the research process.
Thus, the perspectives, purposes, processes, and products of qualitative research are very different from those of quantitative research, and they require different mind sets and different standards for assessing quality and rigor. Readers should keep these complexities in mind in reviewing the following section, in which we present brief overviews of the most commonly used qualitative approaches within psychology and/or those most likely to enter the repertoire of counseling psychologists. We note that these do not represent the full range of qualitative approaches available; discourse analysis and case study methods, for example, offer considerable possibilities in counseling psychology (the former in studying counseling interactions and the latter in organizational consultation, for example), but they do not appear to have been embraced within our field at this time. We also note that, due to space limitations, we simply present broad descriptions of some of the distinctive features of these approaches, and readers should consult several excellent handbooks and overviews of qualitative methods (e.g., Camic, Rhodes, & Yardley, 2003; Creswell et al., 2007 ; Denzin & Lincoln, 2000 ; Patton, 2002 ; Ponterotto, Haverkamp, & Morrow, 2005 ) to learn about these and other methods in greater detail.
Common Qualitative Research Methods
Grounded theory.
Rooted in sociology and symbolic interactionism, grounded theory is a highly influential qualitative approach that is widely used throughout the health, social, and behavioral sciences, including counseling psychology (Charmaz, 2000; Fassinger, 2005; Henwood & Pidgeon, 2003; Rennie, 2000). Developed by Glaser and Strauss (1967) and further articulated by these researchers and colleagues (e.g., Glaser, 1992, 2000; Strauss, 1987; Strauss & Corbin, 1998), grounded theory is so named because its aim is to produce theories that are “grounded” in participants’ lived experiences within a social context. The central question of grounded theory is: “What theory emerges from systematic comparative analysis and is grounded in fieldwork so as to explain what has been and is observed?” (Patton, 2002, p. 133).
Theory-building takes place inductively and iteratively using a method of “constant comparison,” in which data collection, coding, conceptualizing, and theorizing occur concurrently in a process of continually comparing new data to emerging concepts until theoretical saturation is reached (no new information is being generated). At this point, data collection/analysis ends and relationships among the emergent constructs are articulated in the form of an innovative theoretical statement about the behavior under investigation. Data usually consist of detailed narratives obtained in extensive interviews with participants, although other forms of data (e.g., observations, archival documents, case notes) can be used as well. The relationship between the researcher and the participant forms the foundation for the participant’s deep sharing of the lived experience, and there is an expectation that participants’ perspectives and feedback will be included throughout the process of data analysis and theory articulation, thus ensuring that the theory remains grounded in the participants’ lived experiences (Charmaz, 2000; Fassinger, 2005; Henwood & Pidgeon, 2003).
Although there is some debate about the appropriate paradigmatic home for grounded theory, it most often is presented as a constructivist-interpretivist approach (Charmaz, 2000; Fassinger, 2005; Henwood & Pidgeon, 2003). This makes sense, given its ontological and epistemological assumptions that researchers and participants will, through their relationships, co-construct accounts of the deep meanings of subjectively experienced realities, as well as its axiological and methodological foci on revealing, recording, and monitoring the expectations and interpretive lenses of the researcher. However, Fassinger (2005) has argued that the considerable flexibility of the grounded theory approach allows for its conceptualization and use across a broad paradigmatic range—from, for example, a post-positivist attempt to triangulate quantitative data to the liberationist aims of giving voice to and empowering marginalized populations that characterize the critical-ideological paradigm.
Fassinger ( 2005 ) further asserts that grounded theory can serve as a paradigmatic bridge for researchers. It allows those researchers holding fast to positivist and post-positivist empirical values to begin to venture into more naturalistic territory using the highly specified, rigorous analysis procedures of grounded theory. On the other hand, those who are oriented toward radical social reformation can find in this approach a means to tackle some of society’s most challenging problems. The adaptability of grounded theory is particularly well-suited to counseling psychology, as exemplified by the wide range of studies in our field that have used this approach successfully. Examples include the work of Fassinger and her colleagues (Gomez et al., 2001 ; Noonan et al., 2004 ; Richie et al., 1997 ), as well as Morrow and Smith ( 1995 ), Rennie ( 1994 ) and Kinnier, Tribbensee, Rose, and Vaughan ( 2001 ).
Narratology
Although narratives and narrative analysis techniques are used widely in many different approaches to qualitative research, Hoshmand (2005) uses the term “narratology” to denote a distinct qualitative perspective that is informed by narrative theory. Shaped broadly by the work of narrative theorists such as Foucault and Ricoeur and articulated within psychology by Polkinghorne (1998, 2005) and others, the narratological approach to research is “concerned with the structure, content, and function of the stories that we tell each other and ourselves in social interaction” (Murray, 2003, p. 95). Its central question is: “What does this narrative or story reveal about the person and the world from which it came? How can this narrative be interpreted to understand and illuminate the life and culture that created it?” (Patton, 2002, p. 133).
Narratology relies on a “narrative mode of understanding” human experience (Hoshmand, 2005, p. 180) in which the researcher interrogates narratives of individuals’ lived experiences for the story-like elements that underlie those narratives. In this approach, narratives are considered to be storied accounts of experience that have an internal, developmental coherence containing plot-like elements, thematic meanings, self-presentational style aspects, and temporal and causal sequences, and are mediated by culture, historical time, and other contextual elements. Narratological inquiry seeks both to understand narratives and to construct storied accounts of particular lived phenomena. Data may consist of documents already rendered in narrative form (e.g., interviews, oral histories, biographies, journals) or may be more loosely organized pieces of information (e.g., chronological events, observations, cultural artifacts) that will be translated into narrative form by the researcher in the data analysis process. Analyzing data may take several forms (e.g., linguistic/literary, grounded, contextual), but each approaches the narrative holistically within its social context, and arranges its elements into a coherently and chronologically sequenced account of experience (Hoshmand, 2005; Murray, 2003).
Hoshmand ( 2005 ) asserts that narratological research approaches are still evolving, and that what exist currently to guide researchers are simply concepts and principles rather than a unified method per se. Paradigmatically, narratological approaches appear to be constructivist-interpretivist in their reliance on the co-construction of the storied account and the importance of researcher positionality. However, Hoshmand ( 2005 ) distinguishes the narrative mode of understanding, which is focused on “descriptive and discovery-oriented research involving configural patterns of interpretation and a part-to-whole logic of argumentation” (p. 181), from the “paradigmatic mode of interpretation brought to bear on narrative data such as the theorizing stage of grounded theory” (p. 181), reinforcing the difference between narrative analysis and narratological inquiry. The focus of the narratological approach on the formation and expression of individual and cultural identity through story also renders it particularly useful for multicultural research, an area of interest to many counseling psychologists. Examples include Winter and Daniluk (2004) and Hardy, Barkham, Field, Elliott, and Shapiro ( 1998 ).
Ethnography
Spawned from cultural anthropology at the turn of the 20th century and the work of such giants as Boas and Malinowski, ethnography has found its way slowly into contemporary psychology, highlighted recently for counseling psychologists by Suzuki and her colleagues (Suzuki, Ahluwalia, Mattis, & Quizon, 2005). Focused on groups of people within their cultures and communities, the central question of ethnography is: “What is the culture of this group of people?” (Patton, 2002, p. 132).
The ethnographic approach focuses on studying the cultural and community life (behaviors, language, artifacts) of individuals, and relies on the researcher functioning as a participant-observer in extensive fieldwork under conditions of prolonged engagement with the community (e.g., 6 months to 2 years or more). Interviewing and direct observation are the chief means of data collection, although archival records, surveys, and other documentation may be used as well. The end product of ethnographic research is the creation of narratives that are thought to capture the lived experiences of people in their complex cultural contexts, an aim that is consistent with and amenable to the multicultural emphasis within counseling psychology (Miller, Hengst, & Wang, 2003 ; Suzuki et al., 2005 ).
Ethnographic approaches can span the paradigmatic spectrum from post-positivist methods that rely largely on observations and quantitatively organized data (particularly in seeking out negative cases or contradictory information) to critical-ideological aims of giving voice to and thus empowering marginalized populations, especially if used in multicultural research in counseling psychology, as advocated by Suzuki et al. ( 2005 ). However, in its ideal form, ethnography probably most closely fits the constructivist-interpretivist paradigm in its epistemological focus on the awareness of the “subjectivities” and “guesthood” of the researcher, a position of genuine connection with participants balanced by enough distance to avoid compromising data collection or interpretation (Miller et al., 2003 ).
Indeed, one of the most intense debates within the ethnography literature concerns how and to what extent the insider or outsider status of the researcher influences the investigation, a debate that focuses, at its heart, on the relative roles of researchers and participants in co-constructing the final account of the lived experience under investigation (Miller et al., 2003 ; Suzuki et al., 2005 ). From a methodological perspective, the expectation that cultural immersion and a reflexive research stance will produce narratives and observational data that constitute an accurate or true reflection of lived cultural experience implicitly recognizes the subjectivity of the researcher in the co-construction of the account (as well as the need to monitor that subjectivity). Moreover, the assumption that ethnographers will decide upon “skill sets, material goods, or resources that they can and will gift to the community” (Suzuki et al., 2005 , p. 211; italics ours) also acknowledges the distance of the researcher even in the final procedural stages of a study that may have involved months or years of connection with participants.
The many types of ethnographic approaches available to researchers (e.g., memoir, life history, narrative ethnography, auto-ethnography) suggest wide variability in types of data and methods of interpreting those data (Miller et al., 2003; Suzuki et al., 2005). Moreover, aspects of the ethnographic approach can be found in similar research methods that may be more familiar to counseling psychology researchers (e.g., community-based research). These approaches offer considerable heuristic value, particularly in multicultural counseling psychology research. Examples can be found in Miller, Wang, Sandel, and Cho (2002), Pipher (2002), and Suzuki, Prendes-Lintel, Wertlieb, and Stallings (1999).
Phenomenology
Rooted in the work of philosopher Edmund Husserl and the later American phenomenological and existential psychologists, the phenomenological approach has as its central question: “What is the meaning, structure, and essence of the lived experience of this phenomenon for this person or group of people?” (Patton, 2002 , p. 132). Phenomenology is a descriptive method of investigating the life-worlds of individuals, wherein the researcher “attempts to grasp the essence of the individual’s life experience through imaginative variation” (Wertz, 2005 , p. 172).
In this approach, the researcher seeks to enter empathically into the participant’s life-world to understand and communicate the subjective meaning of an individual’s lived experience. This is accomplished through a reflective process of suspending assumptions and biases and focusing on a phenomenon itself, then imaginatively varying concrete instances of the phenomena to distill their essential features, culminating in a description that is thought to portray the essence of that lived experience (Giorgi & Giorgi, 2003 ; Wertz, 2005 ). This kind of “intentional analysis begins with a situation just as it has been experienced—with all its various meanings—and reflectively explicates the experiential processes through which the situation is lived” (Wertz, 2005 , p. 169).
Data are collected as descriptions, and although typically they are direct verbal or written accounts from participants and others who interact with and/or know participants or can provide some kind of insight on the phenomenon under investigation, data also may consist of other forms of expression such as drawings, fictional accounts, poetry, and the like. Analysis consists of generating “situated descriptions” of the participant’s experience, organized sequentially or thematically, that then are mined for underlying psychological meanings and processes. The descriptions finally are synthesized into a case study representation that can be considered together with other cases to locate general themes and experiences, as well as variations in “knowledge of types” (Wertz, 2005 , p. 173). The final product is a context-bound descriptive presentation of the psychological structure of participants’ experiences in a specific life domain (Giorgi & Giorgi, 2003 ; Wertz, 2005 ).
Wertz ( 2005 ) locates phenomenology as the historical birthplace of contemporary qualitative research, and yet he also distinguishes phenomenology from other qualitative approaches in its unwavering commitment to bracketing researcher presuppositions and biases and its singular emphasis on pure description. Giorgi and Giorgi ( 2003 ) argue that phenomenology as a method is distinct from phenomenology as a philosophical endeavor, and it generally is acknowledged that phenomenology shares many procedural elements with other qualitative approaches (e.g., Giorgi & Giorgi, 2003 ; Wertz, 2005 ). As a very well-established research approach (including an entire curriculum devoted to phenomenology at Duquesne University), and one with high relevance to many areas of psychology, phenomenology has much to offer counseling psychologists. Examples can be found in Arminio ( 2001 ), Friedman, Friedlander, and Blustein ( 2005 ), and Muller and Thompson (2003).
Participatory Action Research
Emanating from the work of Kurt Lewin (Fine et al., 2003) and embodied in the writings of liberationists such as Frantz Fanon and Paulo Freire (Kidd & Kral, 2005), participatory action research is widely used in community psychology, as well as in other social science fields. Participatory action research (PAR; Fine et al., 2003; Kidd & Kral, 2005) has as its goal the creation of knowledge that directly benefits a group or community (typically marginalized, disenfranchised, or disempowered in some way) through political and social empowerment. Its central question is: How are systems of power and privilege manifested in the lived experiences of this person or group of people, and how can knowledge be gained and used to raise consciousness, emancipate, and empower this person and group?
It is an approach in which researchers and participants work collaboratively over an extended period of time to assess a need or problem in a particular social group, gather and analyze data, and implement results aimed at the “conscientization” (raising consciousness) of and giving voice to individual participants, such that their collective empowerment leads directly to social action and change. Participatory action research takes an unabashedly political stance, and, ideally, the values of the researcher and the participants mesh to drive the social change agenda. The involvement of the researcher is prolonged and intensive, and the success of a PAR project is judged by the manner and extent of changes that have occurred in the lives of participants (Fine et al., 2003 ; Kidd & Kral, 2005 ).
Although PAR clearly fits within the critical-ideological paradigm, based on its focus on power relations and structural inequality, in its goals of individual and group empowerment and social change, and its positioning of the researcher as a collaborator, it actually is more of a hybrid approach in many of its features. Data in a PAR project can take virtually any form (including quantitative surveys and statistical analyses of archival data), and its final products may include a wide range of artifacts, such as position papers, policy statements, charts and tables, records, and even speaking or lobbying activities. Kidd and Kral ( 2005 ) assert that PAR is not actually a method but rather is “the creation of a context in which knowledge development and change might occur—much like building a factory in which tools may be made rather than necessarily using tools already at hand” (p. 187). In this sense, PAR is much like organizational consultation in its collaborative approach to assessing needs, gathering data about what is happening in the collective, ensuring that all are given voice in articulating problems and determining future directions, and building readiness for implementation of clearly specified changes and goals.
Almost all experts in PAR note its challenges in practical use, including lack of time and resources for the prolonged engagement that PAR requires, resistance within traditional psychology to the overtly radical change agenda PAR espouses, and deeply entrenched societal and professional disrespect and disdain for the stigmatized, disenfranchised groups that PAR usually seeks to empower. Moreover, lack of knowledge and training in the PAR approach, and the emotional and psychological energy PAR requires from researchers (including the need for flexibility, good group management skills, and the ability to share power) make it difficult for some researchers, particularly novices. Finally, the volatile and changing nature of social groups and social problems can render the research-intervention goal of PAR a moving target, and there are often contextual barriers that make community participation and change extraordinarily difficult. Nevertheless, PAR is well-suited to the diversity and social justice focus within counseling psychology, and it provides unprecedented ways to enact the scientist–practitioner–advocate model of professionalism (Fassinger, 2001 ; Fassinger & O’Brien, 2000 ) becoming ever more popular in our field. Examples include Leff, Costigan, and Power ( 2004 ), O’Neill, Small, and Strachan (2003), and Fine et al. ( 2003 ).
In this final section on the various qualitative methods, we include two approaches developed by counseling psychologists that are (so far) used by small groups of researchers largely confined to counseling psychology. These approaches are consensual qualitative research, developed by Hill and her colleagues (CQR; Hill, Knox, Thompson, Williams, Hess, & Ladany, 2005), and the action-project method of Young and his colleagues (Young, Valach, & Domene, 2005).
Consensual qualitative research, the better known of the two approaches, was developed in the mid-1990s in an effort to create an easy-to-use method of summarizing narrative data, and it has been used primarily in counseling-related investigations to date. Most qualitative methods assume, implicitly or explicitly, that more than one researcher will be participating in data gathering and/or data coding, monitoring, and interpretation. Consensual qualitative research, however, clearly delineates a team approach to collecting data through structured interviews (or counseling sessions) that are consistent across participants, with systematic coding and summarizing of data utilizing interjudge ratings, discussion, and consensus.
Hill and colleagues have claimed that CQR fits within a constructivist-interpretivist paradigm (Hill et al., 2005 ), but there is considerable disagreement on this point. Most experts acknowledge that it is strongly post-positivist in its use of theoretically or empirically generated structures for framing the study and starting the coding process, its reliance on consistency across participants in data gathering (including the search for negative cases or disconfirming evidence), its goal of achieving inter-rater agreement in coding, its quantitatively oriented analytic techniques and rhetorical structures, and its overall attempt to maintain researcher objectivity as much as possible throughout the research process. Indeed, it bears considerable similarity to simple content analysis in its aims and procedures. Nevertheless, it offers clearly specified procedures and a substantial number of model studies (especially related to counseling processes) for those interested in undertaking their own investigations. Moreover, it provides a viable starting point for researchers interested in qualitative techniques but needing more gradual movement in that direction. An example is Juntunen, Barraclough, Broneck, Seibel, Winrow, and Morin ( 2001 ).
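CQR’s emphasis on interjudge agreement is one place where its quantitative leanings can be made concrete. As a hedged illustration only (CQR itself specifies discussion to consensus rather than a statistic), the sketch below computes Cohen’s kappa, a common chance-corrected agreement index, for two hypothetical coders; the category labels and segment data are invented for the example.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of segments on which the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    # Undefined when expected == 1 (both coders use a single identical label)
    return (observed - expected) / (1 - expected)

# Hypothetical categories two judges assigned to ten interview segments
a = ["coping", "loss", "coping", "support", "loss",
     "coping", "support", "loss", "coping", "support"]
b = ["coping", "loss", "loss", "support", "loss",
     "coping", "support", "coping", "coping", "support"]
print(round(cohens_kappa(a, b), 2))  # 0.7
```

In an actual CQR team, every disagreement would be talked through until consensus is reached; an index like this could at most serve as a rough check along the way.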
The action-project method (Young, Valach, & Domene, 2005 ) is based in action theory and concerns itself with intentional, goal-directed behavior. It utilizes a three-dimensional model of action that incorporates perspectives on action and levels of action organization into four action systems: individual action, joint action, the action project, and the career (Young et al., 2005 , p. 217). Individual and joint actions are short-term, everyday occurrences that cumulatively compose the longer-term “project” in their common themes and goals, which, in turn, results in a long-term organization of projects into a “career” of action that has significant importance in one’s life.
Young et al. (2005) insist that the action-project method does not fit into any of the existing qualitative paradigms, but rather “represents a unique epistemology and research paradigm” (p. 218). Certainly, the data collection procedures in the action-project method are distinctive and worth noting. Data collection can include several taped dialogues over an extended period of time (e.g., 6 months); each dialogue subsequently is replayed and commented upon separately by each participant, and the data are coded and summarized for distribution back to the participants, supplemented by journal entries, phone conversations, and electronic communications. All of these data are captured in an analysis that is fed back to participants in what is essentially a behavioral intervention, a process that may be repeated many times with the same participants in the course of a study. Indeed, the action-project approach resembles a highly specified behavioral counseling process, and it is not surprising that it offers considerable utility in understanding interpersonal interactions and their impact on the goals and behaviors of the individuals involved. Counseling psychologists with strong interests in the integration of science and practice might find the action-project method especially compelling. An example is found in Young et al. (1997).
Basic Issues to Consider in Qualitative Research
Regardless of the specific qualitative approach a researcher decides to adopt, there are a number of basic issues and challenges with which every researcher must grapple. In this section, we review sampling, data collection, researcher role, data analysis and communication, evaluation, and ethical considerations in conducting qualitative inquiry. Although we present these issues separately, it is important to remember that these decisions are inextricably linked paradigmatically, ontologically, epistemologically, axiologically, methodologically, and rhetorically.
Sampling

Quantitative sampling strategies, because they are focused on generalizing findings, always are aimed at isolating a clearly bounded group of observations (represented numerically) that is sizable enough to support statistical inferences regarding the overall population of interest. In qualitative research, however, the goal is an in-depth understanding of the meaning of a particular life experience to those who live it, and data most often consist of narratives, observations, field notes, researcher journals, and other kinds of data that are represented (primarily) linguistically. Sample size depends entirely upon saturating the data set—that is, collecting enough data to satisfy the judgment of the researcher that no new information would be gained by additional cases. Thus, sample sizes in terms of actual participants typically are much smaller in qualitative than in quantitative studies, but the data sets themselves are much larger and more complex.
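Saturation is ultimately a researcher judgment rather than a computation, but its stopping logic can be sketched. The toy function below (an assumption-laden illustration, not an established procedure) treats each interview as a set of codes and declares saturation only when a trailing window of interviews introduces no code not already seen earlier.

```python
def saturated(interview_codes, window=3):
    """Judge saturation: no new codes emerged in the last `window` interviews.

    interview_codes: list of sets, one set of codes per interview,
    in the order the interviews were conducted.
    """
    if len(interview_codes) < window + 1:
        return False  # too few interviews to judge
    # Codes accumulated before the trailing window
    seen = set()
    for codes in interview_codes[:-window]:
        seen |= codes
    # Saturation holds if the trailing interviews introduced nothing new
    return all(codes <= seen for codes in interview_codes[-window:])

# Hypothetical codes drawn from six successive interviews
interviews = [
    {"grief", "isolation"},
    {"grief", "coping"},
    {"isolation", "coping", "faith"},
    {"grief", "faith"},
    {"coping"},
    {"faith", "isolation"},
]
print(saturated(interviews))  # True: the last three interviews add no new codes
```

A real saturation judgment weighs the depth and nuance of the accounts, not merely the appearance of new code labels, so a check like this could at most supplement the researcher’s own reading of the data.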
Given the aim of in-depth understanding, sampling in qualitative inquiry is always “purposeful,” that is, to select participants who will provide the most “information-rich” accounts of the phenomena of interest (Patton, 2002 , p. 239). The purposes in “purposeful” sampling can be quite varied, depending on the focus of the research. Patton ( 2002 ), for example, includes 15 different types of sampling strategies that may be of interest to qualitative researchers, including maximum variation, homogeneous, extreme case, snowball, intensity, typical case, critical case, disconfirming case, and other kinds of sampling.
Qualitative sampling also is criterion-based, in that specific criteria used in selecting participants are based on the research questions that guide the inquiry as well as the particular qualitative approach being used (Creswell, Hanson, Clark, & Morales, 2007 ; Morrow, 2007 ). In a phenomenological study, for example, the sample may consist of a small group of individuals who share a very specific common experience (e.g., priests accused of sexual abuse), whereas a participatory action research approach may call for a sample that includes an entire community or organization (e.g., a shelter providing services for women victimized by partner violence). In addition, there are decisions to be made about the extent and kind of contact with participants, ranging from one single lengthy interview with follow-up contact to immersion in and observation of a community over several years. These decisions about who will participate in the study and what the length and nature of the contact will be also determine how the process of actually gathering data will occur.
Data Collection
As the goal of data collection in qualitative inquiry is to ensure that all information relevant to understanding a particular phenomenon is obtained (i.e., the data set is saturated), the process of gathering data often is both prolonged and iterative. Because interviews, observations, extensive field notes, cultural artifacts, and other similar kinds of documentation form the corpus of data, it is not unusual for months and even years to be devoted to data collection. Moreover, most qualitative methods assume some sort of additional contact with participants to verify the researcher’s interpretations during the analysis process, creating an iterative cycle of data collection, researcher analysis, participant feedback, additional data collection and/or analysis, and repeated feedback from participants until no new information is emerging from the process.
Much has been written about the primary data tool in qualitative research: the individual interview. Researchers must conceptualize and articulate their interview strategy in terms of length, depth, kinds of open-ended questions, degree of structure, degree and kind of probing for sensitive information, ways of ensuring that participants’ words and ideas are being captured, and ways of monitoring their own reactivity. Patton ( 2002 , as but one example) includes detailed discussion of theoretical and practical issues in planning and conducting interviews, including sections on focus groups and cross-cultural interviewing, as well as numerous tables and checklists. It is imperative that counseling psychologists undertaking qualitative research for the first time consult such resources, as there may be a tendency to assume that competent clinical interviewing skills fully prepare one for conducting interviews aimed at gathering data for research purposes. However, the roles of scientist and helper are very different, and, although good clinical skills may facilitate the kind of relationship-building that is critical to the success of any interview, acquiring an in-depth understanding for research purposes requires a different mindset and approach than coming to understand an individual therapeutically (an ethical issue to which we return below).
Because data collection in qualitative research is implemented in a deeply interpersonal manner, the researcher also must consider when and how entry into the research context will occur and how trust and rapport will be established. Again, the form that this process takes is determined largely by the qualitative approach being used. If interviews with participants who have no connection to one another constitute the primary means of data collection, then the task becomes one of establishing credibility and trust one individual at a time. But if data collection includes multiple interviews, behavioral observations, and scrutiny of organizational documents within a group of highly interconnected individuals, then the task of entry into the organization, identification of key informants, rapport-building, and role clarification will be considerably more complex. Similarly, exit from the research context also is driven by approach; a study that utilizes single isolated interviews requires a different kind of process of following up and sharing findings than does a study of an entire community in which multiple stakeholders desire a product they can use to initiate political redress of identified problems. Of course, these different approaches to relationship-based data collection also have important implications for the role and stance of the researcher.
Researcher Role
Implementing interpersonally based inquiry requires a different researcher stance than that taken in most quantitatively based studies, in which the goals are appropriate distance, control, and avoidance of researcher contamination of data. Because qualitative research relies on co-constructed representations of lived experience, the researcher is rendered both a participant and an observer in the investigative process, with values, assumptions, and world views that must be made conscious and articulated clearly. As both participants and observers, researchers must grapple with the tension inherent in those roles, including the extent to which they want to function emically (as insiders) or etically (as outsiders), the degree to which they want their observations to be overt or more covert and less obvious, the amount of self-disclosure and collaboration they will offer, the expectation of entry into a long-term or more short-term relationship with participants, and the extent to which they will function as catalysts for change (Fine, 1992, 2007 ; Morrow, 2007 ; Patton, 2002 ).
Clearly, a research approach that requires interpersonal connection as its foundation and squarely places the researcher within that connection calls for a researcher stance that differs markedly from the distanced position of quantitative approaches. Researcher “reflexivity” is the term used most often to capture this stance (e.g., Marecek, 2003; Morrow, 2005, 2007), and refers to the capacity to use one’s own experiences, thoughts, and feelings, to recognize and understand one’s own perspectives and world views, and to actively and constantly reflect upon the ways in which those might influence one’s experience of observing, collecting, understanding, interpreting, and communicating data. Rennie (2004, cited in Morrow, 2005) described reflexivity as self-awareness and agency within that self-awareness. Moreover, reflexivity is not just about the self; it also includes deep reflection about those studied, the intended and unintended audiences for the inquiry, and the cultural and historical context in which the scientific endeavor occurs. Fine (1992) captured the complexity of the reflexive stance in her description of qualitative researchers as “self-conscious, critical, and participatory analysts, engaged with but still distinct from [their] informants” (p. 254).
In actual practice, researcher reflexivity is facilitated through a variety of strategies that are articulated somewhat differently depending on the particular qualitative approach being used. These strategies may include publicly articulating one’s biases through researcher-as-instrument statements, bracketing and monitoring one’s biases, being rigorously subjective in one’s observations and interpretations, keeping and using field notes throughout the research process, continuously separating description from interpretation and judgment, using thick description to ensure remaining close to participants’ experiences, maintaining an appropriate balance between participation and observation, returning again and again to the data and/or participants to verify one’s interpretations, memoing or keeping a journal throughout the course of the study, and using external auditors or teams of multiple researchers to maintain systems of peer checking and review (Morrow, 2005 ). Researcher reflexivity is important throughout the entire inquiry, but it is especially critical during the process of analyzing, interpreting, and communicating the data in the study.
Analysis, Interpretation, and Communication of Data
As noted earlier, qualitative approaches differ in the extent to which systematic analytic principles have been detailed in specific how-to formats. However, all offer conceptual delineations of data analyses that parallel the core paradigmatic assumptions of the approach. Thus, a grounded theory analysis moves the researcher through a system of coding and constantly comparing data to an end point of generating an emergent theory grounded in the lived experiences of the participants. A narratological researcher, on the other hand, will (re)arrange narratives into a chronologically and psychologically coherent, storied account of lived experience. Participatory action researchers involve the constituent group(s) in making sense of the data and consciously use the data to mobilize individuals and the community into actions aimed at social change.
Regardless of specific approach, all qualitative methods rely heavily on researcher reflexivity in the analysis process. This reflexive stance compels a continual return to and immersion in the data—not only the narratives or other data gathered from participants, but also the memos, journals, field notes, research team notes, and other documentation of the extensive and intensive process of rigorous thinking that has occurred throughout the inquiry. Thus, six or eight interview transcripts alone (totaling, at minimum, about 150 pages of text) will generate hundreds more pages of an analysis record and audit trail. Moreover, it can be assumed that many hundreds of hours will be devoted to reading, coding, (re)arranging, thematizing or propertizing, theorizing, (re)checking, obtaining feedback, and discussing the data. When other kinds of data are added (e.g., behavioral observations, artifacts, historical records), the sheer size and complexity of the data set becomes quite challenging, and it should be clear why continually interrogating the data corpus is an absolute necessity in qualitative research.
Capturing the enormity and complexity of the data analysis process for purposes of communicating findings also is extremely challenging, and not particularly well-suited to the length and format constraints of most scholarly journals (Morrow, 2005 ). Morrow ( 2005 ) has provided a cogent guide to writing publishable versions of qualitative inquiries, and many excellent studies have been published in counseling psychology journals despite the difficulties. Unfortunately, the brevity of most published accounts belies the extensive work that undergirds those studies and provides limited information about why particular conceptual, sampling, data collection, or analytic decisions were made, thus offering little basis for judging the quality of the research.
Evaluating Qualitative Research
Because qualitative and quantitative research emanate from different paradigmatic assumptions, the criteria for judging the quality and rigor of quantitative research simply do not apply to qualitative studies. Attempts have been made to describe evaluation criteria for qualitative studies that parallel the quantitative criteria of validity, reliability, generalizability, and objectivity (probably developed, at least in part, to make qualitative work more acceptable to the positivist researchers comprising most editorial and review boards). However, Morrow ( 2005 ) argues that such criteria do not mean or accomplish the same things, and that qualitative studies are more appropriately evaluated using standards that are congruent internally with what qualitative research seeks to do. Morrow advises the development and use of “intrinsic standards of trustworthiness that have emerged more directly from the qualitative endeavor” (2005, p. 252).
Key to discussions of evaluating the rigor of qualitative research is the concept of trustworthiness or credibility (Morrow, 2005 ). A comprehensive evaluation framework offered by Morrow ( 2005 ) outlines four overarching or “transcendent” criteria (p. 250), so termed because they transcend the particular requirements of any specific approach and apply to the evaluation of all qualitative inquiry. The first criterion for judging the trustworthiness of a study is social validity, or the social value of the project. The second criterion addresses the way in which the study handles subjectivity and reflexivity on the part of the researcher, so that the reader can determine whether the participants’ accounts are being honored or whether the findings merely or predominantly reflect the opinions of the researcher. Morrow ( 2005 ) advises that, regardless of paradigmatic and axiological approach (i.e., whether researcher subjectivity is bracketed and monitored or incorporated as a driving force in the study), researchers must make their implicit assumptions and biases fully and clearly explicit to themselves and to all others.
The third criterion for judging the trustworthiness of a study lies in the adequacy of the data. Because sample size has little to do with the richness, breadth, and depth of qualitative research data, the study must demonstrate other forms of evidence that the data are maximally informative. Such evidence might include information-rich cases, appropriate sampling, saturated data sets, lengthy and open-ended interviews, feedback from participants, multiple data types and sources, field notes indicating rapport with participants, and inclusion of discrepant or disconfirming cases. The fourth criterion for evaluating the trustworthiness of the inquiry is the adequacy of the interpretation. There must be clear evidence of immersion in the data set during analysis, the use of a specified analytic framework and analytic memos, and a balance in the writing between the interpretations of the researcher and the direct words of the participants (Morrow, 2005 ).
In addition to these four transcendent criteria, Morrow ( 2005 ) also includes criteria that are more specific to the paradigm that undergirds a particular study. In a constructivist/interpretivist study, for example, the additional criteria of fairness, authenticity, and meaning would be important, whereas a critical/ideological study would be expected to include those criteria but also demonstrate consequential and transgressive evidence. Finally, regardless of approach, the trustworthiness of a study also must include evidence that the researcher attended to the social and ethical issues inherent in that study.
Ethics, Politics, and Social Responsibility
Social, political, and ethical considerations are not pertinent uniquely to qualitative inquiry, as all research is embedded in a sociopolitical and scientific context and therefore must attend to issues of social power, researcher responsibility, protection of people from harm, and potential (mis)use of findings. However, “[b]ecause qualitative methods are highly personal and interpersonal, because naturalistic inquiry takes the researcher into the real world where people live and work, and because in-depth interviewing opens up what is inside people—qualitative inquiry may be more intrusive and involve greater reactivity than surveys, tests, and other quantitative approaches” (Patton, 2002 , p. 407). That is, relationship-based methods create unique challenges in the implementation of the standard ethical requirements of the scientific enterprise, and they “increase both the likelihood of ‘ethically relevant moments’ and the ambiguity of how, or whether, specific ethical standards apply to the question at hand” (Haverkamp, 2005 , p. 148).
The relational focus of qualitative inquiry also is buttressed by the use of linguistically based data, which offer researchers considerable interpretive latitude. However, those constructions typically are supported by participant verification which, in turn, is obtained through repeated and prolonged contact. In addition, qualitative inquiry transforms the notion of research benefit where, particularly in the critical/ideological approach, outcomes of a study must include direct benefit to participants in the form of knowledge and empowerment. Finally, the very process of qualitative research has ethical implications, as its flexibility, fluidity, and changeability necessitate ethical decision making repeatedly throughout the entire inquiry process.
Many discussions of ethical, political, and social issues in qualitative research may be found in the literature (e.g., Fine, 1999, 2007 ; Haverkamp, 2005 ; Marecek, 2003 ; Morrow, 2007 ). Haverkamp ( 2005 ) offers a particularly useful discussion for counseling psychologists, recommending a synthesis of virtue ethics, principle ethics, and an ethic of care, all of which are central to graduate training in our field and thus known to counseling psychologists. Haverkamp ( 2005 ) calls for “professional reflexivity” (p. 152), the ethical counterpart to research reflexivity, which refers to a conscious consideration of the ways in which our social roles, skills, and knowledge base may influence our research practices, including relationships with participants. Professional reflexivity is the cornerstone of competence, and Haverkamp ( 2005 ) notes that professionally reflexive competence includes not only expertise in the populations and topics we wish to investigate, but also in the qualitative methods that we will use in the investigation.
Probably the most widely discussed ethical issue in qualitative research involves researcher boundaries and the complexities inherent in multiple relationships with participants. The deep and prolonged engagement between researcher and participants; the centrality of the researcher’s positionality and values; the public and individual perceptions and expectations of psychologists as healers; the skills of clinically trained psychologists in eliciting deeply private and even unconscious information; the co-creation of the meaning, interpretation, and form of the research product; and the focus of much qualitative inquiry on marginalized and disempowered social or cultural groups elicit a host of complex ethical issues that have clear social and political ramifications. Such issues include dual relationships, conflicts of roles and interests, confidentiality, informed consent, coercion, fiduciary responsibility, and use of professional and social power.
Although detailed discussion of these issues is well beyond the scope of this chapter, we return to the concept of trustworthiness, noted above as the primary criterion for evaluating the quality of a study. Haverkamp ( 2005 ) argues that trustworthiness does not pertain only to the rigor of methods used in a study, but that it “is an inherently relational construct with relevance for multiple dimensions of the research enterprise” (p. 146). Trustworthiness in the realm of ethics recognizes the potential vulnerability of participants involved in qualitative inquiry, and it reminds researchers that they must maintain constant vigilance in their responsibility to protect participants from harm. Fine ( 2007 ) extends this notion into the arena of political and social responsibility by urging a deeper kind of responsibility upon counseling psychologists. She asks that we consider the harm implicit in oppressive social structures and protect our participants by refusing to perpetuate their narratives of denial, blame, or victimization. Fine asserts that we “bear responsibility to theorize that which may not be spoken by those most vulnerable, or, for different reasons, by those most privileged” (p. 472), a clarion call for the kind of qualitative inquiry that also becomes individual and social intervention.
Mixed Methods and Future Prospects
It should be clear that both quantitative and qualitative methods have much to offer in understanding the kinds of issues of interest to most counseling psychologists—relationships, work, counseling, culture, health, and the like. It is our position that mixing these methods in creative ways offers much potential in solving some of the thorniest problems in our field today, and we urge researchers to consider mixed method approaches.
That being said, it is also true that quantitative and qualitative methods may not necessarily mesh well or complement one another within one study. Because they utilize different paradigms, different conceptions of researcher role, different approaches to interacting with participants, different ways of unearthing information, and different articulations of the research enterprise, the outcomes of quantitative and qualitative methods may be not only disparate but wholly incompatible. Dealing with this fundamental gap requires great caution and care, and it may be easier to alternate quantitative and qualitative approaches across studies within an ongoing program of research over many years, shifting the approach to illuminate different aspects of the same research problem (Ponterotto & Grieger, 2007 ).
Several authors have offered perspectives on the challenges and possibilities of mixed-methods approaches. Marecek ( 2003 ) offers a less polarized view of quantitative and qualitative methods, suggesting that the tension is not that one approach produces greater truth than the other, but that they offer different kinds of truths, and researchers must determine which truth is of greatest interest to them in understanding a particular phenomenon. In addition, she observes that any research approach, regardless of paradigmatic and methodological underpinnings, can be used oppressively or dismissively by researchers, and that no particular method guarantees appropriate handling of social justice goals or redress of social ills. Patton ( 2002 ) offers an assortment of possibilities that mix research design, measurement, and analysis in creative approaches to specific problems.
Ponterotto and Grieger ( 1999 , 2007 ) suggest that, just as psychologists can learn to embrace different cultures and languages and become bicultural, researchers can learn to be facile in both quantitative and qualitative inquiry and become bicultural or “merged” in their research identity (1999, p. 59), termed “bimethodological” (2007, p. 408). These authors argue that the flexibility of a merged identity produces scientific richness, but they caution that becoming truly bicultural methodologically requires immersion in the unfamiliar culture—that is, counseling psychologists must actively undertake qualitative research to learn qualitative research. Ponterotto and Grieger ( 1999 ) describe in detail two mixed-methods studies that they judge to be of high quality (Jick, 1983 , and Blustein et al., 1997 ), demonstrating how the researchers successfully navigated the complexities of contrasting paradigmatic approaches, and explaining how and why a mixed-methods approach was effective in these particular investigations. The work of Fassinger and her colleagues in women’s career development provides an example of the use of both quantitative (e.g., Fassinger, 1990 ) and qualitative (e.g., Gomez et al., 2001 ) approaches in different studies over time to explicate a vocational process. These examples may be of help to counseling psychologists wishing to stretch their scientific competence and work toward becoming more bimethodological.
In concluding this chapter, we express hope that more researchers will embrace the notion of using mixed methods in their research programs. In some cases, this “mixing” may be done by one researcher—within one study or over time in a programmatic series of studies designed to enhance understanding of some phenomenon of interest. In other cases, groups of researchers may combine their efforts, some engaged in qualitative and others in quantitative investigations of a particular problem. In all cases, we assert that researchers must become competent in any method they wish to use, and that, at the same time, we all become conversant enough in both quantitative and qualitative research approaches to appreciate their significant and unique contributions to scholarly progress.
AMOS. Structural equation modeling software. [Software]. Available from www.spss.com/Amos/ .
Arminio, J. ( 2001 ). Exploring the nature of race-related guilt. Journal of Multicultural Counseling and Development , 29 , 239–252.
Avalos, L. ( 2005 ). An initial examination of a model of intuitive eating . Unpublished honors thesis, Department of Psychology, the Ohio State University.
Baron, R. M., & Kenny, D. A. ( 1986 ). The moderator-mediator variable distinction in social psychological research. Journal of Personality and Social Psychology , 51 , 1173–1182.
Bentler, P. (1995). EQS: Structural equation modeling software. [Software]. Available from Multivariate Software: www.mvsoft.com/ .
Blustein, D. L., Phillips, S. D., Jobin-Davis, K., Finkelberg, S. L., & Rourke, A. E. ( 1997 ). A theory- building investigation of the school-to-work transition. The Counseling Psychologist , 25 , 364–402.
Borgen, F. H., & Betz, N. E. ( 2008 ). Career self-efficacy and personality: Linking career confidence and the healthy personality. Journal of Career Assessment , 16 , 22–43.
Browne, M. ( 2001 ). An overview of analytic rotation in exploratory factor analysis. Multivariate Behavioral Research , 36 , 111–150.
Browne, M., & Cudeck, R. ( 1993 ). Alternative ways of assessing model fit. In K. A. Bollen, & J. Long (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage.
Browne, M., Cudeck, R., Tateneni, K., & Mels, G. (2004). Comprehensive exploratory factor analysis (CEFA). Computer software and manual. Retrieved from http://faculty.psy.ohio-state.edu/browne/software.php .
Charmaz, K. ( 2000 ). Grounded theory: Objectivist and constructivist methods. In N. K. Denzin, & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 509–536). Thousand Oaks, CA: Sage Publications.
Cohen, J. ( 1969 ). Statistical power analyses for the behavioral sciences . New York: Academic Press.
Cohen, J. ( 1988 ). Statistical power analyses for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cohen, J. ( 1992 ). A power primer. Psychological Bulletin , 112 , 155–159.
Cohen, J. ( 1994 ). The earth is round (p < .05). American Psychologist , 49 , 997–1003.
Creswell, J. W., Hanson, W. E., Plano Clark, V. L., & Morales, A. ( 2007 ). Qualitative research designs: Selection and implementation. The Counseling Psychologist , 35 (2), 236–264.
DeNeve, K. M., & Cooper, H. ( 1998 ). The happy personality. Psychological Bulletin , 124 , 197–229.
Denzin, N. K., & Lincoln, Y. S. ( 2000 ). Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. ( 1999 ). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods , 4 , 272–299.
Fassinger, R. E. ( 1990 ). Causal models of career choice in two samples of college women. Journal of Vocational Behavior , 36 , 225–248.
Fassinger, R. E. ( 2001 ). On remodeling the master’s house: Tools for dismantling sticky floors and glass ceilings . Invited keynote address, Fifth Biennial Conference of the Society for Vocational Psychology, Houston, TX.
Fassinger, R. E. ( 2005 ). Paradigms, praxis, problems, and promise: Grounded theory in counseling psychology research. Journal of Counseling Psychology , 52 , 156–166.
Fassinger, R. E., & O’Brien, K. M. ( 2000 ). Career counseling with college women: A scientist-practitioner-advocate model of intervention. In D. Luzzo (Ed.), Career development of college students: Translating theory and research into practice (pp. 253–265). Washington, DC: American Psychological Association Books.
Fine, M. ( 2007 ). Expanding the methodological imagination. The Counseling Psychologist , 35 , 459–473.
Fine, M., Torre, M. E., Boudin, K., Bowen, I., Clark, J., Hylton, D., et al. ( 2003 ). Participatory action research: From within and beyond prison bars. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design (pp. 173–198). Washington, DC: American Psychological Association Books.
Fitzgerald, L. F., & Hubert, L. J. ( 1987 ). Multidimensional scaling: Some possibilities for counseling psychology. Journal of Counseling Psychology , 34 , 469–480.
Forester, M., Kahn, J., & Hesson-McInnis, M. ( 2004 ). Factor structure of three measures of research self-efficacy. Journal of Career Assessment , 12 , 3–16.
Frazier, P. A., Tix, A. P., & Barron, K. E. ( 2004 ). Testing moderator and mediator effects in counseling psychology research. Journal of Counseling Psychology , 51 , 115–134.
Friedman, M. L., Friedlander, M. L., & Blustein, D. L. ( 2005 ). Toward an understanding of Jewish identity: A phenomenological study. Journal of Counseling Psychology , 52 , 77–83.
Giorgi, A. P., & Giorgi, B. M. ( 2003 ). The descriptive phenomenological psychological model. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design (pp. 243–274). Washington, DC: American Psychological Association Books.
Glaser, B. G. ( 1992 ). Basics of grounded theory analysis: Emergence vs . forcing . Mill Valley, CA: Sociology Press.
Glaser, B. G. ( 2000 ). The future of grounded theory. Grounded Theory Review , 1 , 1–8.
Glaser, B. G., & Strauss, A. L. ( 1967 ). The discovery of grounded theory: Strategies for qualitative research . Chicago: Aldine.
Glass, G. V. ( 1976 ). Primary, secondary, and meta-analysis of research. Educational Researcher , 5 , 3–8.
Glass, G. V. ( 2006 ). Meta-analysis: Quantitative synthesis of research findings. In J. Green, G. Camilli, & P. Elmore (Eds.), Handbook of complementary methods in education research (pp. 427–438). Mahwah, NJ: Erlbaum.
Glass, G. V., & Hopkins, K. ( 1996 ). Statistical methods in psychology and education (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
Gomez, M. J., Fassinger, R. E., Prosser, J., Cooke, K., Mejia, B., & Luna, J. ( 2001 ). Voces abriendo caminos (voices forging paths): A qualitative study of the career development of notable Latinas. Journal of Counseling Psychology , 48 , 286–300.
Gorsuch, R. L. ( 1983 ). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.
Haase, R. F., & Ellis, M. V. ( 1987 ). Multivariate analysis of variance. Journal of Counseling Psychology , 34 , 404–413.
Hansen, J. C., Dik, B., & Zhou, S. ( 2008 ). An examination of the structure of leisure interests in college students, working age adults, and retirees. Journal of Counseling Psychology , 55 , 133–145.
Hardy, G. E., Barkham, M., Field, S. D., Elliott, R., & Shapiro, D. A. ( 1998 ). Whingeing versus working: Comprehensive process analysis of a “vague awareness” event in psychodynamic-interpersonal therapy. Psychotherapy Research , 8 , 334–353.
Haverkamp, B. E. ( 2005 ). Ethical perspectives on qualitative research in applied psychology. Journal of Counseling Psychology , 52 (2), 146–155.
Hayton, J. C., Allen, D. G., & Scarpello, V. ( 2004 ). Factor retention decisions in exploratory factor analysis: A tutorial on parallel analysis. Organizational Research Methods , 7 , 191–205.
Henwood, K., & Pidgeon, N. ( 2003 ). Grounded theory in psychological research. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design (pp. 131–156). Washington, DC: American Psychological Association Books.
Heppner, P., Wampold, B., & Kivlighan, D. ( 2008 ). Research design in counseling (4th ed.). Belmont, CA: Brooks-Cole.
Hermann, K. ( 2005 ). Path models of the relationships of instrumentality and expressiveness, social self-efficacy, and self-esteem to depressive symptoms in college students . Unpublished PhD Dissertation, Department of Psychology, Ohio State University.
Hermann, K., & Betz, N. E. ( 2006 ). Path models of the relationships of instrumentality and expressiveness, social self-efficacy, and self-esteem to depressive symptoms in college students. Journal of Social and Clinical Psychology , 25 , 1086–1106.
Hill, C. E., Knox, S., Thompson, B. J., Williams, E. N., Hess, S. A., & Ladany, N. ( 2005 ). Consensual qualitative research: An update. Journal of Counseling Psychology , 52 (2), 196–205.
Hogan, T. P. ( 2007 ). Psychological testing: A practical introduction (2nd ed.). New York: Wiley.
Hoshmand, L. T. ( 2005 ). Narratology, cultural psychology, and counseling research. Journal of Counseling Psychology , 52 (2), 178–186.
Hu, L., & Bentler, P. ( 1999 ). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling , 6 , 1–55.
Jick, T. D. ( 1983 ). Mixing qualitative and quantitative methods: Triangulation in action. In J. Van Maanen (Ed.), Qualitative methodology (pp. 135–148). Beverly Hills, CA: Sage.
Joreskog, K. G., & Sorbom, D. (2008). LISREL 8.8 . [Software]. Lincolnwood, IL: Scientific Software. Retrieved from www.ssicentral.com/lisrel/new.html .
Juntunen, C. L., Barraclough, D. J., Broneck, C. L., Seibel, G. A., Winrow, S. A., & Morin, P. M. ( 2001 ). American Indian perspectives on the career journey. Journal of Counseling Psychology , 48 , 274–285.
Kahn, J. H. ( 2006 ). Factor analysis in counseling psychology research, training, and practice. The Counseling Psychologist , 34 , 684–718, inside back cover.
Kanji, G. K. ( 1993 ). 100 statistical tests . Newbury Park, CA: Sage.
Kashubeck-West, S., Coker, A. D., Awad, G. H., Hix, R. D., Bledman, R. A., & Mintz, L. ( 2008 , August). Psychometric evaluation of body image measures in African American women . Poster presented at the meeting of the American Psychological Association, Boston, MA.
Kidd, S. A., & Kral, M. J. ( 2005 ). Practicing participatory action research. Journal of Counseling Psychology , 52 , 187–195.
Killeen, P. R. ( 2005 ). An alternative to null-hypothesis significance tests. Psychological Science , 16 , 345–353.
Kinnier, R. T., Tribbensee, N. E., Rose, C. A., & Vaughan, S. M. ( 2001 ). In the final analysis: More wisdom from people who have faced death. Journal of Counseling & Development , 79, 187–195.
Kirk, R. ( 1996 ). Practical significance: A concept whose time has come. Educational and Psychological Measurement , 56 , 746–759.
Larson, L., Wei, M., Wu, T., Borgen, F., & Bailey, D. ( 2007 ). Discriminating among educational majors and career aspirations in Taiwanese undergraduates. Journal of Counseling Psychology , 54 , 395–408.
Leff, S. S., Costigan, T., & Power, T. J. ( 2004 ). Using participatory research to develop a playground-based prevention program. Journal of School Psychology , 42 , 3–21.
Lent, R. W., Brown, S. D., Sheu, H.-B., Schmidt, J., Brenner, B., Gloster, C., et al. ( 2005 ). Social cognitive predictors of academic interest and goals in engineering: Utility for women and students at historically Black universities. Journal of Counseling Psychology , 52 , 84–92.
MacCallum, R. C., & Austin, J. ( 2000 ). Applications of structural equation modeling in psychological research. Annual Review of Psychology , 51 , 201–226.
MacCallum, R., Roznowski, M., & Necowitz, L. ( 1992 ). Model modifications in covariance structure analysis: The problem of capitalization on chance. Psychological Bulletin , 111 , 490–504.
MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. ( 1999 ). Sample size in factor analysis. Psychological Methods , 4 , 84–89.
Marecek, J. ( 2003 ). Dancing through minefields: Toward a qualitative stance in psychology. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design (pp. 49–70). Washington, DC: American Psychological Association Books.
Miller, P. J., Hengst, J. A., & Wang, S. ( 2003 ). Ethnographic methods: Applications from developmental cultural psychology. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design (pp. 49–70). Washington, DC: American Psychological Association Books.
Morrow, S. L. ( 2005 ). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology , 52 (2), 250–260.
Morrow, S. L. ( 2007 ). Qualitative research in counseling psychology: Conceptual foundations. The Counseling Psychologist , 35 (2), 209–235.
Morrow, S. L., & Smith, M. L. ( 1995 ). Constructions of survival and coping by women who have survived childhood sexual abuse. Journal of Counseling Psychology , 42 , 24 – 33 .
Murray, M. ( 2003 ). Narrative psychology and narrative analysis. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design . Washington, DC: American Psychological Association Books.
Muthen, L. K., & Muthen, B. O. (2006). Mplus user's guide (4th ed.). Los Angeles: Muthen and Muthen. Retrieved from www.statmodel.com/company.shtml .
Noonan, B. M., Gallor, S., Hensler-McGinnis, N., Fassinger, R. E., Wang, S., & Goodman, J. ( 2004 ). Challenge and success: A qualitative study of the career development of highly achieving women with physical and sensory disabilities. Journal of Counseling Psychology , 51 , 68–80.
Patton, M. Q. ( 2002 ). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
Polkinghorne, D. ( 2005 ). Language and meaning: Data collection in qualitative research. Journal of Counseling Psychology , 52 , 137–145.
Ponterotto, J. G. ( 2005 ). Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology , 52 (2), 126–136.
Ponterotto, J. G., & Grieger, I. ( 1999 ). Merging qualitative and quantitative perspectives in a research identity. In M. Kopala, & L. A. Suzuki (Eds.), Using qualitative methods in psychology (pp. 49–61.). Thousand Oaks, CA: Sage Publications.
Ponterotto, J. G., & Grieger, I. ( 2007 ). Effectively communicating qualitative research. The Counseling Psychologist , 35 (3), 431–458.
Rennie, D. L. ( 1994 ). Clients’ deference in psychotherapy. Journal of Counseling Psychology , 41, 427–437.
Rennie, D. L. ( 2000 ). Grounded theory methodology as methodological hermeneutics. Theory & Psychology , 10 (4), 481–502.
Rennie, D. L. ( 2004 ). Anglo-North American qualitative counseling and psychotherapy research. Psychotherapy Research , 14 , 37–55.
Richie, B. S., Fassinger, R. E., Linns, S., Johnson, J., Prosser, J., & Robinson, S. ( 1997 ). Persistence, connection, and passion: A qualitative study of the career development of highly achieving African American/Black and White women. Journal of Counseling Psychology , 44 , 133–148.
Russell, D., Kahn, J., Spoth, R., & Altmaier, E. ( 1998 ). Analyzing data from experimental studies. Journal of Counseling Psychology , 45 , 18–29.
Schafer, J. L., & Graham, J. W. ( 2002 ). Missing data: Our view of the state of the art. Psychological Methods , 7 , 147–177.
Schumaker, R. E., & Lomax, R. G. ( 2004 ). A beginner’s guide to structural equation modeling . Mahwah, NJ: Lawrence Erlbaum.
Sherry, A. ( 2006 ). Discriminant analysis in counseling psychology research. The Counseling Psychologist , 34 , 661–683.
Sireci, S. G., & Talento-Miller, E. ( 2006 ). Evaluating the predictive validity of Graduate Management Admissions Test scores. Educational and Psychological Measurement , 66 , 305–317.
Smith, M. L., & Glass, G. V. ( 1977 ). Meta-analysis of psychotherapy outcome studies. American Psychologist , 32 , 752–760.
Sobel, M. E. ( 1982 ). Asymptotic intervals for indirect effects in structural equation models. In S. Leinhart (Ed.), Sociological methodology (pp. 290–312). San Francisco: Jossey-Bass.
Spearman, C. ( 1904 ). General intelligence: Objectively defined and measured. American Journal of Psychology , 15 , 201–293.
Steiger, J. H. ( 1990 ). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research , 25 , 173–180.
Sterner, W. R. ( 2011 ). What is missing in counseling research? Reporting missing data. Journal of Counseling and Development , 89 , 56–63.
Stevens, S. S. ( 1951 ). Handbook of experimental psychology . New York: Wiley.
Strauss, A. L. ( 1987 ). Qualitative analysis for social scientists . New York: Cambridge University Press.
Strauss, A. L., & Corbin, J. ( 1998 ). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). Thousand Oaks, CA: Sage Publications.
Suzuki, L. A., Ahluwalia, M. K., Mattis, J. S., & Quizon, C. A. ( 2005 ). Ethnography in counseling psychology research: Possibilities for application. Journal of Counseling Psychology , 52 , 206–214.
Suzuki, L. A., Prendes-Lintel, M., Wertlieb, L., & Stallings, A. ( 1999 ). Exploring multicultural issues using qualitative methods. In M. Kopala, & L. A. Suzuki (Eds.), Using qualitative methods in psychology (pp. 123–134). Thousand Oaks, CA: Sage Publications.
Swanson, J. L. ( 2011 ). Measurement and assessment in counseling psychology. In E. M. Altmaier, & J. C. Hansen (Eds.), The Oxford handbook of counseling psychology . New York: Oxford University Press.
Tabachnick, B. G., & Fidell, L. S. ( 2007 ). Using multivariate statistics (5th ed.). New York: Harper Collins College Publishers.
Thurstone, L. L. ( 1947 ). Multiple factor analysis . Chicago: University of Chicago Press.
Tracey, T. J. G. ( 2008 ). Adherence to RIASEC structure as a key career decision construct. Journal of Counseling Psychology , 55 , 146–157.
Vacha-Haase, T., & Thompson, B. ( 2004 ). How to estimate and interpret various effect sizes. Journal of Counseling Psychology , 51 , 473–481.
Wertz, F. J. ( 2005 ). Phenomenological research methods for counseling psychology. Journal of Counseling Psychology , 52 , 167–177.
Weston, R., & Gore, P. ( 2006 ). A brief guide to structural equation modeling. The Counseling Psychologist , 34 , 719–751.
Wilkinson, L., & Task Force on Statistical Inference. ( 1999 ). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist , 54 , 594–604.
Young, R. A., Valach, L., & Domene, J. E. ( 2005 ). The action-project method in counseling psychology. Journal of Counseling Psychology , 52 (2), 215–223.
Young, R. A., Valach, L., Paseluihko, M. A., Dover, C., Matthes, G. E., Paproski, D., et al. ( 1997 ). The joint action of parents and adolescents in conversation about career. Career Development Quarterly , 46 , 72–86.
Zwick, W. R., & Velicer, W. F. ( 1986 ). A comparison of five rules for determining the number of components to retain. Psychological Bulletin , 99 , 432–442.
Research methodology: a basic awareness study
For any profession to survive and develop, ongoing research is necessary: it challenges old concepts and introduces and develops new ideas. For this to take place, a reliable procedure must be followed.
Research methodology and how research findings can inform counselling practice
Before any counselling research can be undertaken, it is vital to consider the applicable ethical guidelines. McLeod (2011, p.167) states that “Research in counselling is bound by a general set of ethical guidelines applicable to all types of investigation of human subjects, but also generates unique dilemmas and problems distinctive to the nature of the counselling process”. In the ethical guidelines for researching counselling and psychotherapy, Bond (2004, p.9) adds that research integrity requires a robust ethical commitment to fairness as well as honesty and competence in all aspects of the work. The BACP Ethical Framework also includes guidelines on research, and these ought to be adhered to.
The initial stage of any research project involves the collection of relevant information and data, which can be sourced from journals, books and other literature. Sources could also include personal contact, video and audio tapes, research articles, review papers and online searches.
Once sufficient and valid data have been gathered and assimilated, the task of designing and implementing an effective research plan begins. In selecting appropriate methods, the researcher decides whether to use qualitative, quantitative or mixed methods. Although a distinction is commonly drawn between qualitative and quantitative research, McLeod (2011, p.73) points out that many features of qualitative research can be found in certain quantitative studies. Broadly, quantitative research is an objective process based upon statistical evidence: it deals in numbers and statistics gathered by mathematical or computational procedures. Qualitative research differs in that it uses words rather than numbers, and it divides into the gathering and the analysing of data.
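The quantitative/qualitative contrast described above can be illustrated with a minimal sketch. All of the scores, interview excerpts and theme labels below are invented for illustration: a quantitative analysis reduces numeric data to summary statistics, while a very simple qualitative pass tallies theme codes that a researcher has assigned to participants' words.

```python
from statistics import mean, stdev

# Quantitative: numeric scores analysed by mathematical procedures
# (hypothetical questionnaire scores, invented for this sketch).
anxiety_scores = [12, 15, 9, 20, 14, 11]
print(f"mean = {mean(anxiety_scores):.1f}, sd = {stdev(anxiety_scores):.1f}")

# Qualitative: words rather than numbers -- here, tallying hypothetical
# theme codes a researcher has assigned to interview excerpts.
coded_excerpts = [
    ("I felt nobody listened to me", "isolation"),
    ("Talking helped me see it differently", "reframing"),
    ("I was alone with it for years", "isolation"),
]
theme_counts = {}
for _, theme in coded_excerpts:
    theme_counts[theme] = theme_counts.get(theme, 0) + 1
print(theme_counts)
```

Real qualitative analysis is, of course, interpretive rather than mechanical; the tally above only illustrates that the raw material is words to which meaning-based codes are applied, not numbers.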
Once an appropriate method has been selected, the next step is the collection of data from the subjects chosen for the study. McLeod (2011, p.35) highlights the importance of giving careful thought at this stage to how the gathered data will eventually be analysed. It is usual for all data to be analysed at one time, but in many qualitative studies data are analysed as they are gathered. McLeod (2011, p.36) also stresses that the method of data analysis should be decided upon in the initial planning and design stage.
Once all the results have been gathered, they need to be analysed, evaluated and discussed. The findings can then be written up and published. McLeod (2011, p.36) tells us that writing a research report is complex, as technical, descriptive and analytical information has to be combined effectively.
In addition to randomised controlled trials, the BACP Research homepage (accessed April 2013) identifies two other types of research: practice-based research, which involves the use of both pre- and post-measures such as CORE, and systematic reviews, in which the researcher aggregates the findings of similar studies all addressing the same question.
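Practice-based research of the kind described above is commonly summarised as a pre-post effect size; Cohen's d (Cohen, 1988, cited in the chapter's reference list) is one standard choice. The sketch below uses invented CORE-style scores for six hypothetical clients, and the pooled-standard-deviation formula shown is one common variant among several.

```python
from statistics import mean, stdev

# Hypothetical pre- and post-therapy CORE-style distress scores for six
# clients (invented for illustration; lower scores = less distress).
pre = [18.0, 22.0, 15.0, 20.0, 17.0, 24.0]
post = [10.0, 14.0, 12.0, 11.0, 9.0, 16.0]

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of two equal-sized groups."""
    pooled_sd = ((stdev(a) ** 2 + stdev(b) ** 2) / 2) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

print(f"mean change = {mean(pre) - mean(post):.1f}")
print(f"Cohen's d = {cohens_d(pre, post):.2f}")
```

A systematic review would then aggregate such effect sizes across many comparable studies, for example by averaging them, rather than relying on any single study's result.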
There are various ways in which counsellors can access these research findings in order to inform their practice. The internet is one route: the website http://www.cprjournal.com/ , for example, aims to help counsellors get the most out of research results, offering easily accessible information on the contents of each issue of Counselling & Psychotherapy Research as well as on wider research developments. Books are another source; one example is Essential Research Findings in Counselling and Psychotherapy: The Facts Are Friendly by Mick Cooper (2008), published in London by Sage Publications. There are also numerous journals and magazines from which information can be gathered, such as Therapy Today, the British Journal of Psychotherapy and the BACP's Counselling and Psychotherapy Research.
Published research can be both informative and supportive in the practice of counselling. For example, the BACP Research homepage (accessed April 2013) cites evidence that counselling is as effective as CBT. In the same study, 51% of clients chose counselling rather than antidepressant medication, although Baker et al. (2002) suggest that a combination of counselling and antidepressant medication may produce the most beneficial outcomes for clients. It is also interesting to note that Roijen et al. (2006) found no significant difference in costs between the three interventions of counselling, CBT and usual care, and some research indicated that counselling is actually less costly than CBT.
Another example of how research can inform practice involved a counsellor working with a client who continued to self-harm despite on-going therapy, which left the counsellor feeling uncertain of their proficiency. By reading the research findings of Fleet and Mintz (2013, pp.44-52), the counsellor came to understand that all five counsellors who took part in that study experienced a range of distressing and intense emotions, and that this is usual when working with clients who self-harm. Further reading showed that this type of client work can affect a counsellor's self-confidence and can leave them feeling de-skilled.
In conclusion, it is important for counsellors to keep up to date with research findings in order to ensure that their work with clients is rooted in a robust rationale. It keeps their know-how and methods of treatment current and gives them a choice of interventions to apply ethically, when and where appropriate.
BIBLIOGRAPHY:
- BACP Research Homepage, Effectiveness of counselling, available from https://www.bacp.co.uk/research/resources/index.php (accessed April 2013).
- Baker, R., Baker, E., et al. (2002) ‘A naturalistic longitudinal evaluation of counselling in primary care’, Counselling Psychology Quarterly, Volume 15, (4): (p.359-373).
- Baker, S.B. (2012) A new view of evidence-based practice, available from http://ct.counseling.org/2012/12/a-new-view-of-evidence-based-practice/ (accessed May 2013).
- Bond, T. (2004) Ethical guidelines for research counselling and psychotherapy. Rugby: British Association for Counselling and Psychotherapy. (p.9).
- Bond, T. (2013) BACP Ethical Framework for Good Practice in Counselling & Psychotherapy, Lutterworth: British Association for Counselling and Psychotherapy.
- Cooper, M. (2011) ‘Meeting the demand for evidence–based practice’, Therapy Today, Volume 22, Issue 4.
- Cottrell, S. (2003) The Study Skills Handbook Second Edition, Basingstoke: Palgrave Macmillan.
- Dryden, W. (2003) Handbook of Individual Therapy Fourth Edition, London: Sage Publications Ltd.
- Fleet, D. and Mintz, R. (2013) ‘Counsellors’ perceptions of client progression when working with clients who intentionally self-harm and the impact such work has on the therapist’, Counselling and Psychotherapy Research, Volume 13, (1). (p. 44-52).
- McLeod, J. (2009) An Introduction to Counselling Fourth Edition, Berkshire: Open University Press
- McLeod, J. (2011) Doing Counselling Research 2nd Edition, London: Sage Publications. (pp. 35/36/73/167).
- Roijen, L.H., Van Straten, A. et al. (2006) ‘Cost-utility of brief psychological treatment for depression and anxiety’, British Journal of Psychiatry, Volume 188, (4): (p.323-329).
- Wosket, V. (1999) The Therapeutic Use of Self, Hove: Routledge.
The views expressed in this article are those of the author. All articles published on Counselling Directory are reviewed by our editorial team .
The Sheridan Libraries
Research Methods Books
- SAGE Research Methods: an online research methods tool that helps students, faculty and researchers with their research projects. It includes over 175,000 pages of SAGE’s book, journal and reference content, with advanced search and discovery tools. You can find information about research design, methodologies, and writing up your results.
- Last Updated: Nov 17, 2023 11:46 AM
- URL: https://guides.library.jhu.edu/counseling
Practitioner Research in Counselling
- By: John McLeod
- Publisher: SAGE Publications Ltd
- Series: Professional Skills for Counsellors
- Publication year: 1999
- Online pub date: January 01, 2011
- Discipline: Counseling and Psychotherapy
- Methods: Case study research , Practitioner research , Research questions
- DOI: https://doi.org/10.4135/9781849209588
- Keywords: client satisfaction , clients , counseling research , inquiry , journals , knowledge , outcomes Show all Show less
- Print ISBN: 9780761957638
- Online ISBN: 9781849209588
‘This is a practical guide to carrying out research in counselling and the helping professions generally. It covers all major aspects of research and guides the reader through the essential processes involved, from setting up and conducting a study, to analyzing data and evaluating findings’ – New Therapist. This practical, informative and encouraging guide to doing research in counselling and the helping professions generally has been written with practitioners firmly in mind. The book is a comprehensive yet accessible introduction which covers all major aspects of research and guides the reader through the essential processes involved, from setting up and conducting a study, to analyzing data and evaluating findings. In addition, the author provides guidelines for accessing research information and resources. With an emphasis on the acquisition of research skills and their practical application to counselling issues, Practitioner Research in Counselling shows how research can be used in a meaningful way by all practitioners.
Front Matter
- Introduction
- Acknowledgements
- The Nature of Practitioner Research in Counselling
- Types of Research
- What's the Question?
- Using the Literature
- Setting up a Research Study
- Gathering Data
- Analysing and Presenting Quantitative Data
- Doing Qualitative Research
- Evaluating Outcomes
- Studying the Therapeutic Process
- Practitioner Inquiry Groups
- Getting into Print… and Other Ways of Getting the Message Across
- Critical Reflexivity and the Reconstruction of Counselling
Back Matter
- Appendix Annotated List of Key Resources for Counselling Researchers
A Publication of the American Counseling Association
Counseling Today, From the President: Research in Counseling
By Cirecie West-Olatunji November 25, 2013
There are four primary reasons for this impetus. First, by prioritizing counseling research, we move forward as a discipline to our next developmental step — from the conceptual to the empirical. Second, there is a need for more empirical articles that reflect our pedagogical perspective. Third, as many counseling students have lamented, our discipline still lacks a sufficient number of research studies to provide a foundation for research projects. Finally, counseling research gives voice to our lived experiences as counselors and serves as a buffer against marginalization within the mental health research community.
During the past four decades, counselor educators have articulated the need for humanism and multicultural competence, among other ideals. Appropriately, many of the articles published in ACA journals have been conceptual in nature to explicate new constructs, approaches and paradigms. For example, most beginning counselors today have a clear understanding and appreciation for the complex issues presented when working with diverse clients. Moreover, the majority of our training programs have emphasized the relationship between counselor bias and clinical efficacy. Yet, it is time for us to provide evidence not only that the difference exists, but where and how it exists within the therapeutic relationship. More important, we need to know what interventions have been proved to effectively resolve or diminish obstacles to well-being. We should substantially increase the number of research articles in counseling journals to further our development as a profession and to ensure our place within mental health research.
In addition to increasing the number of empirical articles in counseling journals, we can become more intentional about grounding our studies in the basic tenets of our profession. Research that reflects humanistic values such as empowerment, resilience, prevention and holism is sorely needed. Far too often, clinical research is deficit-oriented, marginalizing, hegemonic and limited by an emphasis on the intrapsychic experience. We need to serve as advocates for our clients by fostering more mindful research that reflects our unique disciplinary perspective.
In addition to being more intentional about how we frame our research, we need to increase the volume of research in counseling. I, for one, am tired of receiving papers from students (regardless of the given clinical area or topic) that cite every discipline except counseling. When I ask students why they failed to sufficiently cite counseling journals, they often reply there were few if any counseling citations for the chosen (or assigned) topic. Leaders in the counseling profession need to develop initiatives that encourage researchers to conduct and disseminate more research that informs those within and outside of our community about the value and utility of counseling.
Lastly, counselors must believe that by increasing research in counseling, we self-advocate and take social action against marginalization. Although there are those outside of our discipline who believe that counselors are not capable of girding the profession with sufficient analytical prowess and rigor, I disagree. With sufficient, sustained and concerted effort, we can collectively sponsor a campaign to improve and enhance the quality and quantity of counseling research.
As an organization, ACA is committed to this goal, as evidenced by the establishment of the Center for Counseling Practice, Policy and Research, under the direction of Will Stroble. The purpose of the center is to advance ACA’s strategic initiative focused on increasing counseling research and making it more accessible to practitioners. As Will continues to unfold the center’s projects, he will be soliciting input, assistance and support from the ACA membership. Please take time to reflect on how you can contribute to the campaign to increase research in counseling, dialogue with others about the possibilities and then take one concrete step. It matters.
The Counselling and Psychotherapy Research Handbook
- Edited by: Andreas Vossler & Naomi Moller
- Publisher: SAGE Publications Ltd
- Publication year: 2015
- Online pub date: December 08, 2017
- Discipline: Counseling & Psychotherapy
- Subject: Research Methods for Counseling & Psychotherapy (general)
- DOI: https://doi.org/10.4135/9781473909847
- Keywords: counseling , counseling and psychotherapy , psychotherapy , qualitative research , research , research questions , t-test Show all Show less
- Print ISBN: 9781446255278
- Online ISBN: 9781473909847
Research is a vital and often daunting component of many counselling and psychotherapy courses. As well as completing their own research projects, trainees across modalities must understand the research in the field – what it tells them and how to do it. Breaking down this seemingly mountainous task into easy to swallow pieces, this book will navigate your students through each stage of the research process, from choosing a research question, through the pros and cons of different methods, to data analysis and writing up their findings. Written by leading contributors from the field including John McLeod, Mick Cooper and Tim Bond, each chapter features points for reflection, engaging activities and suggestions for further reading, helping students to engage with all aspects of research. An original graphic narrative runs throughout the book, bringing this complex topic to life in a unique way. Whether embarking on research for the first time or already a little familiar with research and research methods, this unique guide is something counselling and psychotherapy students will turn to continually throughout their research projects.
Front Matter
- About the editors and contributors
Part I: Introduction
- Chapter 1: Setting the scene: Why research matters (Andreas Vossler, Naomi Moller & Mick Cooper)
- Chapter 2: Attitudes to and perceptions of research (Andreas Vossler & Naomi Moller)
Part II: Beginning the research journey
- Chapter 3: Choosing a research question (Elaine Kasket)
- Chapter 4: How to read and understand research (Elena Gil-Rodriguez)
- Chapter 5: Doing a literature review (Meg Barker)
- Chapter 6: Introduction to research methodology (Ladislav Timulak)
- Chapter 7: Planning your research: Design, method and sample (Terry Hanley, Clodagh Jordan & Kasia Wilk)
- Chapter 8: Ethical considerations (Tim Bond)
- Chapter 9: Writing a research proposal (Mark Donati)
Part III: Methodologies and methods for doing research
- Chapter 10: Quantitative methods (Elspeth Twigg)
- Chapter 11: How to use t-tests to explore pre-post change (Elspeth Twigg & Paul Redford)
- Chapter 12: Qualitative methods (Linda Finlay)
- Chapter 13: How to use thematic analysis with interview data (Virginia Braun, Victoria Clarke & Nicola Rance)
- Chapter 14: Case study methodologies (Julia McLeod, Mhairi Thurston & John McLeod)
- Chapter 15: How to use case study methodology with single client therapy data (Mhairi Thurston, Julia McLeod & John McLeod)
Part IV: Completing the research journey
- Chapter 16: Dissemination of research (Andrew Reeves)
- Chapter 17: Student top tips (Brian Sreenan, Harriet Smith & Charles Frost)
- Chapter 18: Next steps: Building on and using research in training and practice (Peter Stratton, Naomi Moller & Andreas Vossler)
Back Matter
- The Chicago School Library
Clinical Mental Health Counseling
Research Methods
Research Methods eBooks
Finding Information About Research Methods:
Literature on research methods can be found in books and journal articles, so searching in the library catalog and databases will turn up relevant sources.
Some relevant keywords to search using the library One Search include:
- "research methods"
- "qualitative research"
- "quantitative research"
- Last Updated: Jul 12, 2023 1:31 PM
- URL: https://library.thechicagoschool.edu/cmhc
How should we evaluate research on counselling and the treatment of depression? A case study on how the National Institute for Health and Care Excellence's draft 2018 guideline for depression considered what counts as best evidence
Michael Barkham
1 Centre for Psychological Services Research, University of Sheffield, Sheffield, UK
Naomi P. Moller
2 Open University, Milton Keynes, UK
3 British Association for Counselling and Psychotherapy, Lutterworth, UK
Joanne Pybis
Health guidelines are developed to improve patient care by ensuring that the most recent and ‘best available evidence’ is used to guide treatment recommendations. The National Institute for Health and Care Excellence's (NICE's) guideline development methodology acknowledges that the evidence needed to answer one question (treatment efficacy) may differ from the evidence needed to answer another (cost-effectiveness, treatment acceptability to patients). This review uses counselling in the treatment of depression as a case study and interrogates the constructs of ‘best’ evidence and ‘best’ guideline methodologies.
The review comprises six sections: (i) implications of diverse definitions of counselling in research; (ii) research findings from meta-analyses and randomised controlled trials (RCTs); (iii) limitations of trials-based evidence; (iv) findings from large routine outcome datasets; (v) the inclusion of qualitative research that emphasises service-user voices; and (vi) conclusions and recommendations.
Research from meta-analyses and RCTs contained in the draft 2018 NICE Guideline is limited but positive in relation to the effectiveness of counselling in the treatment of depression. The weight of evidence suggests little, if any, advantage to cognitive behaviour therapy (CBT) over counselling once risk of bias and researcher allegiance are taken into account. A growing body of evidence from large NHS datasets also indicates that, for depression, counselling is as effective as CBT and is cost-effective when delivered in NHS settings.
Specifications in NICE's updated guideline procedures allow data other than RCTs and meta-analyses to be included. Accordingly, there is a need to include large standardised datasets collected in routine practice, as well as the voice of patients via high-quality qualitative research.
Introduction
English health guidelines are created and regularly updated with the aim of improving patient care by ensuring that the most recent and ‘best available evidence’ is used to guide treatment (National Institute for Health and Care Excellence Guidance, 2017a). As stated on its website: ‘National Institute for Health and Care Excellence (NICE) guidelines are evidence-based recommendations for health and care in England’ (NICE Guidelines, 2017b). Although some NICE guidance is also adopted by Wales, Scotland and Northern Ireland, a separate UK-based body equivalent to NICE exists, namely the Scottish Intercollegiate Guidelines Network (2017). Mental health treatment guidelines are also developed by other international organisations, such as the World Health Organization (2017) and professional/scientific bodies such as the American Psychiatric Association (2017), and by European and other countries (Vlayen, Aertgeerts, Hannes, Sermeus & Ramaeker, 2005).
This article focuses on: (i) NICE guidelines, because of the organisation's impact in shaping mental health care not only in the UK but internationally (Hernandez-Villafuerte, Garau & Devlin, 2014); (ii) depression, as NICE is currently updating its depression guideline (NICE, 2017d); and (iii) counselling as the intervention, as different guidelines have drawn different conclusions (Moriana, Gálvez-Lara & Corpas, 2017). Specifically, we focus on the selection and use of evidence. In terms of overall methodology, in its procedural manual NICE states: ‘Guidance is based on the best available evidence of what works, and what it costs’ (NICE, 2014/2017, p. 14). Although the procedural manual states that randomised controlled trials (RCTs) are often the most appropriate design, it also states: ‘However, other study designs (including observational, experimental or qualitative) may also be used to assess effectiveness, or aspects of effectiveness’ (NICE, 2014/2017, p. 15). Accordingly, we assess the extent to which NICE has adhered to its own methods manual in drawing up the draft guideline. While NICE's depression guideline is used as the example, the arguments in this article are intended to have broad relevance for any organisation developing guidelines across mental health treatments.
The new revision of the NICE Guideline for Depression in Adults: Recognition and Management is scheduled for publication in January 2018 and was available as a consultation document at the time of writing (NICE, 2017d). The previous 2009 NICE Guideline stated: ‘For people with depression who decline an antidepressant, CBT [cognitive behaviour therapy], IPT [interpersonal psychotherapy], behavioural activation and behavioural couples therapy, consider: counselling for people with persistent subthreshold depressive symptoms or mild to moderate depression’ (NICE, 2009, p. 23). Counselling was included in the 2009 Guidelines but only for those who declined other recommended treatments; the guidelines were accordingly critiqued on the basis of limiting patient choice (British Association for Counselling and Psychotherapy, 2009). In addition, practitioners offering counselling to adults with depression were recommended to: ‘Discuss with the person the uncertainty of the effectiveness of counselling and psychodynamic psychotherapy in treating depression’ (p. 24). This recommendation was criticised because research suggests that both patient hope and a good therapeutic relationship are important in creating good patient outcomes (Barber, Connolly, Crits-Christoph, Gladis & Siqueland, 2000). Accordingly, this recommendation would likely have negatively impacted early engagement in counselling, as well as outcomes for counselling, had practitioners implemented this guidance.
The consultation document for the 2018 proposed guideline states: ‘Consider counselling if a person with less severe depression would like help for significant psychosocial, relationship or employment problems and has had group CBT, exercise or facilitated self-help, antidepressant medication, individual CBT or BA for a previous episode of depression, but this did not work well for them, or does not want group CBT, exercise or facilitated self-help, antidepressant medication, individual CBT or BA’ (NICE, 2017d; Recommendation 64, p. 252). It also recommends that the counselling ‘is based on a model developed specifically for depression, consists of up to 16 individual sessions each lasting up to an hour, and takes place over 12–16 weeks, including follow-up’ (NICE, 2017d; Recommendation 65, p. 252). Importantly, the ‘uncertainty’ directive has been removed. Hence, the proposed guideline is arguably an improvement on its predecessor, as it moves towards a principle of matching counselling with specific issues (i.e., psychosocial, relationship and employment) together with a crucial note about the specificity of the counselling model to be adopted.
Historically, the NICE Guideline for Depression has been highly influential in shaping healthcare provision for those experiencing depression. As described by Clark (2011), the NICE recommendations for depression from 2004 onwards contributed to the development and roll-out of the Improving Access to Psychological Therapies (IAPT) programme, which in England now provides the bulk of treatment for depression in primary care (Gyani, Pumphrey, Parker, Shafran & Rose, 2012). One example of the impact of the revised 2009 Guideline appears to have been the cutting of counselling jobs in the NHS, with IAPT workforce census data suggesting a 35% decline in the number of qualified counsellors working as high-intensity therapists between 2012 and 2015, in a period when the total IAPT workforce grew by almost 18% (IAPT Programme, 2013; NHS England & Health Education England, 2016). Workforce shifts that apparently follow revised NICE guidelines (e.g., counselling not being recommended as a first-line treatment for depression) underline the importance of scrutinising guideline recommendations, since a core assumption is that using ‘best’ evidence and guideline methodologies will lead to NICE recommendations that improve patient care. An implicit question in the remainder of this article is whether the positioning of counselling as a second-tier treatment for mild-to-moderate depression (only) through NICE recommendations is likely to lead to improved outcomes for clients with depression.
Defining counselling as a psychological intervention
The NICE depression guidelines (2009, 2017d) have included recommendations for ‘counselling’, but the definition of ‘counselling’ is unclear. The British Association for Counselling and Psychotherapy (BACP) adopts a generic definition for both counselling and psychotherapy as umbrella terms for ‘a range of talking therapies’ (BACP, 2017). Equivalent professional organisations, such as the American Counseling Association (ACA) and the European Association for Counselling (EAC), define counselling in terms of a professional relationship that seeks to aid patients (ACA, 2017; EAC, 2017). What these definitions have in common is that they are nonspecific: counselling is a broad family of interventions that includes subtypes of counselling such as person-centred therapy (PCT) or cognitive behaviour therapy (CBT). However – and problematically – the 2009 NICE Guideline for Depression directly compared ‘counselling’ with subtypes of counselling.
The 2009 NICE Guideline for Depression did not specify a definition of counselling; however, various definitions for counselling are provided in the empirical literature. For example, King, Marston and Bower (2014) reported on a reanalysis of the Health Technology Assessment-funded trial (Ward et al., 2000), comprising a head-to-head RCT comparing ‘nondirective counselling’ and cognitive behaviour therapy (CBT), and defined the counselling used in their study as ‘a nondirective, inter-personal approach’ (p. 1836) derived from the work of Carl Rogers. In this context, the therapy ‘counselling’ has clear theoretical and empirical roots and is a synonym for a type of talking therapy.
In contrast, a 2012 meta-analytic study by Cuijpers et al. examined the efficacy of ‘nondirective supportive therapy’ (NDST) – which they stated is ‘commonly described in the literature as counselling’ (p. 281). They defined NDST as an approach that utilises the shared attributes (or common factors) of all talking therapies ‘without (utilizing) specific psychological techniques…’ (p. 281), which characterise particular types of therapy. Cuijpers et al. (2012) point out that many RCTs that include counselling do so as a nonspecific control group, and suggest that researchers appear to treat counselling as not being a bona fide active treatment. In this context, ‘counselling’ is neither a category nor an example of a category, but a shared nonspecific attribute of psychological therapies in general.
The outcome of the 2009 NICE guidance recommendations spurred the development of a model of counselling for the treatment of depression designed to be effective as a high-intensity intervention within IAPT. It took the form of a person-centred experiential therapy named Counselling for Depression (CfD; Sanders & Hill, 2014). The aim was to develop a bona fide psychological therapy using an established methodology that involved defining a range of basic, generic, specific and meta-competencies for this model of therapy (Roth, Hill & Pilling, 2009). The CfD (person-centred experiential) model, which is now available to IAPT patients (NHS England, 2017), also meets the recommendations in the 2018 draft guidelines for a model of counselling developed for depression.
The reviewed definitions suggest there are potentially two distinct forms of counselling: a nonspecific counselling that utilises generic and basic competences common to all forms of therapy, and a model-specific form of counselling, such as person-centred experiential counselling, which includes CfD. This distinction between generic counselling and a bona fide/active intervention potentially implies critical differences in the level of training and competencies of a practitioner (comparable to the differences between low- and high-intensity treatment in IAPT) and in the specificity of the model of intervention used. The 2018 proposed guideline does not draw on these distinctions; the only recommendation in the draft guidelines is that the counselling intervention should be one developed specifically for depression (CfD itself is not named). This suggests that guideline developers need to make a concerted effort to use definitions that specify the theoretical approach and, potentially, the level of professional training or competencies required.
The current evidence for the clinical efficacy and effectiveness of counselling in the treatment of depression
NICE guidelines for depression draw on two main classes of data to arrive at clinical recommendations, namely meta-analyses and RCTs. NICE's methodological procedures state: ‘NICE prefers data from head-to-head RCTs to compare the effectiveness of interventions’ (NICE, 2014/2017, p. 103). Further, the procedures require the detailing of the methods and results of individual trials. If direct evidence from treatment comparisons is not available, then indirect comparisons can be made using network meta-analysis (see Mills, Thorlund & Ioannidis, 2013). This procedure, which combines direct and indirect treatment comparisons, focuses on classes of interventions (i.e., broader headings of approaches rather than specific therapy brands) to arrive at recommendations when comparing multiple interventions. The interventions are judged against an appropriate comparator, that is, a common standard. The draft 2018 Guideline uses a pill placebo condition as the appropriate comparator. The Guideline also considers the cost-effectiveness of interventions. In this section, we provide an overview of the current status of evidence regarding counselling as derived from meta-analyses and RCTs.
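The logic of an indirect comparison can be illustrated with a short sketch. The function below implements the basic Bucher-style calculation used in network meta-analysis, in which two treatments are compared through their shared comparator (here, a pill placebo). All numerical inputs are hypothetical illustrations, not values from the trials discussed in this article.

```python
import math

def indirect_comparison(d_a_placebo, se_a, d_b_placebo, se_b):
    """Indirect comparison of treatments A and B via a common
    comparator (e.g., pill placebo). Effects are standardised
    mean differences (Cohen's d) with their standard errors."""
    d_ab = d_a_placebo - d_b_placebo        # the common comparator cancels out
    se_ab = math.sqrt(se_a**2 + se_b**2)    # variances add for independent trials
    ci_95 = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci_95

# Hypothetical inputs: counselling vs placebo d = -0.30 (SE 0.10),
# CBT vs placebo d = -0.35 (SE 0.08)
d, se, ci = indirect_comparison(-0.30, 0.10, -0.35, 0.08)
# d = 0.05, with a 95% CI that spans zero
```

Note that the indirect estimate's confidence interval here crosses zero, so a comparison like this would not, on its own, support recommending one therapy over the other.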
Meta‐analyses of counselling in the treatment of depression
In terms of meta‐analyses, the aim is to combine data from multiple studies and to statistically synthesise the results to create conclusions that are more robust. There are three meta‐analyses of direct relevance.
First, Cape, Whittington, Buszewicz, Wallace and Underwood (2010) carried out a meta-analysis and meta-regression of 34 studies of brief psychological interventions for anxiety and depression, involving 3,962 patients. Most interventions were brief cognitive behaviour therapy (CBT; n = 13), counselling (n = 8) or problem-solving therapy (PST; n = 12). Results showed effectiveness for all three types of therapy: studies of CBT for depression (d: −.33, 95% CI: −.60 to −.06) and studies of CBT for mixed anxiety and depression (d: −.26, 95% CI: −.44 to −.08); counselling for depression alone as well as for mixed anxiety and depression (d: −.32, 95% CI: −.52 to −.11); and PST for depression and mixed anxiety and depression (d: −.21, 95% CI: −.37 to −.05). Controlling for diagnosis, meta-regression found no difference between CBT, counselling and PST. The authors concluded that brief CBT, counselling and PST are all effective treatments in primary care, but that effect sizes are low compared with longer treatments. It should be noted, however, that the analysis restricted to the four studies of counselling for depression alone was not statistically significant; four studies are too few to yield reliable results.
Second, Cuijpers et al. ( 2012 ) found that studies comparing nondirective supportive therapy (NDST) with CBT yielded a small and nonsignificant difference between the two. The authors commented that NDST has been treated as a proxy for counselling, although it specifically excludes active elements that may be present in bona fide counselling interventions. They also found that studies with researcher allegiance in favour of the alternative psychotherapy produced a considerably larger effect size than studies without researcher allegiance; indeed, in studies without an indication of researcher allegiance, the difference between NDST and other therapies was virtually zero. The authors argued that these results suggest that NDST is effective and deserves more respect from the research community.
Third, the most recent relevant study by Barth et al. ( 2013 ) adopted a network meta‐analysis – the same method used by the NICE Guideline Development Group – using 198 trials comparing seven forms of psychotherapeutic interventions, one of which was ‘supportive counselling’. The analysis found significant effects for supportive counselling compared against waitlist and that the evidence base for supportive counselling was broad. However, when that analysis focused only on the network of large trials, for four of the interventions, including supportive counselling, significant effects were no longer found. Barth et al. ( 2013 ) themselves invoked the results of the Cuijpers et al. ( 2012 ) meta‐analysis that found no difference between NDST and other treatments. They stated it was ‘unjustified’ to dismiss supportive counselling as a suboptimal treatment because, although the evidence for this intervention was less strong, the size of the differences between the interventions studied was small. They concluded that different psychotherapeutic interventions for depression have comparable, moderate‐to‐large effects.
In summary, when studies with a low researcher allegiance against counselling together with evidence from bona fide counselling interventions are considered, the meta‐analytic studies comparing counselling with CBT for depression suggest either broad equivalence of patient outcomes or, where differences do exist, that they are small.
RCTs of counselling in the treatment of depression
As a tradition, counselling in the UK is often associated with Humanistic/Experiential therapies, and a few RCTs report evidence for the efficacy of these therapies with depressed patients (Goldman, Greenberg & Angus, 2006 ), including one that compared process‐experiential therapy (now referred to as emotion‐focused therapy) with CBT and found comparable outcomes (Watson, Gordon, Stermac, Kalogerakos & Steckley, 2003 ). However, only one recent report directly compared counselling (defined as nondirective person‐centred counselling) with CBT in the treatment of depression. The original study compared nondirective counselling and CBT for mixed anxiety and depression and found no significant difference in outcomes between the two therapies (Ward et al., 2000 ). A subsequent reanalysis of the subsample of patients meeting a diagnosis of depression only found similar results, with both therapies being equally effective and both being superior to usual General Practice care at 4 months but not at 12 months (King et al., 2014 ).
The findings from this study are important because of the lack of RCT research that might provide direct head‐to‐head trial evidence for the efficacy of counselling. The 2009 NICE Guideline for Depression development process identified six relevant studies for consideration. One was excluded due to the mixed diagnosis (Ward et al., 2000 ) although, as stated, a subanalysis focusing on patients reporting depression only was considered (and subsequently published as King et al., 2014 ). Data from five other trials were also used (Bedi et al., 2000 ; Goldman et al., 2006 ; Greenberg & Watson, 1998 ; Simpson, Corney, Fitzgerald & Beecham, 2000 ; Watson et al., 2003 ). However, they were all either low powered in terms of patient numbers, had patient samples drawn from the mild‐to‐moderate range of depression only with some including subthreshold patients, or compared outcomes for similar (Humanistic/Experiential) therapies. The 2009 guideline recommendation was that counselling should not be considered as a first‐line intervention, as it had more limited evidence, and should only be considered for patients experiencing subthreshold, mild or moderate depression who declined the other treatments available. As stated, the guideline also added the qualification about the uncertainty of the evidence for counselling, and suggested patients should be advised on this matter.
In summary, while there is minimal recent RCT evidence comparing counselling as a bona fide intervention with CBT, the evidence that does exist supports the general efficacy of counselling. However, apart from the Ward/King reports, RCT studies are generally small‐scale and lack a standard comparator such as CBT. The lack of new data may explain why the recommendations for counselling in the 2009 published and 2018 draft guidelines are broadly similar. However, unlike the 2009 Guideline, the draft 2018 Guideline is based on network meta‐analyses. As some commentators have noted: ‘Nonetheless, a network meta‐analysis is not a substitute for a well conducted randomized controlled trial’ (Kanters et al., 2016 , p. 783). More immediately, perhaps, there needs to be a debate as to the appropriateness of using pill placebo as the appropriate comparator in relation to decision‐making. To use a nonclinically viable intervention as the appropriate comparator – something a patient experiencing depression would never be offered – does not appear to be the most useful benchmark for informing decision‐making regarding differing interventions (see Dias, 2013 ).
Yet, beyond meta‐analyses and RCTs, other potentially valuable sources of evidence exist that are defined by NICE as within the scope of evidence that could be considered but, unfortunately, have not been in the 2018 draft recommendations. In the next section, we argue that there has been an overreliance on the RCT design, before then presenting a case for including relevant non‐RCT data.
The limitations of currently considered evidence in guideline development
An overreliance on RCTs
Within the counselling and psychotherapy outcomes literature, there has been a long‐standing debate regarding what counts as evidence (Kazdin, 2008 ). Evidence from RCTs has traditionally been favoured because specific design features control for systematic biases, leading RCTs to be judged as providing the most stringent form of evidence. In short, randomisation protects against any systematic biases in the assignment of patients to treatments, and it is probably the hallmark most often cited as underpinning the superiority of trials data in the field of the psychological therapies. However, the other central element of RCTs – double‐blinding of participants – can only be utilised in drug trials, where the content of the drug can be hidden from patients and from the professional providing the medication. Hence, while trials in the psychological therapies cannot take the strongest form that the RCT design allows, the RCT has long been held as the design that yields the most reliable and valid findings (Wessely, 2007 ).
While the strengths of RCT designs are well accepted, no research method is immune from criticism and one of the abiding criticisms of RCTs concerns their lack of generalisability (Kennedy‐Martin, Curtis, Faries, Robinson & Johnston, 2015 ). While statistical work is taking place to develop procedures in an attempt to address this issue (Stuart, Bradshaw & Leaf, 2015 ), by design, RCTs involve the careful screening of patients to ensure that all trial participants fully meet diagnostic criteria for the presenting condition under study. Typically, this involves screening out patients presenting with any comorbidities, something that leads to the criticism that RCT participants are atypical of patients in actual practice, since, for example, depression is highly comorbid with anxiety (Kaufman & Charney, 2000 ). In addition, by their very nature RCTs draw on a specific subgroup of the population of patients, namely those who are willing to be trial participants. A major reason patients decline to be participants in trials is that they do not wish to be research subjects (Barnes et al., 2013 ). In addition, there has been a long‐term concern about the lack or underrepresentation of minorities in research studies (Hussain‐Gambles, Atkin & Leese, 2004 ; Stronks, Wieringa & Hardon, 2013 ). Hence, while a well‐conducted RCT will state that the intention to offer treatment X (from an intent‐to‐treat analysis) or receipt of treatment X (from a per‐protocol analysis) is better than treatment Y in a specific setting, it will not address the question a commissioner asks, namely: will it work for us? (Cartwright & Munro, 2010 ).
Jadad and Enkin ( 2007 ), the authors of the standard guide to designing RCTs, state: ‘… randomized trials are not divine revelations, they are human constructs, and like all human constructs, are fallible. They are valuable, useful tools that should be used wisely and well’ (p. 44). Indeed, Jadad and Enkin list over 50 specific biases that are possible when carrying out a trial and go on to provide a strong warning that unless their weaknesses are acknowledged, there is a ‘risk of fundamentalism and intolerance of criticism, or alternative views (that) can discourage innovation’ (p. 45).
Despite such criticisms, trials have become the dominant source for informing clinical guidelines. Yet, as the previous Chairman of NICE, Sir Mike Rawlins, stated: ‘Awarding such prominence to the results of RCTs, however, is unreasonable’ (2008, p. 2159). In relation to the hierarchy of evidence used by NICE, which privileges trials data, Rawlins further argued that ‘Hierarchies of evidence should be replaced by accepting a diversity of approaches.’ (p. 2159). And indeed, the word hierarchy does not appear at all in the NICE methods manual (NICE, 2014 /2017). Rawlins’ argument was not to abandon RCTs in favour of observational studies; rather, he sought for researchers to improve their methods and for decision makers to avoid adopting entrenched positions about the nature of evidence. However, given the dominance of RCT evidence and the absence of relevant and available observational data in the draft 2018 guidelines, it would appear that Rawlins’ call has not been heeded.
Considering statistical power and nonindependence of patients in RCTs
A separate but major issue concerning trials, as identified earlier, is the extent to which they are appropriately powered to detect any hypothesised differences. To have confidence in the findings from RCTs that test the superiority, noninferiority or equivalence of one treatment condition against another, studies must have the required statistical power (sufficient numbers of patients in the trial) to detect such a difference if one exists. The standard criterion that defines sufficient power for a superiority trial requires that a study will have at least an 80% chance of detecting a difference at p < .05 if one exists.
Cuijpers ( 2016 ) reviewed the statistical power needed both for individual RCTs and for meta‐analytic studies focused on adult depression. His analysis should be considered alongside the three classes of between‐group effect sizes traditionally postulated by Cohen ( 1992 ): small ( d = .2), medium ( d = .5), and large ( d = .8). He identified that a sample size of 90 trial patients (i.e., 45 patients per arm) was required to find a differential effect size of d = .6 (i.e., a medium effect size). Having established in an earlier article that an effect size of d = .24 could be considered as a ‘minimally important difference’ from the patient's perspective (Cuijpers, Turner, Koole, van Dijke & Smit, 2014 ), he calculated that for a trial to determine such a minimally important difference between two active treatments for depression would require 548 patients – that is, 274 patients in each arm of the trial.
Yet in Cuijpers’ ( 2016 ) analysis, the mean number of patients included in RCT comparisons between CBT and another psychotherapy for depression was 52, with a range from 13 to 178. The effect size that can be detected with the average trial comprising 52 patients was d = .79, an effect size similar to that comparing CBT with untreated control groups (i.e., d = .71). For nondirective counselling, the analysis found that the largest study had sufficient power to detect a differential effect size of d = .34. The largest comparative trial found in three comprehensive meta‐analyses of major types of psychotherapy comprised 221 patients. This is about 40% of the 548 patients needed to detect a clinically relevant effect size of d = .24. Taking these statistics together, it is uncertain whether there can be sufficient confidence in the results of RCTs for adult depression conducted to date that compare CBT with another therapy because they likely lack sufficient statistical power (Cuijpers, 2016 ).
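These sample‐size figures can be approximated with the standard two‐sample power formula. The sketch below is ours, not Cuijpers'; it uses the normal approximation, which gives marginally smaller numbers than the exact t‐based calculations, but reproduces the figures cited above to within a couple of patients:

```python
from math import ceil, sqrt
from statistics import NormalDist  # standard library, Python 3.8+

def n_per_arm(d, power=0.80, alpha=0.05):
    """Patients per arm needed to detect effect size d in a two-arm,
    two-sided superiority trial: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

def detectable_d(n_total, power=0.80, alpha=0.05):
    """Smallest effect size detectable with n_total patients split over two arms."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / (n_total / 2))

print(n_per_arm(0.60) * 2)         # 88 patients in total, close to the 90 cited
print(n_per_arm(0.24) * 2)         # 546 patients in total, close to the 548 cited
print(round(detectable_d(52), 2))  # ~0.78: what the average 52-patient trial can detect
```

The last line makes the problem concrete: a trial of average size can only detect a difference of roughly the magnitude that separates CBT from no treatment at all, not the much smaller differences plausible between two active therapies.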
Meta‐analyses are, like single RCTs, subject to considerations of power. For meta‐analyses of RCTs focused on treatment of depression, Cuijpers ( 2016 ) suggests that for CBT (based on a mean of 52 patients per study), 18 trials would be needed to detect a significant effect of d = .24 with a power of .8, or 24 trials with a power of .9. According to his analysis, the actual number of trials was 46, which was sufficient to detect a clinically relevant effect. However, he concluded that only 13 of these trials had a low risk of bias. This is important, as ‘bias’ is an agreed index of factors that reduce confidence in the results of RCTs. For example, a potential source of bias is the degree to which assessors or data analysts have prior knowledge of the specific intervention any individual study participant received. Hence, meta‐analyses are also vulnerable to low power once only studies with a low risk of bias are considered.
For nondirective supportive counselling (based on a slightly higher mean of 59 patients per trial), 16 trials would be needed to detect an effect of d = .24 with a power of .8 or 21 trials with a power of .9. The 32 trials comparing counselling with other therapies therefore had sufficient power to detect a clinically relevant effect. However, only 14 trials had low risk of bias, yielding the same conclusion that there were not enough trials to detect such an effect.
In addition to issues of bias and low power, the statistical analyses applied to trial data assume that the data points – that is, patients – are independent of each other. However, patients are not independent of each other because they are nested within therapists. Outcomes for patients seen by the same therapist will be correlated with one another and will differ from the outcomes of patients seen by other therapists. Such variability between the outcomes of different therapists is known as therapist effects (Barkham, Lutz, Lambert & Saxon, 2017 ). Failure to take account of therapist effects results in this variability being attributed to the treatment, thereby inflating the treatment effect (or deflating it if the therapists are not effective).
In summary, despite numerous comparative trials having been conducted, it remains unclear from these data whether one therapy for adult depression is more effective than another to an extent that is clinically relevant. Trials are underpowered and require much greater statistical power and less bias to determine differential effectiveness. In the light of this position, we now consider arguments for including very large data sets from routine practice.
Incorporating very large routine practice‐based data sets in guideline development for depression
As stated earlier, the NICE methods manual states that while RCTs may often be the most appropriate design, ‘other study designs (including observational, experimental or qualitative) may also be used to assess effectiveness, or aspects of effectiveness’ (NICE, 2014 /2017, p. 15). And in terms of the development work in network meta‐analysis, the aim is to move towards ‘the inclusion of studies of various designs, including observational studies, within one analysis’ (Kanters et al., 2016 , p. 783). Accordingly, there appears to be little reason, if any, for NICE not to consider high‐quality and relevant observational data.
One key development over the past decade or more has been the growth in the availability of very large data sets. For the psychological therapies, this is best exemplified by the implementation of the Improving Access to Psychological Therapies (IAPT) programme in England (London School of Economics and Political Science, 2006 ). The IAPT programme comprises a stepped care approach in which patients are initially referred for low‐intensity interventions, such as psychoeducational interventions delivered by psychological wellbeing practitioners (PWPs). If these are unsuccessful, patients are ‘stepped up’ to high‐intensity interventions comprising CBT and several non‐CBT therapies, including Counselling for Depression (CfD; a person‐centred experiential therapy), a standardised model of intervention focused on depression with defined standards of training and supervision. Some patients, based on their presenting issues, are assigned directly to high‐intensity interventions. The IAPT programme, which was piloted in 2006 and independently evaluated (Parry et al., 2011 ), has been rolled out nationally; it has focused largely on patients experiencing depression and anxiety but is being expanded to other patient groups.
A key feature of the IAPT programme is the administration of a common set of outcome measures – a minimum data set (MDS) – at each attended session. The MDS comprises the following: the Patient Health Questionnaire‐9 (PHQ‐9: Kroenke, Spitzer & Williams, 2001 ), which acts as a proxy measure for depression; the General Anxiety Disorder‐7 (GAD‐7; Spitzer, Kroenke, Williams & Löwe, 2006 ); and the Work and Social Adjustment Scale (WSAS; Mundt, Marks, Shear & Greist, 2002 ). The per‐session administration of the PHQ‐9, GAD‐7 and WSAS in IAPT has yielded potential standardised data sets from routine practice of unprecedented size. In 2015–2016 (the last year for which there is currently data), almost a million people entered IAPT treatment, with over half a million completing a course of treatment (NHS Digital, 2016 ).
The number of IAPT patients for whom systematic data have been collected potentially makes this one of the largest standardised data sets on the psychological therapies in the world. Kazdin ( 2008 ), observing that data from practice settings generally go unused, stated: ‘we are letting knowledge from practice drip through the holes of a colander’ (p. 155). Indeed, the collection and use of such large‐scale routinely collected standardised data are a hallmark of the research paradigm termed practice‐based evidence (Barkham & Margison, 2007 ; Barkham, Stiles, Lambert & Mellor‐Clark, 2010 ). While the privileging of trials data ahead of observational data may have been appropriate when the latter comprised small‐scale and unsystematic studies, this is no longer the case. In the same way that narrative reviews developed a clear and systematic methodological underpinning to yield systematic reviews, the methods of collecting and analysing ‘routine data’ have developed a level of sophistication such that they can arguably no longer be dismissed (or labelled) as simply observational data.
Consistent with this practice‐based paradigm, the proposed 2018 Guideline states: ‘For all interventions for people with depression: use sessional outcome measures; review how well the treatment is working with the person; and monitor and evaluate treatment adherence’ (NICE, 2017d; Recommendation 37, p. 248). In addition, healthcare professionals delivering interventions for people with depression should: ‘receive regular high‐quality supervision; and have their competence monitored and evaluated, for example, by using video and audio tapes, and external audit’ (NICE, 2017d; Recommendation 38, p. 248). These recommendations provide the underpinning not only for enhancing the quality of clinical practice but also for ensuring the collection of high‐quality standardised data that would complement trials‐based data. However, despite the potential size and quality of the IAPT data set, the data are not currently considered in NICE guideline developments. Given that the IAPT initiative was shaped by successive iterations of the NICE Guidelines for depression, the IAPT data themselves could contribute to a better linkage between practice in routine settings, the yield from RCTs, and guideline development. Their use would also enable practitioners in routine practice to contribute directly, via their standardised data, to informing the very guidelines that they will have to implement.
The IAPT data set: effectiveness of counselling in the treatment of depression in the NHS
The potential value of the IAPT data set in contributing to the evidence base on effective treatment for depression in adults is illustrated by examining reports and studies derived from IAPT data. Since 2013–14, IAPT have published annual reports comparing the number of referrals, average number of sessions and recovery rates between the available psychological therapies (NHS Digital, 2014 , 2015 , 2016 ). As demonstrated in Table 1 , whilst a greater proportion of referrals (approximately 60–65%) received CBT as compared with counselling, patient outcomes (i.e., recovery rates) have been virtually equivalent between the two interventions.
Data extracted from successive NHS Digital reports comparing cognitive behaviour therapy (CBT) and counselling/Counselling for Depression (CfD)
Research studies carried out by different academic groups, each accessing different portions of the IAPT data set to undertake more detailed analyses, have also reported comparable outcomes between CBT and counselling in the treatment of depression (Gyani, Shafran, Layard & Clark, 2013 ; Pybis, Saxon, Hill & Barkham, 2017 ). In more sophisticated studies using multilevel modelling to account for patient case mix and the nested nature of the data, where differences have been observed these have been small and clinically insignificant (Pybis et al., 2017 ; Saxon, Firth & Barkham, 2017 ). These data demonstrate that, for patients accessing psychological therapy throughout the NHS, counselling is to all intents and purposes as effective as CBT for both moderate and severe levels of depression. These studies, as well as the publicly available evidence from NHS Digital, confirm the findings of earlier studies using the Clinical Outcomes in Routine Evaluation measure (CORE‐OM; Evans et al., 2002 ). Those studies used routinely collected CORE‐OM data from naturalistic settings before the implementation of IAPT and likewise yielded comparable patient outcomes between counselling and CBT (Stiles, Barkham, Mellor‐Clark & Connell, 2008 ; Stiles, Barkham, Twigg, Mellor‐Clark & Cooper, 2006 ).
In summary, the evidence from the IAPT data set is that counselling is as effective as CBT as an intervention for depression. This evidence of effectiveness in NHS practice settings across England accords with the conclusions of Cuijpers ( 2017 ), who reviewed over 500 depression RCTs from four decades of research and concluded that there were no significant differences between the main interventions once biases and allegiances were taken into account. The consistency of the trials‐based and practice‐based findings is important in supporting the value of counselling as an intervention for depression offered in the NHS in England. However, we argue that the key conclusion for guideline development from these findings is that the focus of research attention should not be on repeatedly re‐evaluating the evidence for different interventions. Instead, the focus should move to other factors, such as therapist effects or site effects, where there appear to be noticeable differences in patients’ outcomes (e.g., Saxon & Barkham, 2012 ). This refocusing away from treatment differences and towards other factors is a position endorsed by the American Psychological Association ( 2012 ).
The IAPT data set: efficiency and cost‐effectiveness of counselling in the treatment of depression
A 2010 report calculated the annual cost of depression in England to be almost £11 billion in lost earnings, demands on the health service and the cost of prescribing drugs to address the depression (Cost of Depression in England, 2010 ). In this context, the cost‐effectiveness of treatment is important to consider. Determining cost‐effectiveness with acceptable degrees of certainty requires large samples, which the IAPT data set offers in a way that trials do not. Given the NICE procedural manual states that, for example, observational data can be used for ‘aspects of effectiveness’, the potential contribution of the IAPT data set to considerations of cost‐effectiveness is significant.
Improving Access to Psychological Therapies data suggest that patients accessing counselling attend fewer sessions on average than those accessing CBT (NHS Digital, 2014 , 2015 , 2016 ; Pybis et al., 2017 ; Saxon, Firth et al., 2017 ). This suggests counselling may well be cheaper, and therefore more cost‐efficient, than CBT, given that it achieves comparable patient outcomes. To consider this in more detail, a study exploring the cost‐effectiveness of IAPT as a service, drawing on data collected from five Primary Care Trusts, found the cost of a high‐intensity session to be £177 (Radhakrishnan et al., 2013 ). Using this estimate alongside figures from the latest IAPT report, in which patients receiving counselling typically attended 5.9 sessions whereas those receiving CBT attended 7.1 sessions (NHS Digital, 2016 ), counselling costs approximately £1044 per patient and CBT approximately £1256 per patient. In 2015–16, 152,452 patients completed a course of CBT at an estimated cost of £191 million. If those same patients had received counselling, the cost saving could have been over £30 million.
The potential saving of £30 million is calculated only from the fewer sessions (on average) received by counselling patients in IAPT. However, given that counsellors in IAPT are often paid a grade lower than ‘IAPT‐qualified’ therapists (Perren, 2009 ), this figure may underestimate the potential saving. Moreover, while counselling training is typically self‐funded, IAPT CBT trainings have been government funded, initially centrally and more recently locally. This illustrates the potential financial implications of how research evidence is weighed up and then synthesised into guideline recommendations for the treatment of depression.
In summary, the vast data set derived from the IAPT programme needs to be used to complement data from RCTs. This is particularly true for questions concerning cost‐effectiveness, which cannot be adequately addressed by RCTs alone. Within a few years, IAPT services will hold data on millions of patients. The inclusion of these data in the scope of NICE guideline reviews would be wholly consistent with the NICE guidelines procedure manual.
Considering the role of service users’ voices via qualitative research in guideline development
The previous section has argued for guideline developers to consider very large patient data sets. In this section, we argue for guideline developers to incorporate qualitative evidence that gives voice to service users. Doing so would be in accordance with NHS England's business plan for 2016/2017, which sets out a commitment: ‘to make a genuine shift to place patients at the centre, shaping services around their preferences and involving them at all stages’ (NHS England, 2016 , p. 49). NICE has a similar commitment (NICE Patient and Public Involvement Policy, 2017c ). Currently, while qualitative research is included in guideline development, NICE processes do not allow such data to be included in the final summative analyses that shape key recommendations. Yet a number of researchers (Hill, Chui & Baumann, 2013 ; Midgley, Ansaldo & Target, 2014 ) argue that qualitative outcome studies are important to consider because they ‘offer a significant challenge to assumptions about outcome that derive from mainstream quantitative research on this topic, in relation to two questions: how the outcome is conceptualised, and the overall effectiveness of therapy’ (McLeod, 2013 , p. 65). Reviewing existing literature, McLeod suggested patients themselves conceptualise outcome much more broadly than in terms of symptom or behavioural change (Binder, Holgersen & Nielsen, 2010 ). Typically, patients acknowledge ways in which therapy has been helpful but also where it has failed, suggesting that quantitative outcome research may overstate therapeutic effectiveness. Qualitative studies can also help answer questions about patient experience and expectations of NHS services, including whether treatments are credible and acceptable to them, which have an impact on outcomes.
Turning to qualitative research focused on depression, there is a growing literature on understanding the experiences of patient populations such as minority ethnic groups (e.g., Lawrence et al., 2006a ), women (e.g., Stoppard & McMullen, 1999 ), men (e.g., Emslie, Ridge, Ziebland & Hunt, 2006 ) and older adults (e.g., Lawrence et al., 2006b ). Such studies elucidate population‐specific experiences of depression that can be useful in understanding why certain populations benefit less from treatment. There is also a literature that seeks to describe the experience of aspects of depression such as recovery (e.g., Ridge & Ziebland, 2006 ) or types of depression such as postnatal depression (e.g., Beck, 2002 ). However, relatively little research currently focuses on patients’ experiences of depression treatment. There is some research on depressed patients’ experiences of computer‐mediated depression treatment (e.g., Beattie, Shaw, Kaur & Kessler, 2009 ; Lillevoll et al., 2013 ) and of mindfulness (e.g., Mason & Hargreaves, 2001 ; Smith, Graham & Senthinathan, 2007 ), but less on the major modalities such as CBT (e.g., Barnes et al., 2013 ), psychodynamic (e.g., Valkonen, Hänninen & Lindfors, 2011 ) and process‐experiential therapies (e.g., Timulak & Elliott, 2003 ). This lack matters because qualitative research focusing on treatment experiences provides a method by which theoretical assumptions about how a therapy ‘works’ can be evaluated against the patient perspective.
Even more rare are comparative qualitative outcome studies (e.g., Nilsson, Svensson, Sandell & Clinton, 2007 ). Such studies focusing on depression are valuable because they can foster understanding of whether patients experience outcomes differently in different therapies. One example is Straarup and Poulsen's ( 2015 ) study, which compared patients’ experiences of CBT and metacognitive therapy and found evidence of different understandings of the causes of depression and what had changed as a result of therapy.
In summary, qualitative research has considerable value in capturing patients’ experiences of psychotherapy in ways that can inform practice (see Levitt, Pomerville & Surace, 2016 ). This suggests the need: (1) to consider qualitative outcome studies in guideline development and recommendations; and (2) to encourage further research focused on guideline‐recommended treatments and differential patient experiences.
Towards a broader spectrum of best evidence
Whatever the potential pool of data, guideline organisations need to establish and implement procedures for making recommendations. A recent review considered how different national organisations produce clinical guidelines. Moriana et al. ( 2017 ) analysed and compiled lists of evidence‐based psychological treatments by disorder using data provided by RCTs, meta‐analyses, guidelines and systematic reviews of NICE, Cochrane, Division 12 of the American Psychological Association and the Australian Psychological Society. For depression, they found poor agreement with no single intervention obtaining positive consensus agreement from all four organisations. The authors suggested one possible cause for the lack of agreement might be subtle biases in committee procedures, while evidence considered by both NICE and Cochrane may be overinfluenced by the key meta‐analyses that both organisations commission to support their decision‐making. Whilst one organisation might favour its own procedures in this way, the process lacks standardisation across the different bodies and leads to discrepancies in guidance.
The finding that guideline processes have led to different treatment recommendations for the same condition underlines the criticisms of an approach to synthesising evidence that rigidly prioritises RCTs. We argue that a rigorous and relevant knowledge base of the psychological therapies cannot be built on one research paradigm or type of data alone but should incorporate both evidence‐based practice (i.e., trials) and practice‐based evidence (i.e., routine practice data; Barkham & Margison, 2007 ). In this conceptualisation, trials provide evidence from a top‐down model (RCT evidence generating national guidelines that are implemented in practice settings) while practice‐based evidence builds upwards using data from routine practice settings to guide interventions and inform guideline development. Both paradigms are complementary and, most importantly, the results from one paradigm can be tested out in the other. Further, a synthesis of evidence from both paradigms ensures that the data from trials remain directly connected and relevant to routine practice, creating a continual cycle between practice and research and between practitioners and researchers.
Given the points made here, there is little justification for relying solely on trials data and dismissing evidence from large standardised routine datasets delivering NICE‐recommended and IAPT‐approved psychological therapies. There are issues and vulnerabilities with both paradigms and the evidence they provide, but it is no longer credible to suggest that the term ‘best’ applies only to trials data. To abide by the advice of Rawlins ( 2008 ) as well as Jadad and Enkin ( 2007 ), views concerning nontrial data need to become more accommodating. Overall, a collective move to a position of weighing evidence from a wider bandwidth or spectrum provides a more rounded and inclusive view of available high‐quality data. By applying the concept of teleoanalysis – that is, the synthesis of different categories of evidence to obtain a quantitative summary – it is possible to arrive at more robust and relevant conclusions (Clarke & Barkham, 2009 ; Wald & Morris, 2003 ). This, we would suggest, is an approach that would yield both better and more relevant evidence. Accordingly, IAPT data now need to be considered alongside evidence from trials to form a more complete and accurate picture of the comparative effectiveness of psychological therapies. Further, high‐quality qualitative data require inclusion in arriving at recommendations, particularly as they are a primary source for patients’ perspectives and experiences.
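The teleoanalytic move of combining different categories of evidence into a single quantitative summary can be illustrated with standard inverse‐variance pooling of effect estimates. The sketch below is ours, not the procedure described by Wald and Morris (2003), and every number in it is hypothetical; it shows only how a small, imprecise trial estimate and a large, precise routine‐practice estimate would each contribute to a pooled effect.

```python
# Illustrative inverse-variance pooling of effect estimates drawn from
# different categories of evidence. All effect sizes and standard errors
# are hypothetical, for demonstration only.

def pooled_effect(estimates):
    """Pool (effect_size, standard_error) pairs by inverse-variance weighting."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical counselling-vs-CBT differences (Cohen's d):
evidence = [
    (0.10, 0.08),  # a small RCT: larger apparent difference, imprecise
    (0.02, 0.03),  # a large routine-practice dataset: near-zero, precise
]
d, se = pooled_effect(evidence)
# The precise routine-practice estimate dominates, pulling the pooled
# effect towards its near-zero value.
```

Because weights are the reciprocal of each estimate's variance, the routine‐practice estimate here carries roughly seven times the weight of the trial estimate; in a real teleoanalysis the estimates would also need to be placed on a common outcome metric before pooling.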
Conclusions and recommendations
We have argued for greater precision in defining the profession and practice of counselling, provided an overview of research on counselling for the treatment of depression from meta‐analyses and RCTs, raised issues arising from a sole reliance on trials, and put the case for broadening the bandwidth of high‐quality evidence using large routine standardised data sets and the consideration of high‐quality qualitative studies. Overall, with regard to depression, counselling is effective. Some analyses suggest it is somewhat less effective than other therapies for depression (e.g., CBT), but when research findings are adjusted for researcher allegiance and low risk of bias, such differences are minimal and not clinically relevant (Cuijpers, 2017 ). Results from (very) large standardised data sets in routine practice show counselling to be as effective as CBT in the treatment of patient‐reported depression, with a suggestion that it may be more cost‐efficient. However, such data are not considered by NICE even though they are consistent with the scope of data defined in their guideline development procedural manual (NICE, 2014 /2017).
One clear observation concerning RCTs in the field of depression is the paucity of high‐quality head‐to‐head trials relating to counselling. In addition, there are calls from advocates of RCTs for trials to be larger and pragmatic (Wessley, 2007 ). In response to such calls, a large pragmatic noninferiority RCT comparing CfD (person‐centred experiential therapy) with CBT as the benchmark treatment will yield initial results late in 2018 (Saxon, Ashley et al., 2017 ). Particularly significant is the trial's focus on patients diagnosed as experiencing moderate or severe depression. The results regarding any differential effectiveness of counselling between moderate and severe depression will address a key issue as to whether CfD could be considered as a front‐line intervention. Funders should call for other therapeutic approaches to be evaluated using CBT as a benchmark – to determine whether another therapy is, in any clinically meaningful way, noninferior to CBT. In this way, a robust and relevant knowledge base will be constructed that aims to ensure quality and standards of psychological interventions for the treatment of depression while providing choice to patients. This is important given the mounting empirical evidence that improving patient treatment choice improves therapy outcomes (Lindhiem, Bennett, Trentacosta & McLear, 2014 ; Williams et al., 2016 ).
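The noninferiority logic behind a trial of this kind can be sketched as follows: the new therapy is declared noninferior to the benchmark if the upper bound of the confidence interval for the outcome difference (benchmark minus new therapy, positive values favouring the benchmark) stays below a prespecified margin. This is a generic sketch; the margin, z‐value and outcome figures below are hypothetical and are not taken from the PRaCTICED protocol.

```python
# Generic noninferiority check on a mean outcome difference.
# All numbers are hypothetical illustrations, not trial results.

def noninferior(diff, se, margin, z=1.96):
    """True if the CI upper bound for (benchmark - new therapy) is below the margin."""
    upper_bound = diff + z * se
    return upper_bound < margin

# Hypothetical example: CBT outperforms counselling by 0.5 points on a
# depression measure (SE 0.4), against a 2-point noninferiority margin.
result = noninferior(diff=0.5, se=0.4, margin=2.0)  # upper bound 1.284 < 2.0
```

If the observed difference or its uncertainty were larger, say a 1.5‐point difference with the same standard error, the upper bound (2.284) would cross the margin and noninferiority could not be claimed.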
Finally, in this article, we have sought to make an argument about re‐evaluating the definition of best evidence for guideline development. Using the evidence base for counselling in the treatment of depression as an example, we have argued that guideline developers should move towards integrating differing forms of high‐quality evidence rather than relying on trials alone. But this requires change for all stakeholders: for individual researchers in counselling to be strategic and ensure their work builds cumulatively on the work of others; for researchers in organisations to yield larger and more substantive studies; for service providers to collaborate in collating common data through, for example, building practice research networks; for counselling bodies to devise, fund and implement research strategies that will deliver a robust evidence base for practice; and for guideline developers to accept a diversity of substantive research approaches that, combined, will yield best evidence. In doing so, it will be possible to draw more robust conclusions not only about the cost‐effectiveness of depression treatment in the NHS and the clinical efficacy and effectiveness of different interventions, but also, potentially, about the community, service, therapist and patient variables that significantly impact on patient outcomes.
Acknowledgements
We would like to thank the anonymous reviewers for their helpful comments on an earlier draft.
Biographies
Michael Barkham is Professor of Clinical Psychology and Director of the Centre for Psychological Services Research at the University of Sheffield.
Naomi P. Moller is Joint Head of Research for the British Association for Counselling and Psychotherapy and Senior Lecturer in the School of Psychology at the Open University.
Joanne Pybis is Senior Research Fellow for the British Association for Counselling and Psychotherapy.
The views expressed in this article are our own and do not necessarily reflect the views of our respective organisations.
- American Counseling Association (2017). What is counseling? Retrieved from https://www.counseling.org/aca-community/learn-about-counseling/what-is-counseling/overview
- American Psychiatric Association (2017). APA practice guidelines . Retrieved from http://psychiatryonline.org/guidelines
- American Psychological Association (2012). Recognition of psychotherapy effectiveness . Retrieved from http://www.apa.org/about/policy/resolution-psychotherapy.aspx
- Barber, J. P. , Connolly, M. B. , Crits‐Christoph, P. , Gladis, L. , & Siqueland, L. (2000). Alliance predicts patients’ outcome beyond in‐treatment change in symptoms . Journal of Consulting and Clinical Psychology , 68 , 1027–1032. [ PubMed ] [ Google Scholar ]
- Barkham, M. , Lutz, W. , Lambert, M. J. , & Saxon, D. (2017). Therapist effects, effective therapists, and the law of variability In Castonguay L. G., & Hill C. E. (Eds.), How and why are some therapists better than others?: Understanding therapist effects (pp. 13–36). Washington, DC: American Psychological Association. [ Google Scholar ]
- Barkham, M. , & Margison, F. (2007). Practice‐based evidence as a complement to evidence‐based practice: from dichotomy to chiasmus In Freeman C., & Power M. (Eds.), Handbook of evidence‐based psychotherapies: A guide for research and practice (pp. 443–476). Chichester, United Kingdom: Wiley. [ Google Scholar ]
- Barkham, M. , Stiles, W. B. , Lambert, M. J. , & Mellor‐Clark, J. (2010). Building a rigorous and relevant knowledge‐base for the psychological therapies In Barkham M., Hardy G. E., & Mellor‐Clark J. (Eds.), Developing and delivering practice‐based evidence: A guide for the psychological therapies (pp. 21–61). Chichester, United Kingdom: Wiley. [ Google Scholar ]
- Barnes, M. , Sherlock, S. , Thomas, L. , Kessler, D. , Kuyken, W. , Owen‐Smith, A. , … Turner, K. (2013). No pain, no gain: depressed clients’ experiences of cognitive behavioural therapy . British Journal of Clinical Psychology , 52 , 347–364. [ PubMed ] [ Google Scholar ]
- Barth, J. , Munder, T. , Gerger, H. , Nüesch, E. , Trelle, S. , Znoj, H. , … Cuijpers, P. (2013). Comparative efficacy of seven psychotherapeutic interventions for patients with depression: a network meta‐analysis . PLoS Medicine , 10 , e1001454. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Beattie, A. , Shaw, A. , Kaur, S. , & Kessler, D. (2009). Primary‐care patients’ expectations and experiences of online cognitive behavioural therapy for depression: a qualitative study . Health Expectations , 12 , 45–59. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Beck, C. T. (2002). Postpartum depression: a metasynthesis . Qualitative Health Research , 12 , 453–472. [ PubMed ] [ Google Scholar ]
- Bedi, N. , Chilvers, C. , Churchill, R. , Dewey, M. , Duggan, C. , Fielding, K. , … Williams, I. (2000). Assessing effectiveness of treatment of depression in primary care . British Journal of Psychiatry , 177 , 312–328. [ PubMed ] [ Google Scholar ]
- Binder, P. E. , Holgersen, H. , & Nielsen, G. H. S. (2010). What is a ‘good outcome’ in psychotherapy? A qualitative exploration of former patients’ point of view . Psychotherapy Research , 20 , 285–294. [ PubMed ] [ Google Scholar ]
- British Association for Counselling and Psychotherapy (2009). BACP issues warning that new depression guidelines may harm patients , October 9 2009. Retrieved from http://www.bacp.co.uk/media/index.php?newsId=1610
- British Association for Counselling and Psychotherapy (2017). What is counselling? Retrieved from http://www.bacp.co.uk/crs/Training/whatiscounselling.php
- Cape, J. , Whittington, C. , Buszewicz, M. , Wallace, P. , & Underwood, L. (2010). Brief psychological therapies for anxiety and depression in primary care: meta‐analysis and meta‐regression . BMC Medicine , 8 , 38. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Cartwright, N. , & Munro, E. (2010). The limitations of randomized controlled trials in predicting effectiveness . Journal of Evaluation in Clinical Practice , 16 , 260–266. [ PubMed ] [ Google Scholar ]
- Clark, D. M. (2011). Implementing NICE guidelines for the psychological treatment of depression and anxiety disorders: the IAPT experience . International Review of Psychiatry , 23 , 318–327. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Clarke, J. , & Barkham, M. (2009). Tribute to Phil Richardson ‐ Evidence de rigueur: the shape of evidence in psychological therapies and the modern practitioner as teleoanalyst . Clinical Psychology Forum , 202 , 7–11. [ Google Scholar ]
- Cohen, J. (1992). A power primer . Psychological Bulletin , 112 , 155–159. [ PubMed ] [ Google Scholar ]
- Cost of Depression in England (2010). All Party Parliamentary Group on Wellbeing Economics . Retrieved from https://wellbeingeconomics.wordpress.com/reports/ . Cited 19 July 2017.
- Cuijpers, P. (2016). Are all psychotherapies equally effective in the treatment of adult depression? The lack of statistical power of comparative outcome studies . Evidence Based Mental Health , 19 , 39–42. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Cuijpers, P. (2017). Four decades of outcome research on psychotherapies for adult depression: an overview of a series of meta‐analyses . Canadian Psychology/Psychologie Canadienne , 58 , 7–19. [ Google Scholar ]
- Cuijpers, P. , Driessen, E. , Hollon, S. D. , van Oppen, P. , Barth, J. , & Andersson, G. (2012). The efficacy of non‐directive supportive therapy for adult depression: a meta‐analysis . Clinical Psychology Review , 32 , 280–291. [ PubMed ] [ Google Scholar ]
- Cuijpers, P. , Turner, E. H. , Koole, S. L. , van Dijke, A. , & Smit, F. (2014). What is the threshold for a clinically relevant effect? The case of major depressive disorders . Depression and Anxiety , 31 , 374–378. [ PubMed ] [ Google Scholar ]
- Dias S. (2013). Using network meta‐analysis (NMA) for decision making . Paper presented at the 38th International Society for Clinical Biostatistics, Munich, Germany. [ Google Scholar ]
- Emslie, C. , Ridge, D. , Ziebland, S. , & Hunt, K. (2006). Men's accounts of depression: reconstructing or resisting hegemonic masculinity? Social Science & Medicine , 62 , 2246–2257. [ PubMed ] [ Google Scholar ]
- European Association for Counselling (2017). Definition of counselling . Retrieved from http://eac.eu.com/standards-ethics/definition-counselling/ . Cited 19 June 2017.
- Evans, C. , Connell, J. , Barkham, M. , Margison, F. , Mellor‐Clark, J. , McGrath, G. , & Audin, K. (2002). Towards a standardised brief outcome measure: psychometric properties and utility of the CORE‐OM . British Journal of Psychiatry , 180 , 51–60. [ PubMed ] [ Google Scholar ]
- Goldman, R. N. , Greenberg, L. S. , & Angus, L. (2006). The effects of adding emotion‐focussed interventions to the client‐centred relationship conditions in the treatment of depression . Psychotherapy Research , 16 , 537–549. [ Google Scholar ]
- Greenberg, L. S. , & Watson, J. C. (1998). Experiential therapy of depression: differential effects of client‐centred relationship conditions and process experiential interventions . Psychotherapy Research , 8 , 210–224. [ Google Scholar ]
- Gyani, A. , Pumphrey, N. , Parker, H. , Shafran, R. , & Rose, S. (2012). Investigating the use of NICE guidelines and IAPT services in the treatment of depression . Mental Health in Family Medicine , 9 , 149–160. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Gyani, A. , Shafran, R. , Layard, R. , & Clark, D. M. (2013). Enhancing recovery rates: lessons from year one of IAPT . Behaviour Research and Therapy , 51 , 597–606. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Hernandez‐Villafuerte, K. , Garau, M. , & Devlin, N. (2014). Do NICE decisions affect decisions in other countries? Value Health , 17 , A418. [ PubMed ] [ Google Scholar ]
- Hill, C. E. , Chui, H. , & Baumann, E. (2013). Revisiting and reenvisioning the outcome problem in psychotherapy: an argument to include individualized and qualitative measurement . Psychotherapy , 50 , 68–76. [ PubMed ] [ Google Scholar ]
- Hussain‐Gambles, M. , Atkin, K. , & Leese, B. (2004). Why ethnic minority groups are underrepresented in clinical trials: a review of the literature . Health and Social Care in the Community , 12 , 382–389. [ PubMed ] [ Google Scholar ]
- IAPT Programme (2013). Census of the IAPT Workforce as at August 2012 . Retrieved from https://www.uea.ac.uk/documents/246046/11919343/iapt-workforce-education-and-training-2012-census-report.pdf/907e15d0-b36a-432c-8058-b2452d3628de . Cited 19 June 2017.
- Jadad, A. R. , & Enkin, M. W. (2007). Randomized controlled trials: Questions, answers and musings , 2nd edn. Oxford, United Kingdom: Blackwell. [ Google Scholar ]
- Kanters, S. , Ford, N. , Druyts, E. , Thorlund, K. , Mills, E. J. , & Bansback, N. (2016). Use of network meta‐analysis in clinical guidelines . Bulletin of the World Health Organization , 94 , 782–784. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Kaufman, J. , & Charney, D. (2000). Comorbidity of mood and anxiety disorders . Depression and Anxiety , 12 ( Suppl. 1 ), 69–74. [ PubMed ] [ Google Scholar ]
- Kazdin, A. E. (2008). Evidence‐based treatment and practice. New opportunities to bridge clinical research and practice, enhance the knowledge base and improve patient care . American Psychologist , 63 , 146–159. [ PubMed ] [ Google Scholar ]
- Kennedy‐Martin, T. , Curtis, S. , Faries, D. , Robinson, S. , & Johnston, J. (2015). A literature review on the representativeness of randomized controlled trial samples and implications for the external validity of trial results . Trials , 16 , 495. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- King, M. , Marston, L. , & Bower, P. (2014). Comparison of non‐directive counselling and cognitive behaviour therapy for patients presenting in general practice with an ICD‐10 depressive episode: a randomized controlled trial . Psychological Medicine , 44 , 1835–1844. [ PubMed ] [ Google Scholar ]
- Kroenke, K. , Spitzer, R. L. , & Williams, J. B. W. (2001). The PHQ‐9: validity of a brief depression severity measure . Journal of General Internal Medicine , 16 , 606–613. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Lawrence, V. , Banerjee, S. , Bhugra, D. , Sangha, K. , Turner, S. , & Murray, J. (2006a). Coping with depression in later life: a qualitative study of help‐seeking in three ethnic groups . Psychological Medicine , 36 , 1375–1383. [ PubMed ] [ Google Scholar ]
- Lawrence, V. , Murray, J. , Banerjee, S. , Turner, S. , Sangha, K. , Byng, R. , … Macdonald, A. (2006b). Concepts and causation of depression: a cross‐cultural study of the beliefs of older adults . The Gerontologist , 46 , 23–32. [ PubMed ] [ Google Scholar ]
- Levitt, H. M. , Pomerville, A. , & Surace, F. I. (2016). A qualitative meta‐analysis examining clients’ experiences of psychotherapy: a new agenda . Psychological Bulletin , 142 , 801–830. [ PubMed ] [ Google Scholar ]
- Lillevoll, K. R. , Wilhelmsen, M. , Kolstrup, N. , Høifødt, R. S. , Waterloo, K. , Eisemann, M. , & Risør, M. B. (2013). Patients’ experiences of helpfulness in guided internet‐based treatment for depression: qualitative study of integrated therapeutic dimensions . Journal of Medical Internet Research , 15 ( 6 ), e126. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Lindhiem, O. , Bennett, C. B. , Trentacosta, C. J. , & McLear, C. (2014). Client preferences affect treatment satisfaction, completion, and clinical outcome: a meta‐analysis . Clinical Psychology Review , 34 , 506–517. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- London School of Economics and Political Science. Centre for Economic Performance. Mental Health Policy Group (2006). The depression report: A new deal for depression and anxiety disorders [online]. London, UK: LSE Research Online. [ Google Scholar ]
- Mason, O. , & Hargreaves, I. (2001). A qualitative study of mindfulness‐based cognitive therapy for depression . Psychology and Psychotherapy: Theory, Research and Practice , 74 , 197–212. [ PubMed ] [ Google Scholar ]
- McLeod, J. (2013). Qualitative research: methods and contributions In Lambert M. J. (Ed.), Bergin and Garfield's handbook of psychotherapy and behavior change , 6th edn (pp. 49–84). Hoboken, NJ: Wiley. [ Google Scholar ]
- Midgley, N. , Ansaldo, F. , & Target, M. (2014). The meaningful assessment of therapy outcomes: incorporating a qualitative study into a randomized controlled trial evaluating the treatment of adolescent depression . Psychotherapy , 51 , 128–137. [ PubMed ] [ Google Scholar ]
- Mills, E. J. , Thorlund, K. , & Ioannidis, J. P. A. (2013). Demystifying trial networks and network meta‐analysis . BMJ , 346 ( f2914 ), 1–6. [ PubMed ] [ Google Scholar ]
- Moriana, J. A. , Gálvez‐Lara, M. , & Corpas, J. (2017). Psychological treatments for mental disorders in adults: a review of the evidence of leading international organizations . Clinical Psychology Review , 54 , 29–43. [ PubMed ] [ Google Scholar ]
- Mundt, J. C. , Marks, I. M. , Shear, M. K. , & Greist, J. M. (2002). The work and social adjustment scale: a simple measure of impairment in functioning . British Journal of Psychiatry , 180 , 461–464. [ PubMed ] [ Google Scholar ]
- National Institute for Health and Care Excellence (2009). NICE guidelines for depression in adults: recognition and management . Retrieved from https://www.nice.org.uk/guidance/cg90
- National Institute for Health and Care Excellence (2014/updated 2017). Developing NICE guidelines: the manual: Process and methods . Retrieved from https://www.nice.org.uk/process/pmg20
- National Institute for Health and Care Excellence (2017a). NICE guidance . Retrieved from https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance
- National Institute for Health and Care Excellence (2017b). NICE guidelines . Retrieved from https://www.nice.org.uk/About/What-we-do/Our-Programmes/NICE-guidance/NICE-guidelines
- National Institute for Health and Care Excellence (2017c). Patient and public involvement policy . Retrieved from https://www.nice.org.uk/about/nice-communities/public-involvement/patient-and-public-involvement-policy
- National Institute for Health and Care Excellence (2017d). Depression in adults: recognition and management: Draft guidance consultation . Retrieved from https://www.nice.org.uk/guidance/GID-CGWAVE0725/documents/draft-guideline
- NHS Digital (2014). Psychological therapies: Annual report on the use of IAPT services . England, 2013‐2014. Retrieved from http://content.digital.nhs.uk/catalogue/PUB14899 [ Google Scholar ]
- NHS Digital (2015). Psychological therapies: Annual report on the use of IAPT services . England, 2014‐2015. Retrieved from http://content.digital.nhs.uk/catalogue/PUB19098 [ Google Scholar ]
- NHS Digital (2016). Psychological therapies: Annual report on the use of IAPT services . England, 2015‐2016. Retrieved from http://content.digital.nhs.uk/pubs/psycther1516 [ Google Scholar ]
- NHS England (2016). Our 2016/2017 business plan . Retrieved from https://www.england.nhs.uk/wp-content/uploads/2016/03/bus-plan-16.pdf
- NHS England (2017). IAPT workforce . Retrieved from https://www.england.nhs.uk/mental-health/adults/iapt/workforce/
- NHS England and Health Education England (2016). 2015 Adult IAPT workforce census report . Retrieved from https://www.england.nhs.uk/mentalhealth/wp-content/uploads/sites/29/2016/09/adult-iapt-workforce-census-report-15.pdf
- Nilsson, T. , Svensson, M. , Sandell, R. , & Clinton, D. (2007). Patients’ experiences of change in cognitive–behavioral therapy and psychodynamic therapy: a qualitative comparative study . Psychotherapy Research , 17 , 553–566. [ Google Scholar ]
- Parry, G. , Barkham, M. , Brazier, J. , Dent‐Brown, K. , Hardy, G. , Kendrick, T. , … Lovell, K. (2011). An evaluation of a new service model: Improving access to psychological therapies demonstration sites 2006–2009 . Final report. NIHR Service Delivery and Organisation programme. [ Google Scholar ]
- Perren, S. (2009). Topics in training: thinking of applying for the IAPT high or low‐intensity training? Sara Perren has some advice . Healthcare Counselling and Psychotherapy Journal , 1 , 29–30. [ Google Scholar ]
- Pybis, J. , Saxon, D. , Hill, A. , & Barkham, M. (2017). The comparative effectiveness and efficiency of cognitive behaviour therapy and counselling in the treatment of depression: evidence from the 2nd UK national audit of psychological therapies . BMC Psychiatry , 17 , 215. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Radhakrishnan, M. , Hammond, G. , Jones, P. B. , Watson, A. , McMillan‐Shields, F. , & Lafortune, L. (2013). Cost of Improving Access to Psychological Therapies (IAPT) programme: an analysis of cost of session, treatment and recovery in selected primary care trusts in the east of England region . Behaviour Research and Therapy , 51 , 37–45. [ PubMed ] [ Google Scholar ]
- Rawlins, M. (2008). De testimonio: on the evidence for decisions about the use of therapeutic interventions . The Lancet , 372 , 2152–2161. [ PubMed ] [ Google Scholar ]
- Ridge, D. , & Ziebland, S. (2006). ‘The old me could never have done that’: how people give meaning to recovery following depression . Qualitative Health Research , 16 , 1038–1053. [ PubMed ] [ Google Scholar ]
- Roth, A. D. , Hill, A. , & Pilling, S. (2009). The competences required to deliver effective humanistic psychological therapies . London, UK: Department of Health. [ Google Scholar ]
- Sanders, P. , & Hill, A. (2014). Counselling for depression: A person‐centred and experiential approach to practice . London, United Kingdom: Sage. [ Google Scholar ]
- Saxon, D. , Ashley, K. , Bishop‐Edwards, L. , Connell, J. , Harrison, P. , Ohlsen, S. , … Barkham, M. (2017). A pragmatic randomized controlled trial assessing the non‐inferiority of counselling for depression versus cognitive‐behaviour therapy for patients in primary care meeting a diagnosis of moderate or severe depression (PRaCTICED): a study protocol for a randomized controlled trial . Trials , 18 , 93. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Saxon, D. , & Barkham, M. (2012). Patterns of therapist variability: therapist effects and the contribution of patient severity and risk . Journal of Consulting and Clinical Psychology , 80 , 535–546. [ PubMed ] [ Google Scholar ]
- Saxon, D. , Firth, N. , & Barkham, M. (2017). The relationship between therapist effects and therapy delivery factors: therapy modality, dosage, and non‐completion . Administration and Policy in Mental Health , 44 , 705–715. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Scottish Intercollegiate Guidelines Network (2017). What we do . Retrieved from http://www.sign.ac.uk/what-we-do.html
- Simpson, S. , Corney, R. , Fitzgerald, P. , & Beecham, J. (2000). A randomised controlled trial to evaluate the effectiveness and cost‐effectiveness of counselling patients with chronic depression . Health Technology Assessment , 4 ( 36 ), 1–83. [ PubMed ] [ Google Scholar ]
- Smith, A. , Graham, L. , & Senthinathan, S. (2007). Mindfulness‐based cognitive therapy for recurring depression in older people: a qualitative study . Aging and Mental Health , 11 , 346–357. [ PubMed ] [ Google Scholar ]
- Spitzer, R. L. , Kroenke, K. , Williams, J. B. W. , & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder: the GAD‐7 . Archives of Internal Medicine , 166 , 1092–1097. [ PubMed ] [ Google Scholar ]
- Stiles, W. B. , Barkham, M. , Mellor‐Clark, J. , & Connell, J. (2008). Effectiveness of cognitive‐behavioural, person‐centred, and psychodynamic therapies in UK primary care routine practice: replication in a larger sample . Psychological Medicine , 38 , 677–688. [ PubMed ] [ Google Scholar ]
- Stiles, W. B. , Barkham, M. , Twigg, E. , Mellor‐Clark, J. , & Cooper, M. (2006). Effectiveness of cognitive‐behavioural, person‐centred, and psychodynamic therapies as practiced in UK National Health Service settings . Psychological Medicine , 36 , 555–566. [ PubMed ] [ Google Scholar ]
- Stoppard, J. M. , & McMullen, L. M. (1999). Toward an understanding of depression from the standpoint of women: exploring contributions of qualitative research approaches . Canadian Psychology , 40 , 75–76. [ Google Scholar ]
- Straarup, N. S. , & Poulsen, S. (2015). Helpful aspects of metacognitive therapy and cognitive behaviour therapy for depression: a qualitative study . The Cognitive Behaviour Therapist , 8 , e22. [ Google Scholar ]
- Stronks, K. , Wieringa, F. , & Hardon, A. (2013). Confronting diversity in the production of clinical evidence goes beyond merely including under‐represented groups in clinical trials . Trials , 14 , 177. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Stuart, E. A. , Bradshaw, C. P. , & Leaf, P. J. (2015). Assessing the generalizability of randomized trial results to target populations . Prevention Science , 16 , 475–485. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Timulak, L. , & Elliott, R. (2003). Empowerment events in process‐experiential psychotherapy of depression: an exploratory qualitative analysis . Psychotherapy Research , 13 , 443–460. [ PubMed ] [ Google Scholar ]
- Valkonen, J. , Hänninen, V. , & Lindfors, O. (2011). Outcomes of psychotherapy from the perspective of the users . Psychotherapy Research , 21 , 227–240. [ PubMed ] [ Google Scholar ]
- Vlayen, J. , Aertgeerts, B. , Hannes, K. , Sermeus, W. , & Ramaeker, D. (2005). A systematic review of appraisal tools for clinical practice guidelines: multiple similarities and one common deficit . International Journal for Quality in Health Care , 17 , 235–242. [ PubMed ] [ Google Scholar ]
- Wald, N. J. , & Morris, J. K. (2003). Teleoanalysis: combining data from different types of study . BMJ , 327 , 616–618. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Ward, E. , King, M. , Lloyd, M. , Bower, P. , Sibbald, B. , Farrely, S. , … Addington‐Hall, J. (2000). Randomised controlled trial of non‐directive counselling, cognitive‐behaviour therapy, and usual general practitioner care for patients with depression. I: clinical effectiveness . BMJ , 321 , 1383–1388. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- Watson, J. C. , Gordon, L. B. , Stermac, L. , Kalogerakos, F. , & Steckley, P. (2003). Comparing the effectiveness of process‐experiential with cognitive‐behavioral psychotherapy in the treatment of depression . Journal of Consulting and Clinical Psychology , 71 , 773–781. [ PubMed ] [ Google Scholar ]
- Wessley, S. (2007). Commentary: a defence of the randomized controlled trial in mental health . BioSocieties , 2 , 115–127. [ Google Scholar ]
- Williams, R. , Farquharson, L. , Palmer, L. , Bassett, P. , Clarke, J. , Clark, D. M. , & Crawford, M. J. (2016). Patient preference in psychological treatment and associations with self‐reported outcome: national cross‐sectional survey in England and Wales . BMC Psychiatry , 16 , 4. [ PMC free article ] [ PubMed ] [ Google Scholar ]
- World Health Organization (2017). WHO guidelines approved by the Guidelines Review Committee . Retrieved from http://www.who.int/publications/guidelines/en/
A Guide to the Types of Research and How They're Used
Know before you read: At SNHU, we want to make sure you have the information you need to make decisions about your education and your future—no matter where you choose to go to school. That's why our informational articles may reference careers for which we do not offer academic programs, along with salary data for those careers. Cited projections do not guarantee actual salary or job growth.
Research is the discovery of new ideas. By identifying a problem and devising solutions using source-based evidence, you can apply research to just about any academic area and to many professions, according to Dr. Matthew Schandler , an adjunct instructor of history and academic partner at Southern New Hampshire University (SNHU). Schandler applies his background in political science, data science and history of technology to his work teaching a history research capstone course at SNHU.
Jeremy Pedigo , an adjunct instructor of history and academic partner at SNHU, said that research allows you to discover new ideas that are relevant to your academic field and profession. The process involves answering research questions using scholarly sources . Pedigo is a doctoral student in history himself (doctorates in history are not currently available at SNHU).
Overview of Research
Research is about much more than searching for keywords in a search engine. Schandler said that any type of academic research should center around first identifying a problem and then devising solutions to that problem. The process for devising those solutions depends largely on your content area.
According to the National Institutes of Health ( NIH ), most academic research falls into two broad types: quantitative and qualitative. Quantitative research is the more empirical of the two, meaning it's based on observation or experience rather than logic alone. Generally applied to clinical studies or research with measurable outcomes, it tends to produce numerical data and deductive conclusions, typically drawn from study results and various survey methods. This type of research is well-suited for testing hypotheses and establishing cause-and-effect relationships.
Qualitative research tends to be more narrative in scope, as explained by NIH . Interviews measuring viewpoints and opinions, as well as historical or literary studies, are commonly used for qualitative studies. With this type of research, examining theories and describing decision-making or communication processes is common.
What are the Basic Types of Research?
Once you have determined your research question, you’ll need to decide if you are going to apply a fundamental or applied approach, according to Pedigo.
▸ Fundamental and Applied Research
Fundamental research , as its name implies, is the most basic type of research. “It seeks to answer a general question or find a causal relationship between multiple factors,” Pedigo said. This is particularly useful in undergraduate courses when students are building a foundation of knowledge in their subject area.
Pedigo lists these common sources for conducting fundamental research:
- Books, letters and private papers
- Credible websites and interviews
- Newspaper and magazine articles
- Manuscripts
- Peer-reviewed articles
Applied research seeks to understand societal problems and find solutions to improve everyday life. This involves applying concepts in business, natural sciences and behavioral and social sciences to improve aspects of society. Applied research allows you to apply what you’ve learned to solving problems, Pedigo said.
Common sources of applied research, according to Pedigo, are:
- Academic books
- Focus groups and surveys
- Government reports
▸ Theoretical and Experimental Research
“Theoretical research attempts to measure a theory or phenomenon to determine its relevancy based on research findings,” Pedigo said. “Whereas experimental research is the study of two or more variables with a control group and an experimental group.”
Deciding which to use depends a great deal on the type of problem you're trying to solve:
- Theoretical research is rooted in hypothetical situations.
- Experimental research is where theories are tested for validity.
Physics is one area where theoretical and experimental research are commonly applied together, according to Schandler.
“Theoretical physicists develop models to consider inexplicable phenomena,” he said. For example, famed astronomer Edwin Hubble conducted theoretical research to try to prove that nebulae existed beyond the Milky Way. Hubble later developed new telescopic equipment to test his theory using experimental research.
Which Careers Focus on Research?
Research skills can enhance virtually any career. Some careers, like scientists or college professors, focus on research as a core aspect of the work. Many other careers benefit from people who bring strong research skills to their roles.
According to Pedigo, some examples of careers where strong research skills could be particularly helpful are:
- Data Scientist , where you could use research skills to gather and analyze data to develop algorithms or recommend systems and processes. According to the U.S. Bureau of Labor Statistics (BLS), the median salary for jobs in this role was $103,500 in 2022.* Explore a master's degree in data analytics .
- Mechanical Engineer , where you could research new ways to build or enhance mechanical systems, or design new energy systems or other processes to solve problems. The median salary for jobs in this role was $96,310 in 2022, according to BLS.* Explore a degree in mechanical engineering .
- Environmental Scientist , where research into climate change, pollution or water sources could help you make the world a cleaner and healthier place. BLS shows the median salary for this position was $76,480 in 2022.* Explore a degree in environmental science .
- Historian , where you could specialize in any number of areas. Historians may work at universities, nonprofits, governmental organizations or museums, to name just a few of the possibilities. While salaries can vary widely, BLS lists the median salary for historians as $64,540 in 2022.* Explore a degree in history .
- Management Analyst , where you could work in a variety of businesses researching market trends and making recommendations to improve business operations. The median salary for jobs in this role was $95,290 per year in 2022, according to BLS.* Explore a Master of Business Administration .
What Are the Top Skills Needed for a Research Career?
Having the desire to learn a new topic and discover new findings is critical to becoming a strong researcher, according to Pedigo. He noted the following skills in particular as stand-outs for conducting research, regardless of field:
- Computational proficiency , including using online databases. Strong computer search skills enhance the ability to seek out information from different perspectives.
- Documentation and note-taking , both of which are an absolute must, according to Schandler. “If you don’t read materials actively and accurately, you won’t make the nuanced connections needed to draw accurate conclusions,” he said.
- Literary ability , which includes writing outlines, taking notes on research papers and writing in the appropriate academic style for your discipline, is critically important, according to both Pedigo and Schandler.
- Intellectual curiosity is a must-have trait. “If one lacks a drive for knowledge, they might not ask the important questions essential to guiding the research process,” Schandler said.
- Organizational skills are also vital. Working with research can involve working with large volumes of information. Developing a process to keep research structured is necessary.
Pedigo also noted that taking full advantage of academic resources through your university library is helpful. “You’ll want to become familiar with your library and its staff to learn about the types of sources and services available to students, faculty and staff, and how they can help aid you in your research,” he said.
Those resources can be very helpful when it comes to ensuring that you write with integrity and report your research findings accurately.
The Key to Conducting Research
Regardless of the method you use, the most important aspect of conducting any kind of research is that it leads to actionable outcomes. By staying focused on the core question they're trying to answer, researchers in any discipline can help increase knowledge in their field and find new ways for information to cross over into other disciplines.
Schandler said that “in a world rife with disinformation and misinformation, research steers analysts towards deeper understandings.” He feels that the interdisciplinary, collaborative sharing of research findings ensures creative solutions to the world’s great problems, past and present.
*Cited job growth projections may not reflect local and/or short-term economic or job conditions and do not guarantee actual job growth. Actual salaries and/or earning potential may be the result of a combination of factors including, but not limited to: years of experience, industry of employment, geographic location, and worker skill.
Marie Morganelli, PhD, is an educator, writer and editor.
Published on 1.2.2024 in Vol 26 (2024)
Race, Ethnicity, and Other Cultural Background Factors in Trials of Internet-Based Cognitive Behavioral Therapy for Depression: Systematic Review
Authors of this article:
- Robinson De Jesús-Romero 1 , MS ;
- Amani R Holder-Dixon 1, 2 , BA ;
- John F Buss 1 , BS ;
- Lorenzo Lorenzo-Luaces 1 , PhD
1 Department of Psychological and Brain Sciences, Indiana University - Bloomington, Bloomington, IN, United States
2 Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN, United States
Corresponding Author:
Robinson De Jesús-Romero, MS
Department of Psychological and Brain Sciences
Indiana University - Bloomington
1101 East 10th Street
Bloomington, IN, 47405
United States
Phone: 1 7872409456
Email: [email protected]
Background: There is a growing interest in developing scalable interventions, including internet-based cognitive behavioral therapy (iCBT), to meet the increasing demand for mental health services. Given the growth in diversity worldwide, it is essential that the clinical trials of iCBT for depression include diverse samples or, at least, report information on the race, ethnicity, or other background indicators of their samples. Unfortunately, the field lacks data on how well diversity is currently reported and represented in the iCBT literature.
Objective: Thus, the main objective of this systematic review was to examine the overall reporting of racial and ethnic identities in published clinical trials of iCBT for depression. We also aimed to review the representation of specific racial and ethnic minoritized groups and the inclusion of alternative background indicators such as migration status or country of residence.
Methods: Studies were included if they were randomized controlled trials in which iCBT was compared to a waiting list, care-as-usual, active control, or another iCBT. The included papers also had to focus on acute treatment (eg, 4 weeks to 6 months) of depression, be delivered via the internet on a website or a smartphone app, and use guided or unguided self-help. Studies were initially identified from the METAPSY database (n=59) and then extended to include papers up to 2022, with papers retrieved from Embase, PubMed, PsycINFO, and Cochrane (n=3). Risk of bias assessment suggested that the reported studies had at least some risk of bias due to the use of self-report outcome measures.
Results: A total of 62 iCBT randomized controlled trials representing 17,210 participants are summarized in this study. Of those 62 papers, only 17 (27%) reported race, and only 12 (19%) reported ethnicity. Reporting outside of the United States was very poor, with the United States accounting for 15 (88%) of the 17 studies that reported race and 9 (75%) of the 12 that reported ethnicity. Of the 3623 participants whose race was reported in the systematic review, the racial category reported most often was White (n=2716, 74.9%), followed by Black (n=274, 7.6%) and Asian (n=209, 5.8%). Furthermore, only 25 (54%) of the 46 papers conducted outside of the United States reported other background demographics.
Conclusions: It is important to note that the underreporting observed in this study does not necessarily indicate an underrepresentation in the actual study population. However, these findings highlight the poor reporting of race and ethnicity in iCBT trials for depression found in the literature. This lack of diversity reporting may have significant implications for the scalability of these interventions.
Introduction
In 2020, approximately 34% of the US population identified themselves as belonging to racial minoritized groups, and 19% identified as members of ethnic minoritized groups [ 1 ]. Following the terminology used by other scholars [ 2 ], we employ the term “minoritized” rather than “minorities” to emphasize that these individuals’ experiences are not intrinsic qualities of statistically small groups; instead, they are the result of dominant groups subordinating and, consequently, “minoritizing” them. The proportion of individuals belonging to a racial or ethnic minoritized group in the United States is expected to continue growing [ 3 ]. This increase in the proportion of minoritized groups is also expected in other countries. There are currently an estimated 272 million immigrants worldwide, with 82 million residing in Europe, 59 million in North America, and 49 million in Northern Africa and Western Asia [ 4 ]. Therefore, it is essential that mental health research responds to this increase in racial and ethnic diversity.
Mental health is one of the leading causes of disability in the United States and worldwide [ 5 ]. Among mental disorders, depression is the leading cause of significant disability [ 6 ] with an estimated economic burden of around US $210.5 billion per year in the United States alone [ 6 ]. Thus, given that mental health is strongly correlated with racial-ethnic identity and with other factors that are themselves linked to mental health (eg, socioeconomic status), there is a strong imperative for mental health research to represent the racial-ethnic diversity in our populations [ 7 - 9 ].
Current Reporting on Race and Ethnicity
Current research in mental health continues to rely primarily on non-Hispanic White populations, which fails to reflect the demographic makeup of countries worldwide. For example, Mak and colleagues [ 10 ] reviewed 379 National Institute of Mental Health–funded clinical trials for various mental health disorders published between 1995 and 2004 to investigate how many trials reported sex, race, and ethnicity. They found that 91.6% of the trials reported sex. However, only 47.8% included race or ethnicity in their demographics, and 25.6% had incomplete race or ethnicity information.
Since then, the overall pattern of reporting demographic information has improved slightly. A more recent meta-analysis by Polo and colleagues [ 11 ] examined trends in the reporting and representation of racial-ethnic diversity in randomized controlled trials (RCTs) of psychotherapy for depression over 36 years. They found that reporting of racial-ethnic group membership increased from 16% to 55% during this time. This increase was attributed to the introduction of new guidelines aimed at increasing gender, race, and ethnicity reporting in RCTs. These guidelines included the National Institutes of Health Guidelines on the Inclusion of Women and Minorities as Subjects in Clinical Research in 1994 [ 12 ], the CONSORT (Consolidated Standards of Reporting Trials) statement in 2001 [ 13 ], and the American Psychological Association Publications and Communications Board Working Group on Journal Article Reporting Standards (JARS) in 2008. However, the reporting of treatment effects by ethnic groups remained low, at 2.1% [ 11 ]. Additionally, only non-Hispanic Black and Latino individuals were represented significantly more than in previous years. Asian Americans, multiracial individuals, Native Americans or Native Alaskans, and Native Hawaiians or Pacific Islanders were still underrepresented in the literature, with no significant change across time [ 11 ], despite evidence that participants are willing to engage in web-based health-related research [ 14 ].
Depression and Health Disparities
Existing research on depression and race-ethnicity presents a somewhat complex picture. In the United States, apart from Native Americans, who have the highest rates of depression, minoritized groups tend to have lower rates of depression than non-Hispanic White individuals [ 15 ]. However, minoritized individuals may be at a higher risk for more severely debilitating depression when compared to non-Hispanic White individuals [ 16 ]. Additionally, the costs associated with treating depression may impact racial groups differently. For example, those who are middle class and Black tend to encounter more obstacles to upward mobility (ie, moving from one social class to another) and are more susceptible to downward mobility, which may impede their access to mental health care [ 17 , 18 ]. In 2020, 53% of Black individuals experienced worse access to care than White individuals. Similarly, 29% of Asian individuals had worse access to care, while 50% of American Indian or Alaska Native individuals received worse access to care than White individuals. Native Hawaiian and Other Pacific Islander individuals, however, experienced the same level of access to care as White individuals [ 19 ].
Furthermore, substantial barriers in access to mental health care tend to affect minoritized racial-ethnic groups more than nonminoritized groups. For example, there is evidence that interpersonal factors (eg, stigma and personal shame), sociocultural factors (eg, fear of negative evaluation by family or peers, preference for traditional coping strategies such as withdrawal and “accepting fate”), and systemic factors (eg, lack of culturally centered clinical environments and culturally responsive services and language barriers) can make it difficult for members of racial-ethnic minoritized groups to access mental health care [ 20 ]. These barriers may be particularly pronounced in face-to-face mental health treatment settings, where patients need to engage fully with another person for it to be successful; however, innovative internet-based delivery methods (eg, internet-based cognitive behavioral therapy [iCBT] self-help) may reduce barriers to treatment by making it more accessible [ 21 ]. Moreover, there is evidence that Latinx and non-Hispanic Black individuals may be willing to engage in smartphone-based interventions and bibliotherapy, respectively [ 22 , 23 ].
In addition to race and ethnicity, migration status is another crucial sociodemographic factor associated with depressive and anxious symptoms. For example, in a study involving 37,076 individuals from 20 European countries, first-generation migrants exhibited higher levels of depression, with rates significantly elevated for those born outside Europe [ 24 ]. Research on refugees resettled in high-income countries indicates that they experience increased rates of anxiety and depression compared to the general population [ 25 ]. These findings suggest that aspects such as migration status, country of origin, refugee status, and other related demographics play a significant role in predicting depression and anxiety symptoms, emphasizing the importance of providing mental health services to cater to these populations.
iCBT for Depression
One of the most studied and promising internet-based interventions for depression is iCBT. iCBT involves the provision of self-help materials (eg, websites, apps, and videos) that impart psychoeducation and teach skills that individuals can use to manage their symptoms. iCBT is usually delivered in 1 of 2 formats: guided or unguided. In guided self-help, individuals access self-help material and are assisted by a trained mental health professional or paraprofessional [ 26 ]. In unguided self-help, individuals access materials independently, without assistance or support. Both guided and unguided iCBT treatments have been shown to be more efficacious than waiting list controls [ 27 ].
Although unguided forms of self-help are more scalable and easier to access, guided self-help has been shown to be more effective than unguided self-help and as effective as face-to-face treatment [ 27 ]. Individuals may also adhere to it more closely [ 28 ]. Considering the wide support for the efficacy of iCBT, this format has the potential to reduce the public health burden of untreated depression, especially among individuals from a racial-ethnic minoritized group [ 21 , 29 ]. Given the potential for iCBT to reach minoritized communities, it is important to understand the reporting and representation of race-ethnicity in iCBT studies. If individuals from racial-ethnic minoritized groups are being underreported, it would be difficult to determine how well our current interventions meet the needs of these individuals.
For these reasons, we explored the reporting and representation of racial-ethnic diversity in clinical trials of iCBT for depression. Our first aim was to examine the overall reporting of individuals from racial-ethnic minoritized groups in published RCTs of iCBT for depression. Our second aim was to explore the representation of specific racial-ethnic minoritized groups in RCTs of iCBT for depression. To achieve these aims, we conducted a systematic review of RCTs of iCBT for depression and examined the reporting of race-ethnicity along with the representation (ie, the sample composition) [ 30 - 90 ]. We also explored the reporting of other background factors (eg, migration status).
Search Strategies
To identify RCTs, we employed a 2-pronged search strategy. First, we searched METAPSY, a database of randomized clinical trials for depression created by Cuijpers [ 91 ] and colleagues [ 92 ]. The public version of the database covers a search of trials between January 1, 1966, and January 2018, including Risk of Bias (ROB) assessments. We obtained an updated database version from one of the METAPSY lead researchers (P Cuijpers), which included studies up to January 1, 2021. The database of 763 studies was created from a search on PubMed, PsycINFO, Embase, and the Cochrane Library. The search was performed using terms involving psychotherapy (eg, “psychotherapy” and “cognitive-behavioral therapy”) and depression (eg, “depressive symptoms” and “major depression”). An example of the search string for PubMed can be found in Multimedia Appendix 1 . Studies were included if they were an RCT in which a psychotherapy condition was compared to another. Control conditions included waiting list, treatment as usual, pill placebo, pharmacotherapy, alternate therapy delivery modalities (eg, single vs group therapy), or any other active control condition. Exclusion criteria included (1) no statement of randomization; (2) depression not being an inclusion criterion; (3) not being focused on acute treatment (eg, maintenance or relapse prevention); (4) studies with children or adolescents; (5) dissertation studies; (6) studies in which the depression was not the target of treatment; (7) if effect sizes could not be calculated; and (8) studies in languages other than English, Spanish, German, or Dutch. METAPSY contains a risk-of-bias assessment using the risk-of-bias assessment tool from the Cochrane Collaboration for papers published before 2018 [ 92 ]. Two raters (ARH-D and RDJ-R) examined the titles and abstracts of the 763 studies identified in METAPSY to decide whether they were relevant for review.
To obtain more recent papers, we used the same search string used for METAPSY [ 92 ], with the addition of a search term for internet-based studies, to search Cochrane, PubMed, PsycINFO, and Embase for papers published between January 1, 2021, and July 18, 2022. An example search string for PubMed is available in Multimedia Appendix 2 . This second search yielded an additional 2159 studies, of which 590 were duplicates. We also identified an additional paper by cross-referencing a study protocol. Further, 4 raters (ARH-D, JFB, LL-L, and RDJ-R) used a randomized, counterbalanced design to review the titles and abstracts of the remaining 1570 studies. Disagreements between the raters were resolved by consensus.
We included randomized controlled trials in which iCBT was compared to a waiting list, care-as-usual, active control, or another iCBT treatment (eg, 2 different iCBTs or unguided vs guided iCBT). The included papers must have also focused on acute treatment (eg, 4 weeks to 6 months) of depression, delivered via the internet on a website or a smartphone app as guided or unguided self-help iCBT. In addition to the exclusion criteria used for METAPSY [ 91 , 92 ], we also excluded studies if (1) the interventions were not delivered via the internet or (2) the interventions focused on specific medical or psychiatric subpopulations (eg, individuals with cannabis use disorder). We excluded studies that used specific subpopulation samples because we aimed to study the general reporting and representation of race-ethnicity in trials of iCBT for depression [ 93 , 94 ]. Additionally, we concentrated on studies that examined depression to prevent overlapping treatment outcomes (eg, chronic pain, sleep, and biases due to the specificity of the pool of participants) [ 95 ]. Of the 2333 studies, 282 were identified as relevant to self-help iCBT for depression. We then examined the full text of these studies and excluded those that did not meet our inclusion criteria. See Figure 1 for the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart. We assessed the ROB for 20 papers that did not already have an ROB assessment in the public version of the METAPSY database: 17 from the METAPSY update and 3 from our systematic search ( Multimedia Appendix 3 ).
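The record counts reported across the two search stages can be checked with simple bookkeeping. The sketch below is our own illustration (the variable names are not the authors'); every number is taken directly from the text above.

```python
# Bookkeeping check of the screening counts reported in the text.
metapsy_records = 763      # studies screened from the updated METAPSY database
new_search_records = 2159  # records from the 2021-2022 database search
duplicates = 590           # duplicates removed from the new search
cross_referenced = 1       # paper found by cross-referencing a study protocol

# New records left to screen after deduplication, plus the cross-referenced paper
screened_new = new_search_records - duplicates + cross_referenced
print(screened_new)   # 1570, matching "the remaining 1570 studies"

# Total records screened across both search stages
total_screened = metapsy_records + screened_new
print(total_screened) # 2333, matching "Of the 2333 studies, 282 were identified..."
```

The two totals reproduce the figures quoted in the text, which is a quick way to confirm the PRISMA flow is internally consistent.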
Coders rated study characteristics, trial design aspects, intervention type, and control group. They also coded various study-level features, including year of publication, whether the studies used guided or unguided iCBT and the country from which the study was sampled. For our primary question of interest, we first coded whether studies included race or ethnicity as “1” if they did or “0” if they did not. Additionally, when studies reported race or ethnicity, we recorded the number of participants in the sample that belonged to the different racial and ethnic groups. We then calculated the proportion of minoritized group members in the sample for the studies that reported overall race or ethnicity. Finally, given that we included studies conducted outside the United States, where there may be different conceptualizations of race and ethnicity, we also recorded other alternative variables indicative of participant background, including migration status, country of birth, country of residence, nationality, or native language. Agreement between raters was based on consensus, with disagreements being resolved by the principal investigator (LL-L).
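The coding scheme described above amounts to a binary indicator per study plus group counts where available. The following is a minimal sketch of that scheme under our own assumptions; the study records and field names are hypothetical illustrations, not the authors' data.

```python
# Hypothetical study records illustrating the coding scheme: race reporting is
# coded 1/0, and group counts are recorded only when a study reports them.
studies = [
    {"id": "trial_a", "reported_race": 1, "n_minoritized": 12, "n_total": 100},
    {"id": "trial_b", "reported_race": 0, "n_minoritized": None, "n_total": 250},
    {"id": "trial_c", "reported_race": 1, "n_minoritized": 40, "n_total": 160},
]

# Share of studies coded "1" for reporting race
reporting_rate = sum(s["reported_race"] for s in studies) / len(studies)

# Proportion of minoritized participants, computed only for reporting studies
for s in studies:
    if s["reported_race"]:
        s["prop_minoritized"] = s["n_minoritized"] / s["n_total"]
```

Applied to the full set of 62 trials, the same two computations yield the reporting rates and sample-composition proportions presented in the Results.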
Study Selection and Characteristics
In total, 62 papers met the criteria of an RCT of iCBT for depression in adults. Some studies used samples from multiple countries or reported multiple arms; therefore, there are more interventions and control conditions than the number of studies (ie, 62). These studies included a total of 64 interventions, of which 41 (64%) used guided self-help and 23 (36%) used unguided self-help. Regarding control groups, 34 (52%) of the 65 control groups were waiting lists, 12 (18%) were treatment as usual, 12 (18%) were active controls, and 7 (11%) were internet-based active treatment comparators (ie, guided or unguided iCBT). The studies drew on samples from 13 countries: United States (n=17), Germany (n=12), Australia (n=10), Sweden (n=10), the Netherlands (n=6), China (n=2), Finland (n=2), United Kingdom (n=2), Canada (n=1), Colombia (n=1), Ireland (n=1), New Zealand (n=1), and Switzerland (n=1). The 62 papers included in this study had a total sample of 17,210 participants. ROB information is available in Multimedia Appendix 3 and suggests that the reported studies had at least some ROB, although most of this bias was due to the use of self-report outcome measures. In the other ROB domains, 14 (70%) out of 20 studies had low risk concerning missing outcome data, and 18 (90%) out of 20 showed low risk for deviation from intended interventions.
Reporting of Race and Ethnicity
Of the 62 papers, only 17 (27%) reported race, and 12 (19%) reported ethnicity. Despite this overall low reporting rate, we observed that reporting differed markedly by the country sampled in the study (Fisher exact test P <.001). Of the studies conducted in the United States (n=17), almost all included race (n=15, 88%). Outside of the United States, only 2 studies conducted in the United Kingdom (n=1) and Australia (n=1) reported race. RCTs using samples from Germany, the Netherlands, Sweden, Ireland, Switzerland, China, Finland, Canada, Colombia, and New Zealand did not report race. Similarly, ethnicity was only reported in 12 studies, 4 of which grouped ethnicity with race. A fifth trial described reporting ethnicity, but it combined racial and ethnic categories; therefore, it was labeled as reporting both race and ethnicity. Additionally, 2 of these studies were conducted specifically on a minoritized group. The results are summarized in Table 1 . Translating the reporting numbers to individuals, of the 17,210 participants in all these RCTs, 13,587 (78.9%) had no reported race, and 14,810 (86.1%) had no reported ethnicity.
Representation of Individuals From Racial and Ethnic Minoritized Groups
Of the 17 papers that reported race, 4 (24%) categorized groups into “minority” versus “non-minority” or “White” versus “other.” The other papers included the following groups in their breakdowns: American Indian or Alaskan Native (n=3); Asian (n=8); Black or African American (n=12); Middle Eastern or North African (n=1); multiracial (n=8); Native American (n=1); other (n=8), which included “not specified,” “declined to answer,” and not reported as White; Pacific Islander (n=3); unknown (n=18); and White (n=16) (see Table 2 ).
a AIAN: American Indian or Alaskan Native.
b MENA: Middle Eastern or North African.
The papers that reported ethnicity ( Table 3 ) reported the following ethnic groups: American (n=1), Chinese (n=1), European (n=1), Hispanic or Latino (n=9), other (n=1), Turkish (n=1), and White British (n=1). Many papers specified ethnicity for minoritized member groups (eg, Hispanic) but not for majority members (eg, non-Hispanic).
Representation of Race in iCBT Trials in the United States
Given that reporting of race was highest in the United States, we explored the representation of individuals from racial-ethnic minoritized groups in the 15 RCTs from that country that reported race (n=3354). A total of 2568 of the 3354 individuals in these samples (76.6%, 95% CI 75.1-78.0) were identified as White. By way of comparison, 61.6% of adults in the United States were reported as being White in the US census [ 1 ]. These rates are significantly different (χ²₁=316.9, P <.001). Extrapolating the number of individuals that would be expected to be White based on the 12-month prevalence of depression reported in the latest epidemiological study (National Epidemiologic Survey on Alcohol and Related Conditions-III: 10.4%) [ 96 ] and the racial-ethnic differences reported in that survey, one would expect 67.3% of individuals in a 12-month depression sample to be non-Hispanic White. The rate of White individuals represented in our study was statistically significantly different from 67.3% (χ²₁=130.4, P <.001). When using lifetime prevalence rates, one would expect 72.5% of individuals with lifetime depression to be White. The rate of White individuals we found was significantly higher (χ²₁=27.6, P <.001), though the difference was relatively small (4.1%).
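The first goodness-of-fit comparison above can be reproduced with a hand-rolled chi-square statistic. This is our own sketch (the paper does not publish its analysis code): observed White versus non-White counts in the 15 US trials reporting race, tested against the 61.6% White share from the US census.

```python
# Chi-square goodness-of-fit: observed racial composition of the US iCBT
# samples vs the composition expected from the US census.
observed = {"White": 2568, "non-White": 3354 - 2568}  # counts from the text
n = sum(observed.values())                            # 3354 participants

census_white_share = 0.616                            # US census, 2020
expected = {
    "White": census_white_share * n,
    "non-White": (1 - census_white_share) * n,
}

# Sum of (observed - expected)^2 / expected over the two categories
chi2 = sum((observed[g] - expected[g]) ** 2 / expected[g] for g in observed)
# chi2 comes out near 317 with 1 degree of freedom, matching the reported
# value of 316.9 up to rounding of the census percentage.
```

The same computation with the expected shares of 67.3% and 72.5% reproduces the other two statistics quoted above.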
Reporting of Other Group Demographics Outside of the United States
Of the 46 studies conducted outside of the United States, only 25 (54%) reported other demographic factors, such as language proficiency (n=13), country of residence (n=11), country of origin (n=5), migration status (n=2), and nationality (n=2). Most (n=11) of the studies that reported language did so only indirectly, as an inclusion criterion (eg, “participants must speak Dutch fluently”). Similarly, most (n=9) of the studies that reported country of residence did so indirectly (eg, “participants must reside in Sweden”). Some studies reported several of these demographics; therefore, the number of reported demographic variables (33) exceeds the number of studies reporting them (25).
Principal Findings
Our first aim was to examine the overall reporting of racial and ethnic diversity in RCTs of iCBT for depression. The second aim was to explore the representation of specific minoritized groups within the studies reporting race-ethnicity. Our search revealed substantial gaps in the literature. Of the 62 papers included in this review, only 17 (27%) reported data on race, and 12 (19%) reported ethnicity. Focusing on the United States, even when race was reported, White individuals were overrepresented in samples relative to US census population estimates (61.6%) [ 97 ] and the expected proportion of non-Hispanic White individuals in depression samples (67.3%) [ 98 ].
Limitations
Before interpreting our findings, several caveats are worth noting. First, we recognize that a failure to report racial-ethnic identity information does not necessarily mean that racial-ethnic diversity is poorly represented in a trial's sample. However, we believe it is unlikely that researchers are recruiting very racially diverse samples and choosing not to report on that aspect of sample composition. In some countries, reporting race or ethnicity may be limited by regulations, such as ethical review board requirements. Nonetheless, only 25 (54%) of the 46 studies conducted outside of the United States reported other characteristics such as nationality, country of origin, or migration status. Additionally, the implications of underreporting are not always clear. For example, in a large individual-patient data meta-analysis, Karyotaki et al [ 99 ] reported that individuals from racial-ethnic minoritized groups experienced poorer outcomes in guided iCBT than “native-born” participants. These findings warrant replication, as they imply that racial-ethnic groups may respond differently to iCBTs and heighten concern about the underrepresentation of minoritized groups in iCBT research. Nonetheless, even if race-ethnicity were not a predictor of treatment outcomes, the differential enrollment of participants from different racial-ethnic groups is itself a racial-ethnic disparity. We also excluded gray literature such as posters and other unpublished works. While these exclusions are a limitation of this study (ie, fewer RCTs were included), it is not clear that including them would have changed our conclusion that the reporting of representation is poor in RCTs of self-help iCBT. Finally, we chose to focus on iCBT, given its popularity and potential to reduce the public health burden of depression [ 100 , 101 ]. However, self-help iCBTs are not the only low-intensity treatment available.
For example, bibliotherapy (ie, printed self-help media) is also a low-intensity treatment that may be preferred by many individuals, with some evidence suggesting that some racial-ethnic minoritized groups may prefer bibliotherapy over digital interventions [ 23 ].
Implications of Underrepresentation and Methodological Considerations
Our results suggest that the rate of reporting of ethnicity in iCBT studies in the United States has improved relative to the prior report by Polo and colleagues [ 11 ] on RCTs of face-to-face psychotherapy for depression. However, reporting of background characteristics in iCBT RCTs remains relatively poor in Europe and elsewhere. Indeed, only 2 RCTs of iCBT for depression conducted outside of the United States reported race or ethnicity, one of which was a cultural adaptation. This finding is surprising given that an estimated 23 million non-European citizens (5% of the population) live in Europe [ 102 ]. This underreporting makes it challenging to determine how well current interventions fit the experiences of minoritized groups. While logistic barriers such as internet access may contribute to the lower representation of individuals from racial-ethnic minoritized groups in iCBT studies, it is unclear whether these differences are large enough to account for the disparities in trial representation [ 103 ]. Ramos et al [ 29 ] noted that diversity, equity, and inclusion have not been guiding principles in iCBT research. Consistent with this, our findings suggest a need to understand how individuals from racial-ethnic minoritized groups engage in iCBT research and how to increase that engagement.
Another explanation for the lack of reporting and representation could be group differences in the acceptability of iCBT (ie, internet-based self-help not being a preferred format). However, there is evidence that, compared to non-Hispanic White adults, Asian, Hispanic, non-Hispanic Black, and other racial-ethnic groups report equal or greater willingness to use iCBTs and to learn more about them. Thus, overall willingness to use iCBTs is unlikely to explain differences in enrollment in iCBT trials.
Furthermore, the relevance of race and ethnicity as terms in countries outside the United States warrants additional examination. It is essential to investigate alternative methods of describing samples that are appropriate for different countries while still allowing cross-country comparisons. The low number of iCBT studies focusing on minoritized individuals' mental health makes it challenging to advocate for the use of iCBT to reduce health disparities in these populations, especially outside the United States. Hence, it is essential to conduct research that encompasses diverse populations and reports on their characteristics.
Future Directions
This lack of knowledge regarding race and ethnicity has implications for implementing iCBT programs globally. Providers, for example, worry about the lack of diversity represented in the literature since this could lead to engagement challenges [ 104 ]. Acknowledging that a primarily White sample does not represent the general population may be an essential step to improving engagement. Numerous efforts to culturally adapt interventions to increase cultural competence of clinicians have been made [ 105 ]. However, we still lack information on how these adaptations translate to a self-help format to reduce barriers to access for minoritized groups. Currently, hundreds of online mental health resources are available to the public, but it is sometimes unclear what they offer users [ 106 ]. Clinicians and clients have no easy way of knowing which resources are evidence-based and suitable for individuals from racial-ethnic minoritized groups. The label “CBT” (cognitive behavioral therapy) could be a way to identify evidence-based mental health services. However, it is hard to determine whether digital interventions apply CBT principles correctly and whether this delivery format is ideal for members of racial-ethnic minoritized groups.
The extent to which racial-ethnic minoritized groups, other than US non-Hispanic Black adults, use self-help iCBT has received little attention [ 11 , 29 ]. It has been more than a decade since the National Institutes of Health, CONSORT, and JARS guidelines [ 12 , 13 , 107 ] introduced criteria regarding race and ethnicity, and the outlook has not improved significantly. Although reporting of race has improved, it is not reported to the extent that it could be useful. A potential future direction is to develop new guidelines, similar to CONSORT and JARS [ 13 , 107 ], for journals oriented toward internet-based mental health. These guidelines could involve (1) reporting a breakdown of racial-ethnic groups; (2) describing the type and intensity of support provided for guided self-help; (3) ensuring that samples are representative of gender, racial-ethnic groups, and other underreported identities; and (4) detailing which elements of CBT are included in the protocol. This standardization could facilitate the implementation of similar protocols in hospitals, mental health foundations, and businesses interested in providing mental health services to a broader population. Reporting could also be improved by increasing outreach efforts and ensuring that racial-ethnic diversity is represented in RCTs of self-help iCBT.
Conclusions
There is a lack of reporting of racial and ethnic diversity in clinical trials of iCBT, and the representation of individuals from racial-ethnic minoritized groups is quite poor. This gap has significant implications for the generalizability of the findings currently in the literature, as these might not apply to individuals from racial-ethnic minoritized groups. Therefore, improving the representation of racial-ethnic diversity in trials of iCBT should be a key direction for the field going forward.
Acknowledgments
This research was partially funded by the Global Mental Health Fellowship (principal investigator: LL-L); the National Institute of Mental Health (grant T32 MH103213-06); and grants KL2TR002530 and UL1TR002529 (principal investigator: A Shekhar) from the National Institutes of Health, National Center for Advancing Translational Sciences, Clinical and Translational Sciences Award (principal investigator: LL-L).
Conflicts of Interest
LL-L has received consulting fees from Syra Health, Inc, which was not involved in the current work.
Search string from MetaPsy.
Search string for articles published from 2021 to 2022.
Risk of bias assessment for articles not included in MetaPsy (n=20).
- U.S. Census Bureau. 2020 census illuminates racial and ethnic composition of the country. Census.gov. 2020. URL: https://www.census.gov/library/stories/2021/08/improved-race-ethnicity-measures-reveal-united-states-population-much-more-multiracial.html [accessed 2022-06-06]
- Wingrove-Haugland E, McLeod J. Not "minority" but "minoritized". Teach Ethics. 2021;21(1):1-11. [ CrossRef ]
- Colby SL, Ortman JM. Projections of the size and composition of the U.S. population: 2014 to 2060. United States Census Bureau. 2015. URL: https://www.census.gov/library/publications/2015/demo/p25-1143.html [accessed 2023-12-05]
- United Nations. International Migration 2019. Department of Economic and Social Affairs. 2019. URL: https://www.un.org/development/desa/pd/sites/www.un.org.development.desa.pd/files/files/documents/2020/Jan/un_2019_internationalmigration_highlights.pdf [accessed 2023-12-05]
- Wang PS, Aguilar-Gaxiola S, Alonso J, Angermeyer MC, Borges G, Bromet EJ, et al. Use of mental health services for anxiety, mood, and substance disorders in 17 countries in the WHO world mental health surveys. Lancet. 2007;370(9590):841-850. [ https://europepmc.org/abstract/MED/17826169 ] [ CrossRef ] [ Medline ]
- Greenberg PE, Fournier AA, Sisitsky T, Pike CT, Kessler RC. The economic burden of adults with major depressive disorder in the United States (2005 and 2010). J Clin Psychiatry. 2015;76(2):155-162. [ https://www.psychiatrist.com/jcp/economic-burden-adults-major-depressive-disorder-united/ ] [ CrossRef ] [ Medline ]
- Pascoe EA, Richman LS. Perceived discrimination and health: a meta-analytic review. Psychol Bull. 2009;135(4):531-554. [ CrossRef ]
- Carter RT, Lau MY, Johnson V, Kirkinis K. Racial discrimination and health outcomes among racial/ethnic minorities: a meta-analytic review. J Multicult Couns Devel. 2017;45(4):232-259. [ CrossRef ]
- Allen J, Balfour R, Bell R, Marmot M. Social determinants of mental health. Int Rev Psychiatry. 2014;26(4):392-407. [ CrossRef ] [ Medline ]
- Mak WWS, Law RW, Alvidrez J, Pérez-Stable EJ. Gender and ethnic diversity in NIMH-funded clinical trials: review of a decade of published research. Adm Policy Ment Health. 2007;34(6):497-503. [ CrossRef ] [ Medline ]
- Polo AJ, Makol BA, Castro AS, Colón-Quintana N, Wagstaff AE, Guo S. Diversity in randomized clinical trials of depression: a 36-year review. Clin Psychol Rev. 2019;67:22-35. [ https://www.sciencedirect.com/science/article/abs/pii/S0272735818303684?via%3Dihub ] [ CrossRef ] [ Medline ]
- NIH policy and guidelines on the inclusion of women and minorities as subjects in clinical research. National Institutes of Health. 2001. URL: http://grants1.nih.gov/grants/funding/women_min/guidelines_amended_10_2001.htm [accessed 2022-06-06]
- Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. Int J Surg. 2011;9(8):672-677. [ https://www.sciencedirect.com/science/article/pii/S1743919111005620?via%3Dihub ] [ CrossRef ] [ Medline ]
- King DB, O'Rourke N, DeLongis A. Social media recruitment and online data collection: a beginner's guide and best practices for accessing low-prevalence and hard-to-reach populations. Can Psychol. 2014;55(4):240-249. [ CrossRef ]
- Budhwani H, Hearld KR, Chavez-Yenter D. Depression in racial and ethnic minorities: the impact of nativity and discrimination. J Racial Ethn Health Disparities. 2015;2(1):34-42. [ https://link.springer.com/article/10.1007/s40615-014-0045-z ] [ CrossRef ] [ Medline ]
- Woodward AT, Taylor RJ, Abelson JM, Matusko N. Major depressive disorder among older African Americans, Caribbean Blacks, and non-Hispanic Whites: secondary analysis of the National Survey of American Life. Depress Anxiety. 2013;30(6):589-597. [ http://hdl.handle.net/2027.42/98298 ] [ CrossRef ] [ Medline ]
- McBrier DB, Wilson G. Going down? Race and downward occupational mobility for white-collar workers in the 1990s. Work Occup. 2004;31(3):283-322. [ CrossRef ]
- Hardaway CR, McLoyd VC. Escaping poverty and securing middle class status: how race and socioeconomic status shape mobility prospects for African Americans during the transition to adulthood. J Youth Adolesc. 2009;38(2):242-256. [ https://europepmc.org/abstract/MED/19636721 ] [ CrossRef ] [ Medline ]
- 2021 National Healthcare Quality and Disparities Report. Agency for Healthcare Research and Quality. Rockville, MD.; 2021. URL: https://www.ahrq.gov/research/findings/nhqrdr/nhqdr21/index.html [accessed 2023-12-05]
- Holden KB, McGregor BS, Blanks SH, Mahaffey C. Psychosocial, socio-cultural, and environmental influences on mental health help-seeking among African-American men. J Mens Health. 2012;9(2):63-69. [ https://europepmc.org/abstract/MED/22905076 ] [ CrossRef ] [ Medline ]
- Ramos G, Chavira DA. Use of technology to provide mental health care for racial and ethnic minorities: evidence, promise, and challenges. Cogn Behav Pract. 2022;29(1):15-40. [ CrossRef ]
- Schueller SM, Hunter JF, Figueroa C, Aguilera A. Use of digital mental health for marginalized and underserved populations. Curr Treat Options Psych. 2019;6(3):243-255. [ CrossRef ]
- De Jesús-Romero R, Wasil A, Lorenzo-Luaces L. Willingness to use internet-based versus bibliotherapy interventions in a representative US sample: cross-sectional survey study. JMIR Form Res. 2022;6(8):e39508. [ https://formative.jmir.org/2022/8/e39508 ] [ CrossRef ] [ Medline ]
- Levecque K, Van Rossem R. Depression in Europe: does migrant integration have mental health payoffs? A cross-national comparison of 20 European countries. Ethn Health. 2015;20(1):49-65. [ CrossRef ] [ Medline ]
- Henkelmann JR, de Best S, Deckers C, Jensen K, Shahab M, Elzinga B, et al. Anxiety, depression and post-traumatic stress disorder in refugees resettling in high-income countries: systematic review and meta-analysis. BJPsych Open. 2020;6(4):e68. [ https://www.cambridge.org/core/journals/bjpsych-open/article/anxiety-depression-and-posttraumatic-stress-disorder-in-refugees-resettling-in-highincome-countries-systematic-review-and-metaanalysis/8AC75A92D4164C16A8755B333E8D4DB8 ] [ CrossRef ] [ Medline ]
- Mohr DC, Cuijpers P, Lehman K. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions. J Med Internet Res. 2011;13(1):e30. [ https://www.jmir.org/2011/1/e30/ ] [ CrossRef ] [ Medline ]
- Cuijpers P, Noma H, Karyotaki E, Cipriani A, Furukawa TA. Effectiveness and acceptability of cognitive behavior therapy delivery formats in adults with depression: a network meta-analysis. JAMA Psychiatry. 2019;76(7):700-707. [ https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2730724 ] [ CrossRef ] [ Medline ]
- Mohr DC, Duffecy J, Ho J, Kwasny M, Cai X, Burns MN, et al. A randomized controlled trial evaluating a manualized TeleCoaching protocol for improving adherence to a web-based intervention for the treatment of depression. PLoS One. 2013;8(8):e70086. [ https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0070086 ] [ CrossRef ] [ Medline ]
- Ramos G, Ponting C, Labao JP, Sobowale K. Considerations of diversity, equity, and inclusion in mental health apps: a scoping review of evaluation frameworks. Behav Res Ther. 2021;147:103990. [ https://www.sciencedirect.com/science/article/pii/S0005796721001893?via%3Dihub ] [ CrossRef ] [ Medline ]
- Andersson G, Bergström J, Holländare F, Carlbring P, Kaldo V, Ekselius L. Internet-based self-help for depression: randomised controlled trial. Br J Psychiatry. 2005;187:456-461. [ CrossRef ] [ Medline ]
- Ly KH, Trüschel A, Jarl L, Magnusson S, Windahl T, Johansson R, et al. Behavioural activation versus mindfulness-based guided self-help treatment administered through a smartphone application: a randomised controlled trial. BMJ Open. 2014;4(1):e003440. [ https://bmjopen.bmj.com/lookup/pmidlookup?view=long&pmid=24413342 ] [ CrossRef ] [ Medline ]
- Lindner P, Olsson E, Johnsson A, Dahlin M, Andersson G, Carlbring P. The impact of telephone versus e-mail therapist guidance on treatment outcomes, therapeutic alliance and treatment engagement in internet-delivered CBT for depression: a randomised pilot trial. Internet Interventions. 2014;1(4):182-187. [ CrossRef ]
- Lappalainen P, Langrial S, Oinas-Kukkonen H, Tolvanen A, Lappalainen R. Web-based acceptance and commitment therapy for depressive symptoms with minimal support: a randomized controlled trial. Behav Modif. 2015;39(6):805-834. [ CrossRef ] [ Medline ]
- Berger T, Hämmerli K, Gubser N, Andersson G, Caspar F. Internet-based treatment of depression: a randomized controlled trial comparing guided with unguided self-help. Cogn Behav Ther. 2011;40(4):251-266. [ CrossRef ] [ Medline ]
- Löbner M, Pabst A, Stein J, Dorow M, Matschinger H, Luppa M, et al. Computerized cognitive behavior therapy for patients with mild to moderately severe depression in primary care: A pragmatic cluster randomized controlled trial (@ktiv). J Affect Disord. 2018;238:317-326. [ CrossRef ] [ Medline ]
- Meyer B, Bierbrodt J, Schröder J, Berger T, Beevers C, Weiss M, et al. Effects of an Internet intervention (Deprexis) on severe depression symptoms: randomized controlled trial. Internet Interventions. 2015;2(1):48-59. [ CrossRef ]
- Moritz S, Schilling L, Hauschildt M, Schröder J, Treszl A. A randomized controlled trial of internet-based therapy in depression. Behav Res Ther. 2012;50(7-8):513-521. [ CrossRef ] [ Medline ]
- Nyström MBT, Stenling A, Sjöström E, Neely G, Lindner P, Hassmén P, et al. Behavioral activation versus physical activity via the internet: a randomized controlled trial. J Affect Disord. 2017;215:85-93. [ https://linkinghub.elsevier.com/retrieve/pii/S0165-0327(16)32224-8 ] [ CrossRef ] [ Medline ]
- Oehler C, Görges F, Rogalla M, Rummel-Kluge C, Hegerl U. Efficacy of a guided web-based self-management intervention for depression or dysthymia: randomized controlled trial with a 12-month follow-up using an active control condition. J Med Internet Res. 2020;22(7):e15361. [ https://www.jmir.org/2020/7/e15361/ ] [ CrossRef ] [ Medline ]
- Stiles-Shields C, Montague E, Kwasny MJ, Mohr DC. Behavioral and cognitive intervention strategies delivered via coached apps for depression: pilot trial. Psychol Serv. 2019;16(2):233-238. [ https://europepmc.org/abstract/MED/30407055 ] [ CrossRef ] [ Medline ]
- O'Moore KA, Newby JM, Andrews G, Hunter DJ, Bennell K, Smith J, et al. Internet cognitive-behavioral therapy for depression in older adults with knee osteoarthritis: a randomized controlled trial. Arthritis Care Res (Hoboken). 2018;70(1):61-70. [ CrossRef ] [ Medline ]
- Salamanca-Sanabria A, Richards D, Timulak L, Connell S, Mojica Perilla M, Parra-Villa Y, et al. A culturally adapted cognitive behavioral internet-delivered intervention for depressive symptoms: randomized controlled trial. JMIR Ment Health. 2020;7(1):e13392. [ https://mental.jmir.org/2020/1/e13392/ ] [ CrossRef ] [ Medline ]
- Titov N, Dear BF, Staples LG, Terides MD, Karin E, Sheehan J, et al. Disorder-specific versus transdiagnostic and clinician-guided versus self-guided treatment for major depressive disorder and comorbid anxiety disorders: a randomized controlled trial. J Anxiety Disord. 2015;35:88-102. [ https://linkinghub.elsevier.com/retrieve/pii/S0887-6185(15)30008-6 ] [ CrossRef ] [ Medline ]
- Ruwaard J, Schrieken B, Schrijver M, Broeksteeg J, Dekker J, Vermeulen H, et al. Standardized web-based cognitive behavioural therapy of mild to moderate depression: a randomized controlled trial with a long-term follow-up. Cogn Behav Ther. 2009;38(4):206-221. [ CrossRef ] [ Medline ]
- Smith J, Newby JM, Burston N, Murphy MJ, Michael S, Mackenzie A, et al. Help from home for depression: a randomised controlled trial comparing internet-delivered cognitive behaviour therapy with bibliotherapy for depression. Internet Interv. 2017;9:25-37. [ https://linkinghub.elsevier.com/retrieve/pii/S2214-7829(17)30022-2 ] [ CrossRef ] [ Medline ]
- Schröder J, Brückner K, Fischer A, Lindenau M, Köther U, Vettorazzi E, et al. Efficacy of a psychological online intervention for depression in people with epilepsy: a randomized controlled trial. Epilepsia. 2014;55(12):2069-2076. [ https://onlinelibrary.wiley.com/doi/10.1111/epi.12833 ] [ CrossRef ] [ Medline ]
- Richards D, Timulak L, O'Brien E, Hayes C, Vigano N, Sharry J, et al. A randomized controlled trial of an internet-delivered treatment: its potential as a low-intensity community intervention for adults with symptoms of depression. Behav Res Ther. 2015;75:20-31. [ CrossRef ] [ Medline ]
- Vernmark K, Lenndin J, Bjärehed J, Carlsson M, Karlsson J, Oberg J, et al. Internet administered guided self-help versus individualized e-mail therapy: a randomized trial of two versions of CBT for major depression. Behav Res Ther. 2010;48(5):368-376. [ CrossRef ] [ Medline ]
- Schure MB, Lindow JC, Greist JH, Nakonezny PA, Bailey SJ, Bryan WL, et al. Use of a fully automated internet-based cognitive behavior therapy intervention in a community population of adults with depression symptoms: randomized controlled trial. J Med Internet Res. 2019;21(11):e14754. [ https://www.jmir.org/2019/11/e14754/ ] [ CrossRef ] [ Medline ]
- Titov N, Andrews G, Davies M, McIntyre K, Robinson E, Solley K. Internet treatment for depression: a randomized controlled trial comparing clinician vs. technician assistance. PLoS One. 2010;5(6):e10939. [ https://dx.plos.org/10.1371/journal.pone.0010939 ] [ CrossRef ] [ Medline ]
- Ünlü Ince B, Cuijpers P, van 't Hof E, van Ballegooijen W, Christensen H, Riper H. Internet-based, culturally sensitive, problem-solving therapy for Turkish migrants with depression: randomized controlled trial. J Med Internet Res. 2013;15(10):e227. [ https://www.jmir.org/2013/10/e227/ ] [ CrossRef ] [ Medline ]
- Williams AD, Blackwell SE, Mackenzie A, Holmes EA, Andrews G. Combining imagination and reason in the treatment of depression: a randomized controlled trial of internet-based cognitive-bias modification and internet-CBT for depression. J Consult Clin Psychol. 2013;81(5):793-799. [ https://europepmc.org/abstract/MED/23750459 ] [ CrossRef ] [ Medline ]
- Warmerdam L, van Straten A, Twisk J, Riper H, Cuijpers P. Internet-based treatment for adults with depressive symptoms: randomized controlled trial. J Med Internet Res. 2008;10(4):e44. [ https://www.jmir.org/2008/4/e44/ ] [ CrossRef ] [ Medline ]
- Zhao C, Wampold BE, Ren Z, Zhang L, Jiang G. The efficacy and optimal matching of an internet-based acceptance and commitment therapy intervention for depressive symptoms among university students: a randomized controlled trial in China. J Clin Psychol. 2022;78(7):1354-1375. [ CrossRef ] [ Medline ]
- Pots WTM, Fledderus M, Meulenbeek PAM, ten Klooster PM, Schreurs KMG, Bohlmeijer ET. Acceptance and commitment therapy as a web-based intervention for depressive symptoms: randomised controlled trial. Br J Psychiatry. 2016;208(1):69-77. [ CrossRef ] [ Medline ]
- Pihlaja S, Lahti J, Lipsanen JO, Ritola V, Gummerus EM, Stenberg JH, et al. Scheduled telephone support for internet cognitive behavioral therapy for depression in patients at risk for dropout: pragmatic randomized controlled trial. J Med Internet Res. 2020;22(7):e15732. [ https://www.jmir.org/2020/7/e15732/ ] [ CrossRef ] [ Medline ]
- Zagorscak P, Heinrich M, Sommer D, Wagner B, Knaevelsrud C. Benefits of individualized feedback in internet-based interventions for depression: a randomized controlled trial. Psychother Psychosom. 2018;87(1):32-45. [ CrossRef ] [ Medline ]
- Reins JA, Boß L, Lehr D, Berking M, Ebert DD. The more I got, the less I need? efficacy of internet-based guided self-help compared to online psychoeducation for major depressive disorder. J Affect Disord. 2019;246:695-705. [ CrossRef ] [ Medline ]
- Bücker L, Schnakenberg P, Karyotaki E, Moritz S, Westermann S. Diminishing effects after recurrent use of self-guided internet-based interventions in depression: randomized controlled trial. J Med Internet Res. 2019;21(10):e14240. [ https://www.jmir.org/2019/10/e14240/ ] [ CrossRef ] [ Medline ]
- Roepke AM, Jaffee SR, Riffle OM, McGonigal J, Broome R, Maxwell B. Randomized controlled trial of SuperBetter, a smartphone-based/internet-based self-help tool to reduce depressive symptoms. Games Health J. 2015;4(3):235-246. [ CrossRef ] [ Medline ]
- Carlbring P, Hägglund M, Luthström A, Dahlin M, Kadowaki Å, Vernmark K, et al. Internet-based behavioral activation and acceptance-based treatment for depression: a randomized controlled trial. J Affect Disord. 2013;148(2-3):331-337. [ CrossRef ] [ Medline ]
- Bur OT, Krieger T, Moritz S, Klein JP, Berger T. Optimizing the context of support of web-based self-help in individuals with mild to moderate depressive symptoms: a randomized full factorial trial. Behav Res Ther. 2022;152:104070. [ https://boris.unibe.ch/id/eprint/167861 ] [ CrossRef ] [ Medline ]
- Choi I, Zou J, Titov N, Dear BF, Li S, Johnston L, et al. Culturally attuned internet treatment for depression amongst Chinese Australians: a randomised controlled trial. J Affect Disord. 2012;136(3):459-468. [ CrossRef ] [ Medline ]
- Perini S, Titov N, Andrews G. Clinician-assisted internet-based treatment is effective for depression: randomized controlled trial. Aust N Z J Psychiatry. 2009;43(6):571-578. [ CrossRef ] [ Medline ]
- Jelinek L, Arlt S, Moritz S, Schröder J, Westermann S, Cludius B. Brief web-based intervention for depression: randomized controlled trial on behavioral activation. J Med Internet Res. 2020;22(3):e15312. [ https://www.jmir.org/2020/3/e15312/ ] [ CrossRef ] [ Medline ]
- Johansson O, Bjärehed J, Andersson G, Carlbring P, Lundh LG. Effectiveness of guided internet-delivered cognitive behavior therapy for depression in routine psychiatry: a randomized controlled trial. Internet Interv. 2019;17:100247. [ https://linkinghub.elsevier.com/retrieve/pii/S2214-7829(19)30010-7 ] [ CrossRef ] [ Medline ]
- Flygare AL, Engström I, Hasselgren M, Jansson-Fröjmark M, Frejgrim R, Andersson G, et al. Internet-based CBT for patients with depressive disorders in primary and psychiatric care: is it effective and does comorbidity affect outcome? Internet Interv. 2020;19:100303. [ https://linkinghub.elsevier.com/retrieve/pii/S2214-7829(19)30093-4 ] [ CrossRef ] [ Medline ]
- Kenter RMF, Cuijpers P, Beekman A, van Straten A. Effectiveness of a web-based guided self-help intervention for outpatients with a depressive disorder: short-term results from a randomized controlled trial. J Med Internet Res. 2016;18(3):e80. [ https://www.jmir.org/2016/3/e80/ ] [ CrossRef ] [ Medline ]
- Krämer R, Köhler S. Evaluation of the online-based self-help programme "Selfapy" in patients with unipolar depression: study protocol for a randomized, blinded parallel group dismantling study. Trials. 2021;22(1):264. [ https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-021-05218-4 ] [ CrossRef ] [ Medline ]
- Kivi M, Eriksson MCM, Hange D, Petersson EL, Vernmark K, Johansson B, et al. Internet-based therapy for mild to moderate depression in Swedish primary care: short term results from the PRIM-NET randomized controlled trial. Cogn Behav Ther. 2014;43(4):289-298. [ https://europepmc.org/abstract/MED/24911260 ] [ CrossRef ] [ Medline ]
- Fischer A, Schröder J, Vettorazzi E, Wolf OT, Pöttgen J, Lau S, et al. An online programme to reduce depression in patients with multiple sclerosis: a randomised controlled trial. Lancet Psychiatry. 2015;2(3):217-223. [ CrossRef ] [ Medline ]
- Tomasino KN, Lattie EG, Ho J, Palac HL, Kaiser SM, Mohr DC. Harnessing peer support in an online intervention for older adults with depression. Am J Geriatr Psychiatry. 2017;25(10):1109-1119. [ https://europepmc.org/abstract/MED/28571785 ] [ CrossRef ] [ Medline ]
Edited by G Eysenbach; submitted 12.07.23; peer-reviewed by N Mathews; comments to author 25.09.23; revised version received 05.11.23; accepted 16.11.23; published 01.02.24
©Robinson De Jesús-Romero, Amani R Holder-Dixon, John F Buss, Lorenzo Lorenzo-Luaces. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 01.02.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
10 new breakthroughs in the fight against cancer
Medical advances are continuing to help the world fight cancer. Image: Unsplash/National Cancer Institute
Victoria Masterson
Madeleine North
This article was originally published in May 2022 and most recently updated in January 2024.
- Cancer is one of the world’s biggest killers, with around 10 million deaths per year due to the disease.
- Scientists are using artificial intelligence, DNA sequencing, precision oncology and other technologies to improve treatment and diagnosis.
- The Centre for the Fourth Industrial Revolution India, a collaboration with the World Economic Forum, hopes to accelerate 18 cancer interventions.
Cancer kills around 10 million people a year and is a leading cause of death globally, according to the World Health Organization.
Breast, lung and colon cancer are among the most common. Death rates from cancer were falling before the pandemic, but COVID-19 caused a big backlog in diagnosis and treatment.
There is some good news, however. Medical advances are accelerating the battle against cancer. Here are 10 recent developments.
Test to identify 18 early-stage cancers
Researchers in the US have developed a test they say can identify 18 early-stage cancers. Instead of the usual invasive and costly methods, Novelna's test works by analyzing a patient's blood protein. In a screening of 440 people already diagnosed with cancer, the test correctly identified 93% of stage 1 cancers in men and 84% in women. The researchers believe the findings "pave the way for a cost-effective, highly accurate, multi-cancer screening test that can be implemented on a population-wide scale". It's early days, however: with such a small screening sample and a lack of information on co-existing conditions, the test is currently more of "a starting point for developing a new generation of screening tests for the early detection of cancer".
The seven-minute cancer treatment jab
England's National Health Service (NHS) is to be the first in the world to make use of a cancer treatment injection, which takes just seven minutes to administer, rather than the up to an hour currently needed to deliver the same drug by intravenous infusion. This will not only speed up the treatment process for patients but also free up time for medical professionals. The drug, atezolizumab (Tecentriq), treats cancers including lung and breast, and most of the 3,600 NHS patients in England currently receiving it intravenously are expected to switch to the jab.
Precision oncology
Precision oncology is the "best new weapon to defeat cancer", says Sizhen Wang, chief executive of Genetron Health, in a blog for the World Economic Forum. The approach involves studying the genetic makeup and molecular characteristics of cancer tumours in individual patients, identifying the changes in cells that might be causing a cancer to grow and spread so that personalized treatments can be developed. The 100,000 Genomes Project, a National Health Service initiative, studied more than 13,000 tumour samples from UK cancer patients, successfully integrating genomic data to more accurately pinpoint effective treatment. Because precision oncology treatments are targeted, as opposed to general treatments like chemotherapy, they can mean less harm to healthy cells and fewer side effects.
Artificial intelligence fights cancer
In India, World Economic Forum partners are using emerging technologies like artificial intelligence (AI) and machine learning to transform cancer care. For example, AI-based risk profiling can help screen for common cancers like breast cancer, leading to early diagnosis. AI technology can also be used to analyze X-rays to identify cancers in places where imaging experts might not be available. These are two of 18 cancer interventions that the Centre for the Fourth Industrial Revolution India, a collaboration with the Forum, hopes to accelerate.
Greater prediction capabilities
Lung cancer kills more people in the US yearly than the next three deadliest cancers combined. It's notoriously hard to detect the early stages of the disease with X-rays and scans alone. However, MIT scientists have developed an AI learning model to predict a person's likelihood of developing lung cancer up to six years in advance via a low-dose CT scan. Trained using complex imaging data, 'Sybil' can forecast both short- and long-term lung cancer risk, according to a recent study. "We found that while we as humans couldn't quite see where the cancer was, the model could still have some predictive power as to which lung would eventually develop cancer," said co-author Jeremy Wohlwend.
Clues in the DNA of cancer
At Cambridge University Hospitals in England, the DNA of cancer tumours from 12,000 patients is revealing new clues about the causes of cancer, scientists say. By analyzing genomic data, oncologists are identifying the different mutations that have contributed to each person's cancer, such as exposure to smoking or UV light, or internal malfunctions in cells. These are like "fingerprints in a crime scene", the scientists say, and more of them are being found. "We uncovered 58 new mutational signatures and broadened our knowledge of cancer," says study author Dr Andrea Degasperi, from Cambridge's Department of Oncology.
Liquid and synthetic biopsies
Biopsies are the main way doctors diagnose cancer – but the process is invasive and involves removing a section of tissue from the body, sometimes surgically, so it can be examined in a laboratory. Liquid biopsies are an easier and less invasive solution where blood samples can be tested for signs of cancer. Synthetic biopsies are another innovation that can force cancer cells to reveal themselves during the earliest stages of the disease.
The application of "precision medicine" to save and improve lives relies on good-quality, easily accessible data on everything from our DNA to lifestyle and environmental factors. The opposite of a one-size-fits-all healthcare system, it has vast, untapped potential to transform the treatment and prediction of rare diseases, and disease in general.
But there is no global governance framework for such data and no common data portal. This is a problem that contributes to the premature deaths of hundreds of millions of rare-disease patients worldwide.
The World Economic Forum’s Breaking Barriers to Health Data Governance initiative is focused on creating, testing and growing a framework to support effective and responsible access – across borders – to sensitive health data for the treatment and diagnosis of rare diseases.
The data will be shared via a “federated data system”: a decentralized approach that allows different institutions to access each other’s data without that data ever leaving the organization it originated from. This is done via an application programming interface and strikes a balance between simply pooling data (posing security concerns) and limiting access completely.
The project is a collaboration between entities in the UK (Genomics England), Australia (Australian Genomics Health Alliance), Canada (Genomics4RD), and the US (Intermountain Healthcare).
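The federated pattern described above can be sketched in a few lines. Everything here is invented for illustration (the function names, toy registries, and query do not come from the Forum initiative); the point is only that each institution answers an aggregate question locally, so raw records never leave the organization that holds them.

```python
# Hypothetical sketch of a federated query: raw records never leave the
# institution that holds them; only aggregate answers cross the boundary.

def local_count(dataset, predicate):
    """Runs inside one institution and returns only an aggregate count."""
    return sum(1 for record in dataset if predicate(record))

def federated_count(institutions, predicate):
    """A coordinator sums the aggregates each institution's API returns."""
    return sum(local_count(data, predicate) for data in institutions.values())

# Toy stand-ins for each partner's private registry (not real data)
institutions = {
    "genomics_england": [{"variant": "BRCA1"}, {"variant": "TTN"}],
    "genomics4rd": [{"variant": "BRCA1"}],
}

matches = federated_count(institutions, lambda r: r["variant"] == "BRCA1")
print(matches)  # aggregate result; no patient-level rows were shared
```

In a real deployment the `local_count` call would be an authenticated API request to each institution, with governance rules deciding which aggregate queries are permitted.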
CAR-T-cell therapy
A treatment that makes immune cells hunt down and kill cancer cells was recently declared a success for leukaemia patients. The treatment, called CAR-T-cell therapy, involves removing and genetically altering immune cells, called T cells, from cancer patients. The altered cells then produce proteins called chimeric antigen receptors (CARs). These recognize and can destroy cancer cells. In the journal Nature, scientists at the University of Pennsylvania announced that two of the first people treated with CAR-T-cell therapy were still in remission 12 years on.
Fighting pancreatic cancer
Pancreatic cancer is one of the deadliest cancers. It is rarely diagnosed before it starts to spread and has a survival rate of less than 5% over five years. At the University of California San Diego School of Medicine, scientists developed a test that identified 95% of early pancreatic cancers in a study. The research, published in Nature Communications Medicine , explains how biomarkers in extracellular vesicles – particles that regulate communication between cells – were used to detect pancreatic, ovarian and bladder cancer at stages I and II.
A tablet to cut breast cancer risk
A drug that could halve the chance of women developing breast cancer is being trialled by England's National Health Service (NHS). It will be made available to almost 300,000 women considered at highest risk of developing breast cancer, which is the most common type of cancer in the UK. The drug, anastrozole, cuts the level of oestrogen women produce by blocking the enzyme aromatase. It has already been used for many years as a breast cancer treatment but has now been repurposed as a preventive medicine. "This is the first drug to be repurposed through a world-leading new programme to help us realize the full potential of existing medicines in new uses to save and improve more lives on the NHS," says NHS Chief Executive Amanda Pritchard.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.
New Approach Methods (NAMs) - Animal Use Metrics for Research and Development
Related metrics
Animal use metrics for the Office of Pesticide Programs
EPA's New Approach Methods (NAMs) Work Plan was created to prioritize Agency efforts and resources toward activities that will reduce the use of vertebrate animal testing while continuing to protect human health and the environment.
To fulfill a deliverable under the Work Plan, EPA’s Office of Research and Development (ORD) and Office of Chemical Safety and Pollution Prevention (OCSPP) established initial baselines and metrics for transparency in animal use.
This page reports metrics on animal use in research and development activities and under the Agency’s regulatory authority.
Animal Use Metrics for the Office of Research and Development
The number of mammals used for research purposes in recent fiscal years is provided below. Over time, EPA will extend reporting to include all vertebrate animal use. The selected baseline is the average number of mammals used for research purposes between fiscal years 2016 and 2018, before the Work Plan took effect.
a The fiscal year corresponds to October 1 – September 30.
b Number of mammals rounded to the nearest hundred. The numbers currently include rats, mice, and rabbits.
c Baseline numbers do not include animals used in contract research activities.
d Number of mammals used was likely impacted by reorganization in the Office of Research and Development and laboratory remodeling activities.
e Number of mammals used was likely impacted by COVID-19 pandemic.
Additional Information
- EPA New Approach Methods (NAMs) Work Plan: Tracking Use of Vertebrate Animals in Chemical Testing
- Alternative Test Methods and Strategies to Reduce Vertebrate Animal Testing
- Efforts to Reduce Vertebrate Animal Testing at EPA
- Past Conferences on the State of the Science on Development and Use of New Approach Methods (NAMs) for Chemical Safety Testing
Announcing updated 2024 Patient-Centered Outcomes Research Institute (PCORI) fees
The Patient-Centered Outcomes Research Institute (PCORI) fee helps fund research that evaluates and compares health outcomes, clinical effectiveness, and risks and benefits of medical treatments and services. The fee, effective 2012–2029, is treated like an excise tax by the Internal Revenue Service (IRS) with the final payment due in July 2030. The PCORI fee is assessed on all covered lives — including employees, retirees, spouses and dependents. The PCORI fee is due July 31, 2024.
- For plan and policy years that end on or after Oct. 1, 2023, and before Oct. 1, 2024, the PCORI fee is $3.22 per covered life. The increase is 7.33%.
- For plan and policy years that end on or after Oct. 1, 2022, and before Oct. 1, 2023, the PCORI fee is $3.00 per covered life.
The IRS maintains a website dedicated to PCORI and has a table that identifies the types of coverage subject to the PCORI fee .
Fully insured groups
UnitedHealthcare is responsible for filing Form 720 and paying the fee for fully insured coverage. The company will submit the required payment by July 31. UnitedHealthcare fully insured customers do not need to take any action except for certain account plans:
- Fully insured customers with self-funded (ASO) health reimbursement accounts and flexible spending accounts (unless excepted benefit) are required to pay the fee for each employee covered under the account.
ASO groups including Level Funded groups
Employers and plan sponsors are responsible for submitting IRS Form 720 and paying the PCORI fee by July 31. Instructions for completing the form will be posted on the IRS website.
ASO groups may use one of three available counting methods:
- Actual count method
- Snapshot method
- Form 5500 method
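As a rough illustration of how the fee comes together under the snapshot method, the sketch below averages covered-life counts taken on one date per quarter and applies the $3.22 rate. The membership counts are invented; plan sponsors should confirm the counting rules in the IRS Form 720 instructions rather than rely on this sketch.

```python
# Hypothetical snapshot-method calculation (invented membership counts).
# One covered-life count is taken on a snapshot date in each quarter of
# the plan year, and the fee applies to the average of those counts.

quarterly_counts = [410, 425, 430, 415]   # covered lives on each snapshot date
avg_covered_lives = sum(quarterly_counts) / len(quarterly_counts)

# Rate for plan years ending on or after Oct. 1, 2023 and before Oct. 1, 2024
fee_per_covered_life = 3.22
total_pcori_fee = avg_covered_lives * fee_per_covered_life

print(avg_covered_lives)            # 420.0
print(round(total_pcori_fee, 2))    # 1352.4
```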
Note: For Level Funded/All Savers employer groups, the membership information will be posted to the employer website. Customers are required to complete and file IRS Form 720. For general questions, contact Broker Services at 866-405-7174.
Contact your broker or UnitedHealthcare representative if you have questions.
Related links IRS Coverage Subject to the Fee IRS PCORI website
Research team develops game-changing method to track a growing human threat: ‘[It] has good consistency’
A research team in China may have just developed a better way to track and predict flooding.
The team of researchers at Hunan University, just a few hours south of Dongting Lake, which experienced historic flooding in 2020, believes it has found a new approach to flood monitoring using synthetic aperture radar (SAR). SAR builds a single detailed radar image of an area by combining signals collected from multiple positions along a satellite's path.
Historically, satellite data has faced challenges from something as simple as cloud cover, which radar can penetrate. The team from Hunan University analyzed SAR data spanning a full year to help identify specific flood patterns, including when floodwaters normally rise and when they recede.
The team combined this data, gathered across space and time, with "fuzzy logic" to arrive at accurate conclusions. Fuzzy logic is an approach to processing variables that admits degrees of truth, so information that is only partially true or relative in nature can still contribute to a conclusion.
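To make the idea concrete, here is a minimal sketch of fuzzy flood classification. It is not the paper's algorithm: the thresholds, the single backscatter input, and the minimum-combination rule are all assumptions chosen for illustration. The physical intuition is that open water reflects little radar energy back to the sensor, so low SAR backscatter suggests flooding.

```python
# Illustrative fuzzy-logic sketch (thresholds and combination rule are
# hypothetical, not taken from the Hunan University study).

def flood_membership(backscatter_db, low=-22.0, high=-14.0):
    """Map a SAR backscatter value (in dB) to a flood membership in [0, 1].

    Values at or below `low` count as fully flooded (1.0), values at or
    above `high` as not flooded (0.0), with a linear ramp in between,
    giving a graded answer instead of a hard yes/no threshold.
    """
    if backscatter_db <= low:
        return 1.0
    if backscatter_db >= high:
        return 0.0
    return (high - backscatter_db) / (high - low)

def combine(memberships):
    """Fuzzy AND: a pixel is only as flooded as its weakest evidence."""
    return min(memberships)

# One pixel: strong water signal in backscatter, weaker support from a
# second (hypothetical) evidence layer such as a time-series pattern score.
pixel = combine([flood_membership(-20.0), 0.6])
print(pixel)
```

The graded memberships are what let partially reliable evidence, such as a pixel seen through noise or an ambiguous seasonal pattern, still inform the final flood map.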
Xinxin Liu, co-author of the paper and associate professor of electrical and information engineering at Hunan University, said, "The proposed method has good consistency and stability in mapping the development of flood events. It can also identify flood regions that are prone to misidentification or omission thanks to time-series modeling."
Liu believes this method can be used to quickly and accurately map a flood as it’s happening.
Liu and the team tested this method by applying it to data from the Dongting floods in 2020. The team found that their model was highly accurate and outperformed other models.
As the planet continues to warm, flooding has become more frequent. A warmer planet means more water vapor in the air, which means more rainfall that can potentially lead to flooding.
This method, developed by Liu and her team, can not only be used to map floods as they occur but also to help understand changing flood patterns over time.
"Because our method can reuse the historical data, we are also thinking about whether we can create some benchmark pattern of flood events based on historical data and apply them to the rapid warning of flood risk," Liu said. "This should be more interesting and meaningful."
This article first appeared on The Cool Down.
1. Doing Research in Counselling and Psychotherapy: Basic Principles. Introduction; Principle 1: The primary aim of research is to create knowledge products; Principle 2: The meaning, significance and value of any research study depend on where it fits within the existing literature.
This text provides a rich, culturally sensitive presentation of current research techniques in counseling. Author Robert J. Wright introduces the theory and research involved in research design, measurement, and assessment with an appealingly clear writing style.
Qualitative research in counseling: Applying robust methods and illuminating human context. Based on a program presented at the ACA Annual Conference & Exhibition, Honolulu, HI. Retrieved June 27, 2008, from http://counselingoutfitters.com/vistas/vistas08/Levers.htm
Among the quantitative methods receiving attention are analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA), factor analysis, structural equation modeling, and discriminant analysis.
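For readers new to these techniques, a one-way ANOVA is small enough to compute from scratch. The sketch below uses invented outcome scores for three hypothetical counseling conditions; in practice one would use a statistics package, but the arithmetic shows what the F statistic compares.

```python
# Toy one-way ANOVA on invented outcome scores for three counseling
# conditions. F compares the variability between group means with the
# variability of scores within their own groups.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-groups: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-groups: how far scores sit from their own group mean
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

scores = [[12, 14, 11, 13], [16, 18, 17, 15], [10, 9, 11, 12]]
f, df1, df2 = one_way_anova(scores)
print(round(f, 2), df1, df2)   # a large F suggests the group means differ
```

The same decomposition of variance underlies MANOVA and the other multivariate techniques mentioned above, which extend it to multiple outcome measures at once.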
Research methodology: a basic awareness study. By Jane Bronwyn Holder, BA (Hons), MBACP, MACC. Published 3 March 2014. For any profession to survive and develop, ongoing research is necessary, challenging old concepts and introducing and developing new ideas.
COUNSELING PSYCHOLOGY RESEARCH METHODS: QUALITATIVE APPROACHES Susan L. Morrow, Carrie L. Castaneda-Sound, and Elizabeth M. Abrams Qualitative research methods have gained increasing acceptance and popularity in counseling psychology since early calls (e.g., Hoshmand, 1989; Howard, 1983; Neimeyer & Resnikoff, 1982; Polkinghorne,
If qualitative research in counselling and psychotherapy is to make a significant contribution to the quality of services that are offered to users, it is essential for both researchers and consumers of research to possess a clear idea of the nature and achievements of qualitative research within the field of counselling/psychotherapy.
The questions raised by these three studies are anchored in an understanding of research paradigms; thus, in this chapter, we first identify the paradigmatic issues that form the foundation of qualitative inquiry. Building on this framework, we describe the current status of the genre by reviewing content analyses of qualitative research in counseling and counseling psychology.
This introductory text for counselors-in-training and emerging researchers focuses on research methodology, design, measurement, and evaluation. Richard Balkin and David Kleist explain the primary research methods used in counseling while emphasizing the importance of ethics and multicultural issues, demonstrating a professional counselor identity within the framework of research, and ...
Counselling and Psychotherapy Research (CPR) is an innovative international peer-reviewed journal dedicated to linking research with practice. Pluralist in orientation, the journal recognises the value of qualitative, quantitative and mixed methods strategies of inquiry and aims to promote high-quality, ethical research that informs and develops counselling and psychotherapy practice.
Research Methods: research materials and methods tips for counselors, DSM-5 access, and psychological tests. SAGE Research Methods is an online research methods tool that helps students, faculty, and researchers with their research projects.
Beehler, Funderburk, Possemato, and Vair (2013) used qualitative methods to develop a self-report measure of behavioral health provider adherence to co-located, collaborative care. Finally, qualitative methods have been used in mental health services research for an evaluation of process; such methods are frequently used in evaluation research.
Fifteen years have passed since the publication of a landmark issue of the Journal of Counseling Psychology on qualitative and mixed methods research (Haverkamp et al., 2005), which signaled a methodological shift in counseling psychology and related fields. At the time, qualitative research was certainly less popular in the field and arguably less respected than it is now.
This paper is thus written to support novice counselor researchers, and to inspire an emerging research culture through sharing formative experiences and lessons learned during a qualitative research project exploring minority issues in counseling. Key Words: Counseling, Health, Qualitative, Methods, and Narrative.
In addition, the author provides guidelines for accessing research information and resources. With an emphasis on the acquisition of research skills and their practical application to counselling issues, Practitioner Research in Counselling shows how research can be used in a meaningful way by all practitioners.
There are four primary reasons for this impetus. First, by prioritizing counseling research, we move forward as a discipline to our next developmental step: from the conceptual to the empirical. Second, there is a need for more empirical articles that reflect our pedagogical perspective.
Research is a vital and often daunting component of many counselling and psychotherapy courses, in which trainees must often complete their own research projects. Whether embarking on research for the first time or already somewhat familiar with research and research methods, counselling and psychotherapy students will find this a unique guide.
Most existing computerized content-analytic methods used in the field of psychotherapy research follow a top-down approach based on pre-defined dictionaries (see, e.g., Content analysis of acculturation research in counseling and counseling psychology: A 22-year review, Journal of Counseling Psychology, 58(1), 83-96, doi:10.1037/a0021128).
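The top-down, dictionary-based approach described above can be sketched in a few lines of Python. The category names and word lists below are illustrative placeholders, not any validated coding scheme; real systems rely on large, empirically validated dictionaries.

```python
from collections import Counter
import re

# Hypothetical category dictionaries for illustration only; production
# content-analysis tools use large, validated word lists per category.
DICTIONARIES = {
    "affect": {"anxious", "hopeful", "distress", "relief"},
    "process": {"session", "alliance", "disclosure", "referral"},
}

def dictionary_counts(text):
    """Count how many tokens in `text` fall into each predefined category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in DICTIONARIES.items():
            if token in words:
                counts[category] += 1
    return dict(counts)
```

For example, `dictionary_counts("The client felt anxious but hopeful after the session.")` yields `{"affect": 2, "process": 1}`. The key design trade-off of such top-down methods is that they only detect what the dictionary anticipates, which is why bottom-up, data-driven alternatives are also used.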
1 Unless explicitly noted, the terms counseling and psychotherapy shall be used interchangeably. While there are historical, political, and sometimes substantive bases for differentiating these terms in some contexts, research issues, especially methodological ones, are so intertwined between the activities conventionally labeled counseling and psychotherapy that differential use of the two terms serves little purpose here.
Introduction. English health guidelines are created and regularly updated with the aim of improving patient care by ensuring that the most recent and 'best available evidence' is used to guide treatment (National Institute for Health and Care Excellence Guidance, 2017a). As stated on its website, 'National Institute for Health and Care Excellence (NICE) guidelines are evidence-based.'
Research evidence is important to help ensure that clients receive the best possible therapy - and to support counselling agencies in attracting funding (through being able to convince commissioners that their services are effective and therefore worth funding).
Two of the more common research types are theoretical research and experimental research. "Theoretical research attempts to measure a theory or phenomenon to determine its relevancy based on research findings," Pedigo said, "whereas experimental research is the study of two or more variables with a control group and an experimental group."
Background: There is a growing interest in developing scalable interventions, including internet-based cognitive behavioral therapy (iCBT), to meet the increasing demand for mental health services. Given the growth in diversity worldwide, it is essential that clinical trials of iCBT for depression include diverse samples or, at least, report information on the race, ethnicity, or other demographic characteristics of participants.