Module 3: Clinical Assessment, Diagnosis, and Treatment

3rd edition as of July 2023

Module Overview

Module 3 covers the issues of clinical assessment, diagnosis, and treatment. We will define assessment and then describe key issues such as reliability, validity, standardization, and specific methods that are used. In terms of clinical diagnosis, we will discuss the two main classification systems used around the world – the DSM-5-TR and ICD-11. Finally, we discuss the reasons why people may seek treatment and what to expect when doing so.

Module Outline

3.1. Clinical Assessment of Abnormal Behavior

3.2. Diagnosing and Classifying Abnormal Behavior

3.3. Treatment of Mental Disorders – An Overview

Module Learning Outcomes

  • Describe clinical assessment and methods used in it.
  • Clarify how mental health professionals diagnose mental disorders in a standardized way.
  • Discuss reasons to seek treatment and the importance of psychotherapy.

Section Learning Objectives

  • Define clinical assessment.
  • Clarify why clinical assessment is an ongoing process.
  • Define and exemplify reliability.
  • Define and exemplify validity.
  • Define standardization.
  • List and describe seven methods of assessment.

3.1.1. What is Clinical Assessment?

For a mental health professional to be able to effectively help treat a client and know that the treatment selected worked (or is working), they first must engage in the clinical assessment of the client, or collecting information and drawing conclusions through the use of observation, psychological tests, neurological tests, and interviews to determine the person’s problem and the presenting symptoms. This collection of information involves learning about the client’s skills, abilities, personality characteristics, cognitive and emotional functioning, the social context in terms of environmental stressors that are faced, and cultural factors particular to them such as their language or ethnicity. Clinical assessment is not just conducted at the beginning of the process of seeking help but throughout the process. Why is that?

Consider this. First, we need to determine if a treatment is even needed. By having a clear accounting of the person’s symptoms and how they affect daily functioning, we can decide to what extent the individual is adversely affected. Assuming a treatment is needed, our second reason to engage in clinical assessment will be to determine what treatment will work best. As you will see later in this module, there are numerous approaches to treatment.  These include Behavior Therapy, Cognitive and Cognitive-Behavioral Therapy (CBT), Humanistic-Experiential Therapies, Psychodynamic Therapies, Couples and Family Therapy, and biological treatments (psychopharmacology). Of course, for any mental disorder, some of the aforementioned therapies will have greater efficacy than others. Even if several can work well, it does not mean a particular therapy will work well for that specific client. Assessment can help figure this out. Finally, we need to know if the treatment we employed worked. This will involve measuring before any treatment is used and then measuring the behavior while the treatment is in place. We will even want to measure after the treatment ends to make sure symptoms of the disorder do not return. Knowing what the person’s baselines are for different aspects of psychological functioning will help us to see when improvement occurs.

To recap, obtaining the baselines happens at the beginning, implementing the agreed-upon treatment plan happens more so in the middle, and making sure the treatment produces the desired outcome occurs at the end. It should be clear from this discussion that clinical assessment is an ongoing process.

3.1.2. Key Concepts in Assessment

The assessment process involves three critical concepts – reliability, validity, and standardization. These three are important to science in general. First, we want the assessment to be reliable, or consistent. Outside of clinical assessment, when our car has an issue and we take it to the mechanic, we want to make sure that what one mechanic says is wrong with our car is the same as what another says, or even two others. If not, the measurement tools they use to assess cars are flawed. The same is true of a patient who is suffering from a mental disorder. If one mental health professional says the person suffers from major depressive disorder and another says the issue is borderline personality disorder, then there is an issue with the assessment tool being used. Ensuring that two different raters are consistent in their assessment of patients is called interrater reliability. Another type of reliability occurs when a person takes a test one day and then the same test on another day. We would expect the person's answers to be consistent, which is called test-retest reliability. For example, let's say the person takes the MMPI on Tuesday and then the same test on Friday. Unless something miraculous or tragic happened over the two days in between tests, the scores on the MMPI should be nearly identical to one another. What does identical mean? The score at test and the score at retest are correlated with one another. If the test is reliable, the correlation should be very high (remember, a correlation ranges from -1.00 to +1.00, and a positive correlation means that as one score goes up, so does the other, so the correlation for the two tests should be high on the positive side).
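The test-retest logic above can be made concrete with a small computation. The following is a minimal sketch (the score lists and the `pearson_r` helper are invented for illustration; real MMPI scoring is far more involved) of how the correlation between two administrations of the same test is computed:

```python
# Illustrative sketch: test-retest reliability expressed as a Pearson
# correlation. The score lists below are invented for illustration;
# they are NOT real MMPI data, and real scoring is far more involved.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Five test-takers assessed on Tuesday and again on Friday.
tuesday = [52, 61, 45, 70, 58]
friday = [50, 63, 44, 72, 57]

r = pearson_r(tuesday, friday)
print(round(r, 3))  # close to +1.00, indicating high test-retest reliability
```

A value near +1.00, as here, is what we would expect from a reliable test; a value near 0 would suggest the instrument is not measuring consistently from one administration to the next.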

In addition to reliability, we want to make sure the test measures what it says it measures. This is called validity. Let's say a new test is developed to measure symptoms of depression. It is compared against an existing and proven test, such as the Beck Depression Inventory (BDI). If the new test measures depression, then the scores on it should be highly comparable to the ones obtained by the BDI. This is called concurrent or descriptive validity. We might even ask if an assessment tool looks valid. If we answer yes, then it has face validity, though it should be noted that this is not based on any statistical or evidence-based method of assessing validity. An example would be a personality test that asks about how people behave in certain situations. On its surface, it seems to measure personality; we have an overall feeling that it measures what we expect it to measure.

Predictive validity is when a tool accurately predicts what will happen in the future. Let’s say we want to tell if a high school student will do well in college. We might create a national exam to test needed skills and call it something like the Scholastic Aptitude Test (SAT). We would have high school students take it by their senior year and then wait until they are in college for a few years and see how they are doing. If they did well on the SAT, we would expect that at that point, they should be doing well in college. If so, then the SAT accurately predicts college success. The same would be true of a test such as the Graduate Record Exam (GRE) and its ability to predict graduate school performance.

Finally, we want to make sure that the experience one patient has when taking a test or being assessed is the same as another patient taking the test the same day or on a different day, and with either the same tester or another tester. This is accomplished with the use of clearly laid out rules, norms, and/or procedures, and is called standardization. Equally important is that mental health professionals interpret the results of the testing in the same way; otherwise, it will be unclear what the meaning of a specific score is.

3.1.3. Methods of Assessment

So how do we assess patients in our care? We will discuss observation, psychological tests, neurological tests, the clinical interview, and a few others in this section.

            3.1.3.1. Observation. In Section 1.5.2.1 we talked about two types of observation – naturalistic, or observing the person or animal in their environment, and laboratory, or observing the organism in a more controlled or artificial setting where the experimenter can use sophisticated equipment and videotape the session to examine it later. One-way mirrors can also be used. A limitation of this method is that the process of recording a behavior causes the behavior to change, called reactivity. Have you ever noticed someone staring at you while you sat and ate your lunch? If you have, what did you do? Did you change your behavior? Did you become self-conscious? Likely yes, and this is an example of reactivity. Another issue is that a behavior made in one situation may not be made in other situations, such as your significant other only acting out at the football game and not at home. This form of validity is called cross-sectional validity. We also need our raters to observe and record behavior in the same way, or to have high interrater reliability.

            3.1.3.2. The clinical interview. A clinical interview is a face-to-face encounter between a mental health professional and a patient in which the former observes the latter and gathers data about the person's behavior, attitudes, current situation, personality, and life history. The interview may be unstructured, in which open-ended questions are asked; structured, in which a specific set of questions is asked according to an interview schedule; or semi-structured, in which there is a pre-set list of questions, but clinicians can follow up on specific issues that catch their attention. A mental status examination is used to organize the information collected during the interview and systematically evaluates the patient through a series of questions assessing appearance and behavior, including grooming and body posture; thought processes and content, including disorganized speech or thought and false beliefs; mood and affect, such as whether the person feels hopeless or elated; intellectual functioning, including speech and memory; and awareness of surroundings, including where the person is and what the day and time are. The exam covers areas not normally part of the interview and allows the mental health professional to determine which areas need to be examined further. The limitation of the interview is that it lacks reliability, especially in the case of the unstructured interview.

            3.1.3.3. Psychological tests and inventories. Psychological tests assess the client's personality, social skills, cognitive abilities, emotions, behavioral responses, or interests. They can be administered either individually or to groups, in paper or oral fashion. Projective tests consist of simple ambiguous stimuli that can elicit an unlimited number of responses. They include the Rorschach, or inkblot test, and the Thematic Apperception Test, which asks the individual to write a complete story about each of 20 cards shown to them and give details about what led up to the scene depicted, what the characters are thinking, what they are doing, and what the outcome will be. From the responses, the clinician gains perspective on the patient's worries, needs, emotions, and conflicts, as the individual is assumed to identify with one of the people on the card. Another projective test is the sentence completion test, which asks individuals to finish an incomplete sentence. Examples include 'My mother…' or 'I hope…'

Personality inventories ask clients to state whether each item in a long list of statements applies to them, and could ask about feelings, behaviors, or beliefs. Examples include the MMPI, or Minnesota Multiphasic Personality Inventory, and the NEO-PI-R, which is a concise measure of the five major domains of personality – Neuroticism, Extroversion, Openness, Agreeableness, and Conscientiousness. Six facets define each of the five domains, and the measure assesses emotional, interpersonal, experiential, attitudinal, and motivational styles (Costa & McCrae, 1992). These inventories have the advantage of being easy to administer by either a professional or the individual taking them; they are standardized, objectively scored, and can be completed electronically or by hand. That said, personality cannot be directly assessed, and so you never completely know the individual.

            3.1.3.4. Neurological tests. Neurological tests are used to diagnose cognitive impairments caused by brain damage due to tumors, infections, or head injuries; or changes in brain activity. Positron Emission Tomography or PET is used to study the brain’s chemistry. It begins by injecting the patient with a radionuclide that collects in the brain and then having them lie on a scanning table while a ring-shaped machine is positioned over their head. Images are produced that yield information about the functioning of the brain. Magnetic Resonance Imaging or MRI provides 3D images of the brain or other body structures using magnetic fields and computers. It can detect brain and spinal cord tumors or nervous system disorders such as multiple sclerosis. Finally, computed tomography or the CT scan involves taking X-rays of the brain at different angles and is used to diagnose brain damage caused by head injuries or brain tumors.

            3.1.3.5. Physical examination. Many mental health professionals recommend the patient see their family physician for a physical examination, which is much like a check-up. Why is that? Some organic conditions, such as hyperthyroidism or hormonal irregularities, manifest behavioral symptoms that resemble those of mental disorders. Ruling out such conditions first can spare the patient costly and unnecessary therapy or surgery.

            3.1.3.6. Behavioral assessment. Within the realm of behavior modification and applied behavior analysis, we talk about what is called behavioral assessment, which is the measurement of a target behavior. The target behavior is whatever behavior we want to change, and it can be in excess and needing to be reduced, or in a deficit state and needing to be increased. During the behavioral assessment we learn about the ABCs of behavior, in which Antecedents are the environmental events or stimuli that trigger a behavior; Behaviors are what the person does, says, or thinks/feels; and Consequences are the outcomes of a behavior that either encourage it to be made again in the future or discourage its future occurrence. Though we might try to change another person's behavior using behavior modification, we can also change our own behavior, which is called self-modification. The person does their own measuring and recording of the ABCs, which is called self-monitoring. In the context of psychopathology, behavior modification can be useful in treating phobias, reducing habit disorders, and ridding the person of maladaptive cognitions.
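The self-monitoring described above can be pictured as a simple log. The sketch below (all entries are invented for illustration) records the ABCs of a target behavior and computes its baseline frequency, the kind of pre-treatment measure discussed earlier in this section:

```python
# Illustrative sketch of self-monitoring in behavioral assessment.
# Each log entry records the ABCs: Antecedent, Behavior, Consequence.
# All entries are invented for illustration.

abc_log = [
    {"antecedent": "argument with a coworker",
     "behavior": "smoked a cigarette",
     "consequence": "felt calmer (reinforces the habit)"},
    {"antecedent": "mid-morning coffee break",
     "behavior": "smoked a cigarette",
     "consequence": "felt calmer (reinforces the habit)"},
    {"antecedent": "stuck in traffic",
     "behavior": "practiced deep breathing",
     "consequence": "arrived home less tense"},
]

# Baseline frequency of the target behavior (smoking) before treatment;
# comparing this count to counts taken during and after treatment shows
# whether the intervention is working.
baseline = sum(1 for entry in abc_log if "smoked" in entry["behavior"])
print(baseline)  # → 2
```

Recording the antecedents and consequences alongside the behavior is what lets the clinician (or the self-monitoring individual) identify which triggers and outcomes maintain the behavior.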

            3.1.3.7. Intelligence tests. Intelligence testing determines the patient’s level of cognitive functioning and consists of a series of tasks asking the patient to use both verbal and nonverbal skills. An example is the Stanford-Binet Intelligence test , which assesses fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory. Intelligence tests have been criticized for not predicting future behaviors such as achievement and reflecting social or cultural factors/biases and not actual intelligence. Also, can we really assess intelligence through one dimension, or are there multiple dimensions?

Key Takeaways

You should have learned the following in this section:

  • Clinical assessment is the collecting of information and drawing conclusions through the use of observation, psychological tests, neurological tests, and interviews.
  • Reliability refers to consistency in measurement and can take the form of interrater and test-retest reliability.
  • Validity is when we ensure the test measures what it says it measures and takes the forms of concurrent or descriptive, face, and predictive validity.
  • Standardization is all the clearly laid out rules, norms, and/or procedures to ensure the experience each participant has is the same.
  • Patients are assessed through observation, psychological tests, neurological tests, and the clinical interview, all with their own strengths and limitations.

Section 3.1 Review Questions

  • What does it mean that clinical assessment is an ongoing process?
  • Define and exemplify reliability, validity, and standardization.
  • For each assessment method, define it and then state its strengths and limitations.

Section Learning Objectives

  • Explain what it means to make a clinical diagnosis.
  • Define syndrome.
  • Clarify and exemplify what a classification system does.
  • Identify the two most used classification systems.
  • Outline the history of the DSM.
  • Identify and explain the elements of a diagnosis.
  • Outline the major disorder categories of the DSM-5-TR.
  • Describe the ICD-11.
  • Clarify why the DSM-5-TR and ICD-11 need to be harmonized.

3.2.1. Clinical Diagnosis and Classification Systems

Before starting any type of treatment, the client/patient must be clearly diagnosed with a mental disorder. Clinical diagnosis is the process of using assessment data to determine if the pattern of symptoms the person presents with is consistent with the diagnostic criteria for a specific mental disorder outlined in an established classification system such as the DSM-5-TR or ICD-11 (both will be described shortly). Any diagnosis should have clinical utility , meaning it aids the mental health professional in determining prognosis, the treatment plan, and possible outcomes of treatment (APA, 2022). Receiving a diagnosis does not necessarily mean the person requires treatment. This decision is made based upon how severe the symptoms are, level of distress caused by the symptoms, symptom salience such as expressing suicidal ideation, risks and benefits of treatment, disability, and other factors (APA, 2022). Likewise, a patient may not meet the full criteria for a diagnosis but demonstrate a clear need for treatment or care, nonetheless. As stated in the DSM, “The fact that some individuals do not show all symptoms indicative of a diagnosis should not be used to justify limiting their access to appropriate care” (APA, 2022).

Symptoms that cluster together regularly are called a syndrome. If they also follow the same, predictable course, we say that they are characteristic of a specific disorder. Classification systems provide mental health professionals with an agreed-upon list of disorders falling into distinct categories for which there are clear descriptions and criteria for making a diagnosis. Distinct is the keyword here. People suffering from delusions, hallucinations, disorganized thinking (speech), grossly disorganized or abnormal motor behavior, and/or negative symptoms are different from people presenting with a primary clinical deficit in cognitive functioning that is not developmental but acquired (i.e., they have shown a decline in cognitive functioning over time). The former suffers from a schizophrenia spectrum disorder while the latter suffers from a neurocognitive disorder (NCD). The latter can be further distinguished from neurodevelopmental disorders which manifest early in development and involve developmental deficits that cause impairments in social, personal, academic, or occupational functioning (APA, 2022). These three disorder groups or categories can be clearly distinguished from one another. Classification systems also permit the gathering of statistics to determine incidence and prevalence rates and conform to the requirements of insurance companies for the payment of claims.

The most widely used classification system in the United States is the Diagnostic and Statistical Manual of Mental Disorders (DSM) which is a “medical classification of disorders and as such serves as a historically determined cognitive schema imposed on clinical and scientific information to increase its comprehensibility and utility. The classification of disorders (the way in which disorders are grouped) provides a high-level organization for the manual” (APA, 2022, pg. 11). The DSM is currently in its 5th edition Text-Revision (DSM-5-TR) and is produced by the American Psychiatric Association (APA, 2022). Alternatively, the World Health Organization (WHO) publishes the International Statistical Classification of Diseases and Related Health Problems (ICD) currently in its 11th edition. We will begin by discussing the DSM and then move to the ICD.

 3.2.2. The DSM Classification System

            3.2.2.1. A brief history of the DSM. The DSM-5 was published in 2013 and took the place of the DSM-IV-TR (TR means Text Revision; published in 2000). In March 2022, a Text Revision was published for the DSM-5, making it the DSM-5-TR.

The history of the DSM goes back to 1952 when the American Psychiatric Association published the first edition of the DSM which was “…the first official manual of mental disorders to contain a glossary of descriptions of the diagnostic categories” (APA, 2022, p. 5). The DSM evolved through four major editions after World War II into a diagnostic classification system to be used by psychiatrists and physicians, but also other mental health professionals. The Herculean task of revising the DSM began in 1999 when the APA embarked upon an evaluation of the strengths and weaknesses of the DSM in coordination with the World Health Organization (WHO) Division of Mental Health, the World Psychiatric Association, and the National Institute of Mental Health (NIMH). This collaboration resulted in the publication of a monograph in 2002 called A Research Agenda for DSM-V. From 2003 to 2008, the APA, WHO, NIMH, the National Institute on Drug Abuse (NIDA), and the National Institute on Alcohol Abuse and Alcoholism (NIAAA) convened 13 international DSM-5 research planning conferences “to review the world literature in specific diagnostic areas to prepare for revisions in developing both DSM-5 and the International Classification of Disease, 11th Revision (ICD-11)” (APA, 2022, pg. 6).

After the naming of a DSM-5 Task Force Chair and Vice-Chair in 2006, task force members were selected and approved by 2007, and workgroup members were approved in 2008. An intensive 6-year process of “conducting literature reviews and secondary analyses, publishing research reports in scientific journals, developing draft diagnostic criteria, posting preliminary drafts on the DSM-5 website for public comment, presenting preliminary findings at professional meetings, performing field trials, and revisiting criteria and text” was undertaken (APA, 2022, pg. 7). The process involved physicians, psychologists, social workers, epidemiologists, neuroscientists, nurses, counselors, and statisticians, all of whom aided in the development and testing of DSM-5, while individuals with mental disorders, families of those with a mental disorder, consumer groups, lawyers, and advocacy groups provided feedback on the mental disorders contained in the book. Additionally, disorders with low clinical utility and weak validity were considered for deletion, while “Conditions for Future Study” were placed in Section 3, with inclusion “contingent on the amount of empirical evidence generated on the proposed diagnosis, diagnostic reliability or validity, presence of clear clinical need, and potential benefit in advancing research” (APA, 2022, pg. 7).

            3.2.2.2. The DSM-5 text revision process. In the spring of 2019, APA started work on the Text Revision of the DSM-5. This involved more than 200 experts who were asked to conduct literature reviews of the past 10 years and to review the text to identify any material that was out of date. Experts were divided into 20 disorder review groups, each with its own section editor. Four cross-cutting review groups – Culture, Sex and Gender, Suicide, and Forensic – reviewed each chapter and focused on material involving their specific expertise. The text was also reviewed by an Ethnoracial Equity and Inclusion work group whose task was to “ensure appropriate attention to risk factors such as racism and discrimination and the use of nonstigmatizing language” (APA, 2022, pg. 11).

As such, the DSM-5-TR “is committed to the use of language that challenges the view that races are discrete and natural entities” (APA, 2022, pg. 18). Some of the changes include:

  • Use of racialized instead of racial to indicate the socially constructed nature of race
  • Ethnoracial is used to denote U.S. Census categories such as Hispanic, African American, or White
  • Latinx is used in place of Latino or Latina to promote gender-inclusive terminology
  • The term Caucasian is omitted since it is “based on obsolete and erroneous views about the geographic origin of a prototypical pan-European ethnicity” (pg. 18)
  • To avoid perpetuating social hierarchies, the terms minority and non-White are avoided since they describe social groups in relation to a racialized “majority”
  • The terms cultural contexts and cultural backgrounds are preferred to culture which is only used to refer to a “heterogeneity of cultural views and practices within societies” (pg. 18)
  • The inclusion of data on specific ethnoracial groups only when “existing research documented reliable estimates based on representative samples.” This led to limited inclusion of data on Native Americans since data from nonrepresentative samples may be misleading.
  • The use of gender differences or “women and men” or “boys and girls” since much of the information on the expressions of mental disorders in women and men is based on self-identified gender.
  • Inclusion of a new section for each diagnosis providing information about suicidal thoughts or behavior associated with that diagnosis.

            3.2.2.3. Elements of a diagnosis. The DSM-5-TR states that the following make up the key elements of a diagnosis (APA, 2022):

  • Diagnostic Criteria and Descriptors – Diagnostic criteria are the guidelines for making a diagnosis and should be informed by clinical judgment. When the full criteria are met, mental health professionals can add severity and course specifiers to indicate the patient’s current presentation. If the full criteria are not met, designators such as “other specified” or “unspecified” can be used. If applicable, an indication of severity (mild, moderate, severe, or extreme), descriptive features, and course (type of remission – partial or full – or recurrent) can be provided with the diagnosis. The final diagnosis is based on the clinical interview, text descriptions, criteria, and clinical judgment.
  • Subtypes and Specifiers – Subtypes denote “mutually exclusive and jointly exhaustive phenomenological subgroupings within a diagnosis” (APA, 2022, pg. 22). For example, non-rapid eye movement (NREM) sleep arousal disorders can have either a sleepwalking or sleep terror type. Enuresis is nocturnal-only, diurnal-only, or both. Specifiers are not mutually exclusive or jointly exhaustive and so more than one specifier can be given. For instance, binge eating disorder has remission and severity specifiers. Somatic symptom disorder has a specifier for severity, if with predominant pain, and/or if persistent. Again, the fundamental distinction between subtypes and specifiers is that there can be only one subtype but multiple specifiers. As the DSM-5-TR says, “Specifiers and subtypes provide an opportunity to define a more homogeneous subgrouping of individuals with the disorder who share certain features… and to convey information that is relevant to the management of the individual’s disorder” (pg. 22).
  • Principal Diagnosis – A principal diagnosis is used when more than one diagnosis is given for an individual. It is the reason for the admission in an inpatient setting or the basis for a visit resulting in ambulatory care medical services in outpatient settings. The principal diagnosis is generally the focus of attention or treatment.
  • Provisional Diagnosis – If not enough information is available for a mental health professional to make a definitive diagnosis, but there is a strong presumption that the full criteria will be met with additional information or time, then the provisional specifier can be used.            

            3.2.2.4. DSM-5 disorder categories. The DSM-5 includes the following categories of disorders:

Table 3.1. DSM-5 Classification System of Mental Disorders

3.2.3. The ICD-11

In 1893, the International Statistical Institute adopted the International List of Causes of Death, which was the first international classification edition. The World Health Organization was entrusted with the development of the ICD in 1948 and published the 6th version (ICD-6). The ICD-11 was adopted in May 2019 and went into effect on January 1, 2022. The WHO states:

ICD serves a broad range of uses globally and provides critical knowledge on the extent, causes and consequences of human disease and death worldwide via data that is reported and coded with the ICD. Clinical terms coded with ICD are the main basis for health recording and statistics on disease in primary, secondary and tertiary care, as well as on cause of death certificates. These data and statistics support payment systems, service planning, administration of quality and safety, and health services research. Diagnostic guidance linked to categories of ICD also standardizes data collection and enables large scale research.

As a classification system, it “allows the systematic recording, analysis, interpretation and comparison of mortality and morbidity data collected in different countries or regions and at different times.” As well, it “ensures semantic interoperability and reusability of recorded data for the different use cases beyond mere health statistics, including decision support, resource allocation, reimbursement, guidelines and more.”

Source: http://www.who.int/classifications/icd/en/

The ICD lists many types of diseases and disorders to include Chapter 06: Mental, Behavioral, or Neurodevelopmental Disorders. The list of mental disorders is broken down as follows:

  • Neurodevelopmental disorders
  • Schizophrenia or other primary psychotic disorders
  • Mood disorders
  • Anxiety or fear-related disorders
  • Obsessive-compulsive or related disorders
  • Disorders specifically associated with stress
  • Dissociative disorders
  • Feeding or eating disorders
  • Elimination disorders
  • Disorders of bodily distress or bodily experience
  • Disorders due to substance use or addictive behaviours
  • Impulse control disorders
  • Disruptive behaviour or dissocial disorders
  • Personality disorders and related traits
  • Paraphilic disorders
  • Factitious disorders
  • Neurocognitive disorders
  • Mental or behavioural disorders associated with pregnancy, childbirth or the puerperium

It should be noted that Sleep-Wake Disorders are listed in Chapter 07.

To access Chapter 06 of the ICD-11, please visit the following:

https://icd.who.int/browse11/l-m/en#/http%3a%2f%2fid.who.int%2ficd%2fentity%2f334423054

3.2.4. Harmonization of DSM-5-TR and ICD-11

According to the DSM-5-TR, there is an effort to harmonize the two classification systems: 1) for a more accurate collection of national health statistics and design of clinical trials aimed at developing new treatments, 2) to increase the ability to replicate scientific findings across national boundaries, and 3) to rectify the issue of DSM-IV and ICD-10 diagnoses not agreeing (APA, 2022, pg. 13). Complete harmonization of the DSM-5 diagnostic criteria with the ICD-11 disorder definitions has not occurred due to differences in timing. The DSM-5 developmental effort was several years ahead of the ICD-11 revision process. Despite this, some improvement in harmonization did occur as many ICD-11 working group members had participated in the development of the DSM-5 diagnostic criteria and all ICD-11 work groups were given instructions to review the DSM-5 criteria sets and make them as similar as possible (unless there was a legitimate reason not to). This has led to the ICD and DSM being closer than at any time since DSM-II and ICD-8 (APA, 2022).

  • Clinical diagnosis is the process of using assessment data to determine if the pattern of symptoms the person presents with is consistent with the diagnostic criteria for a specific mental disorder outlined in an established classification system such as the DSM-5-TR or ICD-11.
  • Classification systems provide mental health professionals with an agreed-upon list of disorders falling into distinct categories for which there are clear descriptions and criteria for making a diagnosis.
  • Elements of a diagnosis in the DSM include the diagnostic criteria and descriptors, subtypes and specifiers, the principal diagnosis, and a provisional diagnosis.

Section 3.2 Review Questions

  • What is clinical diagnosis?
  • What is a classification system and what are the two main ones used today?
  • Outline the diagnostic categories used in the DSM-5-TR.

Section Learning Objectives

  • Clarify reasons why an individual may need to seek treatment.
  • Critique myths about psychotherapy.

3.3.1. Seeking Treatment

            3.3.1.1. Who seeks treatment? Would you describe the people who seek treatment as being on the brink, crazy, or desperate? Or can the ordinary Joe in need of advice seek out mental health counseling? The answer is that anyone can. David Sack, M.D. (2013) writes in the article 5 Signs It’s Time to Seek Therapy, published in Psychology Today, that “most people can benefit from therapy at least some point in their lives,” and though the signs you need to seek help are obvious at times, we often try “to sustain [our] busy life until it sets in that life has become unmanageable.” So, when should we seek help? First, if we feel sad, angry, or not like ourselves. We might be withdrawing from friends and family or sleeping more or less than we usually do. Second, if we are abusing drugs, alcohol, food, or sex to deal with life’s problems. In this case, our coping skills may need some work. Third, in instances when we have lost a loved one or something else important to us, whether due to death or divorce, the grief may be too much to process. Fourth, a traumatic event may have occurred, such as abuse, a crime, an accident, chronic illness, or rape. Finally, if you have stopped doing the things you enjoy the most. Sack (2013) says, “If you decide that therapy is worth a try, it doesn’t mean you’re in for a lifetime of head shrinking.” A 2001 study in the Journal of Counseling Psychology found that most people feel better within seven to 10 visits, and in another study, published in 2006 in the Journal of Consulting and Clinical Psychology, 88% of therapy-goers reported improvements after just one session.

For more on this article, please visit:

https://www.psychologytoday.com/blog/where-science-meets-the-steps/201303/5-signs-its-time-seek-therapy

            3.3.1.2. When friends, family, and self-healing are not enough. If you are experiencing any of the aforementioned issues, you should seek help. Instead of facing the potential stigma of talking to a mental health professional, many people think that talking through their problems with friends or family is just as good. Though you will ultimately need these people to see you through your recovery, they do not have the training and years of experience that a psychologist or similar professional has. “Psychologists can recognize behavior or thought patterns objectively, more so than those closest to you who may have stopped noticing — or maybe never noticed. A psychologist might offer remarks or observations similar to those in your existing relationships, but their help may be more effective due to their timing, focus, or your trust in their neutral stance” (http://www.apa.org/helpcenter/psychotherapy-myths.aspx). You also should not wait to recover on your own. It is not a failure to admit you need help, and there could be a biological issue that makes it almost impossible to heal yourself.

            3.3.1.3. What exactly is psychotherapy? According to the APA, in psychotherapy “psychologists apply scientifically validated procedures to help people develop healthier, more effective habits.” Several different approaches can be utilized, including behavioral, cognitive and cognitive-behavioral, humanistic-experiential, psychodynamic, couples and family, and biological treatments.

            3.3.1.4. The client-therapist relationship. What is the ideal client-therapist relationship? APA says, “Psychotherapy is a collaborative treatment based on the relationship between an individual and a psychologist. Grounded in dialogue, it provides a supportive environment that allows you to talk openly with someone who’s objective, neutral and nonjudgmental. You and your psychologist will work together to identify and change the thought and behavior patterns that are keeping you from feeling your best.” It’s not just about solving the problem you saw the therapist for, but also about learning new skills to help you cope better in the future when faced with the same or similar environmental stressors.

So how do you find a psychotherapist? Several strategies may prove fruitful: ask family and friends or your primary care physician (PCP) for a referral, look online, or consult an area community mental health center, your local university’s psychology department, your state psychological association, or APA’s Psychologist Locator Service ( https://locator.apa.org/?_ga=2.160567293.1305482682.1516057794-1001575750.1501611950 ). Once you have a list of psychologists or other practitioners, choose the right one for you by determining whether you plan on attending alone or with family, what you wish to get out of your time with a psychotherapist, how much your insurance company pays for and, if you have to pay out of pocket, how much you can afford, when you can attend sessions, and how far you are willing to travel to see the mental health professional. Once you have done this, make your first appointment.

But what should you bring? APA suggests, “to make the most of your time, make a list of the points you want to cover in your first session and what you want to work on in psychotherapy. Be prepared to share information about what’s bringing you to the psychologist. Even a vague idea of what you want to accomplish can help you and your psychologist proceed efficiently and effectively.” Additionally, they suggest taking report cards, a list of medications, information on the reasons for a referral, a notebook, a calendar to schedule future visits if needed, and a form of payment. What you take depends on the reason for the visit.

In terms of what you should expect, you and your therapist will work to develop a full history which could take several visits. From this, a treatment plan will be developed. “This collaborative goal-setting is important, because both of you need to be invested in achieving your goals. Your psychologist may write down the goals and read them back to you, so you’re both clear about what you’ll be working on. Some psychologists even create a treatment contract that lays out the purpose of treatment, its expected duration and goals, with both the individual’s and psychologist’s responsibilities outlined.”

After the initial visit, the mental health professional may conduct tests to further understand your condition but will continue talking through the issue. He/she may even suggest involving others, especially in cases of relationship issues. Resilience is a skill that will be taught so that you can better handle future situations.

            3.3.1.5. Does it work? APA writes, “Reviews of these studies show that about 75 percent of people who enter psychotherapy show some benefit. Other reviews have found that the average person who engages in psychotherapy is better off by the end of treatment than 80 percent of those who don’t receive treatment at all.” Treatment works when an evidence-based treatment specific to the person’s problem is combined with the expertise of the therapist and the characteristics, values, culture, preferences, and personality of the client.
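The “better off than 80 percent” phrasing is a percentile restatement of an effect size: assuming normally distributed outcomes, a standardized mean difference d places the average treated person at the Φ(d) percentile of the untreated comparison group. A quick illustrative check of the figure quoted above (the function name is ours, not from any source):

```python
from statistics import NormalDist

def percentile_superiority(d: float) -> float:
    """Percent of an untreated comparison group expected to score below
    the average treated person, assuming normal outcome distributions."""
    return NormalDist().cdf(d) * 100

# A standardized mean difference of roughly 0.85 reproduces the
# "better off than ~80 percent" figure cited in the APA reviews.
print(round(percentile_superiority(0.85), 1))  # 80.2
```

With no treatment effect (d = 0) the average treated person simply sits at the 50th percentile of the untreated group, which is why the quoted 80 percent figure implies a substantial effect.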

            3.3.1.6. How do you know you are finished? “How long psychotherapy takes depends on several factors: the type of problem or disorder, the patient’s characteristics and history, the patient’s goals, what’s going on in the patient’s life outside psychotherapy and how fast the patient is able to make progress.” It is important to note that psychotherapy is not a lifelong commitment, and it is a joint decision of client and therapist as to when it ends. Once over, expect to have a periodic check-up with your therapist. This might be weeks or even months after your last session. If you need to see him/her sooner, schedule an appointment. APA calls this a “mental health tune up” or a “booster session.”

For more on psychotherapy, please see the very interesting APA article on this matter:

http://www.apa.org/helpcenter/understanding-psychotherapy.aspx

  • Anyone can seek treatment and we all can benefit from it at some point in our lives.
  • Psychotherapy is when psychologists apply scientifically validated procedures to help a person feel better and develop healthy habits.

Section 3.3 Review Questions

  • When should you seek help?
  • Why should you seek professional help over the advice dispensed by family and friends?
  • How do you find a therapist and what should you bring to your appointment?
  • Does psychotherapy work?

Module Recap

That’s it. With the conclusion of Module 3, you now have the necessary foundation to understand each of the groups of disorders we discuss, beginning in Module 4 and continuing through Module 14.

In Module 3 we reviewed clinical assessment, diagnosis, and treatment. In terms of assessment, we covered key concepts such as reliability, validity, and standardization; and discussed methods of assessment such as observation, the clinical interview, psychological tests, personality inventories, neurological tests, the physical examination, behavioral assessment, and intelligence tests. In terms of diagnosis, we discussed the classification systems of the DSM-5-TR and ICD-11. For treatment, we discussed the reasons why someone may seek treatment, self-treatment, psychotherapy, the client-centered relationship, and how well psychotherapy works.


Creative Commons License


Psychological Assessment Tools For Mental Health

This page is maintained as a service to mental health professionals. The scales and measures listed here are designed to assist clinicians to practice effectively. Resources linked to from this page should only be used by appropriately qualified, experienced, and supervised professionals. Psychology Tools does not host any of these scales and cannot take responsibility for the accuracy or availability of linked resources. To the best of our knowledge, the assessment measures listed here are either free of copyright restrictions or are being shared by the relevant rights-holders.

Only qualified mental health professionals should use these materials.

Introduction

Mental health professionals use a variety of instruments to assess mental health and wellbeing. Common purposes for psychological testing include: screening for the presence or absence of common mental health conditions; making a formal diagnosis of a mental health condition; assessment of changes in symptom severity; and monitoring client outcomes across the course of therapy.

Screening:  Brief psychological measures can be used to ‘screen’ individuals for a range of mental health conditions. Screening measures are often questionnaires completed by clients. Screening tests are quick to administer, but results are only indicative: a positive result on a screening test can be followed up with a more definitive assessment.
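As a concrete illustration, the GAD-7 (Spitzer et al., 2006, listed further down this page) is scored by summing seven items rated 0–3, with severity bands beginning at 5 (mild), 10 (moderate), and 15 (severe), and a total of 10 or more commonly treated as a positive screen. A minimal sketch of that scoring logic, with a function name and structure of our own invention:

```python
def gad7_screen(item_scores):
    """Sum the seven GAD-7 items (each rated 0-3) and classify severity
    using the published bands; a total of 10+ is a common positive screen."""
    if len(item_scores) != 7 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("GAD-7 expects seven item scores, each 0-3")
    total = sum(item_scores)
    bands = [(15, "severe"), (10, "moderate"), (5, "mild"), (0, "minimal")]
    severity = next(label for cutoff, label in bands if total >= cutoff)
    return total, severity, total >= 10

print(gad7_screen([2, 1, 2, 1, 2, 1, 2]))  # (11, 'moderate', True)
```

The same pattern — sum the items, then compare the total against published cutoffs — applies to most of the brief screening questionnaires listed on this page, though each instrument has its own item count and thresholds.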

Diagnosis:  Psychological assessment measures can support a qualified clinician in making a formal diagnosis of a mental health problem. Mental health assessment with the purpose of supporting a diagnosis can include the use of semi-structured diagnostic interviews and validated questionnaires. Items in self-report measures used for diagnosis often bear a close correspondence to criteria specified in the diagnostic manuals (ICD and DSM).

Symptom & outcome monitoring:  One strand of evidence-based practice requires that therapists use outcome measures to monitor progress and guide the course of therapy. Psychologists, CBT therapists, and other mental health professionals often ask their clients to complete self-report measures regularly to assess changes in symptom severity.
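One widely used way to judge whether a change in a symptom score exceeds measurement error is the Jacobson–Truax reliable change index (RCI): the pre–post difference divided by the standard error of the difference between two administrations, with values beyond ±1.96 suggesting reliable change. A sketch with purely illustrative numbers (the SD and reliability below are hypothetical, not normative values for any specific scale):

```python
import math

def reliable_change_index(pre: float, post: float, sd: float, reliability: float) -> float:
    """Jacobson-Truax RCI: the observed change divided by the standard
    error of the difference between two administrations of a measure."""
    se_measurement = sd * math.sqrt(1 - reliability)   # SE of a single score
    se_difference = math.sqrt(2) * se_measurement      # SE of a pre-post difference
    return (post - pre) / se_difference

# Hypothetical example: a depression score drops from 18 to 9, against
# an assumed normative SD of 5.5 and test-retest reliability of 0.84.
print(round(reliable_change_index(18, 9, 5.5, 0.84), 2))  # -2.89, beyond the ±1.96 band
```

Outcome-monitoring systems typically pair an index like this with a clinical cutoff, so that therapists can see both whether a client changed reliably and whether they moved into the non-clinical range.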

Links to external resources

Psychology Tools makes every effort to check external links and review their content. However, we are not responsible for the quality or content of external links and cannot guarantee that these links will work all of the time.

  • Cooper, M. L., Russell, M., Skinner, J. B., & Windle, M. (1992). Development and validation of a three-dimensional measure of drinking motives. Psychological Assessment,4,123-132.
  • Scale Download Primary Link Archived Link
  • Cooper, M.L. (1994). Motivations for alcohol use among adolescents:  Development and validation of a four-factor model.  Psychological Assessment, 6,117-128.
  • Raistrick, D.S., Bradshaw, J., Tober, G., Weiner, J., Allison, J. & Healey, C. (1994) Development of the Leeds Dependence Questionnaire, Addiction, 89, pp 563-572.
  • Marsden, J. Gossop, M. Stewart, D. Best, D. Farrell, M. Lehmann, P. Edwards, C. & Strang, J. (1998) The Maudsley Addiction Profile (MAP): A brief instrument for assessing treatment outcome, Addiction 93(12): 1857-1867.
  • Rollnick, Heather, Gold, Hall (1992)
  • User’s manual Download Primary Link Archived Link
  • Gossop, M., Darke, S., Griffiths, P., Hando, J., Powis, B., Hall, W., Strang, J. (1995). The Severity of Dependence Scale (SDS): psychometric properties of the SDS in English and Australian samples of heroin, cocaine and amphetamine users. Addiction 90(5): 607-614.
  • Reference Miller, W. R., & Tonigan, J. S. (1996). Assessing drinkers’ motivation for change: the Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES). Psychology of Addictive Behaviors, 10(2), 81.
  • Buss, A.H., & Perry, M. (1992). The Aggression Questionnaire. Journal of Personality and Social Psychology, 63, 452-459.
  • Scale Download Archived Link
  • Snell, W. E., Jr., Gum, S., Shuck, R. L., Mosley, J. A., & Hite, T. L.. (1995).  The Clinical Anger Scale: Preliminary reliability and validity. Journal of Clinical Psychology, 51, 215-226
  • Measuring Violence-Related Attitudes, Behaviors, and Influences Among Youths: A Compendium of Assessment Tools, 2nd ed. | National Center for Injury Prevention and Control of the Centers for Disease Control and Prevention: Dahlberg, Toal, Swahn, Behrens | 2005 Download Primary Link Archived Link
  • Test (PDF) Download Primary Link Archived Link
  • Test (Word) Download Primary Link Archived Link
  • Scoring and interpretation Download Primary Link Archived Link
  • Website link Download Primary Link
  • Reference Garner et al. (1982). The Eating Attitudes Test: Psychometric features and clinical correlates. Psychological Medicine, 12, 871-878
  • Interview Download Primary Link Archived Link
  • Questionnaire (EDE-Q) Download Primary Link Archived Link
  • Questionnaire for Adolescents (EDE-A) Download Primary Link Archived Link
  • Reference Fairburn, C. G., Cooper, Z., & O’Connor, M. (1993). The eating disorder examination. International Journal of Eating Disorders, 6, 1-8.
  • Brief Fear Of Negative Evaluation Scale | Leary | 1983 Download Primary Link Archived Link
  • Marks, I. M., & Mathews, A. M. (1979). Brief standard self-rating for phobic patients. Behaviour Research and Therapy, 17(3), 263-267.
  • Spitzer RL, Kroenke K, Williams JBW, Lowe B. A brief measure for assessing generalized anxiety disorder. Arch Inern Med. 2006;166:1092-1097.
  • Scale Download Primary Link
  • Hamilton, M. (1959).The assessment of anxiety states by rating. British Journal of Medical Psychology 32, 50-55.
  • Reference Salkovskis, P. M., Rimes, K. A., Warwick, H. M. C., & Clark, D. M. (2002). The Health Anxiety Inventory: development and validation of scales for the measurement of health anxiety and hypochondriasis. Psychological Medicine, 32(05), 843-853.
  • Chambless, D. L., Caputo, G. C., Jasin, S. E., Gracely, E. J., & Williams, C. (1985). The mobility inventory for agoraphobia. Behaviour research and therapy, 23(1), 35-44.
  • Shear, M. K., Brown, T. A., Barlow, D. H., Money, R., Sholomskas, D. E., Woods, S. W., … & Papp, L. A. (1997). Multicenter collaborative panic disorder severity scale. American Journal of Psychiatry, 154(11), 1571-1575.
  • Meyer, T. J., Miller, M. L., Metzger, R. L., & Borkovec, T. D. (1990). Development and validation of the penn state worry questionnaire. Behaviour Research and Therapy, 28(6), 487-495.
  • Scale archive.org Download Primary Link
  • Spence, S. H. (1998). A measure of anxiety symptoms among children. Behaviour Research and Therapy, 36 (5), 545-566.
  • Scale website link Download Primary Link Archived Link
  • Scale – Adult Download Primary Link Archived Link
  • Scale – Child Age 11-17 Download Primary Link Archived Link
  • Reference Lambe, S., Bird, J. C., Loe, B. S., Rosebrock, L., Kabir, T., Petit, A., ... & Freeman, D. (2023). The Oxford agoraphobic avoidance scale. Psychological Medicine, 53(4), 1233-1243.

Assertiveness

  • Reference Alberti, R.E. and Emmons, M.L. (2017). Your Perfect Right: Assertiveness and Equality in Your Life and Relationships(10th ed.). Oakland, CA: Impact Publishers/New Harbinger Publications.
  • Reference Gay, M. L., Hollandsworth, J. G., & Galassi, J. P. (1975). An assertiveness inventory for adults.Journal of Counseling Psychology, 22(4), 340-344.
  • Tinnitus Handicap inventory Download Primary Link Archived Link
  • Tinnitus Reaction Questionnaire (TRQ) Download Primary Link Archived Link
  • Dizziness Handicap Inventory Download Archived Link

Bipolar disorder

  • Journal article Download Primary Link
  • Reference Jones, S., Mulligan, L. D., Higginson, S., Dunn, G., & Morrison, A. P. (2013). The bipolar recovery questionnaire: psychometric properties of a quantitative measure of recovery experiences in bipolar disorder.Journal of affective disorders,147(1-3), 34-43. Download Archived Link
  • Reference guide Download Archived Link
  • Reference Hirschfeld, R. M., Williams, J. B., Spitzer, R. L., Calabrese, J. R., Flynn, L., Keck Jr, P. E., … & Russell, J. M. (2000). Development and validation of a screening instrument for bipolar spectrum disorder: the Mood Disorder Questionnaire.American Journal of Psychiatry,157(11), 1873-1875. Download Primary Link

Body dysmorphia (BDD)

  • Cosmetic Procedure Screening Questionnaire (COPS) | Veale | 2009 Download Archived Link
  • Body dysmorphic disorder screening tools for the dermatologist: A systematic review | Danesh, M, Beroukhim, K., Nguyen, C., Levin, E., & Koo, J. | 2015 Download Primary Link Archived Link
  • Zimmerman, M., Chelminski, I., McGlinchey, J. B., & Posternak, M. A. (2008). A clinically useful depression outcome scale. Comprehensive psychiatry, 49(2), 131-140.
  • Cox, J. L., Holden, J. M., & Sagovsky, R. (1987). Detection of postnatal depression: development of the 10-item Edinburgh Postnatal Depression Scale. The British Journal of Psychiatry, 150(6), 782-786.
  • Hamilton M. (1960). A rating scale for depression. J Neurol Neurosurg Psychiatry, 23, 56–62.
  • MADRS Score Card Download Archived Link
  • Montgomery, S.A., Asberg, M. (1979). A new depression scale designed to be sensitive to change. British Journal of Psychiatry, 134 (4): 382–89.
  • Scale phqscreeners.com Download Primary Link
  • Kroenke, K., & Spitzer, R. L. (2002). The PHQ-9: a new depression diagnostic and severity measure. Psychiatric annals, 32(9), 509-515.
  • Reference Zung, W. W. (1965). A self-rating depression scale. Archives of General Psychiatry, 12(1), 63-70.
  • Valued Living Questionnaire (Version 2) | Wilson, Groom | 2002 Download Archived Link

Dissociation

  • DES-B (Dalenberg C, Carlson E, 2010) modified for DSM-5 by C. Dalenberg and E. Carlson.
  • Carlson, E.B. & Putnam, F.W. (1993). An update on the Dissociative Experience Scale. Dissociation 6(1), p. 16-27.
  • Scale (English) Download Primary Link Archived Link
  • Scale (German) Download Primary Link Archived Link
  • Scale (Norwegian) Download Primary Link Archived Link
  • Reference Schalinski, I., Schauer, M., & Elbert, T. (2015). The Shutdown Dissociation Scale (Shut-D). European Journal of Psychotraumatology, 6

Eating disorders

  • Journal article Download Primary Link Archived Link
  • Reference Tatham, M., Turner, H., Mountford, V. A., Tritt, A., Dyas, R., & Waller, G. (2015). Development, psychometric properties and preliminary clinical validation of a brief, session‐by‐session measure of eating disorder cognitions and behaviors: The ED‐15. International Journal of Eating Disorders, 48(7), 1005-1015.

Generalized anxiety (GAD)

Grief, loss & bereavement

  • Paper Download Primary Link Archived Link
  • Reference Prigerson, H. G., Maciejewski, P. K., Reynolds, C. F., Bierhals, A. J., Newsom, J. T., Fasiczka, A., … & Miller, M. (1995). Inventory of Complicated Grief: a scale to measure maladaptive symptoms of loss.Psychiatry research,59(1), 65-79

Health anxiety

Interpersonal relationships

  • Reference Crowell, J. & Owens, G. (1998) Manual For The Current Relationship Interview And Scoring System. Version 4. Retrieved (current date) from http://ww.psychology.sunysb.edu/attachment/ measures/content/cri_manual.pdf.

Obsessive-compulsive disorder (OCD)
  • Vancouver Obsessional Compulsive Inventory (VOCI) Download Primary Link Archived Link
  • Obsessive Compulsive Inventory (OCI) scoring grid Download Primary Link
  • Relationship Obsessive Compulsive Inventory (ROCI) Download Primary Link Archived Link
  • Partner Related Obsessive Compulsive Symptom Inventory Download Primary Link Archived Link
  • OCD Trauma Timeline Interview (OTTI) | Wadsworth, Van Kirk,August, MacLaren Kelly, Jackson, Nelson & Luehrs | 2023 Download Primary Link Archived Link
  • Obsessive Compulsive Cognition Working Group. (2001). Development and initial validation of the Obsessive Beliefs Questionnaire and the Interpretation of Intrusions Inventory. Behaviour Research and Therapy, 39, 987–1006.
  • Reference McCraken, L. M., Vowles, K. E. & Eccleston, C. (2004). Acceptance of chronic pain: component analysis and a revised assessment method. Pain, 107, 159-166.
  • Reference Moss-Morris, R., Weinman, J., Petrie, K., Horne, R., Cameron, L., & Buick, D. (2002). The revised illness perception questionnaire (IPQ-R). Psychology and health, 17(1), 1-16.

Panic (attacks & disorder)

Perinatal mental health

Personality disorders

  • Reference Bohus, M., Limberger, M. F., Frank, U., Chapman, A. L., Kühler, T., & Stieglitz, R.-D. (2007). Psychometric properties of the Borderline Symptom List (BSL). Psychopathology, 40(2), 126–132. https://doi.org/10.1159/000098493

Psychosis
  • Beliefs About Voices Questionnaire – Revised (BAVQ-R) | Chadwick et al | 2000 Download Primary Link
  • The Subjective Experiences Of Psychosis Scale | Psychosis Research Unit Download Primary Link Archived Link
  • Scale archived version Download Primary Link

Post-traumatic stress disorder (PTSD) & trauma
  • Weathers, F. W., Blake, D. D., Schnurr, P. P., Kaloupek, D. G., Marx, B. P., & Keane, T. M. (2015). The Clinician-Administered PTSD Scale for DSM-5 (CAPS-5) – Past Week [Measurement instrument].
  • Weiss, D. S., & Marmar, C. R. (1996). The Impact of Event Scale – Revised. In J. Wilson & T. M. Keane (Eds.), Assessing psychological trauma and PTSD (pp. 399-411). New York: Guilford.
  • Weathers, F. W., Blake, D. D., Schnurr, P. P., Kaloupek, D. G., Marx, B. P., & Keane, T. M. (2013). The Life Events Checklist for DSM-5 (LEC-5) – Standard. [Measurement instrument].
  • Weathers, F. W., Litz, B. T., Keane, T. M., Palmieri, P. A., Marx, B. P., & Schnurr, P. P. (2013). The PTSD Checklist for DSM-5 (PCL-5) – Standard. [Measurement instrument].
  • Blevins, C. A., Weathers, F. W., Davis, M. T., Witte, T. K., & Domino, J. L. (2015). The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of Traumatic Stress, 28, 489-498. doi: 10.1002/jts.22059
  • Manual Download Primary Link Archived Link
  • Scale (Standard) Download Primary Link Archived Link
  • Scale with criterion A Download Primary Link Archived Link
  • Scale with life events checklist and criterion A Download Primary Link Archived Link
  • Foa, Edna B.,McLean, Carmen P.,Zang, Yinyin,Zhong, Jody,Rauch, Sheila,Porter, Katherine,Knowles, Kelly,Powers, Mark B.,Kauffman, Brooke Y. (2016) Psychological Assessment, Vol 28(10), 1159-1165
  • DePrince, A. P., Zurbriggen, E. L., Chu, A. T., & Smart, L. (2010). Development of the trauma appraisal questionnaire.Journal of Aggression, Maltreatment & Trauma,19(3), 275-299. Download Primary Link Archived Link
  • Brewin, C. R, Rose, S., Andrews, B., Green, J., Tata, P., McEvedy, C. Turner, S, Foa, E. B. (2002). Brief screening instrument for post-traumatic stress disorder. British Journal of Psychiatry, 181, 158-162.
  • Wilson, J.P., & Keane, T.M. (Eds.). (2004). Assessing psychological trauma and PTSD: A practitioner’s handbook (2nd ed.). New York, NY: Guilford Press.

Self-esteem & self-criticism

Sleep & insomnia

  • Johns, M. W. (1991). A new method for measuring daytime sleepiness: the Epworth sleepiness scale. sleep, 14(6), 540-545.
  • Buysse, D. J., Reynolds III, C. F., Monk, T. H., Berman, S. R., & Kupfer, D. J. (1989). The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry research, 28(2), 193-213.

Social anxiety

Suicide & self-harm

  • Psychometrics Download Primary Link
  • Ribeiro, J. D., Witte, T. K., Van Orden, K. A., Selby, E. A., Gordon, K. H., Bender, T. W., & Joiner Jr, T. E. (2014). Fearlessness about death: The psychometric properties and construct validity of the revision to the Acquired Capability for Suicide Scale. Psychological assessment, 26(1), 115.
  • Full scale for healthcare professionals lifetime/recent Download Primary Link Archived Link
  • Columbia Lighthouse Project (formerly the Center for Suicide Risk Assessment) link Download Primary Link
  • Posner, K., Brent, D., Lucas, C., Gould, M., Stanley, B., Brown, G., … & Mann, J. (2008). Columbia-suicide severity rating scale (C-SSRS). New York, NY: Columbia University Medical Center.
  • Van Orden, K. A., Cukrowicz, K. C., Witte, T. K., & Joiner Jr, T. E. (2012). Thwarted belongingness and perceived burdensomeness: Construct validity and psychometric properties of the Interpersonal Needs Questionnaire. Psychological assessment, 24(1), 197.
  • Brief version Download Primary Link Archived Link
  • Full version Download Primary Link Archived Link
  • Severity assessment Download Primary Link Archived Link
  • Whitlock, J.L., Exner-Cortens, D. & Purington, A. (2014). Validity and reliability of the non-suicidal self-injury assessment test (NSSI-AT). Psychological Assessment, 26(3): 935-946.
  • SITBI – Long form Download Primary Link Archived Link
  • SITBI – Short form Download Primary Link Archived Link
  • Paper Nock, M. K., Holmberg, E. B., Photos, V. I. , & Michel, B. D. (2007). The Self-Injurious Thoughts and Behaviors Interview: Development, reliability, and validity in an adolescent sample. Psychological Assessment, 19, 309-317. Download Primary Link
  • Reference Osman, A., Bagge, C. L., Gutierrez, P. M., Konick, L. C., Kopper, B. A., & Barrios, F. X. (2001). The Suicidal Behaviors Questionnaire-Revised (SBQ-R): validation with clinical and nonclinical samples. Assessment, 8(4), 443-454.
  • Scale (Long Form) Download Primary Link Archived Link
  • Scale (Short Form) Download Primary Link Archived Link
  • Scoring Download Primary Link Archived Link

Introduction to Psychological Assessment

  • First Online: 14 May 2021

  • Cecil R. Reynolds
  • Robert A. Altmann
  • Daniel N. Allen


Psychological testing and assessment are important in virtually every aspect of professional psychology. Assessment has widespread application in health, educational, occupational, forensic, research, and other settings. This chapter provides a historical and theoretical introduction to psychological testing, discussing basic terminology, major types of psychological tests, and types of scores that are used for interpreting results. Assumptions that underlie psychological assessment are considered as well as the rationale for using tests to make important decisions about people. Common applications and common criticisms of testing and assessment are reviewed, and characteristics of those involved in the testing process are presented. The chapter ends with a discussion of expected changes to current testing to meet new demands for test applications in the twenty-first century.

  • Achievement tests
  • Aptitude tests
  • Criterion-referenced score
  • Maximum performance tests
  • Measurement
  • Norm-referenced scores
  • Objective personality tests
  • Power tests
  • Projective personality tests
  • Reliability
  • Speed tests
  • Typical response tests
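Several of the key terms above concern score types. A norm-referenced score locates a raw score relative to a normative sample, often re-expressed on a mean-100, SD-15 standard-score metric (the convention many ability tests use) along with a percentile rank. A minimal sketch, in which the normative mean and SD are hypothetical illustration values:

```python
from statistics import NormalDist

def standard_score(raw, norm_mean, norm_sd, scale_mean=100, scale_sd=15):
    """Convert a raw score to a norm-referenced standard score on a
    mean-100, SD-15 metric, plus the percentile rank implied by a
    normal distribution of scores in the normative sample."""
    z = (raw - norm_mean) / norm_sd
    score = scale_mean + scale_sd * z
    percentile = NormalDist().cdf(z) * 100
    return round(score), round(percentile, 1)

# Hypothetical norms: a raw score of 34 against a normative mean of 28, SD 6
# (one standard deviation above the mean).
print(standard_score(34, 28, 6))  # (115, 84.1)
```

A criterion-referenced score, by contrast, is interpreted against a fixed performance standard (e.g., percent of items correct) rather than against other test takers, so no normative sample enters the calculation.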
Why do I need to learn about testing and assessment?





About this chapter

Reynolds, C.R., Altmann, R.A., Allen, D.N. (2021). Introduction to Psychological Assessment. In: Mastering Modern Psychological Testing. Springer, Cham. https://doi.org/10.1007/978-3-030-59455-8_1

IResearchNet

The realm of school psychology underscores the importance of accurate and holistic assessments to foster optimal student outcomes. This article delves into the multifaceted world of assessment in school psychology, exploring a variety of tools and methodologies ranging from academic achievement evaluations to neuropsychological assessments. By elucidating key historical shifts, embracing contemporary practices such as the Responsiveness to Intervention Model, and addressing controversial topics like bias in testing, this comprehensive overview aims to provide educators, psychologists, and stakeholders with a deeper understanding of the intricate processes underlying effective educational assessments. The article emphasizes the continual need for assessments that are both reflective of diverse student populations and adaptive to the ever-evolving educational landscape.

Introduction

Assessment in school psychology plays an indispensable role in shaping the educational trajectories and personal development of students. Rooted in the early days of psychology and education, the journey of assessment mirrors the evolution of both disciplines, reflecting changing paradigms, advancements in understanding, and adaptations to the diverse needs of learners (Fagan & Wise, 2007). The essence of these assessments extends beyond mere data collection or the assigning of numerical values. They serve as a compass, providing insights into a student’s cognitive landscape, emotional depth, social nuances, and academic abilities, paving the way for informed interventions and holistic development (Merrell, Ervin, & Gimpel, 2006).


Over the years, the methodologies employed in assessment have metamorphosed considerably. Traditional methods, once heavily reliant on standardized tests, have evolved to embrace a more inclusive, multifaceted approach, taking into account cultural contexts, neurodiversity, and the myriad factors that influence a student’s journey through the educational system (Jimerson, Stewart, Skokut, Cardenas, & Malone, 2008). The adaptation of these tools has been influenced by shifts in educational theories, groundbreaking discoveries in neuroscience, and a growing recognition of the importance of socio-cultural factors in learning and development (Smith, 2012).

The landscape of assessment in school psychology is rich and varied. It encompasses methodologies ranging from evaluations of academic achievements and cognitive capabilities to in-depth explorations of personality dynamics, social relationships, and behavioral patterns (Bailenson, 2018). Each tool and technique serves a unique purpose: guiding instruction, tailoring interventions, identifying potential challenges, and, importantly, catalyzing opportunities for students to flourish. As we delve into the realm of assessment in this article, we seek to shed light on its multifaceted nature, historical roots, practical applications, and the challenges it faces in the contemporary educational scenario.

The Essence of Assessment in Education

The overarching purpose of assessment in education transcends mere measurements or quantifications. It acts as a guiding star for educators, students, and stakeholders, shedding light on progress, areas of improvement, and potential avenues of intervention (Salvia, Ysseldyke, & Bolt, 2010). Assessments are not just endpoints; they are, in essence, part of an ongoing conversation about the learning journey, providing valuable feedback loops for all parties involved.

Historically, assessments were seen predominantly as tools for stratification, with a heavy focus on rankings and categorizations. The intent was to gauge the capabilities of students, often through one-size-fits-all standardized tests, which determined their educational and occupational paths (Phelps, 2005). However, as the educational arena evolved, so did the understanding of assessments. There was a growing acknowledgment that learning is a multifaceted process, shaped by a host of cognitive, emotional, social, and environmental factors. This realization heralded a shift from mere summative assessments, which typically culminate at the end of instructional units, to formative assessments, which are interspersed throughout the learning journey and provide real-time feedback to educators and learners (Black & Wiliam, 2009).

The role of assessments in facilitating individualized learning experiences cannot be overstated. With the recognition that every student possesses a unique learning style and pace, the need for differentiated instruction gained prominence. Assessments, in this context, provide a blueprint for educators to tailor their teaching strategies to meet the distinct needs of each student (Tomlinson, 2001). Moreover, the insights garnered from assessments go beyond academic proficiency. They offer a window into a student’s emotional well-being, socio-cultural experiences, motivation levels, and behavioral tendencies, allowing for a holistic understanding of the student (Volante & Beckett, 2011).

The value of assessment in education also extends to accountability. In modern educational systems, there’s an increasing demand for transparency, ensuring that institutions are delivering quality education and that students are making adequate progress. Assessments, when used ethically and effectively, serve as indicators of educational quality and student achievement, fostering trust and collaboration among educators, parents, policymakers, and community members (Hargreaves, Earl, & Schmidt, 2002).

In conclusion, the essence of assessment in education is multifaceted. It encompasses a broad spectrum of purposes, from informing instructional practices and promoting individualized learning to ensuring accountability and fostering collaborative educational ecosystems. By understanding and harnessing the power of effective assessment, educators and stakeholders can navigate the complex terrain of education, ensuring that every student receives the support, challenge, and guidance they need to thrive.

Historical Context of Assessments in School Psychology

Assessment within the realm of school psychology has witnessed profound evolutions over the past century. Understanding the historical context is imperative, as it provides a backdrop against which the present-day practices and philosophies can be appreciated and critically examined.

Early Beginnings

The genesis of formalized assessments in school settings can be traced back to the late 19th and early 20th centuries, when the first standardized intelligence tests were introduced. Spearheaded by pioneers like Alfred Binet and Théodore Simon, these tests aimed to identify children in schools who required specialized educational interventions (Binet & Simon, 1916). The Binet-Simon scale, a precursor to many modern intelligence tests, laid foundational principles for understanding mental age, a concept still relevant today.

World Wars and Expansion

The two World Wars catalyzed the widespread adoption and further development of standardized assessments, primarily due to the military’s requirement for large-scale intelligence and aptitude testing of recruits (Yerkes, 1917). This period saw the birth of several new tests and the introduction of group testing methods. The wartime needs indirectly contributed to bolstering the field of school psychology and expanding the realm of assessments beyond just intelligence testing.

The Era of Standardization

Post-war periods, particularly the 1950s and 1960s, witnessed a surge in the standardization of educational assessments. With the advent of psychometric theories and the establishment of resources like the Buros Mental Measurements Yearbook, there was a rigorous pursuit of validity, reliability, and norms in testing methodologies (Buros, 1978). During this period, assessments became more structured and scientifically validated, and began to encompass areas like personality, behavior, and achievement.

Inclusion and the Shift to Holism

The latter half of the 20th century was marked by increasing advocacy for inclusive education. Landmark legislations, such as the Individuals with Disabilities Education Act (IDEA) in the U.S., necessitated the evolution of assessment practices to ensure they catered to a diverse range of students, including those with special needs (Kavale & Forness, 2000). The recognition of multiple intelligences (Gardner, 1983) and the shift towards a more holistic understanding of students also nudged school psychology assessments to broaden their scope and methodologies.

Contemporary Shifts

In more recent decades, the field of school psychology has been deeply influenced by broader societal and educational shifts. With the advent of technological advancements, digital assessments have become more prevalent. Furthermore, there is an increasing emphasis on formative assessments, ongoing assessments designed to inform and adapt instruction in real time, as opposed to solely relying on summative, endpoint assessments (Shepard, 2000). Contemporary school psychology also integrates principles from applied behavior analysis, neuropsychological assessments, and socio-emotional evaluations to offer a comprehensive picture of student well-being and capabilities (Jimerson, Burns, & VanDerHeyden, 2016).

In retracing the history of assessments in school psychology, it becomes evident that this evolution has been a dialogic process, informed by societal needs, scientific advancements, and educational philosophies. As society continues to evolve, so too will the paradigms and practices of assessment, reaffirming its central role in understanding and facilitating student development.

Types of Assessments

In school psychology, a diverse array of assessments exists, each serving specific purposes and offering distinct insights into a student’s cognitive, emotional, behavioral, and social functioning. These assessments, administered with ethical considerations and in culturally responsive manners, form the bedrock of evidence-based decision-making in educational settings.

Intelligence Assessment

Rooted in early works by psychologists like Alfred Binet, intelligence assessments provide insights into a student’s cognitive abilities, often quantified as an Intelligence Quotient (IQ). These assessments evaluate various facets of intelligence, including fluid intelligence—the ability to reason and think abstractly—and crystallized intelligence, which encompasses accumulated knowledge (Cattell, 1963). Tools like the Wechsler Intelligence Scale for Children (WISC) are commonly employed for such purposes.
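IQ scores on modern instruments such as the WISC are "deviation IQs": a person's standing relative to an age-based norm group, rescaled to a distribution with mean 100 and standard deviation 15. A minimal sketch of that rescaling (the function name and the raw-score numbers are illustrative, not any published test's actual scoring procedure):

```python
def deviation_iq(raw_score, norm_mean, norm_sd):
    """Convert a raw score to a deviation IQ (mean 100, SD 15)."""
    # z expresses standing relative to the age-based norm group
    z = (raw_score - norm_mean) / norm_sd
    return 100 + 15 * z

# A raw score one standard deviation above the norm-group mean:
print(deviation_iq(60, norm_mean=50, norm_sd=10))  # -> 115.0
```

The key design point is that the score is norm-referenced: the same raw performance yields a different IQ depending on the norm group it is compared against.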

Academic Achievement Assessment

Unlike intelligence tests that gauge potential, academic achievement assessments measure a student’s current proficiency in academic domains such as reading, mathematics, and writing. These tests help identify areas of strength and weakness and can be used to monitor academic progress over time (Marston, 1989).
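Monitoring progress over time is often operationalized as a trend line fitted to repeated brief probes, as in curriculum-based measurement of oral reading fluency. A minimal sketch, with invented weekly scores, computing the least-squares slope (words gained per week):

```python
def trend_slope(scores):
    """Least-squares slope of equally spaced probe scores (gain per probe)."""
    n = len(scores)
    x_mean = (n - 1) / 2              # probe indices are 0, 1, ..., n-1
    y_mean = sum(scores) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(scores))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# Hypothetical words-correct-per-minute across six weekly probes:
weekly_wcpm = [42, 45, 44, 48, 51, 53]
print(round(trend_slope(weekly_wcpm), 2))  # -> 2.2 words per week
```

A slope like this can then be compared against an expected growth rate to decide whether the current instruction is producing adequate progress.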

Behavioral and Social–Emotional Assessment

Behavior is a vital component of a student’s school experience. Behavioral assessments, often informed by principles of Applied Behavior Analysis (Cooper, Heron, & Heward, 2007), delve into observable behaviors, seeking patterns, triggers, and outcomes. Social-emotional assessments, on the other hand, provide insights into a student’s emotional regulation, interpersonal skills, and overall well-being.

Personality Assessment

While personality is a complex construct, assessments in this domain aim to capture the underlying traits, motivations, and temperaments of students. Personality assessment instruments may range from rating scales to projective tests, where ambiguous stimuli are presented to respondents to interpret, revealing hidden aspects of their personalities (Meyer & Kurtz, 2006).

Neuropsychological Assessment

Neuropsychological assessments map the relationship between the brain’s function and observable behaviors. These evaluations, often necessitated by medical conditions, traumas, or developmental concerns, provide a comprehensive picture of cognitive processes such as memory, attention, and executive functions (Lezak, 2004).

Curriculum-Based and Performance-Based Assessments

These are intimately tied to specific curricula and pedagogical practices. Curriculum-Based Assessments (CBA) offer insights into a student’s performance on specific instructional materials (Deno, 1985). In contrast, Performance-Based Assessments require students to demonstrate skills and competencies through real-world tasks, often emphasizing the application of knowledge over rote memorization.

Authentic and Portfolio Assessments

Emerging from the quest to make assessments more reflective of real-world tasks, authentic assessments engage students in tasks that mirror real-life situations. Portfolios, collections of student work over time, represent another form of authentic assessment, spotlighting growth and showcasing a student’s multifaceted abilities (Herman, Aschbacher, & Winters, 1992).

In the grand tapestry of school psychology assessments, each type of assessment threads its unique narrative, contributing to a holistic understanding of students. Their judicious and ethical application facilitates not just academic growth, but the overall well-being and success of every student.

Contemporary Assessment Approaches

The modern era of school psychology is marked by a proliferation of assessment methodologies, each reflecting different facets of a student’s learning and development. While traditional forms of assessment relied heavily on standardized testing and norm-referenced evaluations, contemporary methodologies highlight the need for a more comprehensive, nuanced, and context-sensitive approach to understanding students’ abilities, achievements, and needs. Below, we delve into several of these contemporary assessment approaches.

Authentic Assessment

Moving away from rote memorization and standardized testing formats, authentic assessments immerse students in tasks that mirror real-world situations and challenges. These assessments, whether they be project-based tasks, simulations, or experiential activities, prioritize the applicability of knowledge and skills in practical, everyday contexts (Wiggins, 1993). The essence of authentic assessment lies in its capacity to provide educators with insights into how students might apply their learning in real-life situations.

Criterion-Referenced Assessment

Unlike norm-referenced tests, which compare an individual’s performance to a broader population, criterion-referenced assessments evaluate a student’s performance against predetermined, explicit criteria or benchmarks. This type of assessment is crucial for understanding whether a student has achieved specific competencies or skills, irrespective of how their peers perform (Popham, 1978).
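The same score can be interpreted both ways. A minimal sketch of the contrast (the benchmark, the norm-group scores, and the percentile definition are illustrative assumptions; real tests use much larger norm samples and published conversion tables):

```python
from bisect import bisect_left

def criterion_referenced(score, benchmark):
    """Mastery decision: compare the score to a fixed criterion."""
    return "mastered" if score >= benchmark else "not yet mastered"

def norm_referenced_percentile(score, norm_scores):
    """One simple percentile-rank definition: percent of norm group scoring below."""
    ranked = sorted(norm_scores)
    below = bisect_left(ranked, score)
    return 100 * below / len(ranked)

norm_group = [55, 60, 62, 68, 70, 74, 78, 81, 85, 90]
print(criterion_referenced(70, benchmark=75))      # -> not yet mastered
print(norm_referenced_percentile(70, norm_group))  # -> 40.0
```

Note that the criterion-referenced verdict would not change if every student in the norm group scored lower; the norm-referenced percentile would.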

Curriculum-Based Assessment (CBA)

Intimately tied to instructional content, Curriculum-Based Assessment evaluates a student’s performance and skills directly based on the objectives and content of the curriculum. It provides educators with real-time feedback on a student’s current instructional level, helping guide subsequent teaching strategies and interventions (Deno, 1985).

Outcomes-Based Assessment

Centered on the end-goals or desired outcomes of the educational process, this approach assesses the skills, knowledge, and attitudes students have garnered after an instructional period. By focusing on demonstrable outcomes or end results, outcomes-based assessment underscores the tangible skills and knowledge students will carry with them beyond the classroom (Spady, 1994).

Performance-Based Assessment

This assessment mode requires students to demonstrate specific skills and competencies through task completion. Whether it be scientific experiments, oral presentations, or other hands-on tasks, performance-based assessments offer a dynamic and interactive approach to evaluation, emphasizing the process as much as the final product (Herman, Aschbacher, & Winters, 1992).

Portfolio Assessment

Portfolios offer a holistic view of a student’s learning journey. Comprising various artifacts—from essays and projects to reflective entries and art pieces—portfolio assessments provide a multi-dimensional perspective on a student’s growth, efforts, and accomplishments over time. They also empower students to actively engage in their own assessment, fostering self-awareness and reflective practices (Paulson, Paulson, & Meyer, 1991).

Responsiveness to Intervention (RTI) Model

A transformative approach in the realm of school psychology, RTI is a multi-tiered system designed for early identification and support of students exhibiting learning and behavioral challenges. By providing targeted interventions and closely monitoring student progress, the Responsiveness to Intervention model ensures timely, data-driven decisions to support students’ diverse needs (Fuchs & Fuchs, 2006).

Each of these assessment approaches offers unique insights and benefits. Employed judiciously and in alignment with a student’s individual context, they collectively illuminate the complex tapestry of learning and development that unfolds within the educational setting.

Specialized Areas of Assessment

School psychology, with its multidimensional purview, necessitates specialized assessment tools and strategies tailored to address varied aspects of student development, needs, and contexts. These specialized assessments range from evaluating career aptitudes and classroom behaviors to understanding communication skills and written language proficiencies. Each of these methods offers valuable insights, guiding educational interventions, instructional strategies, and support systems to ensure holistic student development.

Career Assessment

As students transition from educational institutions to professional avenues, understanding their aptitudes, interests, and potential career trajectories becomes paramount. Career assessments, which encompass a spectrum of psychometric tools and structured inventories, assist in this endeavor. Tools like the Strong Interest Inventory (SII) or the Holland Code (RIASEC) model help gauge a student’s interests and potential career paths, ensuring informed decision-making as they embark on their professional journeys (Holland, 1997). Through these instruments, practitioners can guide students towards careers that align with their innate strengths, motivations, and passions.

Classroom Observation

The classroom, a microcosm of the broader educational ecosystem, presents a fertile ground for direct observation and assessment. Through structured classroom observation scales or anecdotal records, professionals assess student behavior, peer interactions, engagement levels, and responses to instructional strategies. Methods like the Classroom Assessment Scoring System (CLASS) provide standardized criteria for evaluating the quality of teacher-student interactions and the classroom environment, offering insights into areas requiring interventions (Pianta, La Paro, & Hamre, 2008).

Interviewing

Interviews, with their capacity for depth and nuance, offer a window into the subjective experiences and perspectives of students. Whether structured—with predetermined questions and scales—or unstructured, leaning on open-ended dialogues, interviews elicit insights into a student’s thoughts, emotions, aspirations, and challenges. These verbal exchanges, which may also involve parents, teachers, or peers, often complement other assessment tools, ensuring a well-rounded understanding of the student’s context (McIntosh & Morse, 2015).

Written Language Assessment

With written communication being a cornerstone of educational curricula, assessing a student’s writing skills and associated challenges is critical. Through standardized tools like the Test of Written Language (TOWL) or curriculum-based assessments, professionals can evaluate multiple facets of written language—grammar, vocabulary, coherence, and organization. Written language assessment also helps in identifying specific challenges, such as dysgraphia, guiding subsequent instructional strategies and interventions (Hammill & Larsen, 2009).

In sum, these specialized areas of assessment underscore the expansive nature of school psychology. By honing in on specific dimensions of student development and experience, these tools and strategies provide educators, psychologists, and stakeholders with the nuanced data and insights essential for shaping meaningful educational trajectories.

Controversies and Challenges in Assessment

Assessment in school psychology, while essential, is not without its share of challenges and controversies. The goal of achieving fair, valid, and comprehensive evaluations of student abilities and needs often clashes with inherent biases in testing materials, methodologies, and interpretations. Furthermore, foundational concepts, like the idea of a general intelligence factor, face academic scrutiny and debate. This section delves into two significant controversies that have shaped the discourse in educational assessment.

Bias in Testing

Bias in testing, whether overt or subtle, can distort the results of an assessment, leading to unfair or systematic disadvantages for certain groups of students. These biases can stem from cultural references, linguistic nuances, or content that may be unfamiliar or irrelevant to certain populations. For instance, a test developed in one cultural context may not be suitable or fair when administered in another. The result? Students from minority or different cultural backgrounds may find themselves at an unfair disadvantage, not because of their actual abilities but because of the test’s inherent biases (Helms, 2006). This challenge underscores the importance of culturally sensitive and contextually relevant assessments. As Neisser et al. (1996) pointed out, fair testing should be tailored to the experiences and backgrounds of the examinees, ensuring that the assessment does not privilege one group over another.

Psychometric g

At the heart of intelligence testing lies the controversial concept of ‘psychometric g’, or the general intelligence factor. Proposed by Charles Spearman in the early 20th century, ‘g’ suggests that various cognitive abilities, like mathematical skill or verbal prowess, share a common underlying factor. In essence, individuals with high ‘g’ would excel in multiple cognitive domains. While the idea has been influential, leading to the development of many intelligence tests, it has also been the subject of intense debate. Critics argue that intelligence is multi-faceted and cannot be reduced to a single factor. They point out that different cultures might prioritize different cognitive skills, making the universal application of ‘g’ problematic. Furthermore, Gardner’s (1983) theory of multiple intelligences proposes a model where various forms of intelligence (e.g., linguistic, logical-mathematical, musical) exist relatively independently of each other, challenging the unitary nature of ‘g’.
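Spearman's observation was statistical: subtest scores tend to correlate positively with one another (the "positive manifold"), and a single dominant factor can summarize much of that shared variance. A toy sketch of the idea, extracting the dominant eigenvector of a correlation matrix by power iteration; the subtest labels and correlation values below are invented for illustration, and real factor analyses are considerably more elaborate:

```python
def first_eigenvector(matrix, iters=200):
    """Power iteration: dominant eigenvector of a symmetric matrix."""
    v = [1.0] * len(matrix)
    for _ in range(iters):
        # Multiply the matrix by the current vector, then renormalize
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Invented correlations among four subtests (verbal, math, memory, speed):
R = [
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
]
loadings = first_eigenvector(R)
print([round(x, 2) for x in loadings])  # all positive, per the positive manifold
```

The critics' point survives this sketch: a dominant factor can always be extracted from positively correlated scores, but that does not by itself settle whether ‘g’ is a real cognitive entity or a statistical summary.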

The complexities inherent in assessing human cognition and ability mean controversies are somewhat inevitable. The field’s challenge lies in navigating these controversies, refining assessment tools, and methodologies to ensure they genuinely reflect the rich tapestry of human potential and experience.

Resources and Publications

In the evolving domain of school psychology and assessment, staying abreast of the latest tools, techniques, and methodologies is pivotal. Resources and publications, serving as repositories of knowledge and critical evaluation, facilitate this continuous learning. Two particularly influential resources in the realm of educational assessment are the Buros Mental Measurements Yearbook and the art and science of crafting psychological reports.

Buros Mental Measurements Yearbook

A cornerstone in the landscape of psychological and educational assessment, the Buros Mental Measurements Yearbook (BMMY) stands as a critical resource for professionals in the field. This publication, initiated in the 1930s, provides comprehensive reviews and critical evaluations of commercially available tests in areas like psychology, education, and business. Each edition presents detailed information about test purpose, pricing, test developer contact information, and in-depth reviews penned by experts. For practitioners, the BMMY is an invaluable tool, offering insights into the reliability, validity, and utility of numerous assessments, ensuring informed choices in test selection (Buros Center, 2020). Furthermore, its emphasis on critical, peer-reviewed evaluations ensures that the tests included are held to the highest standards of empirical and theoretical rigor (Geisinger, 2013).

Reports (Psychological)

Assessment, no matter how meticulously conducted, is only as valuable as the clarity and utility of its communication. In this context, psychological reports become paramount. These documents, which encapsulate findings from assessments, are shared with various stakeholders—parents, teachers, administrators, and sometimes the students themselves. A well-crafted report elucidates the assessment’s findings, interprets the data in the context of the student’s unique circumstances, and offers tailored recommendations. However, the art of report writing goes beyond mere clarity. Collaboration—ensuring that reports are accessible and actionable for their intended audiences—is pivotal (Shapiro, 2004). Furthermore, as Kamphaus and Frick (2005) pointed out, psychological reports must strike a delicate balance—being comprehensive yet concise, technical yet understandable, and objective yet empathetic.

These resources and publications underscore the foundational bedrock upon which effective school psychology and assessment practices are built. By offering rigorous evaluations, critical insights, and clear communication methodologies, they elevate the quality and impact of assessments in educational settings.

The realm of school psychology, with its intricate tapestry of cognitive, emotional, social, and behavioral facets, demands a multifaceted approach to assessment. Gone are the days when a singular test or metric could encapsulate the complexities of a student’s learning experience. Instead, the modern landscape champions a holistic methodology, one that considers the myriad factors shaping a student’s academic and personal journey.

The importance of this comprehensive approach cannot be overstated. By acknowledging and evaluating the diverse dimensions of a student’s experience, professionals can tailor interventions, strategies, and supports with greater precision and efficacy (Reschly & Ysseldyke, 2002). Such a nuanced approach does more than just diagnose; it helps shape a pathway, grounded in understanding and empathy, that guides students toward their fullest potential.

As we gaze ahead, the assessment landscape in school psychology promises further evolution. Technological advances, increasing global interconnectedness, and a deeper understanding of neurodiversity are likely to shape new methodologies and tools. Moreover, as the importance of socio-emotional learning gains traction, assessments will need to transcend traditional academic metrics, offering insights into aspects like emotional intelligence, resilience, and interpersonal skills (Durlak et al., 2011).

Yet, while the tools and techniques might evolve, the foundational principles remain consistent. At its heart, assessment in school psychology is a means to an end—a tool that, when wielded with care, expertise, and empathy, can illuminate the path towards meaningful, sustained learning and personal growth (Jimerson et al., 2006).

In conclusion, the journey of assessment in school psychology, enriched by its history, guided by its contemporary practices, and propelled by future prospects, embodies a profound commitment: a commitment to understanding, supporting, and nurturing every student’s unique trajectory, ensuring that every child, irrespective of their challenges, finds their place in the tapestry of learning and growth.

References:

  • Almond, P., Steinberg, L. S., & Mislevy, R. J. (2015). Design patterns for learning and assessment: Facilitating the introduction of a complex simulation-based learning environment into a community of instructors. Journal of Research in Science Teaching, 52(2), 180-216.
  • Bailenson, J. (2018). Experience on demand: What virtual reality is, how it works, and what it can do. W.W. Norton & Company.
  • Binet, A., & Simon, T. (1916). The development of intelligence in children. Williams & Wilkins.
  • Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation, and Accountability, 21(1), 5-31.
  • Buros, O. K. (1978). The eighth mental measurements yearbook. Buros Institute of Mental Measurements.
  • Buros Center for Testing. (2020). The mental measurements yearbook series. Buros Center for Testing.
  • Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1-22.
  • Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Pearson.
  • Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219-232.
  • Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405-432.
  • Fagan, T. K., & Wise, P. S. (2007). School psychology: Past, present, and future (3rd ed.). National Association of School Psychologists.
  • Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 93-99.
  • Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. Basic Books.
  • Geisinger, K. F. (2013). APA’s model act for state licensure of psychologists: Another step in a long journey. American Psychologist, 68(7), 511.
  • Hammill, D. D., & Larsen, S. C. (2009). Test of Written Language (TOWL-4). Pro-Ed.
  • Hargreaves, A., Earl, L., & Schmidt, M. (2002). Perspectives on alternative assessment reform. American Educational Research Journal, 39(1), 69-95.
  • Helms, J. E. (2006). Fairness is not validity or cultural bias in racial/ethnic assessment by psychologists. American Psychologist, 61(8), 845.
  • Herman, J. L., Aschbacher, P. R., & Winters, L. (1992). A practical guide to alternative assessment. Association for Supervision and Curriculum Development.
  • Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Psychological Assessment Resources.
  • Jimerson, S. R., Oakland, T. D., & Farrell, P. T. (2006). The handbook of international school psychology. Sage Publications.
  • Jimerson, S. R., Stewart, K., Skokut, M., Cardenas, S., & Malone, H. (2008). How many school psychologists are there in each country of the world? International estimates of school psychologists and school psychologist-to-student ratios. School Psychology International, 29(1), 79-88.
  • Jimerson, S. R., Burns, M. K., & VanDerHeyden, A. M. (2016). Handbook of response to intervention: The science and practice of multi-tiered systems of support (2nd ed.). Springer.
  • Kamphaus, R. W., & Frick, P. J. (2005). Clinical assessment of children’s intelligence: A handbook for professional practice. In Clinical assessment of child and adolescent intelligence (pp. 3-17). Springer.
  • Kavale, K. A., & Forness, S. R. (2000). History, rhetoric, and reality: Analysis of the inclusion debate. Remedial and Special Education, 21(5), 279-296.
  • Lezak, M. D. (2004). Neuropsychological assessment (4th ed.). Oxford University Press.
  • Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What is it and why do it. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18-78). Guilford Press.
  • McIntosh, M. J., & Morse, J. M. (2015). Situating and constructing diversity in semi-structured interviews. Global Qualitative Nursing Research, 2, 2333393615597674.
  • Merrell, K. W., Ervin, R. A., & Gimpel, G. A. (2006). School psychology for the 21st century: Foundations and practices. Guilford Press.
  • Meyer, G. J., & Kurtz, J. E. (2006). Advancing personality assessment terminology: Time to retire “objective” and “projective” as personality test descriptors. Journal of Personality Assessment, 87(3), 223-225.
  • Neisser, U., Boodoo, G., Bouchard, T. J., Boykin, A., Brody, N., Ceci, S. J., … & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77.
  • Paulson, F. L., Paulson, P. R., & Meyer, C. (1991). What makes a portfolio a portfolio? Educational Leadership, 48(5), 60-63.
  • Phelps, R. P. (2005). The rich, robust research literature on testing’s achievement benefits. The Phi Delta Kappan, 86(7), 500-507.
  • Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System (CLASS) manual: K-3. Brookes Publishing.
  • Reschly, D. J., & Ysseldyke, J. E. (2002). Paradigm shift: The past is not the future. In Second national research symposium on limited English proficient student issues: Focus on evaluation and measurement (Vol. 1, pp. 1-48). U.S. Department of Education, Office of Bilingual Education and Minority Languages Affairs (OBEMLA).
  • Salvia, J., Ysseldyke, J., & Bolt, S. (2010). Assessment in special and inclusive education. Cengage Learning.
  • Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). Guilford Press.
  • Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.
  • Smith, L. T. (2012). Decolonizing methodologies: Research and indigenous peoples. Zed Books Ltd.
  • Spady, W. G. (1994). Outcome-based education: Critical issues and answers. American Association of School Administrators.
  • Tomlinson, C. A. (2001). Differentiated instruction in the regular classroom: What does it mean? How does it look? Understanding Our Gifted, 14(1), 3-6.
  • Volante, L., & Beckett, D. (2011). Formative assessment and the contemporary classroom: Synergies and tensions between research and practice. Canadian Journal of Education, 34(2), 239-255.
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
  • Wiggins, G. (1993). Assessing student performance: Exploring the purpose and limits of testing. Jossey-Bass.
  • Yerkes, R. M. (Ed.). (1917). Psychological examining in the United States Army. Government Printing Office.

The Oxford Handbook of Personnel Assessment and Selection

18 Individual Psychological Assessment

S. Morton McPhail, Valtera Corporation, Houston, TX

P. Richard Jeanneret, I/O Psychologist, Retired, Houston, TX

  • Published: 21 November 2012

We concentrate on six major themes that organize both the scientific and practical knowledge regarding individual psychological assessment. After providing a definition and associated contextual background, we discuss the various organizational purposes for which individual assessments can be used, followed by a description of how assessments should be designed to satisfy those purposes. A fourth theme is devoted to the “nuts and bolts” of developing and implementing an assessment process, followed by a discussion of integrating assessment data and presenting it to both the assessee and the organizational representatives who will rely upon the information. After reviewing several “special issues,” our concluding comments describe the state of the science and art, along with our outlook and recommendations for the future of individual psychological assessment.

Faced with filling the President and CEO position in a large marine transport company, the Chair of the Board and its Executive Committee find themselves in a dilemma. Several highly qualified candidates have been identified, any one of whom appears to have the capacity to competently lead the organization. The senior executives know, however, that the industry is facing a difficult and complex period of consolidation with increased scrutiny by regulatory agencies. The next few years will likely prove crucial for the company, as it must navigate a narrow path that balances growth against resource availability and operational efficiency against an increased concern for safety. Therefore, attention will be less on exercising competent business judgment and more on dealing with the issues in a way that focuses the organization's perspective and instills confidence and a sense of energy and urgency.

A very large service organization is in a struggle for its existence as changes in both technology and culture have reduced the demand for its core services. As it works to reinvent itself in a changed world, it must continue to provide its current services with greater efficiency while simultaneously identifying and initiating an array of new offerings. Senior management knows that many of the organization's senior executives will be retiring soon, and a new generation of leaders must be ready to play an increasing role in the changing organization. But it is not clear what development is needed by those currently in upper management positions who represent the organization's “bench strength” to prepare them for these leadership positions.

Every time an “old-line” manufacturing organization fills a plant manager position the concern arises about how well the new person will fit into the existing management team, both within the facility and at corporate. Candidates have varied experience and education, but successful managers must fit into an existing organizational culture that has developed over decades. Experience has taught the hiring managers that a variety of different backgrounds can provide the knowledge and skills necessary to do the job, but the difficulty comes in understanding the style and interpersonal characteristics that each candidate brings to the table.

A manager is now in a new position and has to deal with problems different from any previously faced. The manager has been unable to develop rapport with peers or staff and is falling behind on crucial projects requiring completion. Meetings led by the manager often end without reaching conclusions, and the people being directed feel frustrated and even a little angry. The senior manager to whom the manager reports is becoming impatient with the lack of progress and has begun making frequent visits to the unit, asking questions and giving directions inconsistent with the manager's plans. The manager is aware that the company has a coaching program and has contacted the human resources liaison to request assistance.

Definition of Individual Psychological Assessment

In each of these situations and a wide variety of others, the key requirement is to understand an individual's personal characteristics, including personality dimensions, problem solving capabilities, interpersonal and leadership styles, interests, values, and motivations, and to integrate those characteristics with specific work situations and requirements. It is this descriptive and integrative function that is the hallmark of individual psychological assessment. In the introductory chapter to their edited book on assessment, Jeanneret and Silzer (1998) defined individual assessment as “a process of measuring a person's knowledge, skills, abilities, and personal style to evaluate characteristics and behavior that are relevant to (predictive of) successful job performance” (Jeanneret & Silzer, 1998, p. 3).

This definition emphasizes the measurement of a person's characteristics specifically in the context of job performance. These assessments employ both measurement tools (broadly including “tests,” simulations, self-descriptions, and interviews) and professional judgment focusing on the range of normal human behavior in the work setting. For purposes of this chapter, we differentiate individual psychological assessment from other (though sometimes similar) assessment methods and purposes. In particular, we are not considering assessments conducted in clinical, counseling, or educational settings. The purposes, functions, and techniques to be included in this context specifically exclude psychodiagnosis (even if a measure of nonnormal functioning is included in an assessment test battery for some specific purpose), neurological evaluation, counseling (other than nontherapeutic coaching), or educational evaluation.

The concept of “holistic” assessment of individuals that integrates information about the entire person can be traced [as can so much of the foundation of Industrial-Organizational (I/O) psychology] to the urgent days of World War II. In the case of mental ability assessment, the origins of widespread use can be found even earlier in World War I (Landy & Conte, 2007). Highhouse (2002) has provided a detailed history of individual assessment beginning with the “total-person concept” used by the German army just prior to World War II. “Clinical” measures were often part of these assessment processes, including use of projective instruments such as the Rorschach and the Thematic Apperception Test (TAT). By the late 1940s and early 1950s business and industrial organizations began adapting the early techniques for use in selecting and placing managers and executives. Assessment centers derived from the same beginnings (Highhouse, 2002), but by the 1960s, the techniques had begun to diverge (Bray & Grant, 1966).

In many respects the divergence was one of methodology rather than content. Indeed, many of the same constructs are measured in both techniques, and in current practice often the same or very similar methods are used. Although there are some significant areas in which individual psychological assessment and assessment centers overlap, primary points of distinction lie in two aspects. First, assessment centers are by definition associated with evaluation by multiple assessors (International Task Force on Assessment Center Guidelines, 2000). In contrast, Ryan and Sackett (1987) defined individual assessment for the purposes of their survey as “one psychologist making an assessment decision for a personnel-related purpose about one individual” (p. 456). Second, individual assessment relies on the knowledge, training, and experience of a psychologist to interpret and integrate an array of information in complex situations for which no statistical model is available or sufficient. [As Guion (1998) noted, assessments actually may be conducted by a wide variety of nonpsychologists; however, for purposes of this chapter, the definition will be limited to those conducted by a psychologist who has been trained to conduct assessments for use in workplace settings.] Assessment centers rely on the accumulation (either statistically or by consensus) of discrete, formal ratings of specific dimensions using defined and anchored rating scales, with the intent to structure judgment by focusing on observations of specified behaviors. It should be noted that although it is still reasonable to differentiate these approaches on the individual nature of the assessment, the divergence has narrowed in some applications with the inclusion of multiple assessors and behaviorally based rating scales among the tools of individual assessment as part of the integrative process.

We begin our discussion by offering a framework for individual assessment predicated on a focus on the “whole person” approach to describing human behavioral tendencies arising from the complex interaction of background, experience, cognition, personality, and situational circumstances. We will discuss the thorny debate about how that holistic integration is accomplished. We consider the prevalence of individual assessment and some of the current forces shaping its growth. From there, we examine the purposes for which assessments are conducted and the ways in which those different purposes affect how individual assessments are accomplished and reported. The practical aspects of designing both the content and procedures for individual assessments must take into account simultaneously the organizational context and the objectives of the assessment program. Developing the assessment design into a practical, implementable program begins with work analysis to define the requirements and demands to be assessed and moves into selection or development of instrumentation and creation of reporting and control mechanisms to ensure legal, professional, and ethical compliance, which we discuss at some length. With a program developed, we return again to the issue of integration, which is at the heart of our framework for assessment, and which we discuss from the perspective of three models. That section concludes with a further discussion of reporting and feedback. We follow with a discussion of several special issues in assessment before concluding with five points that we think summarize the state of practice and science in individual assessment.

Frameworks for Individual Psychological Assessment

There are a number of alternative (or in some cases complementary) theoretical positions available to describe individual differences and personality functioning (cf. Hogan & Hogan, 1998). Because psychological assessment arose in part from clinical approaches utilizing psychodynamic conceptualizations, individual assessment is to some extent predicated on theories that postulate complex interactions of brain function, experience, cognition, personality characteristics, and situation, resulting in consistent behavioral patterns. Although neither deterministic nor inevitable, such patterns describe likely behavior across similar situations (behavioral tendencies). It is the integrative, holistic nature of these interactions that forms a theoretical background for individual psychological assessment. As Landy and Conte (2007) pointed out, “behavior is complex and people are whole” (p. 89).

The question that arises, then, for individual assessment is the extent to which assessors can accurately (validly) integrate varied information about individuals, their backgrounds, the anticipated position requirements, and organizational circumstances to make meaningful descriptions and predictions about them. There has been a dearth of research directly addressing the predictive capability of individual psychological assessment, likely due to the twin limitations of small sample sizes and incompletely or inadequately defined outcome criteria (Ryan & Sackett, 1998). As Guion indicated when discussing this issue:

The number of cases available for evaluating such a program [referring to assessment situations that do not include multiple assessees across a large number of positions] is usually much smaller; sometimes n = 1 for a given client. In such cases, the issue of whether clinical or statistical prediction is better is moot; clinical prediction is all that is available. (1998, p. 635)

However, the available research ranging from Meehl's (1954) seminal monograph to Highhouse (2008) has consistently pointed to the greater correlations of statistically (or even just mechanically) combined information with criterion measures over holistic judgmental methods. Guion (1998) identified six criticisms of individual psychological assessment, including (1) a failure to provide evidence of the job relevance of the constructs being assessed, (2) assessor and interassessor unreliability, (3) overreliance on a few highly salient variables, (4) lack of evidence for the job relevance of personality characteristics measured, (5) inadequate assessment of interpersonal skills, and (6) concerns about intrusion into personal information that violates assessee privacy. The first five of these contribute to weaknesses in this research. However, he went on to state that “[a]ll of these points can be answered by appropriate design” (p. 637).

As Ryan and Sackett (1998) pointed out, there are a number of complications surrounding the research on individual psychological assessment, including potential problems with two unpublished studies that they describe as providing some evidence for the criterion-related validity of assessments. They identified five issues that impact the conduct of research in this area and called for improved and different validation strategies to overcome them:

1. Clear definition of the predictor and for what purpose validity is being evaluated;

2. The impact of measurement considerations, such as restriction in range, sample representativeness, and potential criterion contamination, on the results;

3. How assessment results are subsequently integrated into the organization's decision-making process;

4. How the assessment data are to be considered, for example, as dimension ratings, integrated judgments, narrative descriptions, or overall recommendations; and

5. The role and validity of predictions as separate from descriptions of the assessee.

Although a full account of the debate over the issues of individual assessments’ validity and the virtues and vices of holistic judgmental integration is beyond the scope of this chapter, a variety of responses to these criticisms have been offered. Ryan and Sackett (1987) pointed to a body of older research (largely published in the 1960s) that indicates positive relationships between assessment results and supervisory ratings. Prien, Shippmann, and Prien (2003) contended that the interactions between various personal characteristics, competencies, and the specific situation require a configural interpretation of the data. Moses and Eggebeen (1999) argued that “executive selection and succession planning is based on a sophisticated type of behavioral profile matching” (p. 202). Similarly, Jeanneret and Silzer (1998) suggested that individual psychological assessment “involves making a series of judgments based on inferences and then integrating these judgments into descriptions” (p. 13). These inferences are causal (i.e., they influence behavior), imply positive or negative outcomes for the situation, and reflect the strength of the impact on predicted behavior.

In addition, the inferences imply multiple differing criteria across organizations, situations, and positions as well as within candidates. The “criterion problem” is particularly complex for the kinds of positions for which individual psychological assessments are usually conducted. “Success” may require achieving multiple outcomes such as financial goals, production efficiency, creative requirements, employee engagement, effective communications, and improved relationships with peers and others (including external constituencies, e.g., the public, stockholders, bargaining units, regulators, customers, and vendors).

Even critics of the judgmental approach acknowledge that robust and accurate statistical models of complex interactive (e.g., configural) or curvilinear relationships are unlikely because of the extremely high data demands. While pointing out that data from clinicians’ predictions of diagnostic categories indicated that linear models provide a statistically significant prediction even of configural interpretations, Wiggins (1973) noted that the complexity of the required statistical equations imposes constraints on finding configural models. Even with better statistical tools, the vexing issue of the availability of adequate sample sizes in the case of individual psychological assessment remains. Citing Hoffman (1968), Wiggins concluded that the failure to find such models “may be due to the fallibility of statistical methods rather than due to inherent lack of configurality [sic] in behavior” (1968, p. 63, quoted in Wiggins, 1973, p. 177). Thus, although the linear model may be powerful enough to produce adequate approximations in predicting specific criteria, individual configural evaluation of assessment data is likely to be essential for descriptive purposes.

It has been noted that although assessors conducting individual psychological assessments seldom have well-developed statistical models for combining data, other less rigorous methods are available to standardize data integration (Highhouse, 2002). Inclusion of human judgment with mechanical combination processes may allow for improved prediction (Whitecotten, Sanders, & Norris, 1998; Ganzach, Kluger, & Klayman, 2000; Wunder, Thomas, & Luo, 2010). Even Meehl (1954) acknowledged what became known as the “broken-leg” cue at which clinical judgment excelled. Greater importance, however, may lie in the definition of the purpose of individual psychological assessments. To the extent that assessment is not focused on prediction of a single (or even a combination) criterion (such as success in a particular position), but rather on description of assessees’ abilities, competencies, personality characteristics, and behavioral and interpersonal tendencies, the role of judgmental interpretation and integration may have much greater value. Statistical models may excel at answering the question of “what” the assessee will do with respect to some specific criterion of interest, but they are likely to be less efficient, and possibly even unavailing, in answering the questions of “how” or “under what differing conditions.”
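The distinction between mechanical (linear) combination and a configural override such as the “broken-leg” cue can be sketched in a few lines of code. This is a minimal illustration only: the dimensions, weights, and cutoff below are hypothetical, not drawn from any published assessment battery or from the authors' practice.

```python
# Illustrative sketch: mechanical (linear) vs. configural combination of
# standardized assessment dimension scores. All dimensions, weights, and
# cutoffs are hypothetical examples, not values from any real instrument.

def mechanical_score(scores, weights):
    """Weighted linear composite of standardized (z-score) dimensions."""
    return sum(weights[d] * scores[d] for d in weights)

def configural_flag(scores):
    """A simple configural rule: strong scores elsewhere do not offset a
    very low interpersonal score (a 'broken-leg'-style override)."""
    return scores["interpersonal"] < -1.5

candidate = {"cognitive": 1.2, "interpersonal": 0.4, "conscientiousness": 0.8}
weights = {"cognitive": 0.5, "interpersonal": 0.3, "conscientiousness": 0.2}

composite = mechanical_score(candidate, weights)
print(round(composite, 2))          # -> 0.88 (linear composite)
print(configural_flag(candidate))   # -> False (no override triggered)
```

The point of the sketch is the structural difference: the linear composite treats every dimension as compensatory, while the configural rule encodes a noncompensatory judgment of the kind assessors apply holistically.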

The descriptive function of individual psychological assessment supports many of the purposes that underlie its multiple uses (see the discussion below of assessment purposes). Effective description is predicated on a clear understanding of the role requirements expected of the assessee as well as sufficient knowledge and understanding of the psychometric qualities of the measures used to allow interpretation of the information in the role context. The purpose of description in this context is to provide information either to decision makers or to assessees. Some practitioners eschew providing explicit recommendations regarding hiring or promotion, preferring to provide decision makers with information intended to guide them in further information gathering and evaluation (Guion, 1998). It is important to recognize that the information provided is usually in the form of descriptions of behavioral tendencies, not predictions of what the assessee will do or say.

Scope/Use of Individual Psychological Assessment

The extent of application of individual psychological assessment is difficult, perhaps impossible, to estimate. The highly confidential nature of the information and, in many cases, of the process itself makes accumulation of data about its use rare. Because of the sensitive decisions being made, some organizations refuse to even discuss the selection processes they use, and many hold the identity of those chosen to participate in such processes as highly confidential. Moreover, as Guion (1998) noted, there are others besides psychologists conducting assessments of various kinds. Based on their 1986 survey of members of the Society for Industrial and Organizational Psychology (SIOP), Ryan and Sackett (1987) received 316 responses, with 51.6% indicating that they perform individual psychological assessments. Jeanneret and Silzer (1998), using data from Howard's (1990) survey of SIOP members, reported that 48% of members working as external consultants and 26% of those working in business organizations perform such assessments. By their estimates, the external consultants were performing a median of two to three individual assessments each week (Jeanneret & Silzer, 1998). Thornton, Hollenbeck, and Johnson (2010) recently compiled informal estimates of the number of assessments conducted annually in the United States in 2007, with the results ranging from 15,000 to 51,000. Although these estimates lack precision, it should be clear that there are a great many professionals conducting a great many individual assessments. The practice of individual psychological assessment is extensive and impactful, especially so because of the tendency for assessments to be conducted for senior or executive management positions.

There do not seem to be any signs of the practice of individual psychological assessment shrinking. Highhouse (2002) likened individual assessment to Astin's (1961) description of psychotherapy as having been accorded “functional autonomy,” implying that its prevalence has given it acceptance unrelated to its research base. Regardless of one's view of the efficacy of such assessments, it does seem that the practice of individual psychological assessment has flourished and is likely to continue to do so. Indeed, new developments in practice may be instrumental both in improving its efficacy and in expanding its use.

With the availability of technological enhancements, individual assessment may be conducted both more efficiently and more effectively. This trend has been growing for some time. From the mid-1990s on, psychometrically sound personality inventories have been available in online versions. Despite initial (and even some ongoing) concerns about assessment equivalence between paper-and-pencil and computerized tests, especially for speeded measures (Mead & Drasgow, 1993; Reynolds & Rupp, 2010), and proctored versus unproctored testing environments (Tippins, Beaty, Drasgow, Gibson, Pearlman, Segall, & Shepherd, 2006; Tippins, 2009; Wunder, Thomas, & Luo, 2010), the availability and use of technologically enabled assessment tools have expanded dramatically. Such tools are not limited to cognitive and personality tests. Video and interactive video have improved in quality and declined in cost with ever better platforms and software for creating, editing, and presenting such content. This modality is both highly engaging and able to be administered remotely at reduced cost.

Even the venerable in-basket exercise has been “upgraded” to computerized versions. Initially, simulations were modified to include an electronic interface to improve their fidelity to today's workplace by adding telephone, fax, email, and even Web inputs. As the use of electronic devices has spread to the C-suite, simulations have begun to include email platforms for responses, handheld devices for search and time management, word processors for preparing documents, spreadsheets for conducting analyses, and presentation software, all to match what managers (even senior managers) experience as part of their work world. Indeed, assessment simulations lacking such features have begun to be viewed by increasingly younger participants as unrealistic and quaint.

The movement toward technology has extended still further to placing the simulation elements into an online environment, allowing them to be administered remotely. Indeed, efforts have begun to reduce assessment time by standardizing the review of in-basket responses so that automated scoring algorithms can be developed. Even the interview has not been immune to the march of technology. Some assessors have long conducted interviews by telephone, and advances in video-conferencing quality and bandwidth have led to its use for conducting a variety of interactive simulations from role-plays to individual interviews.

The result has been that one of the hallmarks of individual psychological assessment, that of the high-touch encounter between the assessee and a psychologist, is decreasing due to the cost and time saving of technology. The extent to which this trend will continue and how it will affect the practice of individual assessments remain unknown at this point, but it seems unlikely to be reversed. Certainly, for those who favor standardization of administration and mechanization of integration, these trends would seem to provide considerable opportunity to improve the efficacy and validity of assessments. For those whose theoretical and experiential models place strong emphasis on personal contact and interaction with assessees, no doubt there will be a number of challenges to be addressed.

Organizational Purposes

Individual assessments are used for a wide variety of purposes within organizations. The content of the assessment (i.e., the constructs to be assessed and the tools to be used) will depend to a large degree on two factors: (1) the job requirements as determined by a job analysis and (2) the reason that the assessment is being conducted. Similarly, the nature, style, and content of the resulting report (both to the organization and to the assessee) will be to a large degree determined by the purpose underlying the assessment.

Selection/Promotion Decisions

A common (perhaps the most common) purpose of individual psychological assessment is to assist managers in making decisions about whom to hire or promote. Some psychologists offer specific recommendations about the pending employment action. Ryan and Sackett (1987) reported that approximately 59% of their respondents (when combined with a follow-up survey from an intentionally different population, this proportion was essentially the same at 60%; Ryan & Sackett, 1992) indicated that they provide a specific recommendation, rising to 68% for hiring decisions. Sometimes the recommendations are accompanied by a grouping modifier indicating either the strength of the recommendation or the confidence the assessor has in it. Thus, such recommendations may be couched as hire (or promote)/don't hire (or promote) and may be accompanied by terms such as “highly recommended,” “recommended,” “recommended with reservations,” and “not recommended,” or may be couched more obliquely as “red,” “yellow,” or “green.” Seldom are specific numerical scores provided, but the recommendations are occasionally presented as numbers on a predefined scale or (rarely) an estimated percent probability of success.
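The banding of an overall evaluation into categorical recommendations of this kind can be sketched as a simple mapping. This is purely illustrative: the cutoffs and the choice of a standardized composite as input are hypothetical assumptions, not a scheme reported in the survey research cited above.

```python
# Hypothetical sketch of banding an overall assessment composite (assumed
# to be on a standardized z-score-like scale) into the categorical
# recommendations described in the text. Cutoffs are illustrative only.

def recommendation(composite):
    """Map an overall composite to a red/yellow/green recommendation band."""
    if composite >= 1.0:
        return "green"    # e.g., "highly recommended"
    if composite >= 0.0:
        return "yellow"   # e.g., "recommended with reservations"
    return "red"          # e.g., "not recommended"

print(recommendation(1.3))   # -> green
print(recommendation(0.2))   # -> yellow
print(recommendation(-0.5))  # -> red
```

The design point such a scheme makes explicit is that categorical reporting deliberately discards score precision in exchange for interpretability by nonpsychologist decision makers.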

Other psychologists do not include specific action recommendations in their reports. (Note that there is within-assessor variation in that an individual psychologist may provide specific recommendations in some cases but not in others.) For these assessors, the description of the candidate and relative strengths and weaknesses are the focus of the assessment. Narrative reports come in a variety of forms (see below for more detail) and may include an overall narrative, dimensional (or competency) descriptions, lists of strengths/weaknesses, on-boarding guidance, developmental suggestions, and coaching recommendations. Some descriptive reports may be accompanied by a set of questions for the hiring manager to ask in a follow-up interview or questions intended to prod the manager to seek additional information to inform the decision (Guion, 1998).

Both styles of reporting are targeted at assessing the extent to which the candidate has characteristics that suggest she or he will display one or more types of fit.

Fit to Job/Role

It is a very common (possibly universal) practice for assessors to obtain information about the target position prior to the assessment. The means by which this analysis is accomplished vary greatly, ranging from information obtained in interviews with managers about the position to the use of structured job analysis methods. At the least, assessors should review the official description of the target position. The purpose of the assessment then is to ascertain the extent to which the assessee displays abilities and characteristics consistent with those required for the position or role.

Here, the term role is used to indicate that at the senior levels of an organization, it is possible, even likely, that the specific functional activities performed may be highly subject to idiosyncratic interactions of the incumbent, responsibilities, and required activities. It is not uncommon to find that the target position is ill-defined, undergoing change, or highly varied across incumbents. For example, the requirements for the position of plant manager may be different depending on (among other things) facility size, location, past history, safety record, and existence of bargaining unit agreements. In addition, the target may be a “role” as noted above, such as “executive,” “project manager,” or “sales representative,” rather than a particular position. The salient characteristics needed may vary across hiring or promotional events. This differentiation also occurs across organizations. The particular set of competencies and personal characteristics that may be highly successful for a leader in one organizational context may be less appropriate or even inappropriate in another (see the discussion of organizational fit below).

The psychologist must structure the assessment process to obtain data bearing on the abilities, competencies, and characteristics required by the position or role in the context in which the hiring manager must evaluate and decide between candidates. The resulting candidate description, therefore, must be responsive to the particular hiring or promotion event, and the same set of competencies and personal characteristics may not be required, or required in the same way, for every event. This need for flexibility in relating the description of the assessee to the particular target position or role argues against a rote or mechanistic approach to interpreting and integrating the assessment data.

Fit with Manager/Executive Leadership

When assessing candidates for senior-level positions, the extent of fit with the organization's existing leadership is not a trivial issue. In some cases, it is possible to conduct assessments of all members of the senior team, allowing the psychologist to examine similarities and differences and evaluate the interactions between differing sets of competencies and styles. For smaller organizations, assessors, either internal or external, may have long-term relationships that allow for more thorough and accurate analyses of the extent to which candidates for hire or promotion will be likely to adapt to or even complement the existing leadership team. More frequently, the psychologist will have only limited information derived from less extensive contact and perhaps self or other descriptions of style, culture, and functioning. In these cases, the descriptions of the assessees can serve to highlight problematic behavioral tendencies for consideration by decision makers.

It should be noted that issues of fit with senior leadership apply even to positions below the executive level. A candidate whose personal characteristics, values, motives, and interests indicate reluctance or inability to share the vision and strategy adopted by senior management may bring needed perspective to the organization or may engage in unhelpful or even damaging conflict. Part of the value of the in-depth nature of individual psychological assessment is that the possibility of such conflicts can be identified early allowing for discussion and consideration of the value versus risk of their impact on the organization's functioning.

Fit to Team/Peers

In addition to concerns regarding the candidate's fit with senior leadership, a question arises for most if not all management positions about the extent to which candidates will be willing and able to work effectively with those around them. For promotional candidates, the assessment process may benefit from inclusion of 360 data about them from their current positions. For external hires, the problem is more difficult. Careful description and interpretation of personal characteristics related to work style and interpersonal relationships can help to identify potential issues for further consideration by the organization. However, the efficacy of the assessment process for identifying such issues is likely to be directly related to the amount and depth of information that the assessor can obtain about the team's functional culture and the nature of the peer interactions required by the position.

Fit to Organization (Unit/Whole)

In considering the broader organizational fit, experience and training in organizational structures and functioning are of particular importance. Having knowledge of the organization's climate can be challenging if there is no direct participation with it over time. Researching the stated vision and mission of the larger organization as well as the particular goals of the target unit can be revealing. Additionally, information about organizational functioning may be obtained from an examination of the business model, reporting relationships, financial data, and public reputation. Interviews with organization leadership and management in the target function may also provide insight. For external assessors, it is not uncommon for this kind of information to be provided by human resource personnel responsible for the assessment process, and their insights can be very helpful. Often, however, multiple sources for these descriptions will prove to be valuable.

Performance Management and Development Decisions

A common use of individual assessment is to foster development for managers. It provides a wealth of information to assist in various developmental processes whether they are being carried out by assessees themselves, managers, internal development specialists, or the psychologist conducting the assessment.

Support Succession Plans and Identify High Potentials

Organizations do not remain static over time; leaders leave or retire; the organization grows or develops new capabilities; managers are promoted or transferred (Rothwell, 2001). Planning for these contingencies can be and often is guided by information regarding the individuals considered to be the organization's “bench strength” (Silzer & Davis, 2010; Silzer & Dowell, 2010). Are the right people in place, and have they had experiences and training to qualify them to fill the openings that will occur? Ryan and Sackett's (1992) survey respondents indicated that almost 46% of their clients used individual psychological assessment as part of succession planning. Many large organizations seek formally to identify employees with particularly high potential. Although this process is controversial [see Silzer & Church's (2009) discussion of the “perils”], it can be made more effective with supporting assessment data. Often programs that seek to identify early career high-potential employees are designed to provide needed experience and development to fill gaps or overcome potential problem areas. Individual psychological assessment can help to identify such areas and flag them for attention.

Foster Individual Growth

Even without special programs for identifying high-potential employees, many employers recognize the need for ongoing development for employees. Such opportunities may include organizationally sponsored training and education as well as direction to external resources. Assessments can assist employees in identifying their particular areas of strength and development need. When assessments are used for this purpose, a key requirement is that they provide useful and actionable feedback. Preparing reports for use by individual employees requires particular sensitivity and care to avoid miscommunication, especially if the reports are provided in written form without other intervention by the psychologist (see our discussion below about reporting options).

Initiate or Expand Individual Coaching

Individual psychological assessment is often an initial step in or adjunct to individual coaching. Beginning a coaching assignment with an assessment provides information to both the coach and the participant. This practice can be particularly effective if 360 information is included among the assessment tools. Coaches may use the information as a starting point to begin the coaching relationship. Some tools used in the assessment process (e.g., role-plays and other simulations, 360 data) may also be used to assist participants in reviewing their progress toward established goals. The coaching process is not intended (or able) to result in changes in enduring personal characteristics such as those assessed by most personality inventories; however, indications of a participant's behavioral tendencies may provide guidance for using personal strengths to overcome problem areas for the participant.

Identify “How to Manage” or “How to Coach” Strategies

Psychologists with experience in coaching and knowledge of managerial functions and performance are able to provide recommendations that bear on the most likely effective approaches to dealing with assessees. For example, assessees who tend to be practical and concrete in their thinking styles, emotionally objective, and only modestly introspective are usually less likely to benefit from abstract discussions about the ways that their behaviors affect others. Such individuals, however, may be much more open to data-driven approaches, so that including 360 information can be effective in helping them enhance their understanding of how others perceive them. Additionally, an assessment can provide information about a person's work style, such as the preference or propensity to work alone. For a person whose style is assessed to be independent, a stretch assignment might be to manage a project requiring creation of a multidisciplinary team.

Understand Problematic Behavior (e.g., Derailers)

When managers’ behaviors create problems for the organization or for themselves, it is often important to understand what motivates those behaviors and what assets the managers possess that could be brought to bear in addressing them. It is also important for the organization to evaluate the likelihood that the same or similar behaviors will arise again. In some cases it is the very strengths that have propelled a manager to higher organization levels that become a net negative with advancement. In other cases, the changing requirements for more senior positions may highlight areas of weakness that were previously unnoticed. Hogan and others (Hogan & Hogan, 2001; Burke, 2006; Dalal & Nolan, 2009) have identified a range of personality characteristics associated with behaviors that may derail managerial careers. Individual psychological assessments that include measures of such potential derailers along with simulations and interview data can identify problematic behavioral tendencies, provide clarity to both assessees and their managers, and offer strategies for developing alternative behavioral patterns, possibly drawn from other aspects of the person's overall personal repertoire of situational responses.

Support for Organizational Change

Initiate Teams.

The process of intentionally creating enduring teams must consider a plethora of issues such as appropriate mix of education, experience, technical expertise, familiarity with the assignment or work process, and availability. For teams intended to function over extended timeframes or requiring exposure to unique or high-stress environments (e.g., remote assignments), individual assessment may assist both in selecting team members and in developing effective relationships among team members.

Support Restructuring Planning

Effective restructuring (including downsizing) of organizations requires attention to the nature of the expected (or needed) new organization and the requirements of the positions to be included in it with particular concern for how the resulting positions may be substantively different from their previous manifestations. Individual assessments can offer critical information in filling the new roles with employees and managers who have the needed mix of abilities and personal characteristics to meet the challenges of a changing organization. Additionally, assessments may offer assistance in working with existing employees whose roles may be changing and who must learn to work effectively in a new organizational structure, both from the standpoint of adapting to the changes and of internalizing and becoming engaged with them.

Individual Psychological Assessment Design

Every individual psychological assessment is conducted with certain design principles in mind. Some design parameters are readily apparent and easy to address; others are very complex and require careful planning. In any event, one design certainly does not fit all assessment circumstances. In reality, creating the right assessment process is as important as implementing an assessment protocol.

By design, we mean the planning, building, and implementing of the assessment protocol. Design components include the following:

selecting appropriate individual attributes or competencies to be assessed;

preparing and disseminating communications about the assessment process, including informed consent by the assessee;

identifying or developing assessment methods and tools [e.g., internal or external assessor(s), tests, exercises, technology];

obtaining complementary information (e.g., What is the team like? What are the technology capabilities of the company? Is the organization currently collecting or using existing 360 data?);

determining the scope of feedback and reporting;

defining logistics to include who, when, where, and how, as well as administration of assessment tools, need for support staff, and facilities required.

As discussed below, various design parameters may influence one, several, or all of the above factors embedded within the overall assessment design. Moreover, these design issues are frequently decided jointly by representatives from both the organization (often human resources) and the assessor and may be constrained beyond the control of either.

In this section, we address assessment design from five broad perspectives: organizational context, assessment objectives, participants, sharing outcome information, and restrictions/limitations, with the latter typically imposed by the client organization. By necessity, certain of the design parameters overlap with the organizational purposes underlying the use(s) of the assessment, which have already been discussed.

Organizational Context

The organizational variables influencing assessment design extensively overlap with the assessments’ purposes (see above). Accordingly, we will mention the topics only by heading without extensive explanation. In large measure, the extent to which the following organizational variables influence the assessment design is determined by the extent to which these same organizational variables drive the purpose(s) of the assessment.

Business Needs/Objectives—critical needs of the business or other objectives of the organization that impact its hiring strategies and, in turn, the assessment components.

Culture/Climate/Values—fundamental organizational characteristics that require and enable the assessment to evaluate the “fit” between the assessees’ attributes and the organization's culture, climate, and values.

Mission/Business Strategies—strategic plans for business ventures such as entrance into new markets, expansion of existing markets, or developing a more global presence have consequential implications for assessment design.

Change Initiatives—similar to business strategies, but change focused more internally to the organization itself and how the employees are to be managed in the future.

Position Competencies—expected behaviors and capabilities needed for successful job performance.

Assessment History—extent that the organization's leaders have experience with assessment, and the influence of that experience on the design of subsequent assessment protocols and their uses.

Assessment Objectives

We discussed these variables above; we highlight them here only to reemphasize their influence on design of individual assessment processes.

Selection—(hire or promotion)—“fit” to:

boss/executive team

team/peers/subordinates

organization (see relevant organization context variables)

Talent Management/Development

succession planning

personal career enhancement

behavioral change or problem behavior “fix”

Combination of selection and development (e.g., promotion with career planning)

Near term or longer term effectiveness (e.g., building for immediate impact or enhancing bench strength for future opportunities).

Participants (Assessees/Assessors)

For reasons that are not always especially clear, assessment design can be influenced by whether participants are internal or external to the organization. In some cases, many characteristics are well known about internal candidates, allowing the assessment protocol to be more focused or, alternatively, to address a broader array of characteristics. Position level also dictates the “level” of the assessees (e.g., mid-manager; senior executive) and, frequently, organizational representatives choose to recognize these “level” differences in the assessment design. Additionally, just as the culture of the organization is important, so is consideration of participants’ cultural heritages and how they will be incorporated into the assessment administration and interpretation. It is also important to recognize that participants with disabilities may require accommodation during the administration of certain assessment tools. Finally, assessees must give their informed consent to participate in the assessment process.

Assessors may be internal or external to the organization, and there are advantages and disadvantages to the use of one or the other (or sometimes both). Most of the reasons for including participation of internal versus external assessors (assuming equivalent credentials) are organizational matters unique to the particular situation. However, the issues we discuss throughout this chapter apply to all assessors regardless of their organizational affiliations.

Sharing Assessment Outcome Information

Although we have discussed issues concerning the sharing of assessment information elsewhere in greater detail, we mention it here because it must be taken into consideration at the time of design. Of course, feedback can be provided orally, in writing, or both, and the methodology, extensiveness, and content become important components of the overall individual psychological assessment design. The issues involved include feedback to the assessee and the level of detail to be shared; they also include matters of confidentiality when the information is to be shared with others such as line managers or human resource (HR) professionals. As a practical issue, the nature, detail, components, and length of the report and other feedback are among the primary drivers of assessment costs because they have direct implications for the amount of time that the assessing psychologist must be personally involved in the process. Accordingly, decisions about feedback and reporting should be made in the design phase so that the interacting issues can be communicated clearly to management and participants at the outset.

Restrictions/Limitations/Opportunities

There are several different types of restrictions, limitations, and even opportunities that can be considered in the design phase. Technology, of course, has presented many new opportunities over the last two decades. Although most individual assessments are conducted one-on-one and in person with the assessor (especially during the assessment interview), technology offers more distal alternatives while also raising concerns about other issues such as test proctoring. Technology impacts not only the administration of many types of tests and exercises, but also the interview. Sometimes, clients place restrictions on the assessment composition (e.g., no cognitive tests; no personality tests; use only business-based exercises), and some clients have preferences for certain instruments (e.g., use the XYZ test of cognitive ability). There may be limitations on, or desire for, inclusion of other external sources of assessment information such as references or 360 data (internal and/or external providers). Finally, consideration must be given to where and how all assessment information will be retained. The assessing psychologist must recognize the importance of these issues and advise clients properly about their consequences. Whenever possible, the assessors should be prepared to offer recommendations regarding best practices and the most effective means for dealing with any restrictions or limitations that could have negative consequences for the assessment program.

An implicit design limitation lies in costs, a factor not often addressed with respect to individual psychological assessment (Moses, 2011). The range in costs for individual assessments is considerable; in current dollars, they may range from less than $1,000 to over $25,000 per assessee. Costs may reflect market conditions or the level of the position for which the assessment is being conducted. Additionally, an organization's history with the use of individual assessments may impact the amount it is willing to pay, with successful previous use being a justification for higher costs. However, the most important driver of assessment cost (as opposed to pricing) lies in the amount of professional time that a psychologist must devote to the processes of collecting information, interpreting and integrating it, and preparing and delivering a report. The assessment design process must consider these factors because it is almost never the case that “cost is no object.” Examining the factors impacting costs may serve to make this point more salient.

Information Gathering.

Clearly, standardized tests present the least expensive option for data acquisition. Combined with automated reporting formats, these tests can provide an array of reliable data, usually normed against substantial samples. These tools minimize the psychologist's role in data collection and assist in data interpretation. However, it is often the case that no single instrument will provide the full range of constructs needed to assess even moderately complex competencies. Thus, a battery of instruments is likely to be required, immediately increasing the amount of time required of participants in performing what many will consider a boring and rote task, undermining both acceptance of the process and the quality of data obtained. Indeed, for some senior-level positions and participants, completion of psychometric instruments may be viewed as unacceptable.

Interviews are perhaps the single most consistent source of information used by assessors. As we have noted elsewhere, interviews may take many forms from largely open-ended to highly structured. Interviews may be conducted only with participants or may be extended to superiors, peers, subordinates, and other stakeholders. In most cases the psychologist conducting the assessment will conduct these interviews personally (especially those with participants), either face-to-face or by telephone. Even when a psychologist does not personally conduct all of the interviews, they must be conducted by trained paraprofessionals, and the psychologist must review and integrate the obtained information, still requiring investment of professional time (sometimes even more time from multiple professionals).

As assessments become more interactive, the psychologist may participate in or observe a role-play or presentation or review responses from an in-basket or case study. Even if scored by someone else (or by an automated process), such interactive components usually produce qualitative information that increases integrative requirements and time.

Interpretation and Integration.

As the information obtained increases in complexity, the psychologist is faced with an increasing time burden to understand and make sense of it in terms of the person and the competencies that are being assessed. Addition of multiple assessors further increases the complexity of the integration process. Arguably, as the number of data sources increases arithmetically, the complexity of interpretation and integration increases geometrically due to the inherent increase in the number of variables and their interactions to be considered. Even with rigorous procedures to organize and standardize the integration process, the amount of time spent by the psychologist can expand dramatically.
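As a rough illustration of this arithmetic-versus-geometric claim (our own sketch, assuming two-way interactions between data sources as the unit of integrative complexity, not a formula from the assessment literature), one can simply count the pairs of sources that must be reconciled: with n sources there are n(n-1)/2 pairs, so the comparisons grow much faster than the sources themselves.

```python
def pairwise_interactions(n_sources: int) -> int:
    """Two-way interactions among n data sources: n choose 2."""
    return n_sources * (n_sources - 1) // 2

# Adding one source at a time shows the faster-than-linear growth:
for n in range(2, 7):
    print(n, "sources ->", pairwise_interactions(n), "pairs to reconcile")
```

Doubling from three to six sources, for example, raises the pairwise comparisons from 3 to 15, before any higher-order interactions are even considered.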

It is in report preparation and delivery that most assessing psychologists invest the greatest amount of time. That time varies enormously, from editing an automated report to preparing a detailed narrative description organized around competencies with developmental suggestions. Note that including oral reports to management or formal feedback to participants (and sometimes follow-up feedback or coaching) also increases this time requirement. In designing assessments, costs associated with these activities must be carefully considered to ensure a balance between accomplishing the purpose of the assessment and keeping costs reasonable.

Underlying considerations regarding costs include, to some extent, those of data quality and data sufficiency. It seems clear that the richness of the information available to the psychologist increases with contact with the assessee, observation across multiple stimuli, and input from multiple observers. However, it is this very richness that increases the interpretive and integrative complexity and time requirements. Similarly, in-depth reports may answer more questions for hiring managers or enrich the development of coaching processes, but there is cost associated with the marginal increases in information. In this case, the concept of bounded rationality described by Simon (1997) may play a role in assessment design: what information is necessary and sufficient to fulfill the defined purpose of the assessment?

In summary, assessment design must consider numerous variables (some of which are interactive), and thus design has become more complex than in the past. Two decades ago the focus of assessment was on selection and considered relatively few variables for managerial jobs in the U.S. marketplace. Today, the same basic steps must be followed, but options are much greater, and the demands for the use of assessment outcomes are more varied. We are not only seeking “fit” to a position, but to an organization, a culture, a team, or a manager using new and improved techniques. Consider, for example, the emergence of the objective to design an assessment process that will assist in the “fit” of an individual to an engaged workforce, or to select a manager who can create an engaging workplace and manage an engaged work team. These, and other emerging trends, will have a bearing on how individual psychological assessments are designed in the future.

As a final point, it is important that the design be carefully communicated between the psychologist and the client (organization). This orientation is best accomplished by a written document that defines the design parameters as well as sets forth the terms and conditions of the entire assessment process. Moreover, if feasible, the design should incorporate a strategy for use of a client's assessment data in a confidential and professional manner in the support of research conducted to evaluate and improve assessment processes.

Individual Psychological Assessment Development and Implementation

In this section we discuss the development of an assessment process and its implementation. By assessment process we mean the selection or creation of assessment tools/instruments, the formation of an assessment battery, and the preparation and dissemination of relevant communications, including feedback and reports. The scope of implementation includes administration of the various components of the assessment battery, and completion of due diligence efforts to ensure ethical integrity and fairness in compliance with applicable codes, standards, and statutes. Development of assessment processes begins with an analysis of the work, position, or role, for if we are not clear on what we are assessing the individual to do, we will surely fail to hit the mark.

Work Analysis

The nature and scope of work analysis intended to support the design and conduct of one or more individual psychological assessments are usually different from the analysis necessary to build selection tests or create a job classification and compensation structure. In the work analysis needed for assessment, the focus is often on one position or role that will have only one incumbent, and the outcome is a selection or promotion decision. In such circumstances, identifying the right job requirements is essential. Alternatively, if the assessment is focused on individual development, there may be no specific position with defined responsibilities to target; rather, broader characteristics such as leadership or managerial style may be the primary guidance available. Silzer and Church (2009) offered an interesting trichotomy of competencies: foundational dimensions, growth dimensions, and career dimensions. Their model was developed to support assessment for leadership potential, but the model could apply to a broader range of assessment purposes. However, as Ryan and Sackett (1987) found, there is wide variability in the scope of information obtained about both the organization and the job in preparation for individual assessments.

In most work analyses conducted to guide and support an individual assessment, there are at least three types of information that should be obtained. First, the assessor needs information about the organization: its environment or climate, its business strategies, and its operating structures and interfaces. Relevant information typically can be obtained from executives or human resource professionals; organizational communications such as mission, vision, and value statements; organizational charts; and internal data such as employee surveys or customer feedback results.

Particularly when the assessment purpose is for selection or promotion, the data-gathering effort should focus on the position of interest. This analysis would include gaining an understanding of position responsibilities, requirements, and immediate work context, including the technical expertise requirements, the nature of problems expected to be encountered, and the complexity and level of decision making required. Sometimes even more clarity is required in terms of what we will refer to as role expectations. Information that goes beyond the stated responsibilities, and may be a function of specific reporting relationships, can be crucial in identifying relevant position requirements for complex jobs. It could include the recognition of specific work assignments, unusual travel demands, or any one of a host of particular, even unusual, expectations that otherwise may not be identified by standard work analysis. Sources of information at the position level may include position descriptions, managers, HR professionals, position incumbents (past or present), and possibly peers. It is also possible that the O*NET (Peterson et al., 2001) could offer useful information on both position content and requirements.

Finally, the work analysis should consider relevant team information. Such information may include communication pathways, formal and informal interaction cycles, internal and external relationships with other teams or customers, and team climate. Information sources could include managers, HR professionals, results from 360 feedback or employee surveys, and position incumbents.

The collection of work analysis information allows the assessing psychologist the opportunity to examine each of the organizational purposes we discussed above: fit to the organization, fit to a team or with peers, fit with a manager or executive leadership, and fit to a specific position/role. Assuming that there is no intent to build a unique assessment procedure (e.g., situational test, role-play, in-basket) for a particular individual assessment, then the critical data are derived from the position requirements and associated contextual data. This work analysis information guides the selection of one or more tools (including interview questions) to be used in the assessment process.

Much has been made of the term “competency” when discussing the gathering and modeling of work analysis information, especially when that analysis is focused on position behaviors and requirements (Hollenbeck, 2009). In many instances, competencies are used as the drivers and organizing variables for the administration of human resources activities including performance management, training/development, classification and pay programs, as well as selection and assessment. However, we have found no single competency model or language that is more useful than traditional terms such as responsibilities, generalized work activities, knowledge, skills, abilities, and other characteristics (KSAOs), and work context that would serve the work analysis purposes required to design an individual psychological assessment process. If it fits the organization's existing HR systems, then it is reasonable to report assessment outcomes using the structure offered by a company's competency model. Otherwise, there is no particular reason to let competencies define the design and content of an individual assessment process, except to the extent that their inclusion may enhance management acceptance of the process and results.

Select/Create Assessment Tools/Instruments

Based on the work analysis results, the assessing psychologist can select from a variety of assessment tools or instruments including ability tests, personality and interest measures, various exercises, self-descriptors, and interview protocols. Of course, published tests and measures are designed to assess a finite number of constructs, and some of these constructs may not be relevant to a specific assessment given the position responsibilities and requirements. In other instances, there may not be existing tools available to measure important position requirements. In such circumstances, the assessing psychologist may either build an instrument, or develop interview questions to evaluate the construct(s) of interest.

A typical battery for individual psychological assessment might include the following tools.

Ability Tests

One or more measures of cognitive abilities including problem solving, verbal, quantitative, or abstract reasoning, inductive thinking, and other measures of intellectual functioning are frequently components of individual assessments.

Personality/Interest Measures

One or more inventories of personality and/or interests are key elements of almost all assessments. Usually the personality constructs measured would incorporate the “Big-Five” (McCrae & Costa, 1997) as well as more specific dimensions. If an interest measure is deemed to be appropriate (often as part of a developmental assessment), Holland's six-factor taxonomy is a useful model for organizing vocational preferences (Holland, 1985).

Exercises

One or more exercises that are usually simulations of work life events requiring the identified competencies (e.g., collaboration, communication, or initiative) may be included. The exercises themselves can be presented as in-basket actions, role-plays, case studies, or presentations. Some exercises require group interactions (leaderless group discussions, teamwork); these are not usually part of an individual assessment process for the obvious reason that only one person is being assessed at any given point in time. A compilation of the above types of exercises has been labeled an assessment center (Thornton, 1992), and in that context with multiple assessors they have been shown to demonstrate significant validity (Gaugler, Rosenthal, Thornton, & Bentson, 1987).

Assessment Interviews

An interview (usually loosely structured or semistructured) is a critical component of the individual assessment process and typically, if not always, is a key data collection tool. Issues revolve around the types of questions, the level of structure, and the degree of formal scoring that occurs. Interview questions may be factual in nature (life history events), behavioral, situational, time bound (past, present, future), and/or “fit” oriented. Structure can be characterized as ranging from “very little” to “highly structured.” Scoring of responses may range from “yes” or “no” to quantitative rating scales regarding particular competencies, but obviously, some structure is required for most scoring schemes to be implemented. Ryan and Sackett (1987) reported that approximately 15% of assessors used a standardized interview format, 73% used a “loose” format, and 12% did not follow a standardized interview protocol. Although these data are almost 25 years old, there is little reason to suspect that interview practice in this context has dramatically changed (cf. Frisch, 1998).

Self-Description

It is often helpful to provide an opportunity for assessees to present their own evaluation of their strengths and weaknesses. Although responses to personality and interest tests and to interview questions are certainly forms of self-description, some assessment processes also give assessees the opportunity to describe strengths and weaknesses in their own words (Frisch, 1998). Such self-assessments often provide insight into the candidates’ attempts at impression management. Of even greater value, however, may be the extent to which this information offers a starting place for feedback. If the self-assessment is inconsistent with the test or 360 data and interview responses, the feedback conversation may begin by explicitly pointing out the discrepancies, engaging assessees in examining closely how they present themselves, and helping them learn about aspects of their own personalities and behaviors of which they may be unaware. If assessees do not have résumés on record, the assessment process may also include a biographical questionnaire that documents education, work history, and sometimes other information such as interests.

External Information

So far we have considered information obtained directly from the assessee. However, there may be opportunities to collect data from others that can add to the assessor's knowledge. In particular, information from peers, subordinates, and managers (360 data) is often part of assessment processes conducted for developmental purposes. For external candidates, references can be contacted by the assessing psychologist, given the assessee's permission.

The Assessment Battery

Compilation of two or more tools into an assessment battery is typically guided by several considerations. First, the work analysis results will have documented several organizational, team, and individual position parameters that represent constructs to be assessed. Unfortunately, there may not be explicit information on the relative importance or impact of each construct and therefore no guidance on how to weight them. There also may not be clear links between the constructs to be assessed and the tools or processes available to measure them. Consequently, the assessing psychologist may be faced with decisions about what to measure and how to measure it, with less than a perfect match for every variable.

A second consideration in selecting tools for a battery concerns their psychometric properties. For most, if not all, published tests intended for use in individual assessments there are psychometric data available. However, those data generally have not been gathered under the conditions typical of an individual assessment. As Ryan and Sackett (1998) noted when discussing the validity of individual assessment, “there is a need for evidence that the individual components of the process are appropriate and that the psychologist's integration of the information represents a valid inference” (p. 65).

With respect to reliability, Ryan and Sackett (1998) also found few data and reported that “Research on individual assessment reliability has not consistently reached positive conclusions” (p. 73). Their conclusion was likely a function of the lack of interrater agreement among assessors as opposed to the levels of internal consistency or test–retest reliability for published tests or for structured interviews.
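Interrater agreement of the kind discussed above can be estimated, in its simplest form, as the correlation between two assessors' ratings of the same assessees. The sketch below uses entirely hypothetical ratings and a hand-rolled Pearson correlation; the scale, values, and sample size are illustrative assumptions, not data from the studies cited.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two assessors' ratings (a simple interrater reliability estimate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings of eight assessees on a 1-5 competency scale by two assessors
assessor_a = [4, 3, 5, 2, 4, 3, 5, 1]
assessor_b = [4, 2, 5, 3, 4, 3, 4, 2]
print(round(pearson_r(assessor_a, assessor_b), 2))  # 0.85
```

In practice, agreement indices suited to ordinal ratings (e.g., weighted kappa or intraclass correlations) would usually be preferred over a raw correlation; the point here is only that interrater reliability is a computable property once ratings are captured in structured form.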

A third consideration is the availability and use of norms. When repeated individual assessments are conducted for one organization or for one or a few related job titles, it may become feasible to construct normative information that can be useful in evaluating assessment (test) results. Development of broader norms is also possible (e.g., senior management positions, operations management, real estate industry sales managers). Normative data can be useful in understanding certain categories of assessment data (e.g., 360 results) and in providing feedback (e.g., test scores on a measure of reasoning were at the 75th percentile for candidates in similar circumstances).
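Normative feedback such as the 75th-percentile example above amounts to locating a raw score within a reference distribution. A minimal sketch, with an entirely hypothetical norm group:

```python
from bisect import bisect_right

def percentile_rank(score, norm_scores):
    """Percentage of the normative sample scoring at or below this raw score."""
    ordered = sorted(norm_scores)
    return 100.0 * bisect_right(ordered, score) / len(ordered)

# Hypothetical norm group of reasoning-test raw scores accumulated for similar candidates
norms = [12, 15, 18, 18, 20, 21, 22, 24, 25, 25, 26, 28, 29, 30, 31, 33, 34, 35, 37, 40]
print(percentile_rank(31, norms))  # 75.0 -- this candidate is at the 75th percentile
```

The value of such a computation, of course, depends entirely on how well the norm group matches the assessee's circumstances, a point taken up again in the discussion of data interpretation below.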

Frequently, but by no means always, the assessment battery includes several measures prepared and distributed by test publishers. However, in certain instances, effective procedures include developing and utilizing one or more customized instruments. Often these are client-specific or position-specific measures delivered in the form of role-plays, in-baskets, 360 surveys, or related materials that are focused on assessing one or more specific competencies that are particularly important to the effectiveness and acceptance of the assessment process and its purpose(s). The downside to developing such instruments is that doing so can be costly and time consuming, and substantial data collection is needed to develop psychometric support for them.

Preparing and Disseminating Communications

There are two levels of communication that should be prepared and disseminated about the assessment program: One level addresses the organization or at least those who will generally need to know even though they may not be directly involved as assessees or recipients of assessment results. The second, more detailed, level of communication is directed specifically to assessees and those who will receive the assessment results.

Some basic content in both levels of communication is very similar. It should include issues of confidentiality, assessment purpose and use of results, administrative guidelines, and other basic responsibilities of all involved parties. A review of issues including sample documents that can be used in support of the communication processes can be found in Jeanneret (1998) and Meyer (1998).

Implementation

Once the components of the assessment protocol have been selected (or prepared, in the case of customized materials), a strategy for administration must be determined and executed. A checklist may be helpful for administrators, who should be properly trained in the use of assessment materials. Below, we discuss a few topics that are especially important to the implementation effort.

The length of the assessment is primarily determined by the scope of the assessment battery, but it is also important to give proper consideration to assessees’ time and how they might be expected to react during the course of the assessment day. Some assessments require more than one day; others may be implemented in one-half day, depending on the number of measures and their time requirements for completion. We generally prefer to administer cognitive measures first (we assume that most people are fresher earlier in the day and would prefer to finish cognitively demanding tests before they become fatigued); next we administer personality/interest measures; finally, we conclude with exercises (if any) and the assessment interview. Typically, some feedback is provided to the assessee at the close of the interview.

To shorten the length of time the assessee is “on-site,” some assessment materials may be provided in advance. There is little agreement on the acceptability of such processes, but it is clearly recognized that tests administered in an “unproctored setting” must be interpreted in that context. If prework is useful from an administrative perspective, we recommend having assessees complete documents such as biographical forms, interest inventories, and similar instruments that would be less likely to be compromised in an unproctored setting and otherwise may be verifiable. However, we recognize that the data on unproctored personality testing are generally favorable (Ployhart, Weekley, Holtz, & Kemp, 2003; Chuah, Drasgow, & Roberts, 2006; Tippins et al., 2006; Tippins, 2009), though issues of ethical and legal risks remain a concern (Pearlman, 2009).

Technology is now dominating the administration of many measures and is even utilized by some assessors to complete the interview (e.g., structured interview questions, video conference). Although there is generally little reason to suspect that computer-administered tests lead to different conclusions than would paper-and-pencil tools, individual accommodations may be necessary to fairly administer certain test materials. Moreover, in our judgment a person-to-person interview is a valuable tool and presents the best alternative whenever available. It is the best situation for establishing rapport and permits more opportunities for the assessor to observe the behavior of the assessee. Additionally, some exercises (e.g., role-play) may require multiple interactions that are best accomplished in a person-to-person setting.

Most individual assessments involve just one interview with one psychologist, but it is possible to involve multiple assessors. For example, one assessor might play a role in an assessment exercise while a second conducts the general assessment interview and provides feedback of assessment results to the assessee. In such circumstances, the two assessors confer to exchange interpretations and draw conclusions about the assessee as would be the case in a standard assessment center.

Due Diligence and Ethical Issues

There are four specific topics that we think are of particular importance. They are the matters of informed consent, confidentiality, legal compliance, and fairness.

Informed Consent

Assessees should be fully informed, at least orally (though we strongly urge that the information be presented in writing), and must give consent to participate in the assessment process. As part of this information, assessees are entitled to know who will have knowledge of the assessment results. Informed consent includes providing information to assessees regarding the purpose of the assessment and its possible consequences. Individual psychological assessments for the purposes we describe in this chapter (i.e., specifically excluding forensic assessments) should never imply or require mandatory participation (even though the situational demands—i.e., the assessee wants some outcome such as a job or promotion—may be strong). Assessees must have the option to withdraw at any time.

Furthermore, it is our position that by giving informed consent, assessees are due the courtesy of some feedback about the assessment results. However, we note that there is not universal agreement among assessors about their commitment to provide feedback. At the very least, assessees should know whether or not feedback will be provided, and in what form, before they consent to participate. Ryan and Sackett (1987) reported that across both internal and external assessors, almost 80% provided feedback to the assessees, and usually that feedback was provided only in oral form.

Confidentiality

There does appear to be almost universal agreement that assessment results and recommendations are highly confidential and their distribution should be restricted only to those with a need to have them, protected from broad or accidental disclosure, and destroyed using appropriately secure means when they are no longer relevant. Both assessors and the recipients of assessment results should be clear about their responsibilities for maintaining confidentiality. Access to assessment materials (including tests and scoring protocols, reports, and all assessment files) should be restricted to those with a need to know and maintained in a secure, controlled manner. In some jurisdictions, licensed psychologists may be able to assert a level of privilege with respect to their notes and assessment data—a protection not accorded to most nonpsychologists or any unlicensed psychologists. In general, absent legal mandate, assessment data remain the property of the psychologist whereas reports become the property of the commissioning organization.

Legal Compliance

Most legal compliance is governed by state statutes and boards responsible for the licensing of psychologists. It is our strongly held position that the conduct of individual assessments falls within the definition of the practice of psychology in most states and thus under the purview of the state psychology licensing boards, and that any assessing I/O psychologist should be properly licensed. Failure to be licensed is not only a violation of state statutes in most jurisdictions but also, in our judgment, a potential violation of the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct (2002).

It is the psychologist's responsibility to use only assessment instruments and procedures that are known from research or believed in good faith to be fair. By fair, we mean that the content of the measure or process must not somehow influence the performance of an assessee in ways not related to the specific purpose of the assessment. As noted in the Principles (Society for Industrial and Organizational Psychology, 2003):

There are multiple perspectives on fairness. There is agreement that issues of equitable treatment, predictive bias, and scrutiny for possible bias when subgroup differences are observed are important concerns in personnel selection; there is not, however, agreement that the term “fairness” can be uniquely defined in terms of any of these issues. (p. 68)

Assessment Data Integration

Although assessment data may be obtained from a variety of sources, at some point this wealth of information must be integrated into a consistent whole to describe the assessee with respect to the particular requirements and situation. The data may be equivocal or even contradictory. An individual's personal history may challenge the test results, or the person who scores poorly on a cognitive test may perform excellently on an in-basket exercise. In most cases, however, the data will paint a coherent picture of the assessee, which must then be examined in light of the criteria to be considered for the target position, in the client organization, in a defined situation.

Silzer and Jeanneret (2011) identified four stages of data integration in an individual assessment. The first stage requires integration of information within one assessment method (i.e., within a personality test or simulation). A second stage provides for integration of data across methods at the performance dimension or competency level (i.e., combining a personality test score with an in-basket score).

The third stage involves integration of information across performance dimensions or competencies (e.g., the assessee presents a high conscientiousness score but is not very detail oriented). The result of this third stage is typically an assessee profile across a limited set of performance dimensions or critical competencies. A final integration may or may not occur, depending on the assessment purpose. If it does occur, it requires the assessing psychologist to integrate all that is known from the assessment process into one final recommendation (i.e., hire; promote; place on a fast track). Silzer and Jeanneret (2011) assert that this fourth stage is the most difficult, and that assessors are less effective at making final summary recommendations than at any of the first three stages.

Among the initial considerations facing any psychologist conducting individual assessments is how the data are to be interpreted. Two key issues must be resolved: (1) selecting, from among the available group norms, the one most relevant for the current situation; and (2) deciding whether the data should be interpreted by comparison to the scores produced by others (normatively) or by comparison of scores within the individual assessee (ipsatively).

Raw scores on most measures carry little meaning by themselves, and normative information provides the best basis for their evaluation (Meyer, 1998). In some instances, assessors will have available organization-specific or even position-specific normative data. More often, however, they will be constrained to use norms available from a test publisher. The difficulty, of course, is that normative samples compiled by test publishers may match the situation only tangentially, or only in the broad sense of representing the general population (or some predefined portion of it such as “managers”) (Jeanneret & Silzer, 1998). Comparing the scores of a highly qualified candidate for a senior management position to either of these broad groups provides at best approximate information, especially with respect to cognitive abilities. When it comes to high- to moderate-fidelity simulations, there are likely to be no normative data at all, unless the organization has used the instrument long enough to accumulate sufficient specific data to construct meaningful norms. At best, then, normative data often provide a rough cut of information that highlights those characteristics most likely to be particularly salient for an individual.

From a descriptive perspective, what may be of at least as great an interest is the extent to which scores on different constructs or competencies are higher or lower relative to each other. For some assessees, score patterns on personality measures may be generally lower. In such cases, even moderate scores that are markedly higher than the others may suggest the individual's more likely propensities, even if they are near the mean of the normative population (note, however, that some personality measures treat scores near the middle of the scale range as not having characteristics at either pole of the scale). Additionally, within-person evaluation is the sine qua non of configural interpretations. Meyer (1998) describes the process in this way:

Some information is important because the participant's scores can be compared to a relevant norm group; other information is important because one skill or ability can be compared to another skill and ability for that individual. This internal comparison is an ipsative interpretation. Using this approach, psychologists can interpret a single assessee's strengths and developmental opportunities in a more cogent manner. (p. 263)

Many assessors consider both types of interpretation, using normative data to estimate the strength of individual characteristics, usually with the expectation that those most different from the mean of the normative population are most likely to be those expressed in behavior. They then use ipsative comparisons to differentiate relatively more likely behavior or to moderate the interpretation of a dimension in light of the assessee's score on another dimension. To the extent that the assessor completes this interpretation consistently, this combination of information can produce rich and accurate descriptions of assessees. However, failure to consistently interpret scores is likely one of the primary reasons that holistic interpretations are less highly correlated with criteria than are mechanical combinations of data.
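The normative/ipsative distinction described above can be made concrete with a small sketch. The scale names, the T-score convention (norm mean 50, SD 10), and the values below are all hypothetical illustrations, not results from any particular instrument:

```python
from statistics import mean

def normative_z(score, norm_mean, norm_sd):
    """Standing relative to a norm group (normative interpretation)."""
    return (score - norm_mean) / norm_sd

def ipsative_profile(scores):
    """Each scale centered on the assessee's own mean (ipsative interpretation)."""
    m = mean(scores.values())
    return {scale: s - m for scale, s in scores.items()}

# Hypothetical T-score-style results on four personality scales (norm mean 50, SD 10)
scores = {"conscientiousness": 55, "extraversion": 40, "openness": 45, "agreeableness": 48}

# Normatively, only extraversion is a full SD from the norm mean...
normative = {k: normative_z(v, 50, 10) for k, v in scores.items()}
# ...but ipsatively, conscientiousness stands out as the assessee's relative strength.
ipsative = ipsative_profile(scores)
```

The two views answer different questions: the normative profile locates the assessee against other people, while the ipsative profile highlights which characteristics are most prominent within this one person, even when all scores sit near the population mean.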

Using the Data—Three Models

It is possible that there are as many different ways of using assessment data as there are assessors. We have identified three general models for doing so that seem to capture most of the variations. All three involve the use of the assessor's training, experience, and judgment along a continuum of increasing structure to inform that judgment.

Descriptive/Qualitative Summaries

Perhaps the most commonly used model for integration is the creation of descriptions. Under this model, the psychologist interprets the psychological test data, incorporates other information from observations of assessees’ performance on various exercises and interviews, and prepares a summative description of the person's competencies and personal characteristics. The description is often organized around dimensions ranging from simply categorical (e.g., cognitive ability, work style, and interpersonal style) to well-defined competency models. In the description, the psychologist will usually discuss how the assessee's characteristics fit with the requirements of the position as well as make predictions about categories of behavior.

Such predictions are of necessity both general and prospective, indicating likely behavioral patterns consistent with the person's profile, rather than predictions of specific behaviors. For example, a managerial candidate who scores high on measures of energy, cognitive ability, and decisiveness might be expected to have a tendency to size up situations, reach reasonable conclusions, and be willing to make decisions and take action relatively quickly. This statement does not say that the person will always behave in this fashion, but rather that he or she is more likely to do so than not. There may be circumstances (say, for example, if the situation involves some aspect requiring technical expertise that the person does not possess) in which this individual would behave quite differently. Moreover, if this person also evidenced a high degree of conscientiousness and dutifulness, this description would need to be modified accordingly. It also might be modified by examination of how the person handled decision making on an in-basket exercise, e.g., whether she or he took immediate action or first sought additional information.

It should be clear from this example that such descriptions seek to leverage both the normative and ipsative information available by examining the overall pattern of the characteristics measured. Thus, the interpretation of a particular characteristic is influenced by the presence and interpretation of other characteristics. In this example, if the assessee had demonstrated more limited cognitive abilities, the psychologist might have cautioned that though willing to act quickly, the person may reach poorer conclusions or reach good ones less often.

This configural, judgmental approach is a hallmark of this type of assessment as the psychologist seeks to bring together disparate information to reflect the whole person, while still organizing it in a way that will have value and meaning to the client organization. It is fundamentally a descriptive model; though there is usually some predictive component, it is frequently at a holistic level rather than prediction of a particular behavior or outcome.

Structured Approaches

A modification of the holistic, descriptive model adds elements of structure to the process. As is the case with structured interviewing (Campion, Palmer, & Campion, 1997), there are a variety of dimensions along which the structure of assessments may be considered. One of the basic structural elements (and one strongly recommended to psychologists conducting individual assessments) is the clear assignment of the measured constructs to the competencies derived from the work analysis. Simply ensuring that the constructs as defined for the assessment instruments map appropriately and consistently onto the job-relevant competencies is a necessary but not sufficient element of providing evidence for the validity of the assessment process. A more substantial step would be to ensure that assessors use common interpretations of assessment data in general, including the meaning of tested constructs (and their interactions) as well as the information obtained from exercises and interviews. This level of structure serves to increase the likelihood that similar interpretations will be made across assessors or across assessments conducted by the same assessor. Perhaps the greatest structure in this regard is provided by interpretive software that creates a narrative report based on selection of descriptive statements drawn from a database predicated on obtained test scores. Such reports have been available for more than 30 years and have increased substantially in their sophistication. However, they do not allow for inclusion of information from multiple personality and ability tests, simulation exercises, or interviews. Many psychologists incorporate automated reports to inform the interpretive process without using them as the final assessment report.

Additional structure may be incorporated in the assessment process by increasing the level of formality associated with simulation exercises. Specifically, capturing judgments in the form of ratings on well-defined behavioral dimensions may allow assessors to critically evaluate their own integrative process and the consistency of their outcomes. Finally, the use of reporting graphs can assist in standardization by allowing assessors to check their integrated judgments against a quantified display of the underlying data.

Scored Approaches

The structure underlying simulation exercises such as in-baskets, role-plays, and case presentations can be increased by defining specific scoring rubrics for them. Such rubrics may be developed empirically by analyzing the responses and behaviors of a normative group (e.g., successful managers or sales personnel), or they may be developed a priori based on the work analyses that provide the bases for the simulations. In either case, they must specify the written and oral responses that reflect differing levels of competence on the dimensions being measured. Assessors can then apply the rubrics to the assessees’ performance (whether written responses or observations of behaviors) to score the dimensions measured by the simulations. If the situations and sample sizes permit, such scores may be subjected to formal analyses of reliability.
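A scoring rubric of the kind described here is essentially a mapping from observable responses to levels of competence. The sketch below is hypothetical throughout: the dimension name, the behavior labels, and the scoring rule (crediting the highest level demonstrated) are illustrative assumptions, not a published protocol:

```python
# Hypothetical a priori rubric for one in-basket dimension ("delegation"):
# observable responses mapped to increasing levels of competence.
DELEGATION_RUBRIC = {
    "handled_all_items_personally": 1,
    "delegated_without_instructions": 2,
    "delegated_with_clear_instructions": 3,
    "delegated_with_instructions_and_follow_up": 4,
}

def score_dimension(observed_behaviors, rubric):
    """Score a simulation dimension as the highest rubric level the assessee demonstrated."""
    levels = [rubric[b] for b in observed_behaviors if b in rubric]
    return max(levels) if levels else 0  # 0 = no scorable behavior observed

observed = ["delegated_with_clear_instructions", "handled_all_items_personally"]
print(score_dimension(observed, DELEGATION_RUBRIC))  # 3
```

Once responses are coded this way, the scores from two independent raters can be compared directly, which is what makes the formal reliability analyses mentioned above possible.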

Conducting formal linkages of assessment measures to the competencies derived from the work analysis extends the structured approach described above by ensuring that the test constructs appropriately map on the work requirements. Formally linking not only personality scales to the competencies, but also the dimensions assessed by simulations and interviews, provides a basis for structuring the combination of quantitative results. Although it may not be possible to develop strong statistical models for doing so, a common and consistent (perhaps even mechanical) process for combining the obtained scores (that may include weighting or other considerations) will increase the structure of the assessments and thus their consistency across raters and assessees.
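A mechanical combination of the kind suggested above can be as simple as a fixed weighted sum of standardized scores. The weights, measure names, and scores below are hypothetical; in practice the weights would derive from the work analysis linkages just described:

```python
# Hypothetical weights linking assessment measures to work-analysis competencies.
WEIGHTS = {"cognitive": 0.4, "personality": 0.3, "in_basket": 0.2, "interview": 0.1}

def composite(scores, weights):
    """Mechanical, consistent combination of standardized (z-scale) scores into one composite."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[k] * scores[k] for k in weights)

# Hypothetical standardized scores for one assessee
assessee = {"cognitive": 0.8, "personality": 0.2, "in_basket": -0.1, "interview": 0.5}
print(round(composite(assessee, WEIGHTS), 2))  # 0.41
```

The point of such a rule is not that it replaces judgment, but that applying the same combination to every assessee removes one source of inconsistency across raters and occasions.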

Even such structured methods, however, leave unanswered the need for interpretation of what such scores might mean. Of course, normative information may be used to place assessees in the range of performance on the competencies, and if criterion-related information is available, specific predictions of assessees’ standing on those criteria may be made with greater or lesser accuracy. In the end, though, the hiring manager, the senior executive, the human resources specialist, and even the assessees themselves are asking different questions. How will this person fit into my team, our organization, or the anticipated role? How is this person likely to respond to demands for immediate decisions about complex issues when all of the information is not available? Will this person be diligent in pursuing difficult goals under stressful conditions? How will this individual act toward peers and subordinates when under pressure? Can this person inspire and motivate others to achieve high standards and meet stretch goals? Can this person lead us to greater creativity, a new vision, or renewed vigor? Will I like and respect this person when I have to work with him or her day in and day out?

Clearly, complete answers to these complex questions are not attainable; however, individual assessments can provide a broad range of information and insight to inform the decision makers who must ponder them. In some contexts, increasing the structure of assessments may aid assessors in providing clear, objective, and consistent information, but it is the role of the assessor in interpreting the obtained results (whatever the level of structure) in the context of the position and the organization that is crucial in providing value to clients. It is the accurate description of the person in terms relevant to the organization's (and in many cases the assessee's) needs that distinguishes individual psychological assessment from the prediction of particular outcomes under the traditional selection model.

Feedback and Assessment Reports

Every assessment should incorporate provisions for feedback and reporting, even if the final decision is to provide no written report. Of course, the two terms feedback and report could mean the same outcome, but we will use the term feedback to refer to information provided to the assessee and report to mean information provided to one or more individuals other than the assessee. There are two possible combinations in our view: feedback to the assessee and reporting to one or more other individuals; or feedback to the assessee and no further reporting. We do not consider failure to provide feedback to the assessee an option. Rather, we believe feedback is required in accord with the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct (2002), and we agree with Hogan and Hogan (1998) that feedback is a moral obligation. Of course, just like participating in the assessment process itself, receiving feedback is voluntary and may be refused by the assessee. An excellent discussion regarding the communication of assessment results and all of its ramifications is presented by Meyer (1998; cf. Frisch, 1998, and Jeanneret, 1998). A variety of issues arise with respect to feedback and reporting. We will address some of the more important ones below.

First, Do No Harm

Perhaps the primary rule governing feedback and reporting is to avoid any harm to the assessee's psychological well-being. Compliance with this rule begins with developing and maintaining a confidential assessment process and protecting all assessment information in accord with the understandings held by the assessee and the client organization. Moreover, although the psychologist is expected to be truthful in all feedback and reporting, care must be taken to present what may be considered “negative information” in a manner that is not overly burdensome or harmful to the assessee. Finally, the psychologist is expected to be consistent in feedback and reporting. Simply stated, the assessor should not tell the assessee one thing about the assessment results and then verbally state or write a report to management that says something quite different.

Oral and Written

Feedback and reports can be presented orally and/or in writing. Typically, assessee feedback has been oral and face-to-face, but telecommunication is becoming more widely used as an alternative delivery mode. The standard option for reporting to the organization is both oral and written, with some time lag in between. Again, technology is having an impact on report delivery, and electronic written reports are likely to become the norm, if they have not already done so.

Some organizations elect not to receive written reports about assessees, sometimes to reduce costs and sometimes to avoid the burden of storing and protecting them. A concern that arises when only oral reports are provided is that the psychologist cannot know or control whether the client attends to, understands, and integrates the complex interactions that underlie human behavior. That is, have they heard what they were told; have they heard all that they were told; and have they correctly interpreted what they were told? Psychologists are well advised to maintain careful documentation of oral reports (and feedback, for that matter) in case they later need to repeat or amend them, for example by expanding a verbal report in response to questions from more senior management or in follow-up discussions with the client that might include comparisons to other assessees.

Feedback and report content are usually a function of the assessment purpose. Some frequent content parameters include the following:

Predictive—(e.g., setting forth “fit” to an organization, team, or job; estimating success as a manager; describing behavioral tendencies related to how assessees generally may be expected to act in certain situations).

Descriptive—(e.g., presenting a profile of traits measured by several tests; describing behavioral patterns based on in-basket or role-play responses); the content may be presented in normative (when available) or ipsative formats.

Prescriptive—(e.g., suggesting one or more specific personal development actions; planning a career strategy).

The content also might include information based on reactions the assessee has to the feedback itself, which can broaden or sharpen the knowledge gained by both the assessor and the assessee.

Assessment content is often organized in terms of job competencies that are consistent with other organizational documents (job descriptions, compensation factors, performance management metrics, etc.). An alternative is to organize the feedback and/or reporting in terms of personal characteristics. Either organizing scheme can accommodate feedback and/or reports focused on descriptive or prescriptive purposes and content.

Sometimes an assessment report will be used for more than one purpose, and the assessing psychologist may not even be aware of such multiple uses. As an example, an assessment may be completed for the acknowledged purpose of preparing a development plan. The report provided to both the assessee and the talent development department may focus on needs more than strengths. Subsequently, the written report becomes part of the documentation reviewed by a senior management team when making promotion decisions. Such an occurrence could be a disservice to both the assessee and the organization; the individual's strengths were not appropriately highlighted, and the organization does not have complete information in the report about those strengths that may influence the evaluation of the assessee's “fit” to a vacant position. Although there is no perfect solution to such a circumstance, it illustrates the need for the psychologist to negotiate carefully when establishing the assessment purpose and protocol, and to warn the organization of such problems and the risk that they will occur. At the least, it is clear that the written report should state that it has been prepared for a specific purpose and that it is ill-advised to use the report for any other purpose.

A final consideration regarding feedback and report content is whether to inform the recipient with qualitative information alone (i.e., narrative description), or to provide quantitative data as well. Quantitative information could include percentiles, graphic presentations, or similar metrics that are easily understood, but not actual test scores. Frequently, both qualitative information and quantitative information are incorporated into feedback and organizational reports.

One recipient of feedback and reporting is, of course, the assessee. Other recipients are typically a function of the assessment purpose. If selection is the purpose, then staffing/HR professionals as well as hiring managers are likely recipients. For higher level selections, reporting could include executives and even members of a board of directors. Developmental assessment reports are usually prepared specifically for the assessee but may also be provided to the organization's talent development/succession planning professionals. Additionally, assessees’ managers and perhaps others along the chain of command may receive developmental assessment reports, especially if they are expected to act in a coaching role with respect to the assessee.

Delivery Timing

Feedback and reporting should be delivered to the assessee and organization as soon as possible once the assessment data gathering process is complete. Timing is of particular concern for assessees, perhaps even more than for the organization, since the assessment outcomes can weigh heavily on assessees’ minds. It also stands to reason that given the pace of organizational functioning, time is of the essence for all assessment feedback and reporting to be completed; in general, the sooner the better. In some instances, additional follow-up feedback and/or reports may be prepared if a continuing developmental relationship (e.g., coaching) takes place.

The life span of assessment reports is bounded, and those bounds are usually determined by the psychologist. We are not aware of any definitive research that would establish a relevant life span for reports.

Although most assessment reports are predicated on characteristics that are generally expected to be relatively enduring, assessors should take two important points into consideration. First, the Uniform Guidelines on Employee Selection Procedures (UGESP; Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, & Department of Justice, 1978) caution that the continued validity of selection procedures is related to the extent to which the job requirements underlying the assessment process remain unchanged. If assessments are conducted for the same or similar positions over time, periodic reviews of the work analysis should be undertaken. Moreover, an assessment report prepared focusing on the particular requirements of a specific position may have much less relevance for different positions, especially promotional ones.

Second, although the characteristics usually measured by assessments are enduring, they do change. The single most important source of such change is, of course, maturation. Thus, assessments conducted several years ago may no longer be accurate descriptions of the person today, and this effect may be greater for younger assessees. Especially regarding assessments conducted for development, it must be remembered that the purpose of the assessment is to assist the person in changing. To the extent that it is successful in this purpose, the duration of the accuracy of the assessment report will be reduced.

Unfortunately, organizations may continue to rely upon assessment reports beyond any “out-of-date” information set forth by the assessor (see Ryan & Sackett, 1998). Jeanneret (1998) suggested that after “a few years” the value of an assessment report should be reconsidered and perhaps updated with new information or discarded, and that this caution should be incorporated in the introduction to the assessment report.

Overinterpretation

A final note to this section on feedback and reporting is that assessors must constantly be aware of the possibility of overinterpretation by the recipients of the assessment information embedded in feedback and reports. Because of their unique and independent nature, assessment results are often accorded greater value than may be warranted. The outcomes are simply snapshots of current behavioral tendencies and expectations measured by instrumentation or interactions. The results must be interpreted in context with other information (e.g., current job performance, past history) and are not expected to stand completely alone. Finally, it is instructive to note the caution of Ryan and Sackett (1998) that reports usually do not provide recipients with information about areas of importance that were not addressed by the assessment process.

In summary, assessment results are not perfect; nor do they last forever. Accordingly, any feedback must include the necessary caveats to advise recipients of both the value and cautions associated with assessment information.

Special Issues in Individual Psychological Assessment

In the following discussion we introduce several topics of particular interest or concern for individual psychological assessment.

Preparing I/O Psychologists for Individual Psychological Assessment (IPA)

Given that IPA is a core competency of I/O psychology (Society for Industrial and Organizational Psychology, 1999; Jeanneret & Silzer, 2011) and an area of competence that should be developed in graduate-level I/O programs, it is of considerable concern that very few academic programs make any attempt to provide students with individual assessment capabilities (Silzer & Jeanneret, 2011). Moreover, even if one or two academic courses devoted to individual assessment were offered by I/O doctoral programs, they would not be sufficient for a newly graduated I/O psychologist to be competent in performing such assessments. The protocol offered by Silzer and Jeanneret (2011) presents a workable training framework. Their recommendations include the following: (1) completion of a number of graduate courses in topics such as psychometric theories and measurements, assessment tools and methods (including interviewing and data integration), and selection strategies and personality theories; (2) 6- to 12-month supervised internships under practicing assessment psychologists; and (3) compliance with and completion of relevant statutory licensure requirements. We would add to those recommendations a continued process of education and interaction with other assessment psychologists during the early years of a psychologist's career to hone capabilities and obtain verification that their individual psychological assessment competency is, in fact, sound.

Measuring/Evaluating Success

There are two perspectives from which to view this topic: (1) the broad perspective as to how well, in general, individual psychological assessment is serving its purposes, and (2) a narrower perspective as to how well a specific individual assessment process has worked in responding to one or more specific purposes in a particular organizational setting. [Silzer and Jeanneret (2011) have suggested a rather comprehensive research and policy issue agenda that expands our recommendations presented below.] As this discussion continues, it should become more apparent to the reader that although there are no singular or absolute answers, the principles of measurement and evaluation apply when trying to examine individual assessment from either the broad or the narrow perspective.

Psychometric Qualities

Regarding reliability, we have few data on which to rely unless an instrument has been developed by a test vendor or has been used extensively by an organization to permit studies of one or more estimates of reliability. Thus, some reliability data may be available for commercial tests, and although the reliability data probably were not obtained under conditions associated with individual assessment, we are often comfortable with “transporting” that reliability to specific circumstances. Reliability information for custom indices (e.g., in-basket, role-playing) is usually nonexistent. The reliabilities of assessor ratings, interpretive descriptions of assessees, and conclusions are also unknown and rarely collected to the best of our knowledge. At best, we would expect these reliabilities to be no better than those usually obtained between raters for assessment centers. Gaugler et al. (1987) reported reliabilities ranging from 0.61 to 1.0 for assessment center ratings.
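As a hedged illustration of the kind of inter-rater evidence that could be collected when two assessors rate the same assessees, the following sketch computes a simple correlation between their ratings. The ratings are invented, and a Pearson correlation is used as a stand-in; in practice an intraclass correlation with an appropriate model would often be preferred.

```python
# Illustrative sketch only: the ratings below are hypothetical, and a simple
# Pearson correlation stands in for more appropriate inter-rater indices
# (e.g., an intraclass correlation coefficient).

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rater_a = [4, 3, 5, 2, 4, 3, 5, 1, 4, 2]  # hypothetical 1-5 overall ratings
rater_b = [4, 3, 4, 2, 5, 3, 5, 2, 4, 2]  # same ten assessees, second assessor

print(round(pearson_r(rater_a, rater_b), 2))  # ≈ 0.91 for these invented data
```

A value in this range would sit at the upper end of the 0.61 to 1.0 band reported for assessment center ratings, but with only ten assessees the estimate itself is unstable, which is precisely the data-scarcity problem described above.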

There has been little research conducted regarding the criterion-related validity of individual psychological assessments, especially in recent years (Howard & Thomas, 2010; Kaiser et al., 2008; Ryan & Sackett, 1998). Moreover, the validity questions are themselves broad: What are the validities of the instruments? What are the validities of the assessor ratings or interpretations? How valid are the conclusions and, if made, the recommendations? Perhaps most importantly, what are the criteria to be used to evaluate validity?

Because criterion-related validation requires sufficient sample sizes, it is often a reality that statistical evidence will never be available for the evaluation of many, if not most, individual assessment programs. Moreover, the issues regarding the selection and measurement of appropriate criteria have been given minimal attention and will no doubt continue to be a major barrier even if an assessment program has an adequate participation rate.
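The small-sample problem can be made concrete with a back-of-the-envelope confidence interval. Assuming a hypothetical observed validity of r = .30 from only 15 assessees, a Fisher z-transform interval (a standard approximation, sketched here rather than prescribed) spans zero by a wide margin:

```python
import math

def corr_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation via the
    Fisher z-transform (assumes bivariate normality)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical numbers: an observed validity of .30 with only 15 assessees.
lo, hi = corr_ci(0.30, 15)
print(round(lo, 2), round(hi, 2))  # roughly -0.25 to 0.70: the interval spans zero
```

An interval this wide conveys essentially no statistical evidence about the assessment's validity, which is why formal criterion-related validation is out of reach for most individual assessment programs.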

We alluded previously to the criterion problem in individual assessment, and it remains a vexing problem for research into the validity and utility of assessment. Given the purposes for which individual assessment is used (i.e., selection into highly complex, responsible positions; individual coaching and development; identification of future talent for succession planning, reorganization, or downsizing), the criteria of interest must necessarily be complex. Straightforward categorizations such as “successful/unsuccessful,” “high-performing,” and “high potential” are insufficient to reflect the multiple concerns facing decision makers that include not only what such individuals do, but how they do it. For the most important criteria, substantial time may need to pass before results can reasonably be evaluated. In addition, the nature of the particular situation (and how it may change in the future) will affect what criteria will be important and how they will be assessed. We do not intend to discourage research into the individual assessment process; rather, we are recommending that it be done with a greater level of sophistication and creativity that acknowledges and addresses the inherent complexities involved. Indeed, it may be that qualitative models that capture the richness of both the descriptions (and thus the contingent predictions) arising from individual assessments and the varied outcomes that are sought represent viable research approaches.

At best, it would seem, we can strive for some level of content or construct validity evidence. We can look for linkages between knowledge, skills, abilities, and personal characteristics (KSAPs) (or competencies) and existing assessment measures, or between KSAPs and newly developed measures, especially if they represent even moderate fidelity work simulations (e.g., role-play, in-basket) (see Landon & Arvey, 2007, for a more thorough discussion of models and methods for evaluating construct validity). We also can argue that assessee interviews are job related if they focus on job-required competencies and relevant work circumstances.

As with criterion-related validity, the samples available upon which to test fairness in a statistical sense are usually so small as to preclude most standard forms of analysis. Nevertheless, for clients with whom we have conducted a sufficient number of comparable assessments, our analyses of assessment outcomes have not revealed consequential differences by race, ethnicity, or gender.
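When enough comparable assessments do accumulate for a single client, one simple descriptive check is the standardized mean difference between groups. The sketch below uses invented ratings; a small Cohen's d (conventionally below about .20) is consistent with no consequential group difference, though it is not a formal adverse-impact or fairness analysis.

```python
import math

def cohens_d(g1, g2):
    """Standardized mean difference (pooled SD) between two groups of scores."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

group_a = [3.8, 4.1, 3.5, 4.4, 3.9, 4.0]  # hypothetical overall assessment ratings
group_b = [3.7, 4.1, 3.5, 4.3, 3.8, 4.0]  # hypothetical comparison group

print(round(cohens_d(group_a, group_b), 2))  # ≈ 0.17 for these invented data
```

With samples this small even a d of this size is statistically indistinguishable from zero, so such a check is best treated as monitoring rather than proof.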

Qualitative Indicators

The psychometric questions described above are inquiries about whether the individual psychological assessment provided accurate and job-relevant information about the assessee. However, we can also ask whether we measured the right things. Did we identify and properly weight the important KSAPs (competencies)? Did we correctly determine the “fit” issues and link them to the assessment process as we completed its design? Did the results we derived comport with the organization's values? No doubt there are many other questions of this nature that would help evaluate the success of the individual assessment process on a qualitative basis.

A related strategy is to link the outcomes of the assessment process with its stated purposes. For example, if the assessment was designed to support a succession plan, then relevant evaluative questions should be asked about whether sufficient numbers of individuals with the right capabilities were identified to move into vacancies as they materialized. Comparisons of a similar nature could be made when the individual assessments were used to support developmental programs or to design and enable organizational change initiatives.

An additional evaluative strategy could involve a comparison between assessment outcomes and 360 information. It is conceivable that this comparison could be accomplished using some quantitative indices, but it is more likely that small numbers of assessees would restrict the comparison to a qualitative evaluation. Planning ahead for such an evaluation would be worthwhile, so that similar competencies or characteristics could be incorporated in both the individual psychological assessment and the 360 protocol.
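One way such a comparison could be quantified, assuming matched competencies and no tied scores, is a Spearman rank correlation between the assessment ratings and the 360 ratings on the same competencies. The data below are invented purely for illustration:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1));
    this shortcut formula assumes no tied scores within a list."""
    def rank(values):
        ordered = sorted(values)
        return [ordered.index(v) + 1 for v in values]
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical mean ratings on five competencies from each source.
assessment = [4.2, 3.1, 3.8, 2.5, 4.6]
ratings_360 = [3.9, 3.3, 3.0, 2.8, 4.4]
print(spearman_rho(assessment, ratings_360))  # 0.9 for these invented data
```

With only a handful of competencies such an index is descriptive at best, which is consistent with the point above that small numbers of assessees usually restrict the comparison to a qualitative evaluation.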

There are many other potential indices that could be used for evaluative purposes. Certainly, formal or informal surveys of client and assessee satisfaction are possible. Other possibilities include reviews of developmental successes (or failures), success in performance improvement, team functioning (positive or negative), and attainment (or not) of human capital goals. All of these possibilities suggest that somehow individual psychological assessment should be linked to both individual behavior and organizational functioning for the evaluation to be complete.

In summary, there is minimal knowledge upon which to make evaluative judgments about individual psychological assessments. There seem to be either “believers” or “nonbelievers,” and individuals have found themselves in one camp or the other based on their own experiences and understanding of the research literature. To make a serious attempt at objectively evaluating an individual assessment program, the ideas (e.g., Does the assessment design match the competencies? Is the assessment program perceived as valuable? Is there some other program that would be more effective?) offered by Davidson (2010) should be especially useful.

Assessment for High-Risk Occupations

There are a number of situations in which psychologists are asked to conduct individual psychological assessments for purposes of screening out individuals unsuited by temperament or psychological health to perform in high-stress, high-trust/responsibility positions. Examples of these situations include screening of police officers and other security personnel, airline pilots, air traffic controllers, and nuclear power plant personnel. The purpose of assessment in these situations is different from other (traditional) purposes. Here, the purpose specifically includes searching for characteristics that may lead to behaviors that place others (often the public at large) at risk for harm. These types of assessment also place the psychologist in the position of making absolute recommendations of “accept” or “reject.”

Execution of these assessments may involve the use of clinical tools (in fact, in some cases, the particular tools may be mandated by law or regulation) such as the Minnesota Multiphasic Personality Inventory (MMPI) and a “clinical” interview. Interpretations of such tools must be closely guided by the developer's research and may involve both normative and within-person comparisons. Individual responses to particular items may become more important (e.g., the MMPI “critical items” that ask about clinically relevant behaviors), and the interview will include searching for examples or confirmation of aberrant behaviors (e.g., drug and alcohol abuse, family problems, or criminal activity). There must be particular sensitivity exercised in providing feedback both to the assessee and to the organization, especially if the recommendation is negative. Psychologists must also be keenly aware of the level of liability they are undertaking with respect to both false positives (who may subsequently act in ways that cause harm to others) and false negatives (who will be denied employment).

These assessments raise significant questions about who is qualified to conduct them. It is not clear that typical I/O psychologists have the requisite training and supervision to do so, although surely they are capable of becoming qualified. However, the availability of the requisite learning resources may be limited outside of a university clinical psychology program. Licensure as a psychologist is certainly a requirement to perform such assessments, meaning that those wishing to do so must meet all of the educational and supervision requirements imposed by state licensing boards. Moreover, even licensed I/O psychologists are not necessarily qualified to conduct this kind of assessment. Psychologists must evaluate their own training and expertise to ensure that they do not violate the ethical standards’ requirement that they not practice outside their areas of competence.

Opportunities to Test Hypotheses and Self-Correct

One of the strengths as well as difficulties of individual assessment lies in the ability of the assessor to consider the data available, draw tentative conclusions from them, and then evaluate the accuracy and importance of those conclusions during the assessment process. This iterative process is sometimes used in interviewing as interviewers evaluate individuals’ responses and adapt subsequent questions to provide confirmatory or conflicting information. The individual nature of psychological assessments affords the assessor opportunities to “build up” a clearer (hopefully more accurate) description of the assessee. For example, an initial review of test information may suggest that the assessee is likely to be more comfortable continuing to gather information before being willing to make a decision. Analysis of the person's responses to an in-basket exercise may increase the likelihood of that conclusion, call it into question, or suggest an alternative interpretation of the now more complete dataset. This process also allows for the individual's own self-assessment to inform and deepen the overall, integrated picture of the person. This process, however, presents risks as well as opportunities. The unwary assessor may fall into a number of troublesome traps: early interpretations can become self-fulfilling if the assessor attends too closely to corroborating evidence from other sources; the self-correction process can feed the assessor's perception of unparalleled acumen rather than serve as a reminder of fallibility; and data from various sources may be accorded inappropriate differential weight derived from the assessor's hypotheses rather than from the work analysis or the assessees’ performance.

Cultural Issues

This topic deserves a chapter of its own to reflect the extent of research and the breadth of the issues that it raises. Several authors have summarized the influence of culture on assessment processes (both individual and group), and all agree that cultural issues greatly increase the complexity of assessment and must be addressed in both research and practical settings (Caligiuri & Paul, 2010; Fulkerson, 1998; Nyfield & Baron, 2000; Ryan & Tippins, 2010). These same authors also emphasize that the need for assessments on a global basis, with all of the attending cultural complexities, will continue to grow.

Cultural complexities can be examined through a number of different windows. Some authors have argued for national or geographic-level cultural dimensions reflecting the normative values of a specified group. Trompenaars (1994) constructed a set of descriptors of individual differences as manifested in behavioral differences ascribed to cultures as a whole (cf. Fulkerson, 1998):

Universalist versus Particularist (rules apply equally to everyone versus rules must be subordinate to particular circumstances, such as relationships);

Individual versus Collective (action and responsibility reside in the individual versus the group—whether family, team, organization, or society—taking precedence);

Neutral versus Affective (value placed on self-control and objectivity versus value placed on spontaneity and feelings);

Specific versus Diffuse (relationships are specific to a particular role or circumstance versus relationships with less distinct boundaries between roles);

Ascription versus Achievement (status derives from position, family connections, wealth versus status is earned by individual effort);

Internal versus External (control over events and actions resides within the individual versus acceptance of inevitability of external environment and events that constrain or control actions);

Differences in time perceptions regarding past, present, and future (concern with tradition and established ways of doing things versus concern for the here and now versus focus on the future, goals, and objectives).

Trompenaars (1994) ascribed these characteristics to the cultural level and assigned specific countries as exemplars of them. Fulkerson (1998) suggested that information obtained from measures used in cross-cultural assessment must be filtered by the assessor to account for cultural influences. Moreover, the criteria defining success may vary in the cross-cultural context.

Others (e.g., Ryan & Tippins, 2010) have cautioned against group-level “stereotypical pronouncements.” Rather, they focus attention on a variety of design issues that deserve close attention in cross-cultural milieus. Each of these issues must be successfully addressed for the assessment program to be viable. Among these issues are the following:

Job content and level (jobs with the same title may be different across international locations);

Differences in economic and legal contexts;

Assessment traditions or familiarity with particular tools;

Characteristics of local assessment environment (such as the extent to which managers and assessees are familiar with the goals, processes, and tools of assessment);

Availability of suitable infrastructure and human resource expertise to support assessment programs;

Compliance with fair employment and privacy practices and laws;

Impact of differences in labor markets;

Equivalence of assessment tools and resulting score interpretations.

Ryan and Tippins (2010) also discuss issues in implementation that affect all assessment processes but have particular impact for cross-cultural assessment. Designers of assessment programs must balance needs to control cost, time, and resource demands against local practices and resource availability, which in turn implies decisions about the level of flexibility to be allowed in departing from standardized processes. The issue of flexibility in turn raises the question of the level and type of monitoring necessary to maintain the consistency and validity of the system. Designers also must consider how they will ensure the support, or at least the cooperation, of stakeholders in the global organization.

Assessment Challenges

Conducting individual psychological assessments in cross-cultural environments presents a number of particular challenges for which the responses may be substantially complex with little guidance. These and many of the other issues we have discussed must be viewed from three perspectives, each of which may yield different decisions. First, the psychologist must determine whether the assessment program will be a “home country” (e.g., the United States) process applied internationally. This approach has the advantages of standardization and consistency as well as often being familiar to corporate leadership, but it may present serious concerns with the substantive questions noted below and with acceptance in the global organization. Second, the psychologist must decide whether to eschew standardization in favor of modifying the assessment program to fit optimally each culture in which it will be introduced. Certainly, this approach is likely to increase acceptance, but it does so by sacrificing clear comparability of results and introducing a great deal of complexity, possibly rendering it impractical for implementation and limited in usefulness. Finally, a third approach may be to seek to develop a process that is either “culture-free” (whatever that may mean) or at least broadly culturally acceptable.

The choice of approach must take into account the purposes for which the assessment process will be used. Will the assessments be used to evaluate individuals from other cultures for positions in the “home culture,” or only for positions within their native cultures? Will the assessments be used to select among home-culture candidates for expatriate assignments? Additionally, the perspective will impact and be impacted by such questions as: What assessment constructs are applicable across cultures? What assessment methods are acceptable across cultures? What HR systems within a country have an impact on assessment? What are the international implications of fair employment and privacy laws?

In addition, there are a number of issues that arise with respect to specific assessee characteristics. Do assessees meet the basic qualification requirements of the target positions? Are there issues of educational level or even literacy and numeracy among the potential assessees?

One area in which there is considerable research is the impact of culture on the nature and prevalence of dissimulation on written assessment measures (Ryan & Tippins, 2010). What is the influence of values on responses? How open or truthful are the assessees? And to what degree is there a “trusting” relationship? Finally, it is recommended that the assessor develop a clear understanding of teamwork, since team environments are often a major component of global operations.

Internal versus External Assessment Programs

We are defining this topic in terms of the “residency” of the assessor. The internal assessor is an employee of the organization; the external assessor is not an employee of the organization, but is providing assessment services under some form of contractual relationship. We are not aware of any current data on the frequency with which assessors fit the internal versus the external categorization. Ryan and Sackett (1987) reported from their survey that about 25% of assessors worked in an internal capacity, a value similar to the estimate (26%) offered by Jeanneret and Silzer (1998).

In this discussion, we will examine the “pros” and “cons” of internal versus external assessors for the design and implementation of an individual psychological assessment program. There is no meaning ascribed to the order in which the “pros” and “cons” are presented. We are not taking length of assessment experience into consideration when deriving our list, but clearly tenure as an internal assessor could influence the “pros” and “cons” for the internal category.

Internal “Pros”

The assessor knows the organization, its business strategy, values, and culture. The assessor also has more information about the relevant “players,” including the manager, peers, team members, subordinates, and internal customers of the assessee. Additionally, the internal assessor has a better understanding of the relevant communication networks, informal groups and leaders, reward systems, and the human resource management process in general.

The internal assessor may well have the confidence of those who become associated with the assessment process. These individuals could include senior managers, human resource professionals, high potentials, and assessees in general. Moreover, the internal assessor may be able to track and evaluate the success of an assessment program and make adjustments to it as necessary. An internal assessor may be able to link the assessment process with other indices such as the results of a 360 process, performance appraisal program, bonus system, or other success indices.

Given their organizational and operational knowledge, the internal assessor may be able to design and implement effective development programs. This could include serving as a coach or facilitating a mentor relationship between a manager and an assessee. Again, it would be possible to track changes over time and make adjustments if deemed necessary.

Internal “Cons”

The assessor may be perceived as having knowledge and relationships that could be used in ways that were detrimental to assessees’ careers. In this regard, there may be a lack of trust on the part of assessees or others associated with the process as to the motivations of the assessor. More directly, the assessor may be perceived as “biased” in terms of what is reported, or the assessor may rely upon other information (not relevant to the assessment) in making assessment reports. It is also possible that the assessor's manager (or other superior) could exert some influence (either positive or negative) with regard to a particular assessee, leaving the assessor feeling compelled to support the boss's choice.

A somewhat different problem also could confront the internal assessor. Specifically, the assessor may be placed in a position whereby information is requested about an assessee that should otherwise remain confidential. In a similar manner, the security of assessment data could be compromised. Potentially, if some “negative” outcome is attributed to the assessment, the assessor may become “branded.” In turn, the assessor's effectiveness could be limited.

In summary, there are many positives associated with serving as an internal assessor. On the other hand, “organizational politics” can be limiting and even detrimental to the effectiveness of an internal assessment program.

External “Pros”

The assessor is typically not subject to organizational politics, although this is not always the case. Nevertheless, under more ideal circumstances, the assessor can be objective without undue organizational influences controlling the assessment conclusions and recommendations. Therefore, the assessor may be more trustworthy in the eyes of the assessees and others, particularly with regard to confidentiality and organizational biases. The assessor also has greater control over access to assessment results and may be better able to restrict their use.

In addition to their “unbiased” perspective, external assessors may be able to impose new or innovative strategies for assessee development or assignment. This objectivity also could induce new configurations of work groups, communications links, or mentoring relationships. External assessors may also bring a broader perspective informed by contact with other assessment processes, developmental strategies, and normative data that cut across many similar and different organizations.

Finally, it is likely that the use of an external assessor is more cost effective if the focus is primarily on the assessment process. On the other hand, internal assessors often have broader responsibilities than just conducting assessments, so the decision to use internal versus external assessors should not be based strictly on direct costs.

External “Cons”

In certain instances the assessor does not have the extensive organizational knowledge of business strategies, culture, “players,” informal groups, etc. that can lend insight or richness to what is learned and reported about the assessee. We do not mean to imply that all assessors lack this knowledge because surely some have long-term relationships with clients that allow them to develop this supporting knowledge. On the other hand, when there are only one or two assessments for a client, it is doubtful that the assessor has the opportunity or incentive to gather extensive organizational information.

Unless the assessment is tied to a 360 or some other informational source, and absent an ongoing presence such as a coaching relationship, the external assessor typically does not have access to behaviors exhibited by the assessee over time. Thus, the assessor is constrained by the limited sample of information that can be obtained at a single point in time. Finally, perhaps the most important “con” is the lack of access to success indicators that would allow the assessor opportunities to evaluate the assessment outcomes in terms of validity and utility.

Conclusions

We conclude this chapter by reflecting on five points that in our judgment seem to summarize the state of practice and science with respect to individual psychological assessment today. First, it should be abundantly clear that the demand for and practice of providing such assessments show no signs of abating; indeed, if anything, the prevalence may be growing both in volume and in new arenas. Whether this represents what Highhouse (2002) calls “functional autonomy” or an indication of the value it offers to organizations is unclear; however, it seems unlikely to us that decision makers would continue to seek out, rely on, and pay for the information that individual assessments provide if that value were indeed absent. Rather, it seems to us that the difficulties and uncertainties surrounding the decisions that senior managers must make with respect to the people selected, promoted, developed, and prepared for complex positions, and the impact that those decisions have on organizations, call for complex, subtle, and sophisticated approaches to help inform them. Individual psychological assessment offers one of those approaches.

Second, there appears to be a trend for the convergence of individual assessment and assessment center technologies. Although the approaches are still distinguishable, there is a growing overlap in the methodologies used to obtain assessment data (e.g., in-baskets, role-plays, case studies, and presentations). In some instances, there are even multiple assessors involved, for example, as part of an interview panel or as a separate role-player, requiring the use of a group process of various sorts (informal discussion, independent ratings, or consensus evaluations) as part of the final integration. These tools and techniques, however, continue to be used for the assessment of a single individual. It remains to be seen how widely they will be adopted for this purpose, but these additions to the process beyond tests and interviews seem likely to enhance both the rigor and structure (and perhaps validity) of individual assessments, while increasing our options for assessment design to match organizations’ purposes and objectives.

Third, the evidence from clinical studies and cognitive psychology still points toward the superiority of statistical over intuitive methods for the prediction of specific criteria; however, there seems to be a growing sense of the science that some combination of structured/statistical and judgmental approaches to assessment integration is more accurate. Moreover, these approaches provide a means to incorporate the unique value of psychological interpretation and insight within specific contexts. However, there remain some substantive questions about the research on statistical integration dealing, for example, with vexing issues of sample sizes for comparably assessed populations, restriction in range, and the types of “predictions” arising from assessment reports. In particular, in many cases sufficient attention has not been focused on the nature and comprehensiveness of the criteria, especially given the complex information typically provided by individual assessment reports. Individual psychological assessment remains one of the least effectively and frequently researched, yet widely used, selection tools available. We recommend that more sophisticated research paradigms be developed to account for these difficulties, perhaps using qualitative methods to capture the contingent nature of assessment predictions and the multidimensional nature of the outcomes of importance to hiring managers and organizations.
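As a minimal sketch of what “mechanical” (statistical) combination means in practice: dimension scores are combined using fixed, predetermined weights rather than holistic judgment. The dimension names and weights below are hypothetical, not drawn from any published model.

```python
def mechanical_combination(scores, weights):
    """Combine standardized dimension scores using fixed weights."""
    return sum(weights[dim] * scores[dim] for dim in weights)

# Hypothetical assessee: z-scores on three assessment dimensions.
candidate = {"cognitive": 1.2, "conscientiousness": 0.5, "leadership_sim": 0.8}

# Hypothetical fixed weights, e.g. estimated from past criterion data.
weights = {"cognitive": 0.5, "conscientiousness": 0.3, "leadership_sim": 0.2}

composite = mechanical_combination(candidate, weights)  # ≈ 0.91
```

A hybrid approach of the kind the text describes might use such a composite as a statistical anchor that the assessor then adjusts, within documented limits, on the basis of contextual judgment.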

Fourth, technology has shaped, and will continue to shape, individual psychological assessments and how they are conducted. The addition of technological elements to simulation exercises is almost certainly an unqualified positive, in that work analyses have consistently shown the incursion of technology into managerial offices and executive suites. These elements thus increase the fidelity of the simulations as well as provide information about skills that are becoming more relevant for senior managers. The increasing use of technology for the administration of assessment tools, however, appears to be more of a mixed blessing. Certainly, remote (frequently online) administration of personality tests, in-baskets, and even interviews has the capability of extending the reach of assessments while keeping their costs reasonable. However, the many questions raised by unproctored “tests” (Tippins, 2009) apply equally to assessment tools. Of even greater concern for individual assessment may be the way in which such administration changes the dynamic of the assessment from very personal, “high touch” to remote, “high tech.” It is not clear yet how this change will impact the practice of assessment or its psychometrics, but it seems likely to us that it will have a greater influence on assessments conducted for developmental purposes than for selection.

Finally, expansion of individual psychological assessment to global settings and purposes presents new challenges, many of which have not yet been resolved. The challenges are both ones of language and ones of culture, which interact substantively with the underlying assessment purposes. Selection for promotion within a nonwestern culture is quite a different matter from selection for an expatriate assignment to another country, language, and culture. Growth and development may have very different meanings within cultures having non-Aristotelian heritages, with different philosophical and religious roots. Research continues on the extent to which the structure of personality can be understood at a universal level, but it is already clear that the cultural milieu in which a person lives and functions (both societally and organizationally) affects both the behaviors to which personality characteristics lead and the ways in which behavior is interpreted. Translations of competencies, dimensions, and personality assessment measures are to be approached cautiously and with a good deal of respect for what we do not know about behavior. These concerns, of course, are in addition to the practical matters of costs, timing, acceptance, and willing participation.

Individual psychological assessment persists as a means to obtain and interpret complex information about behavior with respect to complex positions and assignments. The flexibility to evaluate multiple types of “fit” (roles, organizations, context) allows individual assessment to respond to a range of concerns in both selection and development and thus be a valuable offering of I/O psychology to organizations. It has never been more urgent for us to fulfill this need effectively, for the demand to match highly qualified individuals with the requirements of those positions has never been greater.

American Psychological Association. ( 2002 ). Ethical principles of psychologists and code of conduct . Washington, DC: Author.

Astin, A. W. ( 1961 ). The functional autonomy of psychotherapy.   American Psychologist , 16 , 75–78.

Bray, D. W., & Grant, D. L. ( 1966 ). The assessment center in the measurement of potential for business management.   Psychological Monographs , 80 (625), 1–27.

Burke, R. J. ( 2006 ). Why leaders fail: Exploring the darkside.   International Journal of Manpower , 27 (1), 91–100.

Caligiuri, P., & Paul, K. B. ( 2010 ). Selection in multinational organizations. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection (pp. 781–799). New York: Taylor & Francis Group, LLC.

Campion, M. A., Palmer, D. K., & Campion, J. E. ( 1997 ). A review of structure in the selection interview.   Personnel Psychology, 50, 655–702.

Chuah, S. C., Drasgow, F., & Roberts, B. W. ( 2006 ). Personality assessment: Does the medium matter? No.   Journal of Research in Personality , 40 , 359–376.

Dalal, D. K., & Nolan, K. P. ( 2009 ). Using dark side personality traits to identify potential failure.   Industrial and Organizational Psychology, 2(4), 434–436.

Davidson, E. J. ( 2010 ). Strategic evaluation of the workplace assessment program. In D. H. Reynolds & J. C. Scott (Eds.), Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent (pp. 729–756). San Francisco, CA: Jossey-Bass.

Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, & Department of Justice. ( 1978 ). Uniform guidelines on employee selection procedures.   Federal Register, 43(166), 38295–38309.

Frisch, M. H., ( 1998 ). Designing the individual assessment process. In R. Jeanneret & R. Silzer (Eds.), Individual psychological assessment: Predicting behavior in organizational settings (pp. 135–177). San Francisco, CA: Jossey-Bass.

Fulkerson, J. R. ( 1998 ). Assessment across cultures. In R. Jeanneret & R. Silzer (Eds.), Individual psychological assessment: Predicting behavior in organizational settings (pp. 330–362). San Francisco, CA: Jossey-Bass.

Ganzach, Y., Kluger, A. N., & Klayman, N. ( 2000 ). Making decisions from an interview: Expert measurement and mechanical combination.   Personnel Psychology , 53 (1), 1–20.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., III, & Bentsen, C. ( 1987 ). Meta-analysis and assessment center validity [Monograph].   Journal of Applied Psychology , 72 , 493–511.

Guion, R. M. ( 1998 ). Assessment, measurement, and prediction for personnel decisions . Mahwah, NJ: Lawrence Erlbaum Associates.

Highhouse, S. ( 2002 ). Assessing the candidate as a whole: A historical and critical analysis of individual psychological assessment for personnel decision making.   Personnel Psychology , 55 (2), 363–396.

Highhouse, S. ( 2008 ). Stubborn reliance on intuition and subjectivity in employee selection.   Industrial and Organizational Psychology , 1 (3), 333–342.

Hoffman, P. J. ( 1968 ). Cue-consistency and configurality in human judgment. In B. Kleinmuntz (Ed.), Formal representation of human judgment (pp. 55–90). New York: Wiley.

Hogan, R. , & Hogan, J. ( 1998 ). Theoretical frameworks for assessment. In R. Jeanneret & R. Silzer (Eds.), Individual psychological assessment: Predicting behavior in organizational settings (pp. 27–53). San Francisco, CA: Jossey-Bass.

Hogan, R., & Hogan, J. ( 2001 ). Assessing leadership: A view from the dark side.   International Journal of Selection and Assessment, 9(1/2), 40–51.

Holland, J. ( 1985 ). Making vocational choices: A theory of careers (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Hollenbeck, G. P. ( 2009 ). Executive selection—what's right . . . and what's wrong.   Industrial and Organizational Psychology , 2 (2), 130–143.

Howard, A. ( 1990 ). The multiple facets of industrial-organizational psychology: Membership survey results . Bowling Green, OH: Society for Industrial and Organizational Psychology.

Howard. A., & Thomas, J. W. ( 2010 ). Executive and managerial assessment. In J. C. Scott & D. H. Reynolds (Eds.), Handbook of workplace assessment (pp. 395–436). San Francisco, CA: Jossey-Bass.

International Task Force on Assessment Center Guidelines ( 2000 ). Guidelines and ethical considerations for assessment center operations . Bridgeville, PA: Development Dimensions International, Inc.

Jeanneret, R. ( 1998 ). Ethical, legal, and professional issues for individual assessment. In R. Jeanneret & R. Silzer (Eds.), Individual psychological assessment: Predicting behavior in organizational settings (pp. 88–131). San Francisco, CA: Jossey-Bass.

Jeanneret, R., & Silzer, R. (Eds.). ( 1998 ). Individual psychological assessment: Predicting behavior in organizational settings . San Francisco, CA: Jossey-Bass.

Jeanneret, R. , & Silzer, R. ( 1998 ). An overview of individual psychological assessment. In R. Jeanneret & R. Silzer (Eds.), Individual psychological assessment: Predicting behavior in organizational settings (pp. 3–26). San Francisco, CA: Jossey-Bass.

Jeanneret, R. & Silzer, R. ( 2011 ). Individual psychological assessment: A core competency for industrial-organizational psychology.   Industrial and Organizational Psychology, 4, 342–351.

Kaiser, R. B., Hogan, R., & Craig, S. B. ( 2008 ). Leadership and the fate of organizations.   American Psychologist, 63, 96–110.

Landon, T. E., & Arvey, R. D. ( 2007 ). Practical construct validation for personnel selection. In S. M. McPhail (Ed.), Alternative validation strategies: Developing new and leveraging existing validation evidence (pp. 317–345). San Francisco, CA: Jossey-Bass.

Landy, F. L., & Conte, J. M. ( 2007 ). Work in the 21st century: An introduction to industrial and organizational psychology (2nd ed.). Malden, MA: Blackwell Publishing.

McCrae, R. R., & Costa, P. T., Jr. ( 1997 ). Personality trait structure as a human universal.   American Psychologist , 52 , 509–516.

Mead, A. D., & Drasgow, F. ( 1993 ). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis.   Psychological Bulletin, 114, 449–458.

Meehl, P. E. ( 1954 ). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence . Minneapolis, MN: University of Minnesota Press.

Meyer, P. ( 1998 ). Communicating results for impact. In R. Jeanneret & S. Silzer (Eds.), Individual psychological assessment: Predicting behavior in organizational settings (pp. 243–282). San Francisco, CA: Jossey-Bass.

Moses, J. ( 2011 ). Individual psychological assessment: You pay for what you get.   Industrial and Organizational Psychology, 4, 334–337.

Moses, J. L., & Eggebeen, S. L. ( 1999 ). Building room at the top: Selecting senior executives who can lead and succeed in the new world of work. In A. I. Kraut & A. K. Korman (Eds.), Evolving practices in human resource management (pp. 201–225). San Francisco, CA: Jossey-Bass.

Nyfield, G., & Baron, H. ( 2000 ). Cultural context in adapting selection practices across borders. In J. Kehoe (Ed.), Managing selection in changing organizations: Human resource strategies (pp 242–268). San Francisco, CA: Jossey-Bass.

Pearlman, K. ( 2009 ). Unproctored internet testing: Practical, legal, and ethical concerns.   Industrial and Organizational Psychology , 2 (1), 14–19.

Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., Fleishman, E. A., Levin, K. Y., Campion, M. A., Mayfield, M. S., Morgeson, F. P., Pearlman, K., Gowing, M. K., Lancaster, A. R., Silver, M. B., & Dye, D. M. ( 2001 ). Understanding work using the occupational information network (O*NET): Implications for practice and research.   Personnel Psychology , 54 , 451–492.

Ployhart, R. E., Weekley, J. A., Holtz, B. C., & Kemp, C. ( 2003 ) Web-based and paper-and-pencil testing of applicants in proctored settings: Are personality, biodata, and situational judgment tests comparable?   Personnel Psychology , 56 (3), 733–752.

Prien, E. P., Shippmann, J. S., & Prien, K. O. ( 2003 ). Individual assessment: As practiced in industry and consulting . Mahwah, NJ: Lawrence Erlbaum Associates.

Reynolds, D. H., & Rupp, D. E. ( 2010 ). Advances in technology-facilitated assessment. In D. H. Reynolds & J. C. Scott (Eds.), Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent (pp. 609–641). San Francisco, CA: Jossey-Bass.

Rothwell, W. J. ( 2001 ). Effective succession planning (2nd ed.). New York: AMACOM.

Ryan, A. M., & Sackett, P. R. ( 1987 ). A survey of individual assessment practices by I/O psychologists.   Personnel Psychology , 40 (3), 455–488.

Ryan, A. M., & Sackett, P. R. ( 1992 ). Relationships between graduate training, professional affiliation, and individual psychological assessment practices for personnel decisions.   Personnel Psychology , 45 (2), 363–387.

Ryan, A. M., & Sackett, P. R. ( 1998 ). Individual assessment: The research base. In R. Jeanneret & S. Silzer (Eds.), Individual psychological assessment: Predicting behavior in organizational settings (pp. 54–87). San Francisco, CA: Jossey-Bass.

Ryan, A. M., & Tippins, N. T. ( 2010 ). Global applications of assessment. In D. H. Reynolds & J. C. Scott (Eds.), Handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent (pp. 577–606). San Francisco, CA: Jossey-Bass.

Silzer, R., & Church, A. H. ( 2009 ) The pearls and perils of identifying potential.   Industrial and Organizational Psychology , 2 (4), 377–412.

Silzer, R., & Davis, S. L. ( 2010 ). Assessing the potential of individuals: The prediction of future behavior. In J. C. Scott & D. H. Reynolds (Eds.), Handbook of workplace assessment (pp. 495–532). San Francisco, CA: Jossey-Bass.

Silzer, R. F., & Dowell, B. E. (Eds.). ( 2010 ). Strategy-driven talent management: A leadership imperative . San Francisco, CA: Jossey-Bass.

Silzer, R. F., & Jeanneret, R. ( 2011 ). Individual psychological assessment: A practice and science in search of common ground.   Industrial and Organizational Psychology, 4, 270–296.

Simon, H. A. ( 1997 ). Administrative behavior (4th ed.). New York: The Free Press.

Society for Industrial and Organizational Psychology. ( 2003 ). Principles for the validation and use of personnel selection procedures . Bowling Green, OH: SIOP, Inc.

Society for Industrial and Organizational Psychology. ( 1999 ). Guidelines for education and training at the doctoral level in industrial/organizational psychology . Bowling Green, OH: SIOP, Inc.

Thornton, G. C. ( 1992 ). Assessment centers in human resource management . Reading, MA: Addison-Wesley.

Thornton, G. C., Hollenbeck, G. P., & Johnson S. K. ( 2010 ). Selecting leaders: Executives and high potentials. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection (pp. 823–840). New York: Taylor & Francis Group, LLC.

Tippins, N. T. ( 2009 ). Internet alternatives to traditional proctored testing: Where are we now?   Industrial and Organizational Psychology , 2 (1), 2–10.

Tippins, N. T., Beaty, J., Drasgow, F., Gibson, W. M., Pearlman, K., Segall, D. O., & Shepherd, W. ( 2006 ). Unproctored internet testing in employment settings.   Personnel Psychology , 59 (1), 189–225.

Trompenaars, F. ( 1994 ). Riding the waves of culture: Understanding diversity in global business . London, England: The Economist Books.

Whitecotten, S. M., Sanders, D. E., & Norris K. B. ( 1998 ). Improving predictive accuracy with a combination of human intuition and mechanical decision aids.   Organizational Behavior and Human Decision Making Processes , 76 (3), 325–348.

Wiggins, J. S. ( 1973 ). Personality and prediction: Principles of personality assessment . Reading, MA: Addison-Wesley Publishing Company.

Wunder, R. S., Thomas, L. L., & Luo, Z. ( 2010 ). Administering assessments and decision-making. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection (pp. 377–398). New York: Taylor & Francis Group, LLC.


Faculty Resources

Assignments

The assignments in this course are openly licensed, and are available as-is, or can be modified to suit your students’ needs. Selected answer keys are available to faculty who adopt Waymaker, OHM, or Candela courses with paid support from Lumen Learning. This approach helps us protect the academic integrity of these materials by ensuring they are shared only with authorized and institution-affiliated faculty and staff.

If you import this course into your learning management system (Blackboard, Canvas, etc.), the assignments will automatically be loaded into the assignment tool, where they may be adjusted or edited. Assignments also come with rubrics and pre-assigned point values that may easily be edited or removed.

The assignments for Introductory Psychology are ideas and suggestions to use as you see appropriate. Some are larger assignments spanning several weeks, while others are smaller, less-time consuming tasks. You can view them below or throughout the course.

Discussion Grading Rubric

The discussions in the course vary in their requirements and design, but this rubric below may be used and modified to facilitate grading.


  • Assignments with Solutions. Provided by : Lumen Learning. License : CC BY: Attribution


General Psychology Copyright © by OpenStax and Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


Psychological Testing

Introduction:

Psychological testing is a process that uses a combination of techniques to help arrive at some hypotheses about a person and their behavior, personality, and capabilities. It is also referred to as psychological assessment, or as performing a psychological battery on a person. Psychological testing is nearly always performed by a licensed psychologist or a psychology trainee (such as an intern). Psychologists are the only professionals expertly trained to administer and interpret psychological tests.

Psychological testing should never be performed in a vacuum. Part of a thorough assessment is that the individual also undergo a full medical examination, to rule out a medical, disease-related, or organic cause for the individual’s symptoms.

The Components of Psychological Testing:

Norm-Referenced Tests

A standardized psychological test is a task or set of tasks given under standard, set conditions. It is designed to assess some aspect of a person’s knowledge, skill or personality. A psychological test provides a scale of measurement for consistent individual differences regarding some psychological concept and serves to line up people according to that concept.

Tests can be thought of as yardsticks, but they are less efficient and reliable than actual yardsticks. A test yields one or more objectively obtained quantitative scores so that, as much as possible, each person is assessed in the same way. The intent is to provide a fair and equitable comparison among test takers.

Norm-referenced psychological tests are standardized on a clearly defined group, termed the norm group, and scaled so that each individual score reflects a rank within the norm group. Norm-referenced tests have been developed to assess many areas, including intelligence; reading, arithmetic, and spelling abilities; visual-motor skills; gross and fine motor skills; and adaptive behavior. Psychologists have a choice of many well-standardized and psychometrically sound tests with which to evaluate an individual.
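The idea of scaling a score as a rank within the norm group can be sketched as follows. The norm scores here are invented for illustration; real tests use much larger, carefully sampled norm groups and published conversion tables.

```python
from statistics import mean, stdev

def percentile_rank(raw_score, norm_group):
    """Percentage of the norm group scoring at or below raw_score."""
    at_or_below = sum(1 for s in norm_group if s <= raw_score)
    return 100.0 * at_or_below / len(norm_group)

def z_score(raw_score, norm_group):
    """Standard score: distance from the norm-group mean in SD units."""
    return (raw_score - mean(norm_group)) / stdev(norm_group)

norm_group = [85, 90, 95, 100, 100, 105, 110, 115, 120]  # hypothetical norms

pr = percentile_rank(110, norm_group)  # near the top of this small group
z = z_score(110, norm_group)           # positive: above the norm-group mean
```

Both conversions express the same underlying principle: an individual’s raw score carries meaning only relative to the distribution of scores in the norm group.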

Interviews

Valuable information is gained through interviewing. When the assessment is for a child, interviews are conducted not only with the child, but also with the parents, teachers, and other individuals familiar with the child. Interviews are more open and less structured than formal testing and give those being interviewed an opportunity to convey information in their own words.

A formal clinical interview is often conducted with the individual before the start of any psychological assessment or testing. This interview can last anywhere from 30 to 60 minutes, and includes questions about the individual’s personal and childhood history, recent life experiences, work and school history, and family background.

Informal Assessment

Standardized norm-referenced tests may at times need to be supplemented with more informal assessment procedures, such as projective tests or even career testing or teacher-made tests. For example, in the case of a child, it may be valuable to obtain language samples from the child, test the child’s ability to profit from systematic cues, and evaluate the child’s reading skills under various conditions.

The realm of informal assessment is vast, but informal testing must be used more cautiously since the scientific validity of the assessment is less known.



  8. Assessment in School Psychology

    Introduction. Assessment in school psychology plays an indispensable role in shaping the educational trajectories and personal development of students. Rooted in the early days of psychology and education, the journey of assessment mirrors the evolution of both disciplines, reflecting changing paradigms, advancements in understanding, and ...

  9. Psychological Assessment: What Is It & More I Psych Central

    Psychological evaluations are primarily used to help make an accurate diagnosis and ultimately, determine the best treatment options, if needed. Some of the mental health conditions evaluations ...

  10. Psychological Assessment

    Psychological assessment is a complex, integrative, and conceptual activity that involves deriving inferences from multiple sources of information to achieve a comprehensive understanding of a client or client system. It involves the ability to measure and formulate degree of need and mental status, develop psychological profiles in response to ...

  11. PDF Unit 1 Introduction to Assessment: Definition, Description and

    1.7 MEANING OF PSYCHOLOGICAL TESTING. Psychological testing is a field characterised by the use of samples of behaviour in order to assess psychological construct(s), such as cognitive and emotional functioning, about a given individual. The technical term for the science behind psychological testing is psychometrics.

  12. Psychological Assessment: Meaning, Purpose, and How it Helps

    A psychological assessment is a process used to test an individual's mental health and emotional well-being. It used to identify problems and potential concerns, as well as to recommend treatments or therapies. In this blog post, we will discuss the purpose of psychological assessments, how it'll be use, and some tips for using them ...

  13. How to Write a Biopsychosocial Assessment (With Template ...

    When the 5 P's are completed, a biopsychosocial assessment moves on to the final touches. This includes the Mental Status Exam and attaching any relevant psychological testing or outcome measures that were given. Finally, a summary, which consists of the most pertinent information from the 5 P's, is written.

  14. PSY640 Week Four Psychological Assessment Report-Violet-PC

    Evaluating Objective and Projective Asessments Prior to beginning work on this assignment, review Chapters 8 and 9 in your textbook. In this assignment, you. Skip to document. University; High School ... PSY640 Week Four Psychological Assessment Report. Patient's Name: Ms. S. Date of Evaluation: 10/01/ Date of Birth: 01/01/1991 Age: 29 years ...

  15. Individual Psychological Assessment

    We concentrate on six major themes that organize both the scientific and practical knowledge regarding individual psychological assessment. After providing a definition and associated contextual background, we discuss the various organizational purposes for which individual assessments can be used, followed by a description of how assessments ...

  16. PYC4807

    44. Stuvia 1067593 pyc4807 139 mcqs and answers. Mandatory assignments 100% (4) 51. Psychological Assessment Summaries pdf. Summaries 100% (3) 8. Psych 2022031615345277. Practice materials 100% (1)

  17. Assignments

    Assignment: Social Psychology —Designing a Study in Social Psychology. Create a shortened research proposal for a study in social psychology (or one that tests common proverbs). *larger assignment, possibly the largest assignment. Could be broken into multiple parts and given advanced notice. Personality.

  18. Assignments

    Assignments. The assignments in this course are openly licensed, and are available as-is, or can be modified to suit your students' needs. Selected answer keys are available to faculty who adopt Waymaker, OHM, or Candela courses with paid support from Lumen Learning. This approach helps us protect the academic integrity of these materials by ...

  19. Final Psychological Report Assignment

    Psychological Report Assignment. Maame Afrifa School of Behavioral Sciences, Liberty University. ... The assessments included in this psychological report includes IPIP-NEO, Beck's Depression Inventory, Beck's Anxiety Inventory, and Marital Satisfaction Inventory-Revised (MSI-R). As a result, Mr. Denver is a evaluated as a good candidate ...

  20. Psychological Testing

    A formal clinical interview is often conducted with the individual before the start of any psychological assessment or testing. This interview can last anywhere from 30 to 60 minutes, and includes questions about the individual's personal and childhood history, recent life experiences, work and school history, and family background.