Intelligence Testing

E. Johnson, in Encyclopedia of Applied Ethics (Second Edition), 2012

Introduction

Intelligence testing refers to the theory and practice of measuring people’s performance on various diagnostic instruments (intelligence tests) as a tool for predicting future behavior and life prospects or as a tool for identifying interventions (e.g., educational programs). The interchangeability of ‘intelligence’ and ‘IQ’ in popular parlance creates an ambiguity, with IQ referring sometimes to a score on a test and sometimes to the characteristic (intelligence) that is the supposed cause of the score.

The history of IQ tests and their interpretation falls into four periods: (1) a prehistory from approximately 1865 to 1904, (2) a period of emergence and development from 1905 until after World War II, (3) a period of intense social scrutiny from the late 1960s to the early 1980s, and (4) a resurgence of social controversy in the mid-1990s and after.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123739322002866

Intelligence Testing

Wilma C.M. Resing, in Encyclopedia of Social Measurement, 2005

Intelligence A, B, and C

If we want to understand discussions about intelligence and intelligence testing, it is good to keep in mind the three-way split in intelligence—A, B, and C—made by Vernon in 1967. Intelligence A is the genetically determined disposition or potential to act intelligently; it cannot be influenced by culture, environmental stimulation, education, or learning experiences. It is a postulate; it can be neither observed nor measured. Intelligence B is the result of the interaction between this genetic disposition (A), environmental influences, and learning experiences. It is culturally bound and can be observed and estimated. It is the visible intelligence: the cognitive abilities that a person has at a particular moment in time as the result of earlier learning experiences. It changes during an individual's life span. Intelligence C is what intelligence tests—created to estimate intelligence B as well as possible—actually measure. Intelligence C is the measured intelligence of a person, expressed as an intelligence quotient (IQ); it is restricted to the specific test that is used.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0123693985000839

DIFFERENCES IN LEARNING AND NEURODEVELOPMENTAL FUNCTION IN SCHOOL-AGE CHILDREN

Melvin D. Levine, in Developmental-Behavioral Pediatrics (Fourth Edition), 2009

Additional Assessment Components

Intelligence testing can be helpful. Although an overall IQ may be misleading, the results of specific subtests often are helpful in providing evidence for one or more specific neurodevelopmental dysfunctions. Psychoeducational testing can yield highly relevant data, especially when such testing includes error analyses that pinpoint discrete breakdowns in reading, spelling, writing, and mathematics (see Chapter 82). A psychoeducational specialist, making use of input from multiple sources, can help formulate specific recommendations for regular and special educational teachers.

A mental health professional is valuable in identifying family-based issues complicating or aggravating neurodevelopmental dysfunctions. That professional also can diagnose any specific psychiatric condition contributing to the clinical picture in a child with academic problems.

Other “à la carte” assessments may be called for in individual cases. These would include a more detailed assessment from a speech and language pathologist, a social worker, a pediatric neurologist, or an occupational therapist.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781416033707000559

Lead Poisoning☆

R.L. Canfield, T.A. Jusko, in Encyclopedia of Infant and Early Childhood Development (Second Edition), 2008

Specific Cognitive and Neuropsychological Functions

Intelligence test scores reflect a sort of average of a person's performance on multiple individual subtests, each assessing competence in a general domain of cognitive functioning. Although IQ scores are reliably associated with children's BLLs, they reveal little about the specific types of cognitive functions that are affected. If, as is found with many toxins, lead selectively affects some biochemical or neurophysiological processes but spares others, then the cognitive functions most dependent on those more vulnerable processes would be disproportionately damaged. Consequently, a better understanding of the types of cognitive processes affected by lead could uncover more sensitive measures for indicating the presence of adverse effects of lead, and it might also open avenues to possible behavioral or neuropharmacological interventions for lead-exposed children.

Researchers have used two main methods to identify what some have called a behavioral signature of lead toxicity. One method has been to examine patterns of performance across the individual subtests used to derive an overall IQ score. A potentially stronger methodology has been to use clinical neuropsychological tests initially developed to characterize the pattern of behavioral deficits associated with localized brain damage. Studies using these methods are suggestive of several broad areas of cognitive function that are especially vulnerable to lead exposure—visual–motor integration, attention, and executive functions.

Visual–motor integration involves the ability to coordinate fine motor skills with visual–spatial perception in order to draw and copy geometric forms, assemble puzzles, and arrange blocks to create a specified configuration. Reviewing patterns of subtest performance across studies suggests that lead-related deficits in the overall IQ score are influenced by a particularly strong inverse association between BLL and performance on block-design tasks. The block-design subtest is also the most consistent among all subtests in showing a statistically significant association with BLL. Studies using neuropsychological tasks designed specifically to assess visual–spatial abilities and visual–motor integration have produced a consistent set of findings; namely, that BLL is inversely related to fine motor skills, eye–hand coordination, and standardized testing of the ability to copy geometric forms. Although these results are mostly consistent, in one longitudinal study visual–spatial abilities were most impaired at age 5 years, whereas extensive neuropsychological testing conducted at age 10 years revealed no clear deficit in visual–spatial skills.

Attention, a second area of possible special vulnerability, involves cognitive functions for sustaining performance over time, and for selecting particular stimuli for intensive cognitive processing. Some studies have used index scores based on combinations of IQ subtests and neuropsychological tests thought to reflect sustained and focused attention. Although sustained attention is generally unrelated to lead exposure, more highly exposed subjects in these studies showed poorer performance on tasks requiring focused attention.

Executive functions enable planning and goal-oriented cognition based largely on the ability to coordinate the activities of more specialized cognitive functions, such as memory, attention, and visual–spatial processing, and to flexibly apply these cognitive functions to meet changing task demands. Importantly, the line between certain aspects of attention and executive functioning is not clearly drawn. For example, focused attention involves the controlled use of attention and therefore is sometimes included as a component of executive functioning. Attentional flexibility is more generally considered to reflect executive functioning and involves the ability to shift the focus of attention between different sources of information or to consider alternative problem-solving strategies.

Two prospective studies have examined children's performance on classical measures of executive functioning, arriving at similar but not identical conclusions. In one study of 10-year-old children, BLL during childhood was associated with an increase in several measures of perseverative behavior. For example, although childhood BLL was unrelated to how quickly children learned a simple rule, children with higher BLLs appeared to be less cognitively flexible—after the rule changed they were more likely to continue making the previous, but now incorrect, response. Cognitive rigidity was also seen in a study in which 5-year-old children completed a computerized test battery. Lead exposure was unrelated to the ability to sustain attention on a simple task, but children with higher BLLs made more errors when they were required to shift to a new rule. These children also showed planning deficits when solving multistep problems. Children with higher BLLs made impulsive choices on the first step of the problem, which made subsequent steps more difficult, thereby causing them to make more errors. Finally, a study of young adults given a large battery of neuropsychological tests showed that tooth lead levels were inversely related to performance on groups of tasks that assessed focused attention and attentional flexibility (although not sustained attention).

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128165126000914

Interpreting pediatric intelligence tests: a framework from evidence-based medicine

Andrew J. Freeman, Yen-Ling Chen, in Handbook of Psychological Assessment (Fourth Edition), 2019

Framework for interpreting intelligence tests

As modern intelligence tests developed, so did the debate on how to interpret their results. At the most basic level, interpretation of intelligence tests consists of the assessor choosing the relative weight to place on two dimensions: qualitative–quantitative and idiographic–nomothetic (Kamphaus, Winsor, Rowe, & Kim, 2012; Kranzler & Floyd, 2013). First, an assessor must determine how heavily to weigh qualitative or quantitative interpretations. Qualitative interpretation focuses on the process of testing, or how an individual completes the test. Quantitative interpretation focuses on the empirical scores of an individual as the summary piece of interpretative information. Second, an assessor must determine how idiographic or nomothetic the interpretation should be. Idiographic interpretation focuses on the uniqueness of the person as identified by within-person performance over time or across abilities. Nomothetic interpretation focuses on how a person performs relative to a more general population represented in the test's norm group. In general, interpretation of children's intelligence tests will rely on all the available blends of the qualitative–quantitative and idiographic–nomothetic dimensions—qualitative-idiographic, qualitative-nomothetic, quantitative-idiographic, and quantitative-nomothetic. However, substantial debate remains about how to weigh the information gathered from the different approaches. Therefore, a framework for the evaluation of evidence from children's intelligence tests is necessary.

Evidence-based medicine (EBM) provides a general framework for the evaluation and management of the information overload that occurs in the context of assessment. EBM focuses on the utility of a test, not the test's theoretical merits. EBM defines the utility of a test as the extent to which testing improves outcomes relative to the current best alternative, which could include either a different test or no testing at all (Bossuyt, Reitsma, Linnet, & Moons, 2012). EBM calls on test interpretation to add value for an individual that outweighs any potential harms, such as misdiagnosis, recommendations for ineffective treatments, or the cost of testing (Straus, Glasziou, Richardson, & Haynes, 2010). In the context of psychological assessment, utility can be defined by the "Three Ps" (Youngstrom, 2008, 2013). Prediction is the concurrent or prospective association between the test outcome and a criterion of importance such as diagnosis, competence, or lack of effort. One example of a prediction question is: "In children, would low processing speed scores compared to normal processing speed lead to an accurate diagnosis of ADHD?" Prescription refers more narrowly to the test's ability to change the course of treatment or select a specific treatment. An example of a prescription question is: "In children, do children with low processing speed benefit from extended test-taking time compared to normal time limits for completing school exams?" Process represents the outcome's ability to inform about progress over the course of treatment and quantify meaningful outcomes. One example of a process question is: "In children, will scores on an intelligence test reflect improvement after a traumatic brain injury?" Applying the Three Ps to children's intelligence test interpretation provides the strongest and most persuasive reasons for completing an assessment (Meehl, 1997) because they directly link assessment to treatment and prognosis. Therefore, each of the four methods for interpreting intelligence tests will be evaluated in the context of the "Three Ps" of assessment.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128022030000031

Tying it all together

Michelle M. Martel, in The Clinical Guide to Assessment and Treatment of Childhood Learning and Attention Problems, 2020

Intellectual functioning

Kaufman Brief Intelligence Test, Second Edition

The Kaufman Brief Intelligence Test (KBIT-2) is a short test of general intellectual ability and verbal and nonverbal reasoning. On the KBIT-2, Chase performed in the average range, earning an IQ composite score of 90, ranking him at the 25th percentile when compared with other children of his age. Chances are 90 out of 100 that his IQ falls within a range from 84 to 97. There were no significant differences between his verbal and nonverbal reasoning. He performed in the average range (SS=96; 39th percentile) on verbal tasks assessing verbal knowledge and ability to solve riddles. He performed slightly, but not significantly, worse in nonverbal reasoning, again performing in the average range (SS=87; 19th percentile) on a task assessing his ability to complete partial patterns in a series of pictures.

Overall, Chase’s intellectual functioning fell in the average range compared to other individuals his age.
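Confidence bands like the 84–97 range reported above follow from standard psychometric formulas. Below is a minimal sketch, assuming an illustrative composite reliability of .93 (an assumed value, not a figure taken from the KBIT-2 manual) and the conventional IQ metric (mean 100, SD 15). The band is centered on the estimated true score, which regresses toward the population mean, which is why it sits asymmetrically around the observed score of 90:

```python
import math

def iq_confidence_interval(observed, reliability, mean=100.0, sd=15.0, z=1.645):
    """Confidence band for an examinee's true score (z=1.645 gives 90%).

    Centered on the estimated true score, which regresses toward the
    population mean, so the band is asymmetric around the observed score."""
    true_est = mean + reliability * (observed - mean)        # regression to the mean
    se_est = sd * math.sqrt(reliability * (1.0 - reliability))  # SE of estimation
    return true_est - z * se_est, true_est + z * se_est

# Assumed reliability of .93 (illustrative only)
low, high = iq_confidence_interval(90, 0.93)
print(round(low), round(high))  # 84 97
```

With these assumed inputs the computed band rounds to 84–97, consistent with the report's statement that chances are 90 out of 100 that the true score falls in that range.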

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128157558000095

ASSESSMENT OF INTELLIGENCE

Helen Tager-Flusberg, Daniela Plesa-Skwerer, in Developmental-Behavioral Pediatrics (Fourth Edition), 2009

SUMMARY

The origins of intelligence testing date back to Alfred Binet’s early work on developing measures to assess children’s potential for academic success. This chapter summarizes current theoretical views on how to define intelligence. Although controversy persists, there is general agreement that intelligence includes the capacity to learn from experience, the capacity to adapt to the environment, and the ability to use metacognitive or problem-solving skills. Current models of intelligence are presented and methods for assessing children’s intelligence using comprehensive test batteries are surveyed. The chapter ends with a discussion of the factors that influence test performance, individual variation in intelligence, and historical changes in test scores.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978141603370700081X

Intellectual Functioning, Assessment of

L. Atkinson, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Psychometric Assessment

Psychometric tests have dominated intelligence testing for a century. The defining feature of this approach is its empirical foundation; ‘psychometric’ simply refers to the quantitative assessment of psychological states/abilities. While quantitative assessment rests on a massive measurement technology, its theoretical foundations are shallow, as reflected in its origins. The earliest tests that influenced contemporary intellectual measures directly emerged from studies by Alfred Binet and colleagues in France (Cronbach 1984). In 1904, Binet was directed to devise a means of distinguishing educable from noneducable students in the relatively new universal education system. Having investigated cranial, facial, palmar, and handwriting indices, Binet found the direct measurement of complex intellectual tasks involving judgment, comprehension, and reasoning to be the most successful in distinguishing among pupils. Based on these pragmatic beginnings, Binet defined intelligence as the capacity to adopt and sustain a direction, make adaptations for the purpose of attaining a desired end, and monitor performance self-correctively. With little elaboration, this definition still directs the psychometric paradigm.

Typically, modern psychometric tests consist of varied subtests that tap diverse aspects of the loosely defined intelligence construct. For example, scales may include subtests that sample a broad range of knowledge (e.g., the names of objects, dates, historical and geographical facts) and require the examinee to assemble colored blocks such that their pattern resembles a prespecified design (Sattler 1992). Again, choice of subtests is not driven by theoretical prescription. Subtests are selected because they work—in combination, they serve to rank individuals according to how much they know and how good they are at solving certain problems. The pragmatic selection of subtests is based on Binet's conception of intelligence as a general or undifferentiated ability (g), so that, in principle, the tasks that tap g are interchangeable.

At the heart of psychometric testing lies norm referencing (Sattler 1992). Norm-referenced tests are developed by administering items in a standardized manner to a representative sample of the population in question. The norm sample is considered ‘representative’ insofar as it is stratified within age groups for variables that might influence performance differentially, such as sex, geographic region, ethnic status, community size, etc. Scores are scaled such that each individual's derived score represents a relative standing within the norm or standardization group. Psychometric testing is thus an empirical endeavor in its purest sense: because intelligence is treated as a comparative construct, there is little need to theorize about its exact nature.
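The scaling step described above can be sketched in a few lines. The norm sample below is hypothetical, and the deviation-IQ metric (mean 100, SD 15) is one common choice of derived-score scale:

```python
import math
import statistics

def derived_score(raw, norm_sample, mean=100.0, sd=15.0):
    """Scale a raw score against a norm group: returns a deviation-IQ-style
    standard score (mean 100, SD 15) and a percentile rank under a
    normal approximation."""
    z = (raw - statistics.mean(norm_sample)) / statistics.stdev(norm_sample)
    standard = mean + sd * z
    percentile = 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF via erf
    return standard, percentile

# Hypothetical raw scores from a same-age norm sample
norms = [38, 41, 44, 47, 50, 53, 56, 59, 62]
score, pct = derived_score(50, norms)  # a raw score exactly at the norm mean
print(score, round(pct))               # 100.0 50
```

A raw score at the norm mean maps to a standard score of 100 and the 50th percentile; the same raw score scaled against a different norm group would yield a different derived score, which is exactly the comparative character described above.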

As mentioned, most modern psychometric tests include varied tasks. The original intention was to ensure that g was comprehensively surveyed. With time, however, clinicians came to exploit the multitask construction of intelligence tests to make intra-individual distinctions (Kaufman 1990). By looking at the variability among subtests or groups of subtests, assessors hypothesized about relative intellectual strengths and weaknesses. For example, a particular respondent might prove better on tests of memory than on tasks involving conceptualization. It is important to note, though, that the analysis of intra-individual differences developed after the fact; such comparisons are driven by the practicalities of what subtests are available, rather than by a detailed theory about the structure of intelligence.

The empirical base of the psychometric effort implies both weakness and strength. With respect to its limitations, attempts to interpret intra-individual differences based on a selection of subtests that were pragmatically chosen have not been validated empirically (Reschly 1997). Furthermore, the atheoretical approach to task selection has resulted in restricted and incomplete sampling of the intelligence domain (Chen and Gardner 1997). For example, musical and interpersonal abilities are neglected. Instead, there is an emphasis on skills acquired through academic learning, a prized outcome in mainstream Western societies. Therefore, critics object to the fact that psychometric tests measure little more than achievement; they assess what an examinee has learned, not the examinee's potential to learn.

Related to this issue, and magnified by the practice of defining individual intelligence with reference to a norm group, questions have arisen about bias due to (sub)cultural, ethnic, life-experience, and motivational differences. This becomes a social issue when examinees from minority groups are compared to a norm sample whose context, values, and learning experiences are different from their own (Suzuki and Valencia 1997). Testing thereby betrays its original purpose of providing objective data on an individual's intellectual functioning and comes, instead, to discriminate against atypical examinees.

Another difficulty with psychometric tests is that although they usually correlate highly among themselves, this is not always the case (Daniel 1997). Correlations may be influenced by what tasks are included and how they are weighted. Perhaps a greater problem lies in the fact that even where test scores do correlate highly, the same individual may earn discrepant scores on different instruments due to the fact that tests are normed on different standardization groups.

A crucial criticism of psychometric tests is that recommendations derived from these instruments have not been shown to enhance remediation for the examinees (Reschly 1997). Again, this can be attributed to the fact that the content of these scales has not been selected according to any theory of intelligence, brain functioning, or pedagogy.

In other respects, psychometric testing has met with success. Although test tasks are selected pragmatically, they cluster in remarkably similar ways across tests and studies, giving insight into the structure of intelligence. Based on statistical methods that group subtests into clusters according to underlying commonalities (factor analysis), three strata of intelligence have been identified (Carroll 1997). At the highest stratum is a general factor, g. This factor subsumes a second stratum of broad factors, including ‘fluid’ and ‘crystallized’ intelligence. (Fluid intelligence involves the ability to cope with novelty and think flexibly. Crystallized intelligence involves the storage and use of declarative knowledge such as vocabulary or information.) Subsumed under each broad factor is a set of narrow abilities, such as ‘induction’ and ‘reading comprehension.’ Knowledge of these distinct but interdependent strata can guide construction of new psychometric instrumentation.

Another strength of the psychometric approach derives from its emphasis on quantitative methods; psychometricians strive to ensure that their tests are reliable and valid predictors of performance (Sattler 1992). ‘Reliability’ refers to consistency of measurement; the more reliable a measure, the less error involved in estimates derived from it. Many psychometric tests boast extremely high internal reliability (the degree to which each component score of the test correlates with the full test score) and short-term ‘test-retest’ reliability (an index of stability derived by administering the test to the same group of individuals more than once). Furthermore, the long-term stability of IQ has proven impressive, with good predictions over a 20-year time-span. The validity of these tests, too, has proven strong. ‘Validity’ refers to the extent to which a test measures what it was designed to measure. Intelligence test scores correlate with amount of schooling, quality of work produced in school, occupational status, and performance in the work situation (although the strength of the latter prediction is controversial), both concurrently and predictively. To summarize, although there are serious limitations to psychometric measurement, the approach yields reliable and valid estimates of intellectual functioning. Psychometric tests are accurate classifiers and predictors when used with care in circumscribed contexts.
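As an illustration of the reliability indices mentioned above, test-retest reliability is simply the Pearson correlation between two administrations of the same test to the same group. A minimal sketch with hypothetical scores:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient: the standard index behind
    test-retest reliability (the same test given to the same group
    on two occasions)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical IQ scores for five examinees tested twice
time1 = [95, 102, 88, 110, 105]
time2 = [97, 100, 90, 112, 103]
print(round(pearson_r(time1, time2), 2))  # 0.97
```

The closer the coefficient is to 1.0, the less measurement error the estimate carries; the high test-retest figures the text describes correspond to coefficients in roughly this range.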

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767012912

Intelligence: History of the Concept

J. Carson, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 The Nature of Intelligence

Concomitant with the rise of intelligence testing came a series of debates over the characteristics of the object being measured. In 1904, British psychologist Charles Spearman used early test data to argue for the unitary nature of intelligence, explaining performance on mental tests in terms of general intelligence (g) and specific abilities (s). While numerous researchers—including Karl Pearson, Goddard, and Terman—accepted his analysis, others were skeptical, insisting instead that intelligence was composed of a number of primary independent abilities. Edward L. Thorndike in the United States was among the first to articulate this position, and he was soon joined during the 1920s by two statistically sophisticated psychologists, L. L. Thurstone in the US and Godfrey Thomson in the UK. In the period after World War II, psychologists continued to put forward a range of interpretations of the composition of intelligence: while Hans Eysenck remained convinced of the reality of g, for example, Philip E. Vernon proposed a hierarchical model of intelligence that inter-linked specific skills, general abilities, and overall intelligence, and Joy P. Guilford contended that intelligence was composed of as many as 150 independent factors. Later influential additions to these models included Howard Gardner's theory of the existence of seven discrete types of intelligence and Robert J. Sternberg's triarchic conception of intelligence.

Overshadowing all of the arguments over intelligence, however, has undoubtedly been the nature–nurture question. Figures such as Galton, Spearman, Pearson, and Terman argued strenuously for the nature position early in the twentieth century, with Galton producing studies on identical twins that have served as a model for investigations into the relative weights of heredity and environment. During the 1900s to 1930s, when eugenics was at the apex of its popularity, arguments in favor of intelligence as an inheritable biological entity ran strong and were used to justify calls for immigration restriction and for the sterilization of the mentally ‘unfit,’ as well as for the creation of multi-tracked secondary schools. At the same time, however, anthropologists were beginning to put renewed emphasis on culture as the primary determinant of human behavior, claims strengthened during the middle of the twentieth century by studies carried out especially at the Iowa Child Welfare Research Station, where IQ was found to change depending on nutrition and educational environment. The post-war period continued to see the debate pressed from both sides, with increasingly sophisticated twin studies showing high IQ correlations between identical twins separated at birth, while at the same time other researchers were teasing out ever more complicated connections between intelligence development and such factors as nutrition, family child-rearing practices, socio-economic status, and quality of education received.

While few experts would deny the influence of both genes and environment, the vociferous debate in the mid-1990s over The Bell Curve (Herrnstein and Murray 1994), with its claims that IQ is hereditary and a prime determinant of an individual's future, indicates that broad disagreements persist about intelligence and show little likelihood of quick resolution. What they reveal as well is that the language of native intelligence has remained a powerful vehicle for discussions of a range of social issues, from the organization of the educational system to the value of affirmative action programs to the just allocation of social resources.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767001285

Terman, Lewis

Henry L. Minton, in Encyclopedia of Social Measurement, 2005

Debating the Testing Critics

As one of the leading advocates of intelligence testing, Terman was often challenged by critics of the testing movement. These challenges began in the early 1920s, when the results of the army testing became widely disseminated. The influential journalist Walter Lippmann wrote a series of highly critical articles about the army tests in the New Republic. Lippmann singled out Terman because of his development of the Stanford–Binet and asserted that there was no scientific basis for the claim made by Terman and the other army psychologists that the tests measured native ability. Terman responded in the New Republic by dismissing Lippmann as a nonexpert in testing who should thus stay out of issues about which he was uninformed. Lippmann, in fact, was quite technically sophisticated in many of his criticisms, but Terman chose to be evasive in his response to the points that Lippmann raised, such as an environmental interpretation of the correlation between tested intelligence and social class.

During the 1920s, Terman also engaged in a series of published debates about testing with psychologist William C. Bagley, another critic of the hereditarian view of intelligence. In an attempt to settle issues, Terman took on the task of chairing a committee that organized an edited book on the nature–nurture debate. In this monograph, published in 1928, leading advocates of each position marshaled evidence and arguments, but as in previous exchanges, nothing was resolved. In 1940, Terman was once again drawn into the nature–nurture debate, this time challenged by a team of environmentalist advocates at the University of Iowa led by George D. Stoddard. Stoddard campaigned for the limited use of intelligence tests because they were subject to environmental influences that compromised their usefulness in making long-term predictions. Terman was concerned that Stoddard's position against mass testing would threaten his career objective of establishing a meritocracy based on IQ differences. As in previous instances, the 1940 debate led to an impasse. No changes took place in the widespread use of intelligence tests in the schools. It would not be until the 1960s, as a consequence of the civil rights movement, that mass testing was seriously challenged. Terman did modify his position to some extent. In the 1930s, mindful of the racial propaganda of Nazi Germany, he resigned his long-standing membership in the American Eugenics Society. After World War II, although he still held to his democratic ideal of a meritocracy, he no longer endorsed a hereditarian explanation of race differences, and he acknowledged that among the gifted, home environment was associated with degree of success.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0123693985003066
