Many people have very strong views about the role of standardized tests in education. Some believe they provide an unbiased way to determine an individual’s cognitive skills as well as the quality of a school or district. Others believe that scores from standardized tests are capricious, do not represent what students know, and are misleading when used for accountability purposes. Many educational psychologists and testing experts hold nuanced views and distinguish between the information standardized tests can provide about students’ performances and how the test results are interpreted and used. In this more complex view, many of the problems associated with standardized tests arise from their high-stakes use, such as determining selection into educational programs, eligibility for graduation or licensure, or publicizing a school as high vs. low performing.
Are Standardized Tests Biased?
In a multicultural society one crucial question is: are standardized tests biased against certain social class, racial, or ethnic groups? This question is more complicated than it seems because bias has a variety of meanings. An everyday meaning of the term often refers to the fairness of using standardized test results to predict the future performance of disadvantaged students who may have previously had few educational resources. For example, should Dwayne, a high school student who worked hard but had limited educational opportunities because of the poor schools in his neighborhood and the few educational resources in his home, be denied graduation from high school because of his score on one test? It was not his fault that he lacked educational resources, and if his environment changes (e.g., by going to college), his performance may improve significantly. Viewed this way, standardized test scores reflect societal inequalities and can therefore punish students who are less privileged. The scores can also be interpreted erroneously as reflecting students' fixed inherited capacities.
Educational researchers, however, typically consider bias in three more technical ways: item content and format, accuracy of predictions, and stereotype threat. These ideas have special meanings not necessarily shared in everyday conversations about testing.
Item Content and Format
Test items may be harder for some groups than others. An example of social class bias appeared in a multiple-choice item that asked students about the meaning of the term field. Students were asked to read the initial sentence in italics and then select the response in which field had the same meaning (Popham, 2004, p. 24):
*My Dad’s field is computer graphics.*
- A. The pitcher could field his position.
- B. We prepared the field by plowing it.
- C. The doctor examined my field of vision.
- D. What field will you enter after college?
Children of professionals are more likely to understand this meaning of field: doctors, journalists, and lawyers have “fields,” whereas cashiers and maintenance workers have jobs, so their children are less likely to know this meaning. (The correct answer is supposed to be D.)
Testing companies try to minimize these kinds of content problems by having test developers from a variety of backgrounds review items and by examining statistically whether certain groups find some items easier or harder. However, problems do persist: a recent analysis of the verbal SAT indicated that Whites tend to score better on easy items, whereas African Americans, Hispanic Americans, and Asian Americans score better on hard items (Freedle, 2003). While these differences are not large, they can influence test scores. Researchers think that the easy items, which involve words used in everyday conversation, may have subtly different meanings in different subcultures, whereas the hard words (e.g., vehemence, sycophant) are not used in everyday conversation and so do not have these variations in meaning. Test format can also influence test performance. Females typically score better on essay questions, and when the SAT recently added an essay component, females’ overall SAT verbal scores improved relative to males’ (Hoover, 2006).
Accuracy of Predictions
Standardized tests are used, among other criteria, to determine who will be admitted to selective colleges. This practice is justified by predictive validity evidence, i.e., evidence that scores on the ACT or SAT predict first-year college grades. Recent studies have demonstrated that the predictions for Black and Latino students are less accurate than those for White students, and that the predictions for female students are less accurate than those for male students (Young, 2004). However, perhaps surprisingly, the test scores tend to slightly overpredict success in college for Black and Latino students; i.e., these students are likely to attain lower freshman grade point averages than their test scores predict. In contrast, test scores tend to slightly underpredict success in college for female students; i.e., these students are likely to attain higher freshman grade point averages than their test scores predict. Researchers are not sure why the SAT and ACT differ in how accurately they predict freshman grades for these groups.
Stereotype Threat

Groups that are negatively stereotyped in some area, such as women’s performance in mathematics, are in danger of stereotype threat, i.e., concern that others will view them through the negative or stereotyped lens (Aronson & Steele, 2005). Studies have suggested that test performance of stereotyped groups (e.g., African Americans, Latinos, women) declines when those taking the test are told that (1) the test is high stakes or (2) the test measures intelligence or math ability, or when (3) they are reminded of their ethnicity, race, or gender (e.g., by being asked before the test to complete a brief demographic questionnaire). Even if individuals believe they are competent, stereotype threat can reduce working memory capacity because individuals are trying to suppress the negative stereotypes. Stereotype threat seems particularly strong for individuals who desire to perform well. As a result, standardized test scores of individuals from stereotyped groups may significantly underestimate their actual competence in low-stakes testing situations.
Do Teachers Teach to the Tests?
There is evidence that schools and teachers adjust the curriculum so it reflects what is on the tests and also prepare students for the format and types of items on the tests. Several surveys of elementary school teachers indicated that more time was spent on mathematics and reading and less on social studies and science in 2004 than in 1990 (Jerald, 2006). Principals in high-minority-enrollment schools in four states reported in 2003 that they had reduced the time spent on the arts. Recent research in cognitive science suggests that reading comprehension in a subject (e.g., science or social studies) requires that students understand a lot of vocabulary and background knowledge in that subject (Recht & Leslie, 1988). This means that even if students gain good reading skills, they will find learning science and social studies difficult if little time has been spent on these subjects.
Taking a test with an unfamiliar format can be difficult, so teachers help students prepare for specific test formats and item types (e.g., double negatives in multiple-choice items; constructed response). (At the beginning of this chapter, for example, a middle school teacher, Erin, and a principal, Dr. Mucci, described the test preparation emphasis in their schools.) There is growing concern that the amount of test preparation now occurring in schools is excessive and that students are not being educated but trained to take tests (Popham, 2004).
Do Students and Educators Cheat?
It is difficult to obtain good data on how widespread cheating is, but we know that students taking tests cheat and that others, including test administrators, help them cheat (Cizek, 2003). Steps to prevent cheating by students include protecting the security of tests, making sure students understand the administration procedures, and preventing students from bringing in notes or unapproved electronic devices or looking at each other’s answers. Some teachers and principals have been caught using unethical test preparation practices such as giving actual test items to students just before the tests, giving students more time than is allowed, answering students’ questions about the test items, and actually changing students’ answers (Popham, 2006). Concerns in Texas about cheating led to the creation of an independent task force in August 2006, with 15 staff members from the Texas Education Agency assigned to investigate test improprieties (Jacobson, 2006). While the pressure on schools and teachers to have their students perform well is intense, these practices are clearly unethical and have led to school personnel being fired from their jobs.
- Popham, W. J. (2004). America’s “failing” schools: How parents and teachers can cope with No Child Left Behind. New York: RoutledgeFalmer.
- Freedle, R. O. (2003). Correcting the SAT’s ethnic and social–class bias: A method for reestimating SAT scores. Harvard Educational Review, 73(1), 1-42.
- Hoover, E. (2006, October 21). SAT scores see largest dip in 31 years. Chronicle of Higher Education, 53(10), A1.
- Young, J. W. (2004). Differential validity and prediction: Race and sex differences in college admissions testing. In R. Zwick (Ed.), Rethinking the SAT: The future of standardized testing in university admissions (pp. 289-301). New York: RoutledgeFalmer.
- Aronson, J., & Steele, C. M. (2005). Stereotypes and the fragility of academic competence, motivation, and self-concept. In A. J. Elliot & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 436-456). New York: Guilford Publications.
- Jerald, C. D. (2006, August). The hidden costs of curriculum narrowing: Issue brief. Washington, DC: The Center for Comprehensive School Reform and Improvement. Accessed November 21, 2006.
- Recht, D. R., & Leslie, L. (1988). Effect of prior knowledge on good and poor readers’ memory of text. Journal of Educational Psychology, 80, 16-20.
- Popham, W. J. (2006). Educator cheating on No Child Left Behind tests. Education Week, 25(32), 32-33.
- Cizek, G. J. (2003). Detecting and preventing classroom cheating: Promoting integrity in assessment. Thousand Oaks, CA: Corwin Press.
- Jacobson, L. (2006). Probing test irregularities: Texas launches inquiry into cheating on exams. Education Week, 28(1), 28.