No. 2 Pencil Wars: Disputing the Authority of Standard Tests
New study gives ammunition to those who say the college-admission exams are a poor gauge of students' ability
As millions of high school seniors breathlessly await word from the college of their choice, the use of standardized test scores faces yet another important challenge.
The often heavy reliance on the scores by college-admissions offices has come under assault in recent years from those who argue that the tests are not a neutral measure of students' capabilities. Although some colleges have reduced their reliance on the tests, scores remain a dominant factor in college admissions.
Opponents of the test have found new ammunition for their battle in the form of a new report, "Bewitched, Bothered, and Bewildering: The Use and Misuse of State SAT and ACT Scores."
Published in the spring issue of the Harvard Educational Review, the study argues that results are markedly affected by such factors as school districts' per-pupil expenditures, how much education the student's family has had, and - in establishing a state's rank in average test scores - what percentage of its students took the test. It also concluded that gender and race have little influence on test results.
The findings, the work of sociologists Brian Powell of Indiana University and Lala Carr Steelman of the University of South Carolina, are likely to have an effect not only on college-admissions officers, but in high school classrooms and among education policymakers.
Standardized test scores can be the most influential factor in determining whether a student is accepted to a college. "The most common reason students are turned down is a poor test score," says Louis Castenell, dean of the College of Education at the University of Cincinnati.
"The report has significant meaning for individuals applying for college," says Professor Castenell. "It will help level the playing field for certain groups in society. Many people today assume the tests are an equal measure of students' ability, and they are not," he states. "Even testmakers tell you it measures achievement around a body of knowledge, not innate intelligence."
The two researchers have been arguing this point for over a decade. Twelve years ago, they did a landmark study - definitive in the eyes of many scholars - arguing that SAT scores were being misinterpreted. Many politicians and others at the time were confidently citing raw SAT scores to "prove" that the amount of money spent on schools didn't matter, pointing out that many of the states ranking high in SAT scores spent little money per pupil. Powell and Steelman refuted that argument, providing data that they said traced the differences to other factors.
Scores taken at face value
"These scores are too often taken at face value," the new report states. "The vast variation between states in the different groups of students taking the test" is often overlooked, it says.
And those earlier claims by politicians and others are still being made. "There is this view out there that the less money you spend on education the better," Mr. Powell says. "Some point out states that spend little money and then say, behold, they have higher SAT scores. Pundits cite these states as 'proof' that money makes no difference." But, he notes, "very few people in those states take the exam," and those who do tend to be a select group of more academically advanced students.
Howard Everson, senior research scientist at the College Board in New York, the group sponsoring and administering the tests, acknowledges that the test has limitations. The group warns against ranking by states.
"We agree it's not a true test in that sense," Mr. Everson says. "In any state with a large educational system that does not require SATs for admission, you're going to have fewer taking SATs - maybe the top 5 percent in the class - and they have ambitions to go outside the state to more competitive schools."
In 1994, the test's name was changed from Scholastic Aptitude Test to Scholastic Assessment Test, partly as a concession to those who charged that the title implied a measurement of innate intelligence. But that hasn't prevented a continued misunderstanding of the tests, according to many observers.
"The test is put together by experts, with advice from parents and teachers and others, based on a false premise: that all kids are being taught more or less the same basic things in the same way," Castenell says.
But the fact is, they are not. Some schools in low-income areas face basic problems of supply - like available textbooks - unheard of in more affluent districts. "If they don't come until November of the school year," Castenell says, "that means your time to study for tests is reduced, because the teachers themselves haven't had the material."
He also notes that the math sections of the SAT and ACT "assume you've had so many years of exposure to various math principles. Yet it's not uncommon for urban schools to do only a small proportion of those. So in certain urban areas you may have people scoring in the 20th percentile, but they've never had trigonometry. So how can you say a kid isn't a good learner on that basis? And someone who has the misfortune of being in a school that did not cover English literature, say, is penalized on a test as being way below average intelligence, when he or she is not."
Students in poorly funded areas are at a disadvantage, Powell agrees. As a result, he says, some colleges are moving away from the test. "Some small schools are saying, look, we can figure out if a person deserves to be accepted without the SAT."
Skirting the tough questions
Some say that many colleges tend to overemphasize the tests partly because doing so lets them avoid thorny questions. For example, Castenell says, schools don't have to decide whether a grade-point average from school A is better than one from school B.
Dean Whitla, Harvard University's director of research and evaluation, says the report is important. "It's going to be very valuable in helping a lot of people avoid misusing test scores," he says.