Clara Chaplin had studied. She was ready. A junior at Bethlehem Central High School in Delmar, New York, she was scheduled to take the SAT on March 14, 2020. Then the pandemic hit, and the test was canceled.
The April SAT was canceled too. All through the spring and summer and into the fall, every test date she signed up for was either full or canceled. As she submitted her college applications on November 1, she still didn’t know how she’d score on the SAT she would finally manage to take on November 7.
Many students never made it through the test-center door; the pandemic left much of the high school class of 2021 without an SAT or ACT score to submit. Facing test access challenges and changing application requirements, about half did not submit scores with their applications, according to Robert Schaeffer, executive director of the nonprofit National Center for Fair & Open Testing in Boston. This didn’t bar them from applying to the nation’s most selective colleges as it would have in any other year: Starting in spring 2020, in a trickle that became a deluge, those same colleges and universities responded to the situation by dropping the standardized test score requirement for applicants.
Liberal arts colleges, technical institutes, historically Black institutions, Ivies — more than 600 schools switched to test-optional for the 2020-21 application season, and dozens refused to consider test scores at all.
“That is a tectonic change for many schools,” says Rob Franek, editor in chief of the Princeton Review, a test-prep company based in New York City.
The pandemic sped up changes that were already afoot; even before Covid, more than 1,000 colleges had made the tests optional. Many had been turned off by the way the tests perpetuate socioeconomic disparities, limiting their ability to recruit a diverse freshman class. Some groups of students, including those who are Black or Hispanic, non-native English speakers, or low-income, regularly score lower than others. And students with learning disabilities struggle to get the accommodations they need, such as extra time, to perform their best.
Ironically, some early proponents of testing had hoped it would level the playing field, by measuring all students with the same yardstick no matter their background. That goal was never fully realized, but the tests persist because they do correlate to some extent with college grade point averages, offering schools an easy way to predict which students will perform well once they matriculate.
The benefits and risks of testing — real and perceived — have fueled an ongoing, roiling debate among educational scholars, admissions officers and college counselors, and the year of canceled tests gave both sides plenty to chew on. “The debate out there is particularly divisive right now,” says Matthew Pietrefatta, CEO and founder of Academic Approach, a test-prep and tutoring company in Chicago.
As the pandemic wanes, some advocates for equity in higher ed hope that schools realize they never needed the scores to begin with. The virus, Schaeffer says, may have made the point better than three decades of research indicating the feasibility of test-free admissions.
But others, including test-prep tutors and many educators, are apprehensive about the loss of a tool to measure all students the same way. Standardized tests, they say, differ from high-school grades, which vary from school to school and are often inflated. “There is a place for testing in higher ed,” says Jennifer Wilson, who has years of experience as a private test-prep tutor in Oakland, California.
In a post-Covid world, the challenge is to figure out what, precisely, that place should be.
An evolving yardstick
Testing in US college admissions goes back more than a century, and issues of race and inequity dogged the process from the get-go.
During the late 1800s, elite universities each held their own exams to assess applicants’ grasp of college prep material. To bring order to the admissions process, their leaders banded together to develop a common test that multiple institutions could use. This produced the first College Board exams in 1901, taken by fewer than 1,000 applicants. The tests covered nine areas, including history, languages, math and physical sciences.
In the 1920s, the focus of admissions tests shifted from assessing learned material to gauging innate ability, or aptitude. The idea for many, Schaeffer says, was to find those young men who had smarts but couldn’t afford a prep-school education. That led to the 1926 debut of the College Board’s original Scholastic Aptitude Test, which was spearheaded by Princeton University psychologist Carl Brigham. Across-the-board equality wasn’t exactly the goal. Brigham, who also sat on the advisory council of the American Eugenics Society, had recently assessed the IQs of military recruits during World War I, and opined that immigration and racial integration were dragging down American intelligence. (Brigham later recanted this opinion and broke with the eugenics movement.)
The SAT was widely taken up in the years following World War II as a way to identify scholarly aptitude among returning soldiers seeking to use the GI Bill for their studies. Then, in the 1950s, University of Iowa professor of education E.F. Lindquist argued that it would be better to assess what students learned in school, not some nebulous “aptitude.” He designed the ACT, first administered in 1959, to match Iowa high school curricula.
Today, the ACT includes multiple-choice sections on English, math, reading and science, based on nationwide standards and curricula. The SAT, which is split into two sections, one covering math and the other reading and writing, has also adopted the strategy of assessing skills students learn in school, and admissions officers have come to consider SAT and ACT scores interchangeable.
Until the pandemic, scores from one test or the other were required by more than half of US four-year institutions. Among the high school class of 2019, more than 2 million students took the SAT and almost 1.8 million took the ACT. Along with grades and courses taken, test scores topped the list of factors important to admission offices in pre-pandemic times, and were often used as a convenient cutoff: At some universities, candidates below a certain score weren’t even considered.
What are we really measuring?
The very endurance of the test market speaks to the SAT’s and ACT’s perceived value for higher education. People in the industry say the tests address college-relevant skills in reading, writing and math. “Can you edit your own writing? Can you write compelling, clear, cogent arguments? This is about a larger set of skills you’re going to need for college and career,” says Pietrefatta of the test-prep company Academic Approach.
Not that universities take the tests’ value for granted. Many schools have assessed what testing truly gives them, generally finding that higher scores correlate with higher first-year college GPAs and with college graduation rates. The University of California, a behemoth in higher ed with more than 280,000 students in its 10-campus system, has considered, and reconsidered, the value of testing over the past two decades. In the most recent analysis, completed in January 2020, a faculty team found that both high school GPA and test scores predicted college GPA to a similar degree, but considered together, they did even better. Concluding that the test scores added value without discriminating against otherwise-qualified applicants, UC’s Academic Senate, made up of faculty, voted 51-0 (with one abstention) in April 2020 to reinstate the testing requirement once the pandemic subsided.
But later that spring, UC’s governing board unanimously overruled the faculty, making the tests optional due in large part to their perceived discriminatory nature. A lawsuit brought by students with disabilities and minority students later drove UC to ignore all test scores going forward.
Even if test scores can predict college grades, admissions officers are looking for more than that. They seek young adults who will use their education to contribute to society by tackling important challenges, be they climate change, pollution or pandemics. That requires creativity, problem-solving, insight, self-discipline and teamwork — which are not necessarily taught in schools or gauged by standardized tests.
There are ways to test for those qualities, says Bob Sternberg, a psychologist now at Cornell University in Ithaca, New York. In a 2006 study sponsored by the College Board, maker of the SAT, he and his colleagues tried to predict college GPAs better than the SAT alone can do by adding assessments of analytical, practical and creative skills. To measure creativity, for example, they asked students to provide captions for New Yorker-style cartoons and to write short stories based on titles such as “The Octopus’s Sneakers.” By adding the extra assessments, the researchers doubled their ability to predict college GPA. Scores on the additional test materials also correlated less strongly with race and ethnicity than standard SAT scores did.
Sternberg put these ideas into practice in a previous position he held, as dean of arts and sciences at Tufts University, by adding optional questions to the university’s application form. “When you use tests like this, you find kids who are really adaptively intelligent in a broader sense, but who are not necessarily the highest on the SAT,” he says. And when those students came to the university, he adds, generally “they did great.”
The real problem with testing
The question at the heart of the testing debate is whether relying heavily on the SAT and ACT keeps many students who would do well at college, particularly those from disadvantaged populations, from ever getting a shot. The 2020 UC faculty report found that demographic factors such as ethnicity and parental income also influenced test scores. “If you want to know where people’s zip codes are, use the SAT,” says Laura Kazan, college advisor for the iLead Exploration charter school in Acton, California.
When poor, Black or brown students score lower, it’s not exactly the tests’ fault, says Eric Grodsky, a sociologist at the University of Wisconsin–Madison who analyzed the links between standardized testing and socioeconomic status in the Annual Review of Sociology. That’s because scores reflect disparities in students’ lives before testing. Wealthy students, for example, might have benefited from advantages ranging from parents with more time to read to them as toddlers to the means to take both tests, multiple times, in pursuit of the best score.
Other kids might not even be aware they’re supposed to take a test or that it’s something they can prepare for, says James Layman, director of the Association of Washington Student Leaders, headquartered in Randle, Washington. Students from poorer schools tell him they often don’t hear about test prep or other opportunities, or they lack the time to take advantage of them because they’re busy with jobs or caring for younger siblings. To try to level the playing field, in 2016 the College Board teamed up with the nonprofit Khan Academy to offer free online SAT prep materials, but even that requires an Internet connection at home and the time and space to use the program.
Thus, the disparities reflected in test scores result not from a failure of the tests so much as a failure to create a just educational system, Grodsky says. “We don’t do a good job of serving all our kids.” And if test scores determine one’s future opportunities, using them can perpetuate those inequities.
That suggests that admissions officers should, perhaps, turn to high-school grades. But those are fraught with their own set of issues, such as grade inflation. In one example, a recent study tracked algebra grades at North Carolina schools for a decade and reported that more than one-third of students who got a B in algebra weren’t even rated “proficient” in the subject on a state test. Moreover, between 2005 and 2016, average GPAs at wealthy schools rose by 0.27 points, compared to just 0.17 points at less affluent schools.
Of course, wealth and demographics also influence access to other pre-college resources, such as advanced coursework and extracurriculars. But ranking applicants by test scores is particularly likely to put people of certain races at the top or the bottom of the list, argued Saul Geiser, a UC Berkeley sociologist and former director of admissions research for the UC system, in a 2017 article.
Clearly, the tests aren’t all good, or all bad. There’s a lot of nuance, says Pietrefatta: The tests offer value in terms of the skills they assess and the predictions they make, even as they remain unfair to certain groups of people who haven’t been positioned to master those skills. This leaves colleges that value both diversity and well-prepared freshmen trying to strike a delicate, perhaps impossible, balance between the two.
Building a class, test-free: Admissions in Covid times
The pandemic forced a number of universities to rebalance their approach to admissions, leaving them no choice but to experiment with ditching standardized tests. And the results weren’t so bad.
Name-brand schools like Harvard experienced a massive spike in applications. The UC system saw applications for fall 2021 admission balloon by 15 percent over those for 2020. At UC Berkeley and UCLA, applications from Black students rose by nearly 50 percent, while applications from Latinos were up by about a third.
To choose among all those college hopefuls, many institutions took a holistic approach — looking at factors such as rigor of high school curriculum, extracurriculars, essays and special circumstances — to fill in the gaps left by missing test scores.
Take the case of Wayne State University in Detroit, where before Covid, high school GPA and standardized test scores were used as a cutoff to hack 18,000 applications down to a number the university’s eight admissions counselors could manage. “It was just easier,” says senior director of admissions Ericka M. Jackson.
In 2020, Jackson’s team changed tack. They made test scores optional and asked applicants for more materials, including short essays, lists of activities and an evaluation from a high school guidance counselor. Assessing the extra material required assistance from temporary staff and other departments, but it was an eye-opening experience, Jackson says. “I literally am sometimes in tears reading the essays from students, what they’ve overcome … the GPA can’t tell you that.”
Many students were thrilled that they didn’t have to take standardized tests. At the iLead Exploration charter school, last year’s college hopefuls included several who may not have even applied in a normal year, Kazan says. “There were so many people that came to me, so happy and so excited, and so eager to apply to college, when before they were in fear of the test.” And when the admissions letters came in, she adds, the students had “phenomenal” success. Seniors were admitted to top schools including UCLA, USC and NYU.
The road ahead
Kazan has high hopes for the senior class of ’22, too, and won’t be pressuring anyone to sign up for a standardized test, even if exam dates are more accessible as the pandemic wanes. That’s because many institutions plan to see how test-optional admissions go, for a year or more, before reconsidering the value of the tests. More than 1,500 of them have already committed to a test-optional policy for the upcoming admissions season.
For hints of what’s to come if they continue along that road, admissions officers can look to schools that have been test-optional for years, even decades.
Bates College in Lewiston, Maine, dropped the SAT requirement in 1984, asking for alternative test scores instead, before making all testing optional in 1990. In 2011, Bates took a look back at more than two decades of test-optional admissions, and how enrollees fared after they came to college. Dropping the test requirement led to an increase in the diversity of Bates’s applicants, with major growth in enrollment of students of color, international students and people with learning disabilities. Once those students reached college, the achievement difference between students who submitted test scores and those who didn’t was “negligible,” says Leigh Weisenburger, Bates’s vice president for enrollment and dean of admission and financial aid. Those who submitted test scores earned an average GPA of 3.16 at Bates, versus 3.13 for non-submitters. The difference in graduation rates was just one percentage point.
The landscape will be forever shifted by the events of the pandemic, says Jim Jump, academic dean and director of college counseling at St. Christopher’s School in Richmond, Virginia. “The toothpaste is not going back in the tube.” One big factor, he says, is that the University of California won’t look at test scores anymore. That means many California students won’t bother to take a standardized test, Jump says, making it hard for schools hoping to recruit Californians to require scores.
There will, of course, be holdouts, he adds: The most elite, selective schools may be immune to that pressure. And universities that receive lots of applications might go back to a test-score cutoff to bring the pile down to a manageable number, saving on the time and effort that holistic admissions entail.
The ultimate solution to the dilemma may lie in flexibility. “I think it should be optional from now on,” says Chaplin, who was fully satisfied with her SAT score after she finally managed to take the test, and is headed for highly ranked Bucknell University in Lewisburg, Pennsylvania. This would allow strong test-takers to shine but also let applicants showcase other strengths.
Students at the Association of Washington Student Leaders agree, Layman says — they don’t think test scores truly reflect who they are.
“There are other ways,” they tell him, “for colleges to get to know us, and us them.”
This article is part of Reset: The Science of Crisis & Recovery, an ongoing Knowable Magazine series exploring how the world is navigating the coronavirus pandemic, its consequences and the way forward. Reset is supported by a grant from the Alfred P. Sloan Foundation.