Three years ago about this time of year, headlines read “Scores from botched June SAT testing released as controversy escalates” (Valerie Strauss, Washington Post). Back then, the College Board printed and administered flawed test booklets, and when two sections of the botched (old-style, pre-revision) test were discarded, it claimed that the truncated scores were still reliable and that there was no cause for concern. The public, for good reason, did not buy that argument; some parents even filed class-action lawsuits.
Fast-forward to today’s headlines: “An ‘Easy’ SAT and Terrible Scores” (Scott Jaschik, Inside Higher Ed). So what is going on? To get a quick sense of how students are reacting to their new scores, and how those scores relate to the detailed score reports showing which questions they missed, browse this subreddit. Students are finding that on June’s math test, even when they did better (that is, answered more questions correctly) than on a previous test, their scores barely went up; worse, some did better and watched their scores plummet. Here is one example that shows why students are shocked:
“March, I got 4 wrong and it was 780. 5 wrong is 690? College Board needs to get off their high horse.”
Students are reporting that one wrong answer yields a 770, two wrong a 750, three wrong a 720, and four wrong a 700. For comparison, on the April 2017 SAT, students could miss one question and still earn an 800, and a second wrong answer dropped them only to 790. And to score a 700? They could miss as many as ten questions! Something is clearly amiss with June’s math scores.
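To put those reports in perspective, here is a minimal sketch of the per-question penalty each curve implies. The conversion tables below are partial and reconstructed from the student reports above; they are not official College Board tables.

```python
# Partial raw-score -> scaled-score tables, keyed by number of wrong
# answers. Reconstructed from student reports quoted above --
# these are NOT official College Board conversion tables.
june_2018_math = {0: 800, 1: 770, 2: 750, 3: 720, 4: 700}
april_2017_math = {0: 800, 1: 800, 10: 700}

def avg_penalty(table, wrong):
    """Average scaled-score points lost per wrong answer, measured
    from a perfect raw score down to `wrong` missed questions."""
    return (table[0] - table[wrong]) / wrong

# June 2018: four wrong answers cost 100 points, about 25 points each.
print(avg_penalty(june_2018_math, 4))    # 25.0
# April 2017: ten wrong answers cost 100 points, about 10 points each.
print(avg_penalty(april_2017_math, 10))  # 10.0
```

On these reported numbers, each wrong answer on the June math test cost roughly two and a half times what it did in April 2017, which is exactly the steepness students are complaining about.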
The official explanation is that this test was comparatively “easy”; on an easier test there is less margin for error, so the intervals between scaled scores are wider. Tests like the SAT and ACT are standardized, so scores from different test dates must be equated, or normalized, to make them comparable. But this process, and the vast differences in scales it can produce, raises a host of other questions.
One big question: shouldn’t the College Board be doing everything it can, at the front end, to make each test iteration comparable in overall difficulty? The College Board will likely defend its after-the-fact equating procedures, but not everyone will be convinced. An aggravating factor is that specific questions are regularly thrown out of the scoring because of “problems” discovered with them after the fact (this happened when I took the SAT in May 2016). There are reports that the June test had several questions thrown out, and it will be interesting to see whether those tossed questions factored into this seemingly extreme scaling.
Students, parents, and educators have a lot at stake in SAT and ACT scores, and they have a right to clear and convincing explanations when questions about those scores arise. If you have questions about your June SAT scores, or about scores from past tests, contact us at Ivy Ed and we can help you make sense of them. We can’t answer for the College Board about this test’s apparently erratic math scores, however, and we too look forward to hearing how they respond to this latest controversy.