By Jon Tarella

SPOILER: They don’t exist, at least not in an absolute sense.

Students and parents often ask us which administration of the ACT or SAT they should take. They hear through the proverbial grapevine that, for whatever reason, one administration is easier than another.  We emphasize that there is no factual or empirical basis for this, but some still choose to listen to the stories out there.  In a narrow sense, they could be right: if the tests are effectively the same, then there is no harm in taking the one you think will be easier, as long as you don’t take it for granted and still prepare.

However, let me be clear: there is no “easiest” ACT or SAT test date.  There is no reason for ACT, Inc. or the College Board to make some of their administrations easier than others, particularly in a systematic, predictable way.  The whole point of standardized testing is to give college admissions offices a basis for comparing the proficiency of students from different regions and backgrounds.  The reliability of ACT or SAT scores as a metric for comparing skill levels is contingent upon a given score meaning the same thing for every student who earns it.  Why would the test maker sabotage that goal by consistently making a particular date’s test the “easy” or “hard” one, thereby compromising college admissions officers’ ability to make apples-to-apples comparisons of scores?  The ACT and College Board folks would also need to set aside time and resources (read: money) to determine in advance which tests were easier or harder, and we don’t think they want to spend more money doing that.

Now, is it true that some ACT math or SAT reading sections, for example, are easier than others?  Absolutely. Is that intentional?  I highly doubt it, but I suppose it’s conceivable.  Does it matter?  Considering that standardized test scores are scaled based on percentiles, absolutely not.  It is in the test maker’s best interest for a score of 26 (ACT), say, or 1100 (SAT) on one test to mean exactly the same thing as a 26 or 1100 on another.  However, it’s impossible to guarantee that all sections of a given type are equally difficult.  That’s why the test makers use percentiles (basically a curve) to determine what raw number of correct answers corresponds to a particular scale score on a given test.  As you can see on the ACT’s distribution chart, a 90th percentile score in science, for example, is always scaled as a 27.  On a tougher test, that means you could probably get a few more questions wrong and still beat 90% of your peers; on an easier one, you’ll have less room for error.  The mapping from raw scores to scale scores can therefore vary from test to test.  Not all sections are created equal, but percentile scaling goes a long way toward counterbalancing that concern.
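To make that curve concrete, here is a minimal sketch of how a raw-to-scale conversion works. All of the cutoff numbers below are invented for illustration; they are not real ACT conversion data. The idea is simply that a harder section has a more forgiving table, so a lower raw score maps to the same scale score.

```python
# Hypothetical raw-to-scale conversion tables for two science sections
# of different difficulty. Cutoff numbers are invented for illustration,
# not real ACT data. Keys are raw-score cutoffs; values are scale scores.
EASIER_TEST = {38: 30, 36: 28, 35: 27, 33: 26, 30: 25}  # stricter curve
HARDER_TEST = {36: 30, 34: 28, 32: 27, 30: 26, 27: 25}  # more forgiving curve

def scale_score(raw_score, table):
    """Map a raw score (questions correct) to a scale score.

    Walk the cutoffs from highest to lowest and return the scale score
    for the first cutoff the raw score meets or beats.
    """
    for cutoff in sorted(table, reverse=True):
        if raw_score >= cutoff:
            return table[cutoff]
    return None  # below the lowest cutoff shown here

# 32 correct answers earn a 27 on the harder test...
print(scale_score(32, HARDER_TEST))   # prints 27
# ...but only a 25 on the easier one: less room for error.
print(scale_score(32, EASIER_TEST))   # prints 25
```

The same raw performance yields different scale scores, which is exactly how the curve keeps a 27 meaning “90th percentile” on both forms.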

“Hey, but doesn’t that mean that if you take the test among a weaker batch of students, you’ll get better scale scores?”  Technically, of course it does, but you’ll be going down a not terribly rewarding rabbit hole if you try to figure out which administration is the right one in that regard.  For example, you may think the December administration of either test is a bad one to take because the pool consists almost entirely of high-scoring juniors.  While that’s true, the vast majority of them will be taking an official exam for the first time and therefore probably won’t perform as well as they will on their second or third attempt.  I could make an argument like this for every administration of the ACT or SAT, so there is no sound science behind any of it.

Bottom line:  pick a test date (you can seek the help of our test prep experts if you need guidance) and then develop a study plan for that date.  Don’t listen to the rumor mill, because you’ll hear contradictions and myths.  There’s no substitute for hard work. Besides, since you’ll likely be taking the test at least twice, you’ll end up taking it on different test dates anyway.

Jon Tarella is a math and science tutor at Ivy Ed.