Here’s a pop quiz: according to the measurements used in the new Common Core Standards, which of these books would be complex enough for a ninth grader?
a. Huckleberry Finn
b. To Kill a Mockingbird
c. Jane Eyre
d. Sports Illustrated for Kids' Awesome Athletes!
The only correct answer is “d,” since all the others have a “Lexile” score so low that they are deemed most appropriate for fourth, fifth, or sixth graders. This idea might seem ridiculous, but it’s based on a metric that is transforming the way American schools teach reading.
Lexiles were developed in the 1980s by Malbert Smith and A. Jackson Stenner, the president and CEO of the MetaMetrics corporation, who decided that education, unlike science, lacked "what philosophers of science call unification of measurement," and aimed to demonstrate that "common scales, like Fahrenheit and Celsius, could be built for reading." Their final product is a proprietary algorithm that analyzes sentence length and vocabulary to assign a "Lexile" score, ranging from 0 up to more than 1,600 for the most complex texts. And now the new Common Core State Standards, the U.S. education initiative that aims to standardize school curricula, have adopted Lexiles to determine which books are appropriate for students at each grade level. Publishers have also taken note: more than 200 now submit their books for measurement, and various apps and websites match students precisely to books on their personal Lexile level.
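MetaMetrics keeps its formula proprietary, but sentence length and vocabulary are the same two ingredients behind classic readability formulas. Purely as an illustration of how a score built from those inputs might behave (this is not the Lexile algorithm; the word list, weights, and scaling below are invented for the example), here is a minimal sketch:

```python
import re

# A toy readability score in the spirit of sentence-length-plus-vocabulary
# formulas. This is NOT MetaMetrics' proprietary Lexile algorithm; the common-word
# list, the weights, and the scaling are all invented purely for illustration.

COMMON_WORDS = {"the", "a", "and", "of", "to", "in", "was", "he", "she", "it",
                "that", "i", "you", "said", "for", "on", "with", "his", "her",
                "see", "run"}

def toy_complexity_score(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not sentences or not words:
        return 0.0

    # Ingredient 1: longer sentences push the score up.
    avg_sentence_length = len(words) / len(sentences)

    # Ingredient 2: rarer vocabulary (words outside the common list) pushes it up.
    rare_fraction = sum(1 for w in words if w not in COMMON_WORDS) / len(words)

    # Arbitrary weights, chosen only so short passages land somewhere
    # in a Lexile-like 0-1,600 range.
    return round(40 * avg_sentence_length + 800 * rare_fraction, 1)

print(toy_complexity_score("See Spot run. Run, Spot, run!"))
print(toy_complexity_score(
    "It was a bright cold day in April, and the clocks were striking thirteen."))
```

Even this crude sketch shows what such a formula can and cannot see: it rewards long sentences and uncommon words, and it is entirely blind to irony, theme, and moral difficulty.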
Last week the Thomas B. Fordham Institute issued a report on the Common Core standards that should give parents, teachers, and lovers of literature serious pause. It found that "many youngsters are not yet working with appropriately complex language in their schoolbooks," and mainstream media outlets quickly sounded the alarm: teachers are failing to assign the challenging books the Common Core demands.
But missing from this debate is the question of whether the idea of the Lexile makes sense at all. When Huckleberry Finn isn't complex enough for our high-school students, I can't help wondering if we need to change the way we conceptualize literary complexity. I'm an English professor, and I live in Iowa City, a UNESCO World City of Literature, but according to MetaMetrics my bookish hometown might as well go play patty-cake. On my way to work I pass the house on Van Buren Street where Kurt Vonnegut began Slaughterhouse Five, yet with a score of just 870, that novel rates as a fourth-grade read. By these standards Mr. Popper's Penguins (weighing in at a respectable 910) is deemed more complex.
I also pass St. Mary’s Catholic Church, where Flannery O’Connor sought grace but failed to find the vocabulary needed to push her Collected Stories above the sixth-grade level. I arrive at my office in the same grim building that motivated Raymond Carver to abscond and hold his classes at the nearest bar; his Cathedral scores a puny 590, about the same as Curious George Gets a Medal.
To be fair, both the creators of the Common Core and MetaMetrics admit that these scores can't stand as the final measure of complexity. As the Common Core Standards Initiative officially puts it, "until widely available quantitative tools can better account for factors recognized as making such texts challenging, including multiple levels of meaning and mature themes, preference should likely be given to qualitative measures of text complexity when evaluating narrative fiction intended for students in grade 6 and above." But even here, the final goal is a more complex algorithm; qualitative measurement fills in as a flawed stopgap.
Few would oppose giving teachers better tools to challenge students, but this quantitative approach seems badly flawed. One alternative would be to trust teachers themselves to determine the moral and aesthetic complexities that engage students as individuals. Such human expertise is, after all, at the center of the humanities. Lexile scoring is the intellectual equivalent of a thermometer: perfect for cooking turkeys, but not for encouraging moral growth.
Any attempt to quantify literary complexity surely mistakes the fundamental experience of literature. No one has described that experience better than William Empson, whose Seven Types of Ambiguity wrote the book on literary complexity. A mathematician by training, Empson was no touchy-feely humanist, but he understood that the greatest literary language rarely made “a parade of its complexity.” He particularly admired Shakespeare’s description of trees as “Bare ruined choirs, where late the sweet birds sang,” which he explained contained “no pun, double syntax, or dubiety of feeling”:
but the comparison holds for many reasons; because ruined monastery choirs are places in which to sing, because they involve sitting in a row, because they are made of wood, are carved into knots and so forth.... These reasons, and many more relating the simile to its place in the Sonnet, must all combine to give the line its beauty, and there is a sort of ambiguity in not knowing which of them to hold most clearly in mind.
I try to teach my students to balance such complexities. But many of the smartest and best students have learned the Lexile model all too well. They've long been rewarded for getting "the point" of language that makes "a parade of its complexity," and they've not been shown that our capacity to manage ambiguity without reducing it enables us to be thinkers rather than mere ideologues.
It’s this kind of thinking that makes us “humans” rather than mere “machines.” At least, as I pass his house on the way back home, I think that this is what Vonnegut would have said, although he ironically lets an alien voice the sentiment in Slaughterhouse Five: “What we love in our books are the depths of many marvelous moments seen all at one time.” Comprehension and quantifiable results have their place. But we’ll be fooling ourselves about what we’ve achieved unless teachers also have the freedom to lead students to a discovery of the marvelous, and sometimes incomprehensible, experiences of literary pleasure.
Blaine Greteman is a professor of English at the University of Iowa.