If you are not familiar with Renaissance Learning and their flagship product, Accelerated Reader, count yourself lucky.
Accelerated Reader bills itself as a reading program, but it would be more accurate to call it a huge library of reading quizzes, with a reading level assessment component thrown in. That's it. It doesn't teach children how to read; it just puts them in a computerized Skinner box that feeds them points instead of pellets for performing some simple tasks repeatedly.
Pick a book (but only one on the approved AR list). Read it. As soon as you've read it, you can take the computer quiz and earn points. AR is a great demonstration of the Law of Unintended Consequences as well as Campbell's Law, because it ends up teaching students all sorts of unproductive attitudes about reading while twisting the very reading process itself. Only read books on the approved list. Don't read long books-- it will take you too long to get to your next quiz to earn points. If you're lagging in points, pick short books that are easy for you. Because the AR quizzes are largely recall questions, learn what superficial features of the book to read for and skip everything else. And while AR doesn't explicitly encourage it, this is a program that lends itself easily to other layers of abuse, like classroom prizes for hitting certain point goals. Remember kids-- there is no intrinsic reward or joy in reading. You read only so that somebody will give you a prize.
While AR has been adopted in a huge number of classrooms, it's not hard to find folks who do not love it. Look at some articles like "3 Reasons I Loathe Accelerated Reader" or "Accelerated Reader: Undermining Literacy While Undermining Library Budgets" or "Accelerated Reader Is Not a Reading Program" or "The 18 Reasons Not To Use Accelerated Reader." Or read Alfie Kohn's "A Closer Look at Reading Incentive Programs." So there's a wide consensus that the Accelerated Reader program gets some very basic things wrong about reading.
But while AR sells itself to parents and schools as a reading program, it also does a huge amount of work as a data mining operation. Annually the Renaissance people scrape together the data that they have mined through AR and they issue a report. You can get at this year's report by way of this website.
The eyebrow-raising headline from this year's report is that a mere 4.7 minutes of reading per day separate the reading stars from the reading goats. Or, as US News headlined it, "Just a Few More Minutes Daily May Help Struggling Readers Catch Up." Why, that's incredible. So incredible that one might conclude that such a finding is actually bunk.
Now, we can first put some blame on the media's regular issues with reporting sciency stories. US News simply ran a story from the Hechinger Report, and when Hechinger originally ran it, they accompanied it with the much more restrained heading "Mining online data on struggling readers who catch up: A tiny difference in daily reading habits is associated with giant improvements." But what does the report actually say?
I think it's possible that the main finding of this study is that Renaissance is a very silly business. I'm no research scientist, but here are several reasons that I'm pretty sure that this "research" doesn't have anything useful to tell us.
1) Renaissance thinks reading is word encounter.
The first chunk of the report is devoted to "an analysis of reading practice." I have made fun of the Common Core approach of treating reading as a set of contextless skills, free-floating abilities that are unrelated to the content. But Renaissance doesn't see any skills involved in reading at all. Here's their breakdown of reading practice:
* the more time you practice reading, the more vocabulary words you encounter
* students who spend more time on our test-preppy program do better on SBA and PARCC tests
* students get out of the bottom quartile by encountering more words
* setting goals to read more leads to reading more
They repeatedly interpret stats in terms of "number of words," as if simply battering a student with a dictionary would automatically improve reading.
2) Renaissance thinks PARCC and SBA are benchmarks of college and career readiness
There is no evidence to support this. Also, while this assumption pops up in the report, there's a vagueness surrounding the idea of "success." Are they also using success at their own program as proof of growing student reading swellness? Because that would be lazy and unsupportable, an argument that the more students do AR activities, the better they get at AR activities.
No, if you want to prove that AR stuff makes students better at reading, you'll need a separate independent measure. And there's no reason to think that the SBA or PARCC constitute valid, reliable measures of reading abilities.
Bottom line: when Renaissance says that students "experienced higher reading achievement," there's no reason to believe that the phrase means anything.
3) About the time spent.
Much ado is made in the report about the amount of time a student spends on independent reading, but I cannot find anything to indicate how they are arriving at these numbers. How exactly do they know that Chris read fifteen minutes every day but Pat read thirty? There are only a few possible answers, and they all raise huge questions.
In Jill Barshay's Hechinger piece, the phrase "an average of 19 minutes a day on the software" crops up. But surely the independent reading time isn't based on time on the computer-- not when so much independent reading occurs elsewhere.
The student's minutes reading could be self-reported, or parent-reported. But how can we possibly trust those numbers? How many parents or children would accurately report, "Chris hasn't read a single minute all week"?
Or those numbers could be based on independent reading time as scheduled by the teacher in the classroom, in which case we're really talking about how a student reads (or doesn't) in a very specific environment that is neither chosen nor controlled by the student. Can we really assume that Chris reading in his comfy chair at home is the same as Chris reading in an uncomfortable school chair next to the window?
Nor is there any way that any of these techniques would consider the quality of reading-- intensely engaged with the text versus staring in the general direction of the page versus skimming quickly for basic facts likely to be on a multiple choice quiz about the text.
The only other possibility I can think of is some sort of implanted electrodes that monitor Chris's brain-level reading activity, and I'm pretty sure we're not there yet. Which means that anybody who wants to tell me that Chris spent nineteen minutes reading (not twenty, and not eighteen) is being ridiculous.
(Update: The AR twitter account directed me to a clarification on this point of sorts. The truth is actually worse than any of my guesses.)
4) Correlation and causation
Barshay quotes University of Michigan professor Nell Duke, who points out what should not need to be pointed out-- correlation is not causation and "we cannot tell from this study whether the extra five minutes a day is causing kids to make dramatic improvements." So it may be that stronger readers spend more time reading. We don't know if extra reading practice causes growth, or if students naturally want to read a few minutes more a day after they become better readers. "It is possible that some other factor, such as increased parental involvement, caused both," the reading growth, and the desire to read more, she wrote.
But "discovering" that students who like to read tend to read more often and are better at it-- well, that's not exactly big headline material.
5) Non-random subjects
In her coverage of last year's report, Barshay noted a caveat. The AR program is not distributed uniformly across the country, and in fact seems to skew rural. So while some demographic characteristics do at least superficially match the national student demographics, it is not a perfect match, and so not a random, representative sampling.
So what can we conclude?
Some students, who may or may not be representative of all students, and who read for some amount of time that we can't really substantiate, tend to read at some level of achievement that we can't really verify.
A few things we can learn
The data mining that goes into this report does generate some interesting lists of reading materials. John Green is the king of high school readers, and all the YA dystopic novels are still huge, mixed in with the classics like Frankenstein, Macbeth, The Crucible, and Huck Finn. Scanning the lists also gives you an idea of how well Renaissance's proprietary reading level software ATOS works. For instance, The Crucible scores a lowly 4.9-- lower than The Fault in Our Stars (5.5) or Frankenstein (12.4) but still higher than Of Mice and Men (4.5). Most of the Diary of a Wimpy Kid books come in in the mid-5.somethings. So if the wimpy kid books are too tough for your students, hit them with Lord of the Flies, which is a mere 5.0 even.
Also, while Renaissance shares the David Coleman-infused Common Core love of non-fiction ("The majority of texts students encounter as they progress through college or move into the workforce are nonfiction"), the AR non-fiction collection is strictly articles. So I guess there are no book length non-fiction texts to be read in the Accelerated Reader 360 world.
Is the reading tough enough?
Renaissance is concerned about its discovery that high school students are reading work that doesn't rank highly enough on the ATOS scale. By which they mean "not up to the level of college and career texts." It is possible this is true. It is also possible that the ATOS scale, the scale that thinks The Catcher in the Rye is a 4.7, is messed up. Just saying.
The final big question
Does the Accelerated Reader program do any good?
Findings from prior research have detected a tipping point around a comprehension level of about 85% (i.e., students averaging 85% or higher on Accelerated Reader 360 quizzes taken after reading a book or article). Students who maintain this level of success over a quarter, semester, or school year are likely to experience above-average achievement growth.
Remember that "student achievement" means "standardized test score." So what we have is proof that students who do well on the AR battery of multiple choice questions also do well on the battery of PARCC and SBA standardized test questions. So at least we have another correlation, and at most we have proof that AR is effective test prep.
Oddly enough, there is nothing in the report about how AR influences joy, independence, excitement, or lifelong enthusiasm for reading. Nor does it address the use of reading to learn things. Granted, that would all be hard to prove conclusively with research, but then, this report is 64 pages of unsupported, hard-to-prove assertions, so why not throw in one more? The fact that the folks at Renaissance Learning found some results important enough to fake but other results not even worth mentioning-- that tells us as much about their priorities and their program as all their pages of bogus research.