
Tuesday, November 17, 2015

Accelerated Reader's Ridiculous Research

If you are not familiar with Renaissance Learning and their flagship product, Accelerated Reader, count yourself lucky.

Accelerated Reader bills itself as a reading program, but it would be more accurate to call it a huge library of reading quizzes, with a reading level assessment component thrown in. That's it. It doesn't teach children how to read; it just puts them in a computerized Skinner box that feeds them points instead of pellets for performing some simple tasks repeatedly.

Pick a book (but only one on the approved AR list). Read it. As soon as you've read it, you can take the computer quiz and earn points. AR is a great demonstration of the Law of Unintended Consequences as well as Campbell's Law, because it ends up teaching students all sorts of unproductive attitudes about reading while twisting the very reading process itself. Only read books on the approved list. Don't read long books-- it will take you too long to get to your next quiz to earn points. If you're lagging in points, pick short books that are easy for you. Because the AR quizzes are largely recall questions, learn what superficial features of the book to read for and skip everything else. And while AR doesn't explicitly encourage it, this is a program that lends itself easily to other layers of abuse, like classroom prizes for hitting certain point goals. Remember kids-- there is no intrinsic reward or joy in reading. You read only so that somebody will give you a prize.

While AR has been adopted in a huge number of classrooms, it's not hard to find folks who do not love it. Look at some articles like "3 Reasons I Loathe Accelerated Reader" or "Accelerated Reader: Undermining Literacy While Undermining Library Budgets" or "Accelerated Reader Is Not a Reading Program" or "The 18 Reasons Not To Use Accelerated Reader." Or read Alfie Kohn's "A Closer Look at Reading Incentive Programs." In short: a wide consensus that the Accelerated Reader program gets some very basic things wrong about reading.

But while AR sells itself to parents and schools as a reading program, it also does a huge amount of work as a data mining operation. Every year the Renaissance people scrape together the data they've mined through AR and issue a report. You can get at this year's report by way of this website.

The eyebrow-raising headline from this year's report is that a mere 4.7 minutes of reading per day separate the reading stars from the reading goats. Or, as US News headlined it, "Just a Few More Minutes Daily May Help Struggling Readers Catch Up." Why, that's incredible. So incredible that one might conclude that such a finding is actually bunk.

Now, we can first put some blame on the media's regular issues with reporting sciency stories. US News simply ran a story from the Hechinger Report, and when Hechinger originally ran it, they accompanied it with the much more restrained headline "Mining online data on struggling readers who catch up: A tiny difference in daily reading habits is associated with giant improvements." But what does the report actually say?

I think it's possible that the main finding of this study is that Renaissance is a very silly business. I'm no research scientist, but here are several reasons I'm pretty sure this "research" doesn't have anything useful to tell us.

1) Renaissance thinks reading is word encounter.

The first chunk of the report is devoted to "an analysis of reading practice." I have made fun of the Common Core approach of treating reading as a set of contextless skills, free-floating abilities that are unrelated to the content. But Renaissance doesn't see any skills involved in reading at all. Here's their breakdown of reading practice:

* the more time you practice reading, the more vocabulary words you encounter
* students who spend more time on our test-preppy program do better on SBA and PARCC tests
* students get out of the bottom quartile by encountering more words
* setting goals to read more leads to reading more

They repeatedly interpret stats in terms of "number of words," as if simply battering a student with a dictionary would automatically improve reading. 

2) Renaissance thinks PARCC and SBA are benchmarks of college and career readiness

There is no evidence to support this. Also, while this assumption pops up in the report, there's a vagueness surrounding the idea of "success." Are they also using success at their own program as proof of growing student reading swellness? Because that would be lazy and unsupportable, an argument that the more students do AR activities, the better they get at AR activities.

No, if you want to prove that AR stuff makes students better at reading, you'll need a separate independent measure. And there's no reason to think that the SBA or PARCC constitute valid, reliable measures of reading abilities.

Bottom line: when Renaissance says that students "experienced higher reading achievement," there's no reason to believe that the phrase means anything.

3) About the time spent.

Much ado is made in the report about the amount of time a student spends on independent reading, but I cannot find anything to indicate how they are arriving at these numbers. How exactly do they know that Chris read fifteen minutes every day but Pat read thirty? There are only a few possible answers, and they all raise huge questions.

In Jill Barshay's Hechinger piece, the phrase "an average of 19 minutes a day on the software" crops up. But surely the independent reading time isn't based on time on the computer-- not when so much independent reading occurs elsewhere.

The student's minutes of reading could be self-reported, or parent-reported. But how can we possibly trust those numbers? How many parents or children would accurately report, "Chris hasn't read a single minute all week"?

Or those numbers could be based on independent reading time as scheduled by the teacher in the classroom, in which case we're really talking about how a student reads (or doesn't) in a very specific environment that is neither chosen nor controlled by the student. Can we really assume that Chris reading in his comfy chair at home is the same as Chris reading in an uncomfortable school chair next to the window?

Nor is there any way that any of these techniques would consider the quality of reading-- intensely engaged with the text versus staring in the general direction of the page versus skimming quickly for basic facts likely to be on a multiple choice quiz about the text. 

The only other possibility I can think of is some sort of implanted electrodes that monitor Chris's brain-level reading activity, and I'm pretty sure we're not there yet. Which means that anybody who wants to tell me that Chris spent nineteen minutes reading (not twenty, and not eighteen) is being ridiculous.

(Update: The AR twitter account directed me to a clarification of sorts on this point. The truth is actually worse than any of my guesses.)
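If the explanation a commenter offers below is right, the "minutes read" figure is derived from each book's total word count divided by a words-per-minute rate inferred from how fast the student clicked through the STAR test. Here's a sketch of that hypothesized arithmetic-- a reconstruction of the commenter's guess, not Renaissance's actual method, with every title, word count, and WPM figure invented for illustration:

```python
# Sketch of the hypothesized calculation: "minutes read" computed from
# book word counts and a WPM rate inferred from STAR test click speed,
# not from observing any actual reading. All values here are invented.

BOOK_WORD_COUNTS = {   # AR stores a total word count for every book
    "Hatchet": 42_000,
    "Holes": 47_000,
}

def estimated_minutes_read(books_quizzed, star_wpm):
    """Estimated minutes = total words in quizzed books / STAR-derived WPM."""
    total_words = sum(BOOK_WORD_COUNTS[book] for book in books_quizzed)
    return total_words / star_wpm

# The same two books produce wildly different "minutes read" depending only
# on how quickly the student clicked through a multiple-choice test:
print(estimated_minutes_read(["Hatchet", "Holes"], star_wpm=120))  # ~742 minutes
print(estimated_minutes_read(["Hatchet", "Holes"], star_wpm=250))  # ~356 minutes
```

Nothing in that calculation ever observes a student actually reading.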

4) Correlation and causation

Barshay quotes University of Michigan professor Nell Duke, who points out what should not need to be pointed out-- correlation is not causation and "we cannot tell from this study whether the extra five minutes a day is causing kids to make dramatic improvements." So it may be

that stronger readers spend more time reading. So we don’t know if extra reading practice causes growth, or if students naturally want to read a few minutes more a day after they become better readers. “It is possible that some other factor, such as increased parental involvement, caused both,” the reading growth, and the desire to read more, she wrote.

But "discovering" that students who like to read tend to read more often and are better at it-- well, that's not exactly big headline material.

5) Non-random subjects

In her coverage of last year's report, Barshay noted a caveat. The AR program is not distributed uniformly across the country, and in fact seems to skew rural. So while some demographic characteristics do at least superficially match the national student demographics, it is not a perfect match, and so not a random, representative sampling.

So what can we conclude?

Some students, who may or may not be representative of all students, and who read for some amount of time that we can't really substantiate, tend to read at some level of achievement that we can't really verify.

A few things we can learn

The data mining that goes into this report does generate some interesting lists of reading materials. John Green is the king of high school readers, and all the YA dystopic novels are still huge, mixed in with the classics like Frankenstein, Macbeth, The Crucible, and Huck Finn. Scanning the lists also gives you an idea of how well Renaissance's proprietary reading level software ATOS works. For instance, The Crucible scores a lowly 4.9-- lower than The Fault in Our Stars (5.5) or Frankenstein (12.4) but still higher than Of Mice and Men (4.5). Most of the Diary of a Wimpy Kid books come in in the mid-5.somethings. So if the wimpy kid books are too tough for your students, hit them with Lord of the Flies, which is a mere 5.0 even.

Also, while Renaissance shares the David Coleman-infused Common Core love of non-fiction ("The majority of texts students encounter as they progress through college or move into the workforce are nonfiction"), the AR non-fiction collection is strictly articles. So I guess there are no book length non-fiction texts to be read in the Accelerated Reader 360 world.

Is the reading tough enough?

Renaissance is concerned about its discovery that high school students are reading work that doesn't rank highly enough on the ATOS scale. By which they mean "not up to the level of college and career texts." It is possible this is true. It is also possible that the ATOS scale, the scale that thinks The Catcher in the Rye is a 4.7, is messed up. Just saying.

The final big question 

Does the Accelerated Reader program do any good?

Findings from prior research have detected a tipping point around a comprehension level of about 85% (i.e., students averaging 85% or higher on Accelerated Reader 360 quizzes taken after reading a book or article). Students who maintain this level of success over a quarter, semester, or school year are likely to experience above-average achievement growth.

Remember that "student achievement" means "standardized test score." So what we have is proof that students who do well on the AR battery of multiple choice questions also do well on the battery of PARCC and SBA standardized test questions. So at least we have another correlation, and at most we have proof that AR is effective test prep.

Oddly enough, there is nothing in the report about how AR influences joy, independence, excitement, or lifelong enthusiasm for reading. Nor does it address the use of reading to learn things. Granted, that would all be hard to prove conclusively with research, but then, this report is 64 pages of unsupported, hard-to-prove assertions, so why not throw in one more? The fact that the folks at Renaissance Learning found some results important enough to fake but other results not even worth mentioning-- that tells us as much about their priorities and their program as all their pages of bogus research.

9 comments:

  1. "The only other possibility I can think of is some sort of implanted electrodes that monitor Chris's brain-level reading activity, and I'm pretty sure we're not there yet."
    Not far off !!!
    Have you seen this:
    http://www.projectlisten.org/photo/
    In 2012, two students wear BrainBands that log EEG data about their brain states while they use the Reading Tutor. (photo caption)

  2. My loathing for Accelerated Reader knows no bounds: http://blogs.edweek.org/teachers/teacher_in_a_strange_land/2012/11/racing_striving_accelerating_winning_yada_yada_and_reading.html

    The same goes for all the competitive reading programs and commercial scams.

  3. I may be able to answer your question about how they calculated time spent reading (although I sincerely hope I am wrong, because this is probably even worse than you could have possibly imagined). I taught in a school that required AR, so I'm very familiar with the system.

    We set many goals for students: number of points earned per trimester, average book level, average percent correct on quizzes, etc. Our goals for average book level were supposed to be based on the STAR test, another Renaissance Learning product. This is a very standardized, adaptive computer-based test which in my experience seems to mostly test breadth of vocabulary knowledge (although it also includes questions for my 5th graders on things like rhyme schemes and elements of drama). It consists entirely of isolated multiple choice items with no context, rhyme or reason, and my English language learners always do terribly on it.

    The STAR test spits out a Grade Level Equivalent for each student: supposedly the grade level at which the student can read (although again, it's really mostly a vocabulary test). This is what we are supposed to use to set students' goals. One of the things this test does is calculate how fast students read the isolated passages for each question based on how fast they clicked on a multiple choice answer and from this, CALCULATE A WORDS PER MINUTE (WPM) score!! I know, this makes absolutely no sense and has little relationship to their actual reading speed. And even if it were valid, it would be measuring the time they spend reading, processing the question, and choosing an answer. Not the time spent reading.

    But AR goes even further. On the score reports we get from AR, there is a column of "total minutes spent reading." The program knows the total word count for every book in their database. So (I'm assuming) they use the word counts, the number of quizzes the student takes per day, and the individual student's WPM score to calculate number of minutes spent reading! I'm assuming this must be how they calculate it, because I can't think of any other way.

    If this is the way the study calculated the "extra 4.7 minutes spent reading" (and I have no idea if it is, since I haven't read it), this statistic is obviously completely meaningless.

    1. Wow, and thanks for clarifying this pile of cowflops.

    2. I know it's been awhile since this was posted, but I'm replying anyway...

      I teach 7th grade and my students have figured out that the worse they do on the STAR test, the lower their 6 week goal is. So I have a ton of students with K and 1st grade levels, so they only need 3 or so points every 6 weeks. I don't care if they get points or not because I think the whole program is nonsense, but I frequently get yelled at by my department head, who thinks she's my boss and buys into AR, about disadvantaging students.

  4. This comment has been removed by the author.

  5. I don't have experience with "Accelerated Reader", but I have tried out "Accelerated Math", put out by the same company, since we were required to use it at my school (Alice Deal JHS/MS in Washington DC). I thought it was worth trying out, maybe 8-9 years ago, so I did. My students learned next to nothing. The questions were stupid, incredibly repetitive, and not helpful to either the students or to the teacher. Eventually I decided I wasn't going to subject my students to the program, and earned an official reprimand for telling my department chair that AM was a "pile of shit". I should have said it was a "heap of cow manure" and avoided the reprimand.

  6. My daughter won the AR trophy at our elementary school in 3rd grade. I was so surprised I didn't even take a picture. Other parents thought I was actually going into the system and making sure she had more points than other kids because as a teacher I had access. As if I had time to do that. What I did make time to do was read with my children and teach them to love reading. It was quality time for us to read a book together. I guess the parents who were waking their children up early to drag them to the library and take AR tests would never understand.

  7. I have no problem with AR. It's fine.
