At The Hechinger Report, Suzanne Simons wants to complain about English Language Arts instruction in middle and high schools. It's a familiar kind of mess, but I promise a tiny twist at the end, which might almost make up for the length.
Simons is the Chief Literacy and Languages Officer at Carnegie Learning. Before that she worked at The Equity Lab, before that National Geographic Education, before that Literacy Design Collaborative, before that American Reading Company, before that as an adjunct professor at Drexel, and she's done some consulting. All that since 2007. She has a couple of M.Ed.s and a doctorate in education leadership from the University of Pennsylvania. Her LinkedIn profile does not list any classroom work.
I'm not going to suggest that classroom teachers have nothing to learn from academics and edu-biz operators. But what Simons offers here is both familiar and unhelpful.
Her main complaint is that "too many students are working on below-grade-level tasks using below-grade-level texts." This, she claims, will not be "preparing students for life after high school. Is it any wonder that reading scores haven't improved in 30 years?"
I'm always puzzled by the idea that test scores should rise in perpetuity, like the stock market. Why, exactly, should that be? There are almost thirty years between my two oldest children and my two youngest-- should I expect that my young children will be smarter than the older ones? Mind you, I will never argue that teachers should ever, ever say, "Well, that's enough, I don't need to teach any better, harder, or more than I have so far." But the notion that every year's students should outperform the year before them treats students like assembly line toasters and not actual human beings.
To bolster her insistence on the value of grade-level materials, she uses an unfortunate source: The Opportunity Myth, a piece of faux research from TNTP, some slick baloney I've addressed here. It's a lot of silliness, but the key point here is that it doesn't actually support--or even address--her point, which is that "grade-level tasks and texts should be the start — not the finish — to strong instruction." It focuses strictly on "proving" that many students get instruction with materials below grade level.
Simons also trots out the NAEP results (from 2019) showing 37% of 12th graders are "academically prepared for college in reading." By that she means that they have scored either Proficient or Advanced. But there is research missing here, like the 2007 study from NCES that showed that half of the students scoring a lowly Basic on the NAEP went on to complete a college degree (Bachelor's or higher). She also cites a report that employers think young people lack proper language skills.
Reformsters are great at defining problems, sometimes accurately and sometimes not so much. But does Simons have a solution?
She points to a study done by Literacy Design Collaborative, an outfit that sells standards-based curriculum, professional development, and some other programs. Their CEO is John Katzman, founder of Noodle, the Princeton Review, and 2U. He sits on all sorts of boards, including the boards of the National Association of Independent Schools, the Woodrow Wilson Foundation, and the National Alliance of Public Charter Schools.
They have a board of senior advisors. Suzanne Simons sits on that board.
Let's talk about the study. It's a big, fat 240 pages, and I'm not going through it with a fine-toothed comb. But here are some things that jump out.
There were two cohorts of schools involved. In the first cohort, two thirds of the teachers dropped out after the first year, and half of the remaining teachers dropped out after the second year. Cohort 2 didn't do much better. So, the primary effect of the study was that people stopped using the LDC model. Given that the original sampling was heavily elementary, this left them with a very tiny sample of middle and high school students--the very students that Simons is writing about in her piece.
The results are taken from SBA tests (you remember these Big Standardized Tests from back in the day) and then pushed through some magical math model that compares the students in the study with students not getting the LDC treatment.
Cohort 2 showed some "significant" results. These are presented as a gain of "four to nine months of learning," which is an academic baloney method of rendering test score gains (.25 of a standard deviation = 1 year). Because if we said X makes scores on a single large standardized test go up, people would not much care, but if we say they gained a year of learning--well, somehow that meaningless phrase strikes some folks as compelling. However, my own rule of thumb is that anyone who talks about days/months/years of learning is trying to sell something.
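For the curious, here's a rough back-of-the-envelope sketch of how that conversion works--a hypothetical illustration only, assuming the 0.25-standard-deviations-equals-one-year rule of thumb mentioned above and treating a "year" as nine school months (my assumption, not a figure from the LDC report):

```python
# Back-of-the-envelope conversion between "months of learning" and effect sizes.
# Assumes the rule of thumb that 0.25 standard deviations on a standardized
# test gets sold as one year of learning, and that a "year" means nine school
# months. Both are assumptions for illustration, not figures from the report.

SD_PER_YEAR = 0.25      # effect size treated as "one year of learning"
MONTHS_PER_YEAR = 9     # school months in a "year" (assumption)

def months_to_sd(months: float) -> float:
    """Convert a claimed 'months of learning' gain back into standard deviations."""
    return (months / MONTHS_PER_YEAR) * SD_PER_YEAR

for m in (4, 9):
    print(f"{m} months of learning is roughly {months_to_sd(m):.2f} standard deviations")
```

Run the numbers and "four to nine months of learning" deflates back into a gain of roughly 0.11 to 0.25 standard deviations on one test, which makes for a far less exciting press release.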
This study and the product it's pushing fall into the Standardized Closed Loop model of learning. It works like this:
Pat runs a group of fashion schools, and Pat personally believes that you are never fully dressed without a smile. Pat tests students and finds that only about half of them qualify as well dressed. So Pat trains the school's teachers to understand that you're never fully dressed without a smile, and the teachers implement the Smile Design Curriculum. They teach students various types of smiles they can perform and practice performing them and especially drive home that performing these smiles will be needed to score well on the Well Dressed Test.
Test time comes and--voila!--the scores go up! 9 months of fashion learning gained!
Set the standard. Train to the standard. Test to the standard. What's missing, of course, is any objective proof that you are fully dressed only if you wear a smile. What we have actually set here is a fairly limited proxy for being fully dressed. Students who forgot to wear pants still test as fully dressed because they are smiling. Students who are impeccably dressed, but bad at smiling, test as not fully dressed.
The Standardized Closed Loop model can be bolstered by blowing lots of smoke. Use a lot of jargon that's not very clear but sounds important. Stress that your system is standards-based, but don't talk about where the standards came from or what they are based on. Worked great for Common Core!
LDC manages all of this. And they've won awards.
But back to the article. What problems does Simons diagnose?
The culture of low expectations. Simons trots out The Opportunity Myth again, claiming that students are being assigned below-grade-level work, because--
Teachers are not assigning grade-level tasks and texts (even though, she points out, the Common Core came out in 2010). These two subjects--expectations and grade-level texts--often bring non-teachers to the fore (like Common Core author David Coleman). Actual classroom teachers know there is a delicate balance here, a sweet spot you have to locate. Students need to experience success, but not be bored. Push them above their frustration level, and they will simply shut down, decide they're "not good at this s#$!" and it'll take you weeks to get them back. Standards fans have this habit of insisting that you get students to read on grade level by just, you know, insisting real hard.
The "reading on grade level" also skips over the whole matter of prior content knowledge. What "grade level" a student reads on is partly a factor of what the text is about. A student with love and knowledge of baseball will demonstrate a higher reading level for a text about baseball than he will for a text about Macedonian economic theory.
Simons also points out that this use of below-level texts has increased since the pandemic. Well, duh. A teacher's job is to meet students where they are, and where many students have been since the pandemic is not where students of that grade typically are.
Simons also faults teacher professional development. Well, yes. And also curriculum programs are weak and claim to be standards aligned when they really aren't, though how teachers are supposed to distinguish between faux and real standards alignment is not clear. I believe that she knows of an organization that can help, though I give her points for not specifically plugging LDC by name.
So to turn things around we should...?
Start with grade-level tasks on day 1, not by day 180. Which leads one to ask--is there a difference between grade level on day 1 compared to day 180? How about grade level on day 180 of last year compared to day 1 of this year? Is grade level slightly different on every day of the 180?
Grade-level thinking is not a destination; it requires daily practice. Teachers (and curricula) need to assume that every student can read, think and write about rich and complex ideas using complex texts. Teachers and curriculum programs can target instruction to meet individual needs while engaging all learners in the same rigorous grade-level texts and tasks.
Yes, but what does that actually mean? And if every student is using the same text and doing the same task, exactly how does one "target" individual instruction? And have reformsters been trying to make "rigor" happen longer than "fetch," and if so, can we quit? Like many teachers, I spent many cumulative hours in PD listening to some presenter try to explain, clearly, what they meant by rigor. "No, it's not the same as 'hard.' No, it's not necessarily a higher reading level. No, it's not 'easy' with a lot of assigned tasks piled on top."
Shift from "what students consume to what they produce." Which is just an update of the old Common Core reformster focus on "deliverables." Let's focus on "outputs" and not "inputs." An oldie but a goodie, but if true, why do we care whether the texts are on grade level or not?
And of course standards training for teachers so that they "can deepen their understanding of the standards and be able to recognize students’ demonstrations of specific standards."
Research demonstrates that when a student is given grade-level tasks driven from grade-level standards, and their teacher is trained to teach those standards, both will rise to the challenge.
Is this supposed to refer to the LDC research? Because the large majority of teachers did not rise to the challenge at all. Is there any other research that could be used here?
So what have we got?
It's the ghost of Common Core. If you wondered whether that old "standards based" concept was still around, here's a whole organization promoting it. Swell.
However
One aspect of LDC's program (barely hinted at by Simons) is worthwhile. They focus on authentic writing. Write like a historian or scientist and, well, "like members of the academic and professional disciplines they will one day inhabit." Now, I don't know how well their materials actually deliver on this promise, nor do I know what they propose for students whose future disciplines will be blue collar work, but I will stand and applaud anyone who champions writing as authentic communication rather than a student performance of writing-like activities for an audience of nobody.
So that's the twist. In the midst of all these refried Common Core beans and baloney, there is something that could conceivably be quite swell. Okay, so I looked at one of their rubrics and wasn't overwhelmed, but still, it gives me hope that even these folks who have wandered so far into the weeds can still find something beautiful out in the swamp.