The folks at PARCC have set cut scores. You just don't need to know what they are.
The one published cut score is the one that draws the line between levels 3 and 4 ("not quite good enough" and "okee dokee"). That's set at 750 on a scale of 650 to 850. The other levels of cut scores, the projected percentages of students falling within the various troughs-- that's all secret for the time being.
There are three takeaways here for the general public.
There are no standards here
When you set an actual standard, an actual line that marks the difference between, say, ready for college and not ready for college, you set it before you do the measuring.
In my classroom, the grading scale is set before the students even take the test. In fact, before I even design the test. 70% is our lowest passing grade, and so I design a test on which someone would have to display the bare minimum of skill and comprehension to get a 70%.
The PARCC folk are saying that they will draw a line between college ready and not college ready-- but not before the test has been taken. How does that even make sense? How do you give a test saying, "This will show whether you're ready for college or not, but at this moment, we don't really know how much skill and knowledge you have to have to be ready for college."
This is the opposite of having standards. Standards mean setting the bar at six feet and saying, "You have to clear this bar to be considered a good jumper." This is saying, "We don't know what a good jump height would be, but we are going to judge you on whether you're a good jumper or not-- and we're not going to put the bar up until after you jump."
Why are we setting cut scores only now? Don't we already know the difference between a student who is college ready and one who is not? Is there some reason to believe that difference changes from year to year?
We have just about reached the point where the only way PARCC could be less transparent would be for them to require students to take the test blindfolded in a dark room on computers with the monitors turned off. This has to be the worst service ever provided by a government contractor.
This is why I bust a small gasket every time somebody tries to justify these tests because they provide such useful feedback to districts and classroom teachers. PARCC is providing the most useless, data-free feedback imaginable-- and the school year has already started.
Says PARCC, "Some of your students have scored at varying levels on a test that may or may not have put them on a certain level. You can't know about the questions they answered, which ones they got wrong, or what specific deficiencies they have. And we won't even tell you the simple rating (grade) we're giving them for a while yet. But go ahead and take this gaping hole where data is supposed to be, and use it to inform your instruction."
Meanwhile, PARCC is parcelling out information on a need-to-know basis, and nobody needs to know.
Update: PARCC has since yielded to pressure and coughed up a bit more information, including the rest of the cut scores. Mercedes Schneider has the full story over at her blog.