Saturday, September 29, 2018

Personalized Digital Test Prep Lab Rats

If you want to see the worst of what Personalized [sic] Learning can look like, let me take you straight to the horse's mouth. We're not going to look at a critique, but at an actual sales pitch, and see how, on the ground, Personalized [sic] Learning is about computer-delivered test prep. It is about extending the tyranny of the test via digitized instruction.


Here's the pitch. At first glance it appears to be just another article at EdSurge-- and it's meant to-- but in fact it's what's now called "sponsored content," aka "advertising meant to be mistaken for an actual legit article." "A Back-to-School, Personalized Learning Toolkit" is by Ryan Hagedorn, COO at Edmentum, where his career has been in marketing and sales (he earned a degree in marketing from Hofstra in 2004). Edmentum is, of course, a leading provider of online Learning Stuff, and the toolkit is ten pieces of PR materials. In their own presentation, these folks tell us what they think the main point, purpose, and strength of Personalized [sic] Learning is supposed to be.

Let's look through the kit.

1) A how-to guide.

It's a 16-page booklet, and it identifies personalization with creating a student profile, assessing the student's current status, and setting a path. While it acknowledges "learning modalities" and the need to "appeal to student interests," the guide comes back to the idea of a "path." As we'll see again and again, personalized learning is really personalized pacing. All students follow the same track-- they just get to go at their own speed.

There is a creepiness factor here. After suggesting a wide assortment of academic and personal data to be collected about each child, the guide suggests that the solution to making the student profiles "manageable" is to "leverage online solutions." Then make the profiles accessible and get students to contribute information. Everyone can grab a shovel so the data mining will go more easily! Your Data Overlords thank you.

2) Video- Personalized [sic] Learning at Oregon High School

The promotional video talks to people in various roles at the school: an actual teacher, plus a principal, a student, an online/blended facilitator, a parent, and a district online/blended coordinator. There are computer-porn shots of laptops being opened, and shots of students sitting silently together at a table in front of their screens. The student talks about moving along the path at her own speed-- but not about determining the path. And we get a favorite Edmentum quote about not teaching to the middle any more. We get lip service about the importance of the teacher-student relationship, but all the students in the video are at computers (pricey Apple laptops), with a teacher occasionally appearing at a student's shoulder.

The district coordinator talks about getting the pacing right. Pace pace pace. And being in line with the standards, because the (Common Core) standards are a necessary part of this "revolution."

3) Workbook- Online Curriculum Guide

This is a link to yet more "resources," like blog posts about evaluating ed tech and Marzano's best teaching techniques. The workbook portion talks about how online resources can be used as a text, a complete course, or a credit recovery scam program. It offers helpful teaching advice like "clearly present goal/objective for each assignment" and "provide students with all materials needed to complete an assignment" and "remember to breathe"-- okay, not that last one, but the "tips" here seem aimed at someone who slept through all of their college teacher prep classes.

There are also tips for running Personalized [sic] Learning like "monitor student work" and "allow students to progress through assignments at their own pace" and "treat all students equally." Also "allow students to ask questions." It is hard to escape the notion that this program is being designed to be run by aides with no actual classroom experience, because this is basic basic stuff.

Then there are some tips for evaluating the online provider of your curriculum, thereby completing the picture of a program designed to be administered by a "school" full of people who are either a) dopes or b) not actual educators.

4) Blog- Five Steps To Differentiated Instruction

Assess students, align to standards, create a path, monitor students, repeat. This is supposed to be a "deep dive" into how it's done. We may have some disagreement about what "deep" means.

5) Workbook- Blended Learning: Fundamentals of the Planning Process

This handy workbook defines blended learning as learning that occurs at least partly online, in a physical location other than the student's home, where the "modalities along each student's learning path within a course or subject are connected to provide an integrated learning experience," which means-- well, I'm not sure. Edmentum might be rejecting micro-competency badges here, or they might just be flexing their gobbledygook muscles.

They recognize four models-- rotation, flex, a la carte, and enriched virtual. Rotation means moving through labs or stations, or using a flipped classroom. Flex is more a 1:1 computer situation. A la carte involves taking all of some courses online. "Enriched virtual" means "virtual." Cyberschool.

It offers a blended learning project timeline for implementation. My favorite checkpoint on that is "Staff Onboarding."

6) Webinar-- Blended Learning: Which Model Works for You

Hard pass. The description indicates they actually focus on rotational and flex models.

7) Video- Blended Learning at Red Mill Elementary

Heavy once again on the personal pacing, but the most striking feature is how heavily it pushes the idea that Edmentum's stuff helps the school get better scores on the Big Standardized Test. That's to be expected, because one of the "teaching" programs is Study Island, a really terrible program designed to do computerized test prep. I don't want to go off on a Study Island rant, but if my children were spending any amount of time on Study Island at school, I'd be making phone calls. But the teacher and principal in the video are excited that students know "exactly what will be on the test" and that reports from the programs tell them which standards need to be additionally taught in order for the test prep to be effective. To be clear, "test prep" is my term, not theirs-- but that's what we're talking about. Computer programs used as test prep and test coaching-- not actual education.

The video also underlines the data backpack, the personal record that will follow the students on and on. The principal repeats the "not teaching to the middle" line, and there's talk about how a child who is below level is left to work at their own pace. What the constant discussion of pace continues to ignore is the age-old question-- what do you do with a child whose pace will get them through sixth grade in eight years?

8) Workbook-- Virtual Learning: Exploring the Options for Expanded Opportunity.

The workbook promises to help districts decide if complete virtual implementation is best, if they should become a cyber school. Let me save you some time-- it isn't. Ever. Virtual schools have proven consistently to be disastrous for everyone except the companies making money from them.

9) Blog-- Universal Design for Learning: Powering Personalized Experiences for All Students

Universal Design for Learning (UDL) is thrown out as another idea for personalization, except that it also purports to be about designing curriculum that will work for everyone. "One size does not fit all," says UDL, "unless you make one size the UDL way, in which case one size will totally fit everyone." (UDL fans can argue with me in the comments.) UDL also throws around "brain science" and some of the discredited ideas about learning styles.

10) Blog-- How To Evaluate EdTech Tools that Support Teaching and Learning

"As the traditional classroom evolves into an environment rich in technology..." opens this blurb, demonstrating how simple word choices like "rich" can help sell your product. Incidentally, none of these blog links are to an actual blog-- the links all lead to ad copy on an authorless corporate website of Edmentum's. I'm not sure whether this tech-based company doesn't understand what a blog is or if they just hope their audience doesn't know.

But this ad copy says that good edtech is standards-aligned, because standards standards standards. To dive, even non-deeply, into the edtech world is to realize how much they need and use The Standards as a marketing tool.

The tech should engage students, by which they seem to mean that the tools should assess students a lot and use the data "to construct learning activities at the student's zone of proximal development" with just the right level of challenge. Do you know how this really works? Students learn that doing too well on the pre-assessment gets you hard work to do, but if you deliberately throw the pre-assessment, you can get yourself a break.

The tools should support teachers and inform instruction. They do give lots of reports. And the tools should "leverage technology for effective assessment."

There's more, but what we really ought to notice is that a conversation that started out about personalized learning is now entirely about buying computer software products to teach and assess the students.

How To Use This Post

The next time a Personalized [sic] Learning disciple tells you how it's not at all about computerized learning or data mining or perpetuating the Common Core, just send them to this post. See, I agree with folks who say that personalized learning-- actual personalized learning-- doesn't have to be about all this digital baloney. But the ideals of personalized learning are not what's happening-- this is what's happening. Slavish alignment to the Core standards (even if they're operating under an assumed name) while students spend chunks of their day staring at a screen, teachers are just facilitators, and a bunch of data is steadily extracted as mediocre instruction is digitally delivered to digital natives who are not, I assure you, saying "Wow, when I do boring worksheets on a computer it's so much more exciting!" (In fact, digital natives now think that desktops and laptops are quaintly old-fashioned, but that's a discussion for another day.)

This is what companies are actually selling as personalized learning-- software teaching programs that take the person out of personalized and turn learning into lab rat style training. No thank you.












Wednesday, September 26, 2018

Scaling Up Personalized [sic] Education

Creating a personalized education program presents many challenges. It presents even more challenges if you want your delivery system to be a computer. And it presents even bigger problems if you intend to scale it up.

The dream for many education reformers is not just to come up with a new system, but to manufacture a new system that can scale up and reach a broad market. Personalizing education for a single student requires a great deal of work and it can be hugely beneficial for that student-- but it's not very profitable.

But scaling up is best accomplished by standardization, and standardization is the enemy of personalization.

Consider a burger chain, one that has decided to become the anti-McDonalds. This chain wants customers to know that when they order a burger, they can have it their way. The chain will gladly hold the pickle or the lettuce, because they have developed a production system that easily allows for that sort of personalization.

But that personalization falls within a very narrow range. You can hold the pickle or the lettuce, but you can not have a black bean patty in place of the hamburger. You cannot have thin-sliced sirloin covered with mushrooms or avocado slices. If your tastes do not fall within the range of burger possibilities, if you are an outlier, you will take your business elsewhere.

This makes sense for the burger chain. There may well be customers who want, say, a herring burger with sautéed kelp on top, but those customers are so few and far between that it's not worth the expense of stocking herring and kelp. So the burger chain makes the sensible choice to let those outliers dine elsewhere.

Personalizing a U.S. school's education program runs into the same problem, exacerbated by one educational fact of life: public schools aren't supposed to discard the outliers.

Personalized education software must involve the creation of a vast library of exercises and instruction to suit every possible student. Just to teach grammar and usage alone, you would need a vast library of exercises to cover every single rule, every single problem one can have with the rules, and then multiple versions of each exercise geared to the student's reading level and areas of interest.

Our personalized program could get by with a smaller library if we just play to the large middle ground. Only certain rules are problems for most students, most of those students will be reading about the same grade level, and a focus on just three or four areas of interest will engage most students. We can create a relatively small library of materials and hit most students.

But the mission in U.S. public education is not to educate most students. The goal is to educate all students. That means a true personalized education program can't cut corners. A public school is not a burger joint that can say, "Well, if we don't have what they need, they can just go somewhere else." Public schools must serve the outliers, too.

The more you scale up a personalized learning program, the more outliers you have-- and each one will be completely different. That means that to truly scale up a personalized education program, you must create a vast library of materials to be prepared for every possible student who could appear-- even if some of those outlier materials might be used only once or twice, or never, in ten years.

So far, most personalized education programs have dealt with this problem by not dealing with this problem and instead creating a program for most students. Some are not really personalizing education at all; instead, they personalize pace. In other words, all students cover the exact same material, in the exact same sequence, but the program lets them each travel down the exact same path at their own speed.

But if personalized education programs have trouble with scaling and outliers, how can classroom teachers possibly cope?

For the most part, easily. The advantage that a classroom teacher has is that she gets to meet the students first. Someone creating educational software right now is doing it for some students who aren't even in school yet, so the programmer must guess and prepare for unknown possibilities in the future, thereby either wasting resources or failing to meet certain needs. The classroom teacher, on the other hand, knows the students, collects daily information about the students, and develops relationships with the students, all of which makes it possible for a teacher to personalize instruction far more effectively than a software program ever can.

This is one reason relationships matter in a classroom. Can the system break down? Sure-- put fifty students in one classroom and that will strain a teacher's ability to maintain the relationships that drive education (that's one reason class size matters). But for true personalized education, you need two persons to form that teacher-student relationship. It's nearly impossible to personalize instruction with just one person involved.

Originally posted at Forbes   

Mastering Mediocrity

That Question. It is the question that students ask a jazillion different ways.

How many pages does this paper have to be?

When is this going to be due, and do I have to hand it in during class, or can I have till the end of the day?


Tomorrow's test-- how many questions can we miss and still pass?

How many sources do I have to cite?

How much is this homework going to be worth? Enough to hurt my grade?

How many examples do I have to include?

Do I have to show my work?

All of these questions (and the rest of the jazillion) are really just ways to ask one simple question:

What's the absolute least I can get away with doing for this assignment?

Students ask about the absolute least all the time. This does not make them lazy or terrible human beings. They're trying to manage their time and effort, often in a class that they're required to take and don't care much about (which, again, does not make them awful human beings-- all adults have the widely recognized right not to care about some things).

It's not really a good look on anybody. As I used to explain to my students when I was calling them out on this, "If your new romantic partner asks, 'What's the absolute least amount of time I have to spend with you to keep this going?' you will not be thinking to yourself, 'This one's a keeper.'" And you don't want to ask your employer, "What's the absolute least amount of work I have to do to keep this job?"

Of course, there will always be students who shoot for amazing all the time, who always go above and beyond any requirement you give them. But for many, the moment you hand them a specific minimum is the moment they start thinking in terms of how little they need to do and not how much they could do.

For that reason, teachers often learn to be purposefully vague in the classroom. My answer to That Question was never terribly specific. Unlike some teachers of (bad) writing, I never gave a required paper length and never, ever told them that a paragraph must include X number of sentences. "Long enough to do a good job of making your point" was about the best they could get from me, or maybe, "Impress me. Amaze me." When you're proofreading your own work, there's a big difference between asking "Did I include the bare minimum?" and "Is this amazing?" Every English teacher has read at least one potentially good essay that basically stopped in the middle because the student believed she had written enough to satisfy the bare minimum requirements.

This has always been the problem with Learning for Mastery, an old ed concept enjoying a current comeback via Personalized [sic] Learning and Competency Based Education.

It makes a certain amount of sense-- you teach students an area of skill or knowledge, and once they can prove they've mastered it, you move on. But there are problems.

Mastery Learning asks us to create a performance task that will demonstrate mastery. That's our first problem, because if we define "mastery" too rigorously, many students will have great difficulty meeting the standard. Define "mastery" at a lower, more accessible level, and higher-functioning students will become bored. If we're teaching basketball skills, does mastery look like LeBron James, or "student can dribble the length of the court without falling down"? And there's a whole other problem with reducing complex constellations of skills to a list of performance tasks. Exactly how do we define mastery of, say, essay writing? But the list problem is one we'll save for another day.

Because another big problem is that in defining mastery, we are giving students an answer to That Question. And while we may see all of this as a culminating performance task that shows mastery of particular skills, what the student sees is an assignment or test and an answer to That Question. They will know exactly what they have to do to be marked as masters, and many will not do an iota more. This is doubly true in mastery systems where the assignment-- I'm sorry, the mastery performance task-- is just pass-fail and the students are grade-oriented. Why be amazing if you just get the same passing grade as a bare-minimum project?

Mastery Learning makes it more difficult to push a student to work at the top of their game. Instead of a system in which teachers can use their full bag of tricks to raise or lower the bar for individual students, they're stuck with a system where the bar is welded in place, probably somewhere around the mediocre middle.

There are ways to make a mastery system better, starting by jettisoning the word "mastery," which suggests, incorrectly, that such a system will demand excellence of every student in all subjects, which is as realistic as No Child Left Behind's requirement for 100% above-average test scores by 2014. Though I'm no fan of CBE, "competence" is a better term. Nobody takes a driver's test to show that they are masters of driving; the state just wants to know if you meet the basic competence level. To further tweak the competence system, we could institute an assessment system that showed just how competent the individual student was, and to get past the checklist flaw, we could give a wide variety of performance tasks that overlapped in the competencies that they assessed, with the ultimate effect that we get a more holistic look while assessing each competency multiple times instead of just once. Periodically we could combine all the competence level marks from each performance task into a combined competency ranking, and we could issue a report, maybe on something simple like a card, every so often, listing the precise gradations of that student's competencies, and the grades on that report card would indicate whether the student is competent to proceed to the next level of.... oh. Never mind.

Mastery [sic] Learning is innately prone to promote mediocrity, both in how it reduces complex learning to a list of simplified performance tasks and in how it answers That Question. What's the absolute least you can get away with doing in a class? Mastery [sic] Learning systems will always tell you, and that's not a good thing.








Monday, September 24, 2018

The Road Beyond The Test

I missed this piece when it first ran at KQED's education tab Mind/Shift, but Katrina Schwartz's article is worth looking at because it captures some of the inaccurate thinking that still surrounds the Common Core today.

Entitled "How To Teach the Standards without Becoming Standardized," the piece patches together some interviews at EduCon with Diana Laufenberg and some others. Laufenberg teaches at a Philadelphia magnet school and also tells people about project-based learning and "structuring modern learning ecosystems."

This was back in spring of 2014, so they could still talk about teacher "ambivalence" toward the Core rather than, say, hatred, and it is organized around a list of ways to avoid becoming standardized in your teaching. As is usually the way with such pieces, the advice is diplomatically worded versions of "ignore the standards stuff and just follow best practices in your professional judgment."

For instance, #2 is "Teach students to question. When kids develop effective questioning techniques they become active partners in constructing learning." That doesn't really have anything to do with the Core; it's just a decent piece of teacher advice.

But buried amidst the educational wonder bread is a piece of bad advice that KQED liked well enough to turn into a pull quote, but which captures a major misconception about the Core. It's a Laufenberg quote, and while she says some sensible things elsewhere in the article, this is not one of them:

Teach past the test to this other meaningful, creative work and you will get the test, but you’ll get all this other stuff too.

I've heard this sentiment expressed many times over the past decade, particularly from administrators. It's a pretty thought, but it's wrong.

The assumption here is that the Really Good Stuff is just straight on past and on beyond the "get ready for the test" stuff. Like if you're starting from Chicago and you have to go to Cleveland, but you really want to go to Pittsburgh, so you just stop in Cleveland, then hop on I-76 and head to Pittsburgh. Easy peasy. Only the real analogy is, starting from Chicago you're required to go to Tampa, but you really want to go to Seattle. Tampa is not on the way, not even a little.

To get to good writing, you do not just do test-pleasing writing, only a little more so. Test writing and authentic quality writing are two entirely different things that just happen to bear a superficial resemblance to each other because they both look like words arranged in sentences arranged in paragraphs. But as a classroom teacher, I would literally tell my students, "I'm now going to teach you some things to use for testing, but you should not ever use these in any other writing situation." The behaviors needed to game a writing portion (e.g. parrot the prompt, use big words, write a bunch even if it's repetitious, never worry about accuracy or internal logic) are not desirable features in real writing.

Nor is the ability to respond immediately to a multiple-choice question about a short reading excerpt taken out of context a step on the path to mature, reflective analysis of a full-sized work of literature.

This notion that the path to the test is at best a small detour, and that after we've touched that base we can head on into the Land of Actual Education, is a snare and a delusion. The path to test readiness is a dead end; the road goes no further, and all that lies beyond is just a vast, barren wasteland.

Sunday, September 23, 2018

ICYMI: Tech Sunday Edition (9/23)

No, not ed tech. It's tech Sunday, the kickoff of show week for the local theater production of The Producers that I'm directing. So I may be a little more scarce this week, but I've still collected a few things for you to read.

The Case Against High School Sports

Need to start a big argument? This article should do the trick-- but it also offers some things to think about.

Learning from What Doesn't Work in Teacher Evaluation

Audrey Amrein-Beardsley shares some of the lessons of bad teacher evaluation (cough cough VAM).

Finding the Holy Grail in Poverty Mining

More scary news from the world of digital balls and chains. How exactly can the rich profit from poverty and data?

Bring Me a Higher Love

Jose Luis Vilson reminds us again of the place of the L word in teaching.

13 Things I learned While Blogging        

Nancy Flanagan remains one of my blogging heroes-- and she's pulling up stakes at Ed Week and moving out on her own. Her tenure remains one of the great runs in edublogging; I look forward to reading her without a paywall.

Rep Eddie Farnsworth makes a killing in AZ charters

Arizona remains one of the great places to run a charter school scam, particularly if you're a legislator there. Here's how one guy has played the game and made himself rich while pretending to be interested in education.

Behind Closed Doors    

Sarah Blaine has been quiet on the blogging front for a while, but she returns with a vengeance, looking at a New Jersey legislator's insistence that the decisions about the future of PARCC should be made far away from the prying eyes of parents, educators, and taxpayers.  

Who Is Behind Leaders in Education PAC    

The indispensable Mercedes Schneider once again does the research to peel back the layers of a reformster group and discovers the same old rich folks.

The Educational Outcomes Fund  

A plan for using education to strip more money from Africa and the Middle East.

A Teacher's Thoughts at 2 AM  

An honest and open reporting by Cori Anderson-Lais of her inner dialogue struggling with several school issues.


Saturday, September 22, 2018

How Pushback Against Reform Is Used To Push Reform Forward

The pushback against education reform ideas like Common Core and test-centered accountability has brought together a broad assortment of voices, from the right to the left, from supporters of public education to avid home schoolers. But the opposition to some forms of education reform also includes one other group--advocates of newer education reform.

It's called jiu-jitsu.

Take this piece, recently being recirculated on the interwebs, entitled "The Testing Emperor Finally Has No Clothes." Writer Bruce Dixon wastes no time getting straight to the point-- he's talking about the "tyranny of testing." He's speaking to an Australian audience, but his criticisms are recognizable to U.S. and British educators critical of the practice of using Big Standardized Tests to measure students and schools and teachers.

Standardized testing policy is "intrusive, divisive, deceitful" and is "fast turning teachers into lab rats." It's an "insidious virus" based on "chicanery." It fosters the false impression that learning is a competitive sport. It "kills curiosity and penalizes diversity." He quotes writers like John Holt, Anya Kamenetz, Deborah Meier and Alfie Kohn, all critics of test-centered schooling. He invokes Finland as an education exemplar.

And he points out that the testing industry is big money, pushing the testing regimen as a giant cash cow. The choice, Dixon suggests, is between lobbyists and learners.

Up until this point in the article, Dixon's argument could have been written by any of the army of advocates for public education. But Dixon is not a soldier in that army, and he signals it as he starts to consider alternatives to a testing regimen. "Our modern world demands," he says, "a shift in thinking about credentials at every level, how and why they are awarded but more importantly why."

Dixon cites the Mastery Transcript Consortium, a group that proposes a new way of reporting student achievement based on what it calls "Mastery Credits." This is a version of what are sometimes called micro-credentials or badges. The idea behind them is to break down every bit of learning into one particular item, and once you pass some sort of competency test, a little badge goes in your permanent digital record (not unlike earning an achievement on your Xbox game). In the super-deal version, all of this is both delivered by and recorded via computer and the cloud, so that you can earn them anywhere, any time, and your digital "transcript," perhaps stored blockchain-style, can be accessed by any future employers, the government, and anyone who pays for the privilege, forever. If you want to know more about this, look at the speculative video about The Ledger.

As you might have figured out, this approach to learning renders a traditional public school virtually obsolete and unnecessary.

So who is Bruce Dixon? He's the cofounder and president of the Anywhere Anytime Learning Foundation and a consultant who talks to education departments and tech companies and advocates for 1-to-1 learning, an approach that puts one computer in the hands of every student.

Another name for the mastery based approach is Competency Based Education, and another name for that is Personalized Learning, which many education experts see as Education Reform 2.0. Ironically, one of the great marketing tools for Education Reform 2.0 has been Education Reform 1.0. Bruce Dixon is just one example of someone who's using dissatisfaction with Education Reform 1.0 to pitch his 2.0 products.

If you are someone who is unhappy with test-centric schooling and the data gathering that goes with it, you should take a careful look at anyone who hopes to rescue you. The CBE you're considering may well get rid of the once-a-year Big Standardized Test--and replace it with small standardized tests every day. It may be heavily computer-based, and therefore hoovering up huge amounts of data about your child. And rather than giving control back to your local school and your child's teacher, it may strip the local school of even more control and reduce the classroom teacher to an aide who monitors the software and keeps students on task.

And as always, the best question to ask someone who promises to rescue you from one Awful Thing is, "What are you selling?"

Originally posted at Forbes   

Friday, September 21, 2018

Field Guide To Bad Education Research

Folks in education are often criticized for not using enough research-based stuff. But here's the thing about education research-- there's so much of it, and so much of it is bad. Very bad. Terrible in the extreme. That's understandable-- experimenting on live young humans is not a popular idea, so unless you're a really rich person with the financial ability to bribe entire school districts, you'll have to find some clever ways to work your research.

The badness of education research comes in a variety of flavors, but if you're going to play in the education sandbox, it's useful to know what kinds of turds are buried there.

The Narrow Sampling

This is the research that provides sometimes shocking results-- "Humans Learn Better While Drinking Beer." But when you look more closely, you discover the sample lacks a little breadth-- say, fifteen male college juniors in an Advanced Psychology course at the University of Berlin. These may be experimental subjects of convenience; the above researcher may have been a U of B grad student who worked as a TA for the Advanced Psychology course.

Generally these narrow projects yield results that are not terribly useful, but if you're out shopping for research to back whatever you're selling, these can often provide the "research base" that you wouldn't otherwise find.

The Meta Study

Meta research involves taking a whole bunch of other studies and studying the studies in your study. The idea is to find patterns or conclusions that emerge from a broad field of related research. Meta research is not automatically bad research. But if the meta researcher has gone shopping for studies that lean in his preferred direction, then the pattern that emerges is-- ta-da-- the conclusion he went fishing for.

This is a hard thing to check. If you know the literature really well, you might look for which studies are not included. But otherwise just keep a wary eyeball out.

The Not Actually A Study

These are cranked out pretty regularly by various thinky tanks and other advocacy groups. They come in nice, slickly packaged graphics, and they are not actual research at all. They're position papers or policy PR or just a really nicely illustrated blog post. There are many sleight-of-hand tricks they use to create the illusion of research-- here are just two.

Trick One: "Because there are at least ten badgers in our attic, many of the neighbors took to drinking pure grain alcohol." There will be a reference for this sentence, and it will provide a source for the number of badgers in the attic. Nothing else, including the implied cause and effect, will be supported with evidence.

Trick Two: "Excessive use of alcohol can lead to debilitating liver disease. The solution is to sell all alcoholic beverages in plastic containers." References will shore up the problem portion of the proposal, establishing clearly that the problem is real. Then the writers' preferred solution will be offered, with no evidence to support the notion that it's a real solution.

The Not Actually A Study is also given away by its list of works cited, which tends to consist of other non-studies from other advocacy groups (or, in the case of ballsy writers, a bunch of other non-studies from the same group). No real academic peer-reviewed research will be included, except a couple of pieces that shore up unimportant details in the "study."

The Thousand Butterfly Study

Like studies of other human-related Stuff (think health-related studies), education studies can involve a constellation of causes. When researchers study data from the real world, they may be studying students over a period of time in which the teaching staff changed, new standards were implemented, administration changed, new standardized tests were deployed, new textbooks were purchased, the cafeteria changed milk suppliers, a factory closed in town, a new video game craze took off, major national events affected people, and any number of imponderables occurred in student homes. The researcher will now try to make a case for which one of those butterflies flapped the wings that changed the weather.

Some butterfly researchers will try to create a compelling reason to believe they've picked the correct butterfly, or, what is more likely, they will try to make a case that the butterfly in which they have a vested interest is the one with the power wings. Such a case is always shaky; this is a good time to follow the money as well as the researcher's line of reasoning.

The worst of these will simply pretend that the other butterflies don't exist. The classic example would be everyone who says that the country has gone to hell since they took prayer out of school; crime rates and drug use and teen pregnancy, the argument goes, have all skyrocketed as a result of the Supreme Court's decision-- as if nothing else of importance happened in 1962 and 1963.

The Bad Proxy Study

Education research is tied to all sorts of things that are really hard, even impossible, to actually measure. And so researchers are constantly trying to create proxies. We can't measure self-esteem, so let's count how many times the student smiles at the mirror.

Currently the King of All Bad Proxies is the use of standardized test scores as a proxy for student achievement or teacher effectiveness. It's a terrible proxy, but what makes matters worse is the number of researchers, and journalists covering research, who use "student achievement" and "test scores" interchangeably as if they are synonyms. They aren't, but "study shows hummus may lead to higher test scores" is less sexy than "hummus makes students smarter."

Always pay attention to what is being used as a proxy, and how it's being collected, measured, and evaluated.

The Correlation Study

God help us, even fancy-pants Ivy League researchers can't avoid this one. Correlation is not causation. The fact that high test scores and wealth later in life go together doesn't mean that test scores cause wealth (wealth later in life and high test scores are both a function of growing up wealthy). The best thing we can say about bad correlation is that it has given rise to the website and book Spurious Correlations.

Just keep saying it over and over-- correlation is not causation.

The Badly Reasoned Study and The Convenient Omission Study

For the sake of completeness, these old classics need to be included. Sometimes the researchers just follow some lousy reasoning to reach their conclusions. Sometimes they leave out data or research that would interfere with the conclusion they are trying to reach. Why would they possibly do that? Time to follow the money again; the unfortunate truth of education research is that an awful lot of it is done because someone with an ax to grind or a product to sell is paying for it.

The Badly Reported Study

Sometimes researchers are responsible and nuanced and careful not to overstate their case. And then some reporter comes along and throws all that out the window in search of a grabby headline. It's not always the researcher's fault that they appear to be presenting dubious science. When in doubt, read the article carefully and try to get back to the actual research. It may not be as awful as it seems.

Keep your wits about you and pay close attention. Just because it looks like a legitimate study and is packaged as a legitimate study and seems to come from fancy scientists or a snazzy university-- well, none of that guarantees that it's not crap. When it comes to education research, the emptor needs to caveat real hard.