Friday, January 21, 2022

The Search For Computerized Essay Grading Continues

It is the dream that will not die. There are still people who think the world would be a better place if student essays could be evaluated by software, because reasons. The problem has remained the same--for decades, companies have searched for a software algorithm that can do the job, but other than deciding to call the algorithms "AI," progress has been slim to none.

And yet, the dream will not die. So now we get a competition: Georgia State University has teamed up with The Learning Agency Lab (a "sister organization" of The Learning Agency).

The Feedback Prize is a coding competition being run through Kaggle, in which competitors are asked to root through a database of just under 26K student argumentative essays that have been previously scored by "experts" as part of state standardized assessments between 2010 and 2020 (which raises a whole other set of issues, but let's skip that for now). The goal is to have your algorithm come close to the human scoring results. Why? Well, they open their case with a sentence that deserves its own award for understatement.

There are currently numerous automated writing feedback tools, but they all have limitations. 

Well, yes. Primarily they are limited because they don't work very well. The contest says the problem with current automated feedback programs is that many "often fail to identify writing structures" like thesis statements or support for claims. Well, yes, because--and I cannot say this hard enough--computer algorithms do not understand anything in the sense that we mean the word. Computer language processing is just weather forecasting: looking at some bank of previous language examples and checking whether the sample being examined has superficial characteristics that match what the bank of samples would lead one to expect. But no computer algorithm can, for instance, understand whether or not your supporting evidence provides good, or even accurate, support.
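
To make the point concrete, here is a minimal sketch (in Python; the file name "essays.csv" and the columns "full_text" and "human_score" are invented for illustration, and this is not any actual contest entry) of how this style of essay scoring typically works under the hood: reduce each essay to word-frequency statistics, then fit those statistics to the human graders' numbers.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Hypothetical file and column names, just for illustration.
    essays = pd.read_csv("essays.csv")  # columns: "full_text", "human_score"

    train_X, test_X, train_y, test_y = train_test_split(
        essays["full_text"], essays["human_score"], test_size=0.2, random_state=0
    )

    # The model sees only word and phrase frequencies--superficial
    # characteristics. Nothing here represents whether a claim is true
    # or whether the evidence actually supports it.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), max_features=20000),
        Ridge(alpha=1.0),
    )
    model.fit(train_X, train_y)

    # "Success" is measured as agreement with the human graders' numbers
    # (R-squared here), not comprehension of a single sentence.
    print("Fit to human scores:", model.score(test_X, test_y))

Swap in a fancier model and the objection stands: the target is agreement with the human numbers, and understanding never enters into it.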

The competition also notes that most current software is proprietary, so that A) you don't even know what it's trying to do, or how, and B) you can't afford it for your school, particularly if your school is resource-strapped, meaning that poor kids have to depend on regular old humans to grade their writing.

For extra juice, they note that according to NAEP, only a third of students are proficient (without noting that "proficient" on NAEP is a high bar). They do not cite any data showing that automated essay grading helps students write better, because they can't. 

But if you enter this competition, you get access to a large dataset of student writing "in order to test your skills in natural language processing, a fast-growing area of data science."

If successful, you'll make it easier for students to receive feedback on their writing and increase opportunities to improve writing outcomes. Virtual writing tutors and automated writing systems can leverage these algorithms while teachers may use them to reduce grading time. The open-sourced algorithms you come up with will allow any educational organization to better help young writers develop.

902 teams have already entered; you can actually check their current status on a public leaderboard. There are lots of fun team names like Feedforward, Pomegranate, Zoltan, and Fork is all you need, plus many that are not in English. Poking through the site, you can see how much the writing samples are referred to and discussed as data rather than writing; many of these folks are conceptualizing the whole process as analyzing data rather than assessing writing, and in fact there don't seem to be any actual writing or teaching experts in sight, which is pretty symptomatic of the whole field of automated essay evaluation.

Who is in sight?

Well, you'll be unsurprised to find that the competition thanks the Gates Foundation, Schmidt Futures, and the Chan Zuckerberg Initiative for their support. Schmidt Futures, the name you might not recognize here, was founded by Eric Schmidt, former Google CEO, to technologize the future.

And if we look at the Learning Agency and the Learning Agency Lab, it's more of the same. The Agency is "part consultancy, part service provider," so a consulting outfit that works to "improve education delivery systems." They tout a team of "former academics, technologists, journalists and teachers." Sure. We'll see.

The outfit was founded by Ulrich Boser in 2017, and they partner with the Gates Foundation, Schmidt Futures, Georgia State University, and the Center for American Progress, where Boser is a senior fellow. He has also been an advisor to the Gates Foundation, Hillary Clinton's presidential campaign, and the Charles Butt Foundation--so a fine list of reform-minded, left-leaning outfits. Their team includes former government wonks, non-profit managers, comms people, and one woman who used to teach English at a private K-12 school. The Lab is more of the same; there are more "data scientists" in this outfit than actual teachers.

I'm going out on a limb to predict that this competition, due to wrap up in a couple of months, is not going to revolutionize writing assessment in any way. But the dream won't die, particularly as long as some folks believe that data-crunching machines can uplift young humans.

2 comments:

  1. Although I'm now retired after 32 years of teaching English/Language Arts at every level between ninth grade and grad school, I still get panic attacks on Sundays thinking I'll be spending 8 hours reading and commenting on essays. There was a time I would have sold what's left of my soul for software that would handle the stacks of papers mocking me all weekend, but your points are well taken. Writing, even academic writing, is highly personal and requires a personalized response. Students with strong writing skills need feedback and encouragement to develop their personal styles, while less proficient students need constructive criticism and encouragement. Try as I might, I never got comfortable with Turnitin, and found it far more valuable and satisfying (and yes, time-consuming) to meet with students individually and discuss the paper itself and the larger concepts of generating and organizing ideas.

    1. SO much this. There's a disconnect about what student writers actually NEED. The insistence that assigning a number to someone's ideas benefits anyone is ... well, it would be sad, if it weren't so horrifying.
