Chances are you've seen the ads for Grammarly, a service that, at least in the ads, seems to offer the same mediocre writing advice that you can get from the red and green squiggly lines in Word. Can we expect to see Clippy offering to run your HR department soon?
In one ad, it helps a student spot passive voice, a comma splice, and a plagiarized section, and so she gets an A+ and the professor writes "wonderful use of words." It's true-- my favorite student papers are the ones that use words! And that damned thing has been viewed almost seven million times. In another, Grammarly helps a man write a come-on text, saving him from using the wrong "its" and the wrong "write." And in the one I often get, a guy wants to write a message to his new work team, complete with spelling fixes, thesaurus suggestions, and all. All in all, it looks like the program could be as useful as hiring a smart seventh grader to look over your stuff.
But this guy (self-publishing Dale) thinks it's swell, as do the commenters on his video, and if you can't trust YouTube commenters, who can you trust? He explains that Grammarly offers word choice suggestions, context improvement (yeah, I don't know what that is), grammar correction (presumably it means usage correction-- a common error), and plagiarism detection, which-- well, I mean, if you plagiarized, you already know that, don't you? Either that tool is meant for editors of other people's work (and if you're an editor, why do you need the rest of these features?), or the tool is to help you see whether you've camouflaged your plagiarism well enough to avoid detection. Either way, shame on you.
Scanning through YouTube, I also get the sense that Grammarly has fans among folks whose first language is not English.
So once again, we have the claim that some software can evaluate writing effectively. This claim has always been bunk in the past-- has Grammarly cracked the code?
Jacob Brogan at Slate has been playing with it, and as a bonus to his article about Grammarly's security issues, he noted some other problems as well: Grammarly has some lousy ideas about how to "fix" the construction "really important," and the software seems relatively easy to stump.
Even Grammarly’s most basic suggestions can still lead users astray. Take this sentence from an article I recently published in Slate: “No matter what he’s wearing, he almost always opts for long sleeves—here in an Apple Store uniform (just one of the team!), there in a plain sport shirt.” Grammarly identifies three possible problems. First, seemingly thrown by syntactical complexity, it suggests that I should replace “there in a” with “there is a,” a change that would be ungrammatical, but that still leaves me questioning my own stylistic choices. Second, it proposes substituting “sports shirt” for “sport shirt,” an acceptable, if uncalled for, alternative. Third, and worst of all, it declares, “The word ‘plain’ doesn’t seem to fit in this context,” and informs me that I should change it to “plaid.” While switching things up might be good for your sartorial style, it’s only going to make your prose more baffling. This is an instance of what I’m tempted to call the algorithmic uncanny valley, that point at which a program is astute enough to recognize that humans often pair “shirt” with “plaid” but not enough to understand that they also do so with “plain.”
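Brogan's "algorithmic uncanny valley" is what you get from frequency without understanding. Here's a minimal sketch (purely illustrative-- I have no idea how Grammarly actually works, and the corpus counts are made up) of how a suggestion engine driven only by collocation counts would happily turn "plain" into "plaid":

```python
# Illustrative only: a naive collocation "fixer" that recommends whichever
# modifier most often appears before a noun in a toy corpus, with no sense
# of what either word means.
from collections import Counter

# Hypothetical corpus statistics: counts of (modifier, noun) pairs.
pair_counts = Counter({
    ("plaid", "shirt"): 120,
    ("plain", "shirt"): 15,
    ("sport", "shirt"): 40,
})

def suggest_modifier(modifier, noun):
    """Return the modifier most frequently seen before `noun`,
    if it beats the one the writer actually used."""
    candidates = {m: c for (m, n), c in pair_counts.items() if n == noun}
    if not candidates:
        return modifier  # nothing in the corpus; leave the writer alone
    best = max(candidates, key=candidates.get)
    # Purely frequency-driven: "plain" is rarer than "plaid", so it "loses",
    # even though the writer's sentence was perfectly fine.
    return best if candidates[best] > candidates.get(modifier, 0) else modifier

print(suggest_modifier("plain", "shirt"))  # recommends "plaid"
```

The point of the sketch is that nothing in it is wrong as statistics; it is wrong as editing, because popularity of a word pairing says nothing about whether the writer meant the less popular one.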
So no, the key to software that can handle language like a human is still undiscovered. Let's just hope that Grammarly doesn't try to market itself as school assessment software and-- oh, hell. Too late.
Yep. Grammarly@EDU promises "better students, happier teachers" and also says it "fuels academic success." It's trusted by 600 universities, including the University of Phoenix, so you know it's only the best schools that partner up (the full 600 are not listed, which means somebody decided that, out of that whole list, the University of Phoenix would be a good one to highlight).
Ask for a quote today. Because while the search for software that can handle human language is not yielding much in the way of results, the search for software that can use baseless promises to generate revenue is never-ending, and often lucrative.