Essays get the electronic red pen

Get ready, college students. That lovable robot R2-D2 may be grading your next term paper. Well, almost.

A few kinks remain, but about 200 master's students at Florida State University may soon become the first in American higher education to have their writing graded by computer.

Only last spring, two researchers announced development of the Intelligent Essay Assessor, Web-based software they said could first be trained to "understand" expert writing in various disciplines - then pass judgment on student essays.

Thomas Landauer, a University of Colorado at Boulder psychology professor, and Peter Foltz, a psycholinguist at New Mexico State University, Las Cruces, tested their creation first on their own students. To be fair, the students were given a choice of accepting the machine grade on essays, or having the professors grade them. Almost all preferred the machine.

An interesting experiment? Surely. Yet efforts over the past decade to interest professors in such software hit a wall. This new approach, incorporating sophisticated ways of allowing a computer to put words into context, might also have remained a higher-ed footnote. But this time, overworked professors with large classes are signing up.

Last fall's announcement generated a "big reaction," says Professor Foltz, including both ire and unusual interest from professors nationwide.

Predictably, many deride the idea. "How much should you predicate [on] a semantic system that considers 'the cat ate the thumbtack' equivalent to 'the thumbtack ate the cat?' " wrote one Stanford professor via e-mail.

But others saw promise and volunteered to use the software. "I'm VERY interested in reducing my 'Sisyphean' task of grading student essays," wrote a professor from the University of Wisconsin.

In fact, five professors will implement the computerized essay assessor in classes this semester, Foltz says. Another 45 to 50 have written to say they are strongly interested in doing so. Colleges, software companies, and even Princeton, N.J.-based Educational Testing Service (some of its tests include essay questions) have been calling.

One of those charging fearlessly into the future is Myke Gluck, an associate professor at Florida State University in Tallahassee who helps teach a first-year graduate course in Information Studies. Along with three faculty colleagues and three graduate assistants, Professor Gluck plans to use the computer program this semester to grade the final essay (several thousand words) written by the class's 200-plus students.

Besides reducing workloads, computer grading should also lend more consistency, which Gluck admits can vary among assistants. If any student decides the computer has been unfair - or simply does not wish to use it - a human will grade the paper, he says.

"We understand that there are some essays it just won't be able to handle," he says. "Either the content or format will be a little off. So we'll do those by hand.... We're just hoping to get the bulk graded by the software - that would relieve the burden."

Professor Landauer predicts the early, best use will be in distance learning, where students can get lots of practice writing - and quick grading and computer comments on what's not right.

"They'll be able to write and get instant feedback," he says. "It will be useful for Internet classes with hundreds or thousands of students. It can give a student instant feedback about what's missing from their essay, and where in their textbook they might find it."
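How might software "trained on expert writing" pass judgment at all? The toy sketch below is one way to picture the idea, not the Intelligent Essay Assessor itself: a plain bag-of-words comparison in Python, with invented function names and made-up texts, standing in for the far more sophisticated latent-semantic-analysis work the researchers have published. It also makes the Stanford professor's complaint concrete, because a model this simple ignores word order completely.

```python
# Minimal sketch (assumptions only, not the actual Intelligent Essay Assessor):
# score a student essay by its word-level overlap with expert reference essays.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Lowercased bag of words; word order is discarded entirely."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values()) * sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def grade(student_essay: str, expert_essays: list[str]) -> float:
    """Score a student essay by its best overlap with the expert material."""
    student = vectorize(student_essay)
    return max(cosine_similarity(student, vectorize(e)) for e in expert_essays)

# The Stanford professor's objection falls straight out of a model like this:
# with word order ignored, these two sentences look identical (score 1.0).
print(grade("the cat ate the thumbtack", ["the thumbtack ate the cat"]))
```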
Both Landauer and Foltz say their program can be taught to grade close to the way humans would - especially when student essays must include specific content in fields like chemistry, medicine, or biology. Both say the program cannot grade creative writing.

Despite such concessions, some are less than enthusiastic about the project. "I don't think students really want machines reading what they write - they want people," says Dennis Baron, chairman of the English department at the University of Illinois, Urbana-Champaign. "It's seen as a labor-saving device to allow instructors to do more important things. What's more important than reading your students' writing? I'm not sure. I think you need a human interface."

That sentiment notwithstanding, signs abound that computer grading of student writing may soon emerge in a full-blown way.

In coming weeks, the Educational Testing Service plans for the first time to have a computer program - its own - grade essay questions on the Graduate Management Admission Test, or GMAT, taken by those aspiring to a master's degree in business.

A human and the computer will grade each essay question, says Lawrence Frase, executive director of research at ETS. If both agree, then the grade stands. But if the two disagree, another human will grade it to decide the matter.

GMAT's owner, the Graduate Management Admission Council, decided to go ahead because "the rate of agreement [between computer and human] was as high as between two human readers," says Frederic McHale, vice president of assessment and research.

"We're reaching a new stage in these computers where they're going to look intelligent," ETS's Mr. Frase says. "Right now we're thinking of keeping the humans in the loop."

Comments? E-mail claytonm@csps.com
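A postscript for the technically minded: the scoring rule Frase describes is compact enough to write down. The Python sketch below is only an illustration of that rule as reported - one human score, one computer score, a second human only when they disagree. The names, the integer scores, and the exact-match notion of "agreement" are assumptions; the article does not say how ETS defines agreement.

```python
# The GMAT essay-scoring rule as the article reports it: a human and the
# computer each grade the essay; if they agree the grade stands, otherwise
# a second human settles it. All names and the exact-match "agreement" test
# here are illustrative assumptions, not ETS's actual procedure.
from typing import Callable

def grade_of_record(human: int, computer: int, second_human: Callable[[], int]) -> int:
    if human == computer:
        return human            # reader and program agree: that grade stands
    return second_human()       # they disagree: another human decides

# Example: first reader gives 4, the program gives 5, a second reader rules 4.
print(grade_of_record(4, 5, second_human=lambda: 4))
```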
