Grammarly’s expansion from writing support to AI tools that can write for you continued this week with a new feature designed to recognize AI-composed text. Grammarly Authorship is coming to Google Docs in beta, and it is aimed specifically at helping teachers identify when students are using AI to write their assignments.
Teachers and education policymakers have long grappled with the question of whether something was written using AI, but ChatGPT and its rivals have heightened the urgency for such a tool, since educators say generative AI renders traditional detection methods unreliable. The problem is compounded by false positives and by the difficulty of distinguishing AI-generated prose from the work of students who are still learning to write, particularly those who speak English as a second language.
Grammarly Authorship aims to address these issues by eschewing the approach of many existing AI detectors, which analyze text only after it has been written. Instead, Grammarly Authorship tracks the composition of a document in real time. It can reportedly tell when something was typed, pasted from elsewhere, or generated on the page using AI. The tool will integrate with more than half a million applications and websites, with writing platforms like Google Docs, Microsoft Word, and Apple’s Pages at its core.
The feature tracks typing as it happens, noting when text is copied in or when an AI tool, such as Google Gemini or Microsoft’s Bing AI, generates or edits it. Grammarly Authorship then creates a report, dividing the paper into categories based on whether it was typed by hand, AI-generated, or pasted from another source. It can even replay the document’s composition to show exactly how it came together.
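Grammarly hasn’t detailed how Authorship classifies each span of text under the hood, but as a rough, hypothetical sketch (the event labels and report format below are assumptions, not Grammarly’s API), the basic idea of tagging text by origin and summarizing it into a report might look something like this in Python:

# Hypothetical sketch only: not Grammarly's actual implementation or API.
# It illustrates the general idea behind an Authorship-style report: tag each
# chunk of text with its origin and summarize the share per category.
from collections import Counter
from dataclasses import dataclass

@dataclass
class TextEvent:
    text: str
    origin: str  # assumed labels: "typed", "pasted", or "ai_generated"

def authorship_report(events: list[TextEvent]) -> dict[str, float]:
    """Return the share of characters attributed to each origin."""
    counts: Counter[str] = Counter()
    for event in events:
        counts[event.origin] += len(event.text)
    total = sum(counts.values()) or 1
    return {origin: count / total for origin, count in counts.items()}

# Example: a short document assembled from typing, a pasted quote, and AI output.
events = [
    TextEvent("The Industrial Revolution began in Britain ", "typed"),
    TextEvent('"a period of transition to new manufacturing processes" ', "pasted"),
    TextEvent("and reshaped labor, cities, and trade worldwide.", "ai_generated"),
]
print(authorship_report(events))  # prints the approximate share of each origin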
Student AI Monitor
Individuals can purchase access to Grammarly Authorship, but teachers are clearly the tool’s primary market. AI-generated writing among students and in academic circles has been controversial: it has led to false accusations by teachers, even as other students have admitted to using AI to write papers without ever getting caught.
“As the school year begins, many institutions lack consistent and clear AI policies, even though half of people ages 14 to 22 say they have used generative AI at least once,” said Jenny Maxwell, head of Grammarly for Education. “This lack of clarity has contributed to an overreliance on imperfect AI detection tools, leading to a hostile back-and-forth between professors and students when papers are flagged as AI-generated. What’s missing from the market is a tool that can facilitate a productive conversation about the role of AI in education. Authorship does just that by providing students with an easy way to show how they wrote their paper, including whether and how they interacted with AI tools.”
Of course, that only matters if teachers decide to trust Grammarly’s new concept, and the tool will have to prove itself in real-world use. That’s on top of competing with potential rivals. OpenAI has developed a set of tools to detect content generated by ChatGPT and its AI models using a kind of watermark, but the company has decided not to roll them out yet, fearing they would cause problems even for people using AI with benign intentions. That follows the failure of its first AI text classifier, which the company pulled after just six months.
While having evidence like the Authorship Report removes the question of whether a student actually wrote their paper, there is an inherent privacy issue that may make some reluctant to use the tool. Students may not want to share every false start and dead end in their writing process, and there may be nothing suspicious about wanting to share only the final product. But they may not be given a choice. Writing at home could soon feel like taking a proctored test, with someone watching to prevent cheating and even innocent movements arousing suspicion.