Does ChatGPT make the grade?
Cambridge research identifies hallmarks of AI-assisted essay writing
Cambridge researchers have identified the indicators of ChatGPT’s default writing style in student essay writing. Although the sample was small, the research represents an early attempt at understanding how students use ChatGPT in essay writing, and how ChatGPT-assisted essays score in assessment.
- Researchers identified tell-tale features of ChatGPT’s writing style – which include tautology, repetition and overuse of words like ‘however’
- They also found that ChatGPT-assisted essays performed worse than anticipated on skills such as analysis and comparison
Three undergraduate students participated in the research, and they wrote two essays each with the aid of ChatGPT (whose rules require users to be 18 or over - hence the inclusion of undergraduates in this research). These essays were marked by examiners and compared to 164 essays written by genuine IGCSE students. The undergraduates were then interviewed, and their essays were analysed.
Researchers found that while ChatGPT-assisted essays performed strongly on ‘information’ and ‘reflection’ compared to non-ChatGPT-assisted essays written by Cambridge IGCSE students – aged 14 to 16 – they performed poorly on ‘analysis’ and ‘comparison’.
The researchers noted that ChatGPT’s default writing style “echoes the bland, clipped, and objective style that characterises much generic journalistic writing found on the internet”.
Key features of this writing style include:
- The use of Latinate vocabulary (i.e. multisyllable, sophisticated vocabulary above the expected level of study);
- Paragraphs starting with specific discourse markers such as ‘however’, ‘moreover’, and ‘overall’, followed by a comma;
- Numbered lists with items followed by colons;
- Pleonasms (using unnecessary words to convey meaning, e.g. ‘true fact’ or ‘free gift’);
- Tautology (saying the same thing twice, e.g. ‘We must come together to unite’);
- Repetition of words or phrases and ideas;
- Consistent use of Oxford commas (e.g. ChatGPT has many uses for teaching, learning at home, revision, and assessment).
In interviews, the researchers also found commonalities in how participating students used ChatGPT to construct their essays. Although the students used ChatGPT to different extents, patterns emerged, with a broad consensus among students that ChatGPT is useful for gathering information quickly and can be integrated into essay writing through steps including making specific enquiries about topics, structuring the essay, and drafting it.
However, the students also considered that complete reliance on ChatGPT would produce essays of a low academic standard.
Lead researcher Jude Brady highlighted the value of these findings:
"Our findings offer insights into the growing area of generative AI and assessment, which is still largely uncharted territory.
"Despite the small sample size, we are excited about these findings as they have the capacity to inform the work of teachers as well as students. We recommend expanding the work to include more representative and larger sample sizes. We hope our research might help people to identify when a piece of text has been written by ChatGPT.
"For students and the wider population, learning to use and detect generative AI forms an increasingly important aspect of digital literacy."
Explore the research in full in Research Matters, Cambridge University Press & Assessment's biannual education research publication.