METM25 Chronicles: David Barick

Do they still need me? Teaching scientific writing in the age of AI

As a long-standing lecturer in academic writing and an editor and translator of academic texts, David Barick brought a timely and uncomfortable question to METM25: Do they still need me? In an era when ChatGPT can outline papers, rephrase sentences, and even critique logic, David invited attendees to reflect on what, exactly, human teachers still bring to the table.

He opened with examples from his workshops and from panels of EASE (the European Association of Science Editors) that he attended in 2024. In one of them, James Zou of Stanford University asked, “How well can LLMs provide feedback on papers?” The consensus was sobering: large language models can indeed flag stylistic inconsistencies and grammar problems, but they remain weak at distinguishing good research writing from merely adequate research writing. As another speaker at the same workshop put it, “ChatGPT 4.0 has a weak ability to differentiate between good or excellent and weak or acceptable research”.

To illustrate how he tests these boundaries in his own teaching, David shared experiments and writing samples with the audience, including a student assignment at VU University Amsterdam that asked learners to write 500 words about horses in the style of Donald Trump. The exercise was humorous but revealing: it exposed both the creative potential and the limits of algorithmic imitation.

The session’s central case study was an academic introduction on the neurocognitive effects of alcohol hangover. The text contained grammatical errors and other problems typical of Spanish-speaking writers. David compared excerpts from the original text with ChatGPT’s revisions. Participants could see that the AI produced grammatically correct, more concise sentences, such as “Hangover, a common consequence of acute alcohol intoxication, has been recognized since antiquity”, but it also introduced factual and logical distortions. ChatGPT tended to simplify or misinterpret relationships between ideas, for instance confusing binge drinking with hangover, or inserting transitions that disrupted the flow of the original argument.

David used this example to examine the program’s editorial judgment. While the AI often praised its own rewritten version as “clean, direct, and elegant”, it missed subtler issues such as balance, rhetorical purpose, and scientific logic. In his words, “ChatGPT can teach form, but not thought”. He emphasized that coherence, proportion, and reasoning, the very qualities embedded in Swales’s move–step model of scientific introductions, require human interpretation and disciplinary awareness.

Towards the end, David shared his own prompt to ChatGPT: “Do you think AI will become a satisfactory substitute for human teachers of scientific writing?” The model’s response, which he projected on screen, was surprisingly diplomatic: AI can already provide structure, examples, and feedback, but humans offer mentorship, ethical judgment, and emotional intelligence. In the model’s own phrasing, the best classrooms of the near future will combine AI tutors for individualized feedback with human instructors for mentorship, critical thinking, and social learning.

David closed by reaffirming this hybrid vision. Far from replacing writing instructors, AI highlights the value of what humans do best: guiding students to think like researchers, to argue ethically, and to communicate with purpose.

His final message was optimistic: the machines are here to stay, but so are we, because science still needs teachers who can read between the lines.

This METM25 presentation was chronicled by Gabriela Kouahla.

Featured photo by METM25 photographer Julian Mayers.
