GenAI prompt engineering for language professionals
Since the initial release of ChatGPT in 2022, AI has been stirring up contradictory emotions. As for me, I approached it with caution, wondering whether it held more promise than threat. Chronicling Michael’s talk seemed like an opportunity to take the bull by the horns.
Please join me as I sit in the fourth row, my faithful fountain pen in hand, preparing for a rough ride.
Here comes Michael.
His voice is soothing; the pace is just right.
He starts by presenting the results of his recent survey: 29% of professional translators use generative AI (GenAI) for work, yet almost three-quarters use AI of some kind. Confused? The explanation: machine translation counts as “traditional” AI.
Having set the scene, Michael gets to the core of his talk: “the game is to get the prompt right”. He skillfully dissects an elaborate prompt into four elements: (1) instructions; (2) brief; (3) input data; and (4) output indicator, and shows an example asking for a translation. It becomes clear that a brief really means the context. “Oh, context!” I can hear sighs of comprehending relief all around. Be careful, though, Michael explains: AI only sees co-text (the words surrounding a particular word or phrase), not context. Unless you give it a brief, of course! Doesn’t that ring a bell?
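For readers who would like to try this at home, here is a minimal sketch of how those four elements might be assembled and sent to a model. The wording, the sample sentence, the OpenAI Python client and the model name are my own illustrative assumptions, not Michael’s exact example.

```python
# A minimal sketch of the four-part prompt anatomy described above.
# The wording, sample text and model name are illustrative assumptions.
from openai import OpenAI  # requires the openai package (v1+) and an API key

# (1) Instructions: what the model should do
instructions = "Translate the text below from Spanish into English."

# (2) Brief: the context the model cannot infer from co-text alone
brief = (
    "The text is a caption for a museum exhibit aimed at schoolchildren; "
    "keep the register simple and friendly."
)

# (3) Input data: the text to be translated
input_data = "El pulpo tiene tres corazones y sangre azul."

# (4) Output indicator: the form the answer should take
output_indicator = "Return only the English translation, with no commentary."

prompt = "\n\n".join([instructions, brief, input_data, output_indicator])

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model will do; this name is an assumption
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```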
Now, on to some general tips. Michael explains that ChatGPT is optimized for English, so prompts in English often work best. He also emphasizes that prompt engineering is an iterative process – it needs experimentation. And a last tip: “Remember that GenAI cannot do everything.”
The whiteboard image on display reminds me of my students’ best creative outputs. We’ve reached techniques now. Will Michael guide us into AI-friendly creativity? Unexpectedly, the terms themselves are very creative: “zero-shot prompting” means “prompting without giving examples”.
Back to the difference between co-text and context. Michael explains in more detail that AI doesn’t use language the way we do: it only uses the form, with no access to the meaning. “It doesn’t actually understand a single word it writes”, he concludes. Logically, then, providing examples – the “shots” in few-shot prompting – usually leads to better results.
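To make the zero-shot/few-shot distinction concrete, here is a hedged sketch of the same request written both ways; the sentence pairs are invented for illustration and are not from the talk.

```python
# Zero-shot vs. few-shot: the same translation request without and with examples.
# All sentences below are invented for illustration.

zero_shot = "Translate into English, keeping the informal tone:\n\nOn se voit demain ?"

few_shot = """Translate into English, keeping the informal tone.

Examples:
French: T'inquiète, j'arrive.
English: Don't worry, I'm on my way.

French: C'est pas grave.
English: No big deal.

Now translate:
French: On se voit demain ?
English:"""

# Either string can be sent as the user message in the client call sketched earlier;
# the few-shot version shows the model the pattern (the form) we want it to continue.
print(zero_shot)
print(few_shot)
```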
But that is not all. You can also play personas! To my great surprise, prompting ChatGPT to respond in line with a defined persona, or “expert”, makes it much closer to being one – the resulting translation is much better. But Michael soon shows us how easily it can fail: it gets a well-known film title wrong. Why? Because GenAI – unlike Google – is not an information retrieval system, and, since it does no fact-checking, it can “hallucinate”.
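In practice, a persona is usually supplied as a system message that sits above the actual request. The persona wording, sample sentence and model name below are my own assumptions, sketched only to show the mechanism.

```python
# Persona prompting: the persona goes in the system message, the task in the user message.
# Persona wording, sample sentence and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a senior medical translator working from German into "
                "English for peer-reviewed journals, and you follow the journal's "
                "house style."
            ),
        },
        {
            "role": "user",
            "content": "Translate: Die Studie wurde doppelblind durchgeführt.",
        },
    ],
)
print(response.choices[0].message.content)
```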
Yet the real surprise comes only now: you can actually ask GenAI to write its own prompt – a process that Michael likes to call “reverse engineering”. He shows how combining this with a few “shots” can produce excellent results.
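Elsewhere this trick is often called meta-prompting: you ask the model to draft the prompt, review it, then reuse it with your own examples. The sketch below is my own hedged illustration of the idea, not Michael’s demonstration.

```python
# "Reverse engineering" a prompt: ask the model to write the prompt it would need,
# then review it, fill in a few example "shots" and send it in a second call.
# The wording and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

meta_request = (
    "I need to translate marketing copy from Italian into English for a luxury "
    "watch brand. Write the prompt you would need to do this task well, including "
    "placeholders for two example translations and for the text to be translated."
)

draft_prompt = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": meta_request}],
).choices[0].message.content

print(draft_prompt)  # review it, fill in the example "shots", then reuse it
```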
Time flies; the talk will soon draw to a close. Michael takes us back to his survey results and puts them into context (not co-text!). While looking up word definitions appears to be the most common use of GenAI among translators, Michael shows us that this is where it easily fails. It is much better at finding synonyms and collocations, reverse translating or translating in a particular style, and – above all – proofreading. “Unfortunately, it’s actually very, very good at it”, he says.
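As a final taste, here is what a proofreading prompt along those lines might look like; the constraints and the faulty sentence are my own invented illustration, not one of Michael’s slides.

```python
# A simple proofreading prompt in the spirit of the use case mentioned above.
# The constraints and the sample sentence are invented for illustration.
proofreading_prompt = """Proofread the text below. Correct grammar, spelling and
punctuation only; do not change the meaning, the terminology or the sentence order.
After the corrected text, list every change you made.

Text:
The datas was collected over a three-years period in two hospital."""

print(proofreading_prompt)  # send as the user message in the client call sketched earlier
```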
A brisk Q&A session follows, with time to applaud. I’m putting my pen down, my mind full but surprisingly untaxed.
The ride was smooth; the “bull” stayed kind. While the AI-generated images may have fired our imagination, specific examples kept our feet on the ground. And Michael’s relationship with AI was reassuringly sound: “It’s stupid, you need to tell it exactly what you want.”
Many thanks, Michael, for so clearly showing us how.
This METM24 presentation was chronicled by Katarzyna Szymanska.
Featured photo courtesy of MET. Slides reproduced with presenter’s permission.