
Work smarter, not harder, and don’t let your AI intern hallucinate: Highlights from ISMPP EU 2026
With a theme of Excellence in an Era of Efficiency, attendees might have guessed that the topics at this year’s ISMPP EU meeting would lean towards the practical application of automation and AI-driven augmentation and enhancement, without jeopardising high standards in medical communications.
Two sessions stood out for their practical focus, highlighting not just what AI can do, but how we can use it effectively in day-to-day medical communications. The first, a hands-on session on prompting frameworks, had attendees empathising with their AI of choice. The second showed just how far AI-assisted editorial workflows have come.
Treat your AI like an intern
The interactive session encouraged attendees to think of AI as a new starter: capable, but still learning. Coach it, give it examples, and use multiple personas to review from different angles. Give it positive instructions, telling it what it can do rather than what it can’t, and above all, use the RICCE framework:
- Role
- Instructions
- Context
- Constraints
- Example
This simple structure can dramatically improve output quality (a minimal illustrative sketch follows below). Group exercises reinforced that we are all still learning, as we experimented with adding experience levels, reference rules, impact scoring, and step-wise confirmation. A few crowd favourites emerged too: explicitly telling the AI not to hallucinate, and even asking it to critique our prompts before answering.
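To make the framework concrete, here is a minimal sketch in Python of how a RICCE-structured prompt might be assembled before being pasted into whichever chat tool you use. The role, instructions, constraints, and example wording below are invented for illustration, not taken from the session:

```python
# Minimal sketch: assembling a RICCE-structured prompt as plain text.
# The wording below is an invented example, not ISMPP-endorsed guidance.

ricce_prompt = "\n\n".join([
    # Role: who the AI should act as
    "Role: You are an experienced medical editor reviewing congress slides.",
    # Instructions: a positive, specific task (what to do, not what to avoid)
    "Instructions: Check the slide text below for inconsistencies between "
    "the stated results and the cited references, and list each issue with "
    "the slide number and a suggested correction.",
    # Context: background the AI would not otherwise have
    "Context: The deck summarises a phase 3 oncology trial for an HCP audience.",
    # Constraints: scope, style, and honesty rules
    "Constraints: Use only the references provided; if a claim cannot be "
    "verified from them, say so explicitly rather than guessing.",
    # Example: show the output format you expect
    "Example output: 'Slide 4: OS figure (18.2 months) does not match "
    "reference 2 (17.8 months); suggest updating the value.'",
])

print(ricce_prompt)  # paste the result into your chat tool of choice
```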
Editorial workflows: What the data shows
This session took a more analytical angle, looking at how AI performs in day-to-day editorial tasks within PowerPoint — a platform that is visually oriented, but rarely editor-friendly. Reference extraction on a fixed slide-set showed clear differences across tools: ChatGPT captured 91% of references, Custom GPT 86%, and Copilot 72%. Copilot struggled most with abstracts, non-journal sources, and clinical trial identifiers, while ChatGPT performed best with footnotes.
Error detection was equally revealing. Human editors corrected 96% of 69 seeded errors in 22 minutes. ChatGPT‑o3 corrected 94% in just 2.5 minutes, while Custom GPT reached 80% in 1 minute. Different error types tripped up different tools, emphasising that AI supports — rather than replaces — experienced reviewers.
Critique, not creation
Many teams are seeing stronger results when using AI to critique, refine, or stress-test human-written content rather than create it from scratch. Personas aligned to target audiences can make this even more effective. AI adds the most value where a task calls for judgement or pattern recognition, or where conditions are uncertain.
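As a purely illustrative sketch (the personas and wording here are invented, not drawn from the session), a critique-style prompt might pair a human-written draft with several audience personas like this:

```python
# Illustrative sketch: asking AI to critique an existing draft from
# several audience personas, rather than writing content from scratch.
# The persona descriptions are invented for this example.

personas = [
    "a time-pressed oncologist scanning for clinical relevance",
    "a medical publications professional checking accuracy and referencing",
    "a health literacy reviewer assessing clarity for a lay summary",
]

draft = "..."  # placeholder for the human-written text to be stress-tested

critique_prompts = [
    (
        f"Role: You are {persona}.\n"
        "Instructions: Critique the draft below. Do not rewrite it; instead, "
        "list the three issues that would matter most to you and why.\n"
        f"Draft:\n{draft}"
    )
    for persona in personas
]

for prompt in critique_prompts:
    print(prompt, end="\n\n---\n\n")  # one critique prompt per persona
```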
Wrapping up
Traditional roles in medical communications are evolving fast, and the takeaway message from ISMPP EU 2026 was clear: with the right prompting habits and oversight, we have a powerful partner in delivering fast, accurate, consistent and perceptive scientific communications.
