From hospital leaflets to spoken answers in dozens of languages, new research from the University of East London (UEL) suggests artificial intelligence could dramatically improve how patients learn about serious eye conditions.
A research team led by UEL’s Dr Mohammad Hossein Amirhosseini and Dr Fatima Kalabi from Queen’s Hospital in London, in collaboration with Moorfields Eye Hospital in London and Inselspital University Hospital of Bern in Switzerland, has developed a multilingual, voice-enabled AI chatbot designed to help people understand retinal detachment, a sight-threatening condition that often requires urgent surgery. The system allows patients to ask questions in natural language and receive clear, clinically grounded answers drawn from trusted medical sources.
The technology is designed as a modern alternative to traditional patient information leaflets, which can be difficult for many patients to read or interpret, particularly those with visual impairment, limited health literacy, or language barriers. Instead of relying on static written material, the AI tool offers dynamic, conversational interaction and real-time responses, and can speak its answers aloud in multiple languages.
The system utilises large language models customised with a method known as retrieval-augmented generation, which ensures responses are grounded in verified clinical knowledge rather than freely generated text. The research team built a clinician-curated and validated knowledge base using peer-reviewed and hospital-approved information about retinal detachment and then evaluated how well different AI models could answer patient questions.
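The paper's release does not detail the team's implementation, but the retrieval-augmented generation pattern it describes can be sketched in miniature: embed a question, find the most relevant passages in a curated knowledge base, and constrain the model's answer to that retrieved context. The snippet below is an illustrative toy, using simple word-overlap similarity in place of a real embedding model, and placeholder knowledge-base text rather than the team's clinician-validated content.

```python
import math
import re
from collections import Counter

# Toy stand-in for a clinician-curated knowledge base (illustrative text only).
KNOWLEDGE_BASE = [
    "Retinal detachment is an emergency in which the retina pulls away "
    "from its supporting tissue.",
    "Warning signs include sudden flashes of light, new floaters, and a "
    "shadow over the visual field.",
    "After retinal detachment surgery, patients may need to maintain a "
    "specific head posture during recovery.",
]

def _vector(text: str) -> Counter:
    """Bag-of-words vector; a real system would use a neural embedding."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base passages most similar to the question."""
    q = _vector(question)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: _cosine(q, _vector(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Ground the LLM: it may answer only from the retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer the patient's question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
```

The grounding instruction in `build_prompt` is the key step: because the model is told to answer only from clinician-approved passages, its output stays anchored to verified content rather than freely generated text.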
Testing compared three leading large language models (GPT-4o, Claude Opus, and Gemini 1.5 Pro) on 50 clinically relevant questions. The study found that GPT-4o consistently outperformed the other models, producing the most accurate and reliable responses when assessed using widely used language evaluation metrics.
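The release does not name the specific metrics used. As an illustration of what "widely used language evaluation metrics" typically look like, the sketch below implements a unigram-overlap F1 score in the style of ROUGE-1, which compares a model's answer against a clinician-written reference; this is an assumed example, not the study's evaluation code.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a model answer and a reference answer.

    Precision: fraction of candidate words found in the reference.
    Recall:    fraction of reference words covered by the candidate.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped word-level matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Scoring each model's 50 answers this way and averaging would yield a per-model score of the kind used to rank GPT-4o, Claude Opus, and Gemini 1.5 Pro against each other.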
The chatbot also includes accessibility features designed for real-world healthcare settings. Patients can speak their questions instead of typing them, while the system can read responses aloud using multilingual text-to-speech technology. This means people with low vision or limited English proficiency can still access clinically reliable and easy-to-understand information about their condition.
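Architecturally, the accessibility features describe a three-stage pipeline: speech-to-text on the patient's question, the RAG answering step, and multilingual text-to-speech on the response. The skeleton below sketches that flow with pluggable backends; the callables are placeholders for whatever STT, LLM, and TTS services an implementation would wire in, and none of the names reflect the team's actual system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoicePipeline:
    """Illustrative voice-in, voice-out pipeline with injectable backends."""
    transcribe: Callable[[bytes], str]        # speech-to-text backend
    answer: Callable[[str], str]              # RAG question answering
    synthesize: Callable[[str, str], bytes]   # text-to-speech, per language

    def handle(self, audio: bytes, language: str = "en") -> bytes:
        """Spoken question in, spoken answer out."""
        question = self.transcribe(audio)
        response = self.answer(question)
        return self.synthesize(response, language)
```

Keeping each stage behind a simple callable interface means the same pipeline can serve typed questions (skip `transcribe`), on-screen answers (skip `synthesize`), or any combination, which is how one system can serve patients with low vision, limited English, or both.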
Research technical lead Dr Amirhosseini, Associate Professor in Computer Science and Digital Technologies, said the work highlights how AI could strengthen communication between patients and healthcare providers.
“Patient information leaflets have been used for decades, but they are static and often difficult for people to engage with, especially when they are anxious or struggling with vision problems,” he said.
“Our research shows that a carefully designed AI system can provide personalised, context-aware explanations, answer questions in real time and deliver information in multiple languages and formats. The goal is not to replace clinicians, but to augment clinical communication and empower patients to better understand their condition and feel more confident during the pre-operation and post-operation phases.”
Because retinal detachment requires timely treatment and careful post-operative care, clear communication is crucial. Patients frequently report confusion about symptoms, recovery steps and follow-up care after surgery. By offering an interactive way to ask questions at any time, the chatbot could help reinforce clinical guidance, reduce anxiety, and improve patient adherence to treatment plans outside busy consultations.
The system was developed as a research prototype and currently operates in a secure local environment. All answers are generated from clinician-approved information sources to ensure accuracy, transparency, and alignment with clinical governance standards.
Researchers say the approach could eventually be adapted for other conditions and clinical pathways where patients need clear, accessible explanations of complex medical information, including chronic disease management, surgical care, and post-operative rehabilitation pathways.
The study, Transforming patient education on retinal detachment: A multilingual voice-enabled retrieval-augmented generation chatbot, was published on 27 February 2026 in the peer-reviewed Journal of Artificial Intelligence and Robotics.