Authors (2) |
Kastamonu Üniversitesi, Türkiye |
Kastamonu Üniversitesi, Türkiye |
Abstract
Objectives: To evaluate the response and interpretative capabilities of two pioneering artificial intelligence (AI)-based large language model (LLM) platforms in addressing ophthalmology-related multiple-choice questions (MCQs) from Turkish Medical Specialty Exams.
Materials and Methods: MCQs from a total of 37 exams held between 2006 and 2024 were reviewed. Ophthalmology-related questions were identified and categorized into sections. The selected questions were posed to the ChatGPT-4o and Gemini 1.5 Pro AI-based LLM chatbots in both Turkish and English with specific prompts, then re-asked without any interaction. In the final step, feedback for incorrect responses was generated and all questions were posed a third time.
Results: A total of 220 ophthalmology-related questions out of 7312 MCQs were evaluated using both AI-based LLMs. A mean of 6.47±2.91 (range: 2-13) MCQs was taken from each of ...
Keywords
Artificial intelligence | ChatGPT-4 Omni | e-learning | Gemini 1.5 Pro | large language model | medical education | ophthalmology |
Article Type | Original Article
Article Subtype | Full article published in a SCOPUS-indexed journal
Journal Name | Turkish Journal of Ophthalmology
Journal ISSN | 2149-8709 (Scopus journal)
Journal Indexed In | SCOPUS
Article Language | English
Publication Date | 08-2025
Volume | 55
Issue | 4
Pages | 177-185
DOI | 10.4274/tjo.galenos.2025.27895