Ophthalmology Review – A Case-study Approach

Ophthalmology Review – A Case-Study Approach. Kuldev Singh, William Smiddy, and Andrew Lee. New York: Thieme, 2018. 308 pp., £119.99 (softcover), 223 illustrations. ISBN 978-1-62623-176-4.

Your next patient has keratoconus. When was the last time you familiarized yourself with this disease? Herein lies the value of Ophthalmology Review. This text is invaluable to those of us who were primarily trained in neurology or who find themselves somewhat distant from the practice of general ophthalmology.

The authors assembled 62 distinguished contributors to create an easy-to-read text that presents ophthalmological disorders in a concise and useful format. The book takes a case-based approach to each disease and divides it into six or seven sections. This makes it very quick and easy to find the fundamental information that you seek.

  • Section 1: The history and findings on examination of a typical patient.
  • Section 2: Relevant diagnostic testing and interpretations.
  • Section 3: The diagnosis.
  • Section 4: Medical management.
  • Section 5: Surgical options.
  • Section 6: Recommendations for rehabilitation and follow-up care.
  • Section 7: Suggested reading.

Ophthalmology Review is not meant to be an exhaustive review of ophthalmological disease. That would require many thousands of pages and would not fit on my bookshelf. What I cannot overstate is the brilliant way this book has been organized. The format is concise and well-illustrated. It provides the reader with the significant information that he or she needs and a useful list of references for further study.


Ophthalmology Review: A Case-Study Approach

Ophthalmology Review uses a novel concept to present practical and common-sense approaches to commonly encountered clinical scenarios. Each topic is introduced as a case study, with the pertinent history and physical findings of actual patients treated by the authors. Each case follows a standardized format consisting of differential diagnoses and key points, test interpretation, diagnosis, medical and/or surgical management, rehabilitation and follow-up, and a list of pertinent references.

Ophthalmology Review: A Case-Study Approach. Arch Ophthalmol. 2002;120(8):1111.



Ophthalmology Review

A case-study approach.

  • Media type: Book
  • Edition: 2 (2019)
  • Details: 330 pages, 223 ill., Paperback (Perfect Binding)
  • ISBN: 9781626231764
  • Language of text: English
  • Price: US$ 119.99* (*prices exclude sales tax)


Real-world ophthalmology patient management cases provide a solid foundation of clinical knowledge

Ophthalmology Review: A Case-Study Approach, Second Edition by renowned experts Kuldev Singh, William Smiddy, and Andrew Lee is a practical, case-based reference covering a wide array of common to serious ophthalmic conditions encountered in daily practice. The new edition reflects significant advances in ophthalmologic surgery and includes additional quick-reference material. The focus is on patient management problems and how to handle them to manage the patient optimally. A cadre of esteemed contributors discuss diagnostic methods, evaluation, contraindications, and patient management issues for a full spectrum of clinical disorders, with significant clinical pearls gleaned from hands-on expertise.

A full spectrum of subspecialties is reflected in nearly 100 ophthalmology cases presented in 11 sections, encompassing the cornea and external disease, lens, glaucoma, retina, uveitis, tumors, posterior segment complications, ocular trauma, neuro-ophthalmology, pediatrics, and oculoplastic surgery. Each succinct case walks readers step by step through patient history, the examination, differential diagnoses, test interpretation, definitive diagnosis, medical and/or surgical management, rehabilitation, and follow-up, with handy key point summaries.

  • Hundreds of tables, full-color images, and line drawings enhance clinical understanding
  • Presents patient management problems, focusing on diagnosis, problem-solving, and treatment
  • Disorders of the retina such as diabetic retinopathy, retinal vein and artery occlusion, AMD, myopic degeneration, chorioretinopathy, vitreous hemorrhage, and retinitis pigmentosa
  • Neuro-ophthalmologic conditions including optic neuritis, various types of nerve palsy, internuclear ophthalmoplegia, and anisocoria

Clinical Ophthalmic Echography

A Case Study Approach

  • © 2014
  • Roger P. Harrie, Moran Eye Center, University of Utah, Salt Lake City, USA
  • Cynthia J. Kendall, Sacramento, USA

  • Second edition includes additional case studies to expand the reader's understanding of the clinical applications of ocular ultrasound
  • Includes web-based video clips demonstrating basic examination techniques for ophthalmic ultrasound
  • Emphasizes the basic principles and techniques of the echographic examination
  • Provides the reader with a firm foundation in the fundamental examination techniques and applications of ophthalmic echography
  • Includes supplementary material: sn.pub/extras



Table of contents (202 chapters)

  • Front Matter
  • Indications for Ophthalmic Ultrasound
  • Case Study 1: Optic Nerve Drusen
  • Case Study 2: Ciliary Body Melanoma and Sector Cataract
  • Case Study 3: Small Ciliary Body Melanoma
  • Case Study 4: Iris Bombe Around Intraocular Lens Implant
  • Case Study 5: Choroidal Melanoma
  • Case Study 6: Small Choroidal Melanoma
  • Case Study 7: Posterior Vitreous Detachment and Retinal Tear
  • Case Study 8: Vitreous Syneresis
  • Case Study 9: Shallow Retinal Detachment
  • Case Study 10: Dacryoadenitis
  • Case Study 11: Optic Nerve Drusen
  • Case Study 12: Optic Nerve Druse and Disc Hemorrhage
  • Case Study 13: Central Retinal Artery Embolus
  • Case Study 14: Retinoblastoma with Fine Calcification
  • Case Study 15: Extraocular Muscles in Graves’ Disease
  • Case Study 16: Orbital Myositis
  • Case Study 17: Idiopathic Choroidal Folds
  • Case Study 18: Choroidal Folds and Orbital Lymphoma


About this book

The second edition of this popular ultrasound book expands the reader's understanding of the clinical applications of ocular ultrasound through a case study approach. With the addition of high-quality video segments of examination techniques not currently available in any other format, this edition appeals to a broader range of practitioners in the field by presenting the subject starting at the basic level and progressing to the advanced.

The book will appeal to practitioners involved in ocular ultrasound, including ophthalmic technicians, ophthalmologists, optometrists, radiologists, and emergency room physicians who, on occasion, practise ophthalmic ultrasound.


About the authors

Dr. Roger P. Harrie is an adjunct professor in the Department of Ophthalmology/Visual Sciences at the University of Utah.

Cynthia Kendall has provided didactic, clinical and technical training to physicians, ophthalmic personnel, and bio-engineers in principles, examination techniques and design of diagnostic ultrasound for ophthalmology.

Bibliographic Information

Book Title: Clinical Ophthalmic Echography

Book Subtitle: A Case Study Approach

Authors: Roger P. Harrie, Cynthia J. Kendall

DOI: https://doi.org/10.1007/978-1-4614-7082-3

Publisher: Springer New York, NY

eBook Packages: Medicine, Medicine (R0)

Copyright Information: Springer Science+Business Media New York 2014

Hardcover ISBN: 978-1-4614-7081-6

eBook ISBN: 978-1-4614-7082-3

Edition Number: 2

Number of Pages: XV, 492

Topics: Ophthalmology, Diagnostic Radiology, Ultrasound


Medical College of Wisconsin


MCW Ophthalmic Case Studies For Medical Students


After thoroughly reviewing these case studies, the learner will be able to:

  • Recognize and describe the typical presentation of common conditions affecting the anterior and posterior segments of the eye
  • Consider a range of etiologies when examining patients with eye or vision problems, including trauma, infection, congenital abnormalities, autoimmunity, vascular issues, metabolic deficiencies, and environmental causes
  • Recall the basic pathophysiology underlying numerous ophthalmic conditions
  • Evaluate the significance of clinical findings in relation to common ophthalmic diseases
  • Formulate a differential diagnosis after reviewing the patient’s history and ocular exam
  • Identify which laboratory tests or exams are appropriate to confirm and evaluate specific ophthalmic diagnoses
  • Discuss therapeutic options and treatment plans for a number of acute and chronic ophthalmic diseases

We've included a listing of commonly used ophthalmic abbreviations for your review.

Abbreviations (PDF)

For questions regarding the cases, contact Dr. Judy Hoggatt via email.

Medical College of Wisconsin Ophthalmology and Visual Sciences Case Studies

Ophthalmic Case Studies 1–18.


Open Access

Peer-reviewed

Research Article

Large language models approach expert-level clinical knowledge and reasoning in ophthalmology: A head-to-head cross-sectional study

Arun James Thirunavukarasu, Shathar Mahmood, Andrew Malem, William Paul Foster, Rohan Sanghera, Refaat Hassan, Sean Zhou, Shiao Wei Wong, Yee Ling Wong, et al.

  • Published: April 17, 2024
  • https://doi.org/10.1371/journal.pdig.0000341


Large language models (LLMs) underlie remarkable recent advances in natural language processing, and they are beginning to be applied in clinical contexts. We aimed to evaluate the clinical potential of state-of-the-art LLMs in ophthalmology using a more robust benchmark than raw examination scores. We trialled GPT-3.5 and GPT-4 on 347 ophthalmology questions before GPT-3.5, GPT-4, PaLM 2, LLaMA, expert ophthalmologists, and doctors in training were trialled on a mock examination of 87 questions. Performance was analysed with respect to question subject and type (first order recall and higher order reasoning). Masked ophthalmologists graded the accuracy, relevance, and overall preference of GPT-3.5 and GPT-4 responses to the same questions. The performance of GPT-4 (69%) was superior to GPT-3.5 (48%), LLaMA (32%), and PaLM 2 (56%). GPT-4 compared favourably with expert ophthalmologists (median 76%, range 64–90%), ophthalmology trainees (median 59%, range 57–63%), and unspecialised junior doctors (median 43%, range 41–44%). Low agreement between LLMs and doctors reflected idiosyncratic differences in knowledge and reasoning, with overall consistency across subjects and types (p > 0.05). All ophthalmologists preferred GPT-4 responses over GPT-3.5 and rated the accuracy and relevance of GPT-4 as higher (p < 0.05). LLMs are approaching expert-level knowledge and reasoning skills in ophthalmology. In view of the comparable or superior performance to trainee-grade ophthalmologists and unspecialised junior doctors, state-of-the-art LLMs such as GPT-4 may provide useful medical advice and assistance where access to expert ophthalmologists is limited. Clinical benchmarks provide useful assays of LLM capabilities in healthcare before clinical trials can be designed and conducted.

Author summary

Large language models (LLMs) are the most sophisticated form of language-based artificial intelligence. LLMs have the potential to improve healthcare, and experiments and trials are ongoing to explore potential avenues for LLMs to improve patient care. Here, we test state-of-the-art LLMs on challenging questions used to assess the aptitude of eye doctors (ophthalmologists) in the United Kingdom before they can be deemed fully qualified. We compare the performance of these LLMs to fully trained ophthalmologists as well as doctors in training to gauge the aptitude of the LLMs for providing advice to patients about eye health. One of the LLMs, GPT-4, exhibits favourable performance when compared with fully qualified and training ophthalmologists; and comparisons with its predecessor model, GPT-3.5, indicate that this superior performance is due to improved accuracy and relevance of model responses. LLMs are approaching expert-level ophthalmological knowledge and reasoning, and may be useful for providing eye-related advice where access to healthcare professionals is limited. Further research is required to explore potential avenues of clinical deployment.

Citation: Thirunavukarasu AJ, Mahmood S, Malem A, Foster WP, Sanghera R, Hassan R, et al. (2024) Large language models approach expert-level clinical knowledge and reasoning in ophthalmology: A head-to-head cross-sectional study. PLOS Digit Health 3(4): e0000341. https://doi.org/10.1371/journal.pdig.0000341

Editor: Man Luo, Mayo Clinic Scottsdale, UNITED STATES

Received: July 31, 2023; Accepted: February 26, 2024; Published: April 17, 2024

Copyright: © 2024 Thirunavukarasu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All data are available as supplementary information, excluding copyrighted material from the textbook used for experiments.

Funding: DSWT is supported by the National Medical Research Council, Singapore (NMCR/HSRG/0087/2018; MOH-000655-00; MOH-001014-00), Duke-NUS Medical School (Duke-NUS/RSF/2021/0018; 05/FY2020/EX/15-A58), and Agency for Science, Technology and Research (A20H4g2141; H20C6a0032). DSJT is supported by a Medical Research Council / Fight for Sight Clinical Research Fellowship (MR/T001674/1). These funders were not involved in the conception, execution, or reporting of this review.

Competing interests: AM is a member of the Panel of Examiners of the Royal College of Ophthalmologists and performs unpaid work as an FRCOphth examiner. DSWT holds a patent on a deep learning system to detect retinal disease. DSJT authored the book used in the study and receives royalty from its sales. The other authors have no competing interests to declare.

Introduction

Generative Pre-trained Transformer 3.5 (GPT-3.5) and 4 (GPT-4) are large language models (LLMs) trained on datasets containing hundreds of billions of words from articles, books, and other internet sources [1, 2]. ChatGPT is an online chatbot which uses GPT-3.5 or GPT-4 to provide bespoke responses to human users’ queries [3]. LLMs have revolutionised the field of natural language processing, and ChatGPT has attracted significant attention in medicine for attaining passing-level performance in medical school examinations and providing more accurate and empathetic messages than human doctors in response to patient queries on a social media platform [3, 4, 5, 6]. While GPT-3.5 performance in more specialised examinations has been inadequate, GPT-4 is thought to represent a significant advancement in terms of medical knowledge and reasoning [3, 7, 8]. Other LLMs in wide use include Pathways Language Model 2 (PaLM 2) and Large Language Model Meta AI 2 (LLaMA 2) [3], [9, p. 2], [10].

Applications and trials of LLMs in ophthalmological settings have been limited despite ChatGPT’s performance in questions relating to ‘eyes and vision’ being superior to other subjects in an examination for general practitioners [7, 11]. ChatGPT has been trialled on the North American Ophthalmology Knowledge Assessment Program (OKAP) and the Fellowship of the Royal College of Ophthalmologists (FRCOphth) Part 1 and Part 2 examinations. In both cases, relatively poor results have been reported for GPT-3.5, with significant improvement exhibited by GPT-4 [12, 13, 14, 15, 16]. However, previous studies are afflicted by two important issues which may affect their validity and interpretability. First, so-called ‘contamination’, where test material features in the pretraining data used to develop LLMs, may result in inflated performance as models recall previously seen text rather than using clinical reasoning to provide an answer. Second, examination performance in and of itself provides little information regarding the potential of models to contribute to clinical practice as a medical-assistance tool [3]. Clinical benchmarks are required to understand the meaning and implications of scores attained by LLMs in ophthalmological examinations, and they are a necessary precursor to clinical trials of LLM-based interventions.

Here, we used FRCOphth Part 2 examination questions to gauge the ophthalmological knowledge base and reasoning capability of LLMs using fully qualified and currently training ophthalmologists as clinical benchmarks. These questions were not freely available online, minimising the risk of contamination. The FRCOphth Part 2 Written Examination tests the clinical knowledge and skills of ophthalmologists in training using multiple choice questions with no negative marking and must be passed to fully qualify as a specialist eye doctor in the United Kingdom.

Question extraction

FRCOphth Part 2 questions were sourced from a textbook for doctors preparing to take the examination [ 17 ]. This textbook is not freely available on the internet, making the possibility of its content being included in LLMs’ training datasets unlikely [ 1 ]. All 360 multiple-choice questions from the textbook’s six chapters were extracted, and a 90-question mock examination from the textbook was segregated for LLM and doctor comparisons. Two researchers matched the subject categories of the practice papers’ questions to those defined in the Royal College of Ophthalmologists’ documentation concerning the FRCOphth Part 2 written examination. Similarly, two researchers categorised each question as first order recall or higher order reasoning, corresponding to ‘remembering’ and ‘applying’ or ‘analysing’ in Bloom’s taxonomy, respectively [ 18 ]. Disagreement between classification decisions was resolved by a third researcher casting a deciding vote. Questions containing non-plain text elements such as images were excluded as these could not be inputted to the LLM applications.

Trialling large language models

Every eligible question was inputted into ChatGPT (GPT-3.5 and GPT-4 versions; OpenAI, San Francisco, California, United States of America) between April 29 and May 10, 2023. The answers provided by GPT-3.5 and GPT-4 were recorded, along with each model’s whole reply to every question, for further analysis. If ChatGPT failed to provide a definitive answer, the question was re-trialled up to three times, after which ChatGPT’s answer was recorded as ‘null’ if no answer was provided. Correct answers (‘ground truth’) were defined as the answers provided by the textbook and were recorded for every eligible question to facilitate calculation of performance. Upon their release, Bard (Google LLC, Mountain View, California, USA) and HuggingChat (Hugging Face, Inc., New York City, USA) were used to trial PaLM 2 (Google LLC) and LLaMA (Meta, Menlo Park, California, USA) respectively on the portion of the textbook corresponding to a 90-question examination, adhering to the same procedures between June 20 and July 2, 2023.
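A minimal sketch of the re-trialling rule described above, with the caveat that questions were entered into chatbot web interfaces by hand in the study; the ask_model callable and the naive answer parsing below are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: in the study, questions were entered into chatbot
# web interfaces manually. ask_model is a hypothetical stand-in for that step,
# and the answer parsing below is a naive assumption, not the published method.
from typing import Callable, Optional

def trial_question(ask_model: Callable[[str], str],
                   question: str,
                   options: tuple = ("A", "B", "C", "D"),
                   max_attempts: int = 3) -> Optional[str]:
    """Ask up to max_attempts times; return the chosen option, or None ('null')."""
    for _ in range(max_attempts):
        reply = ask_model(question)
        for token in reply.replace(".", " ").replace(",", " ").split():
            cleaned = token.strip("()").upper()
            if cleaned in options:
                return cleaned  # definitive answer found, no re-trial needed
    return None  # no definitive answer after max_attempts: recorded as 'null'

# Example with a stub model that always answers "B".
if __name__ == "__main__":
    print(trial_question(lambda q: "The answer is B.", "Mock FRCOphth question?"))
```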

Clinical benchmarks

To gauge the performance, accuracy, and relevance of LLM outputs, five expert ophthalmologists who had all passed the FRCOphth Part 2 (E1-E5), three trainees (residents) currently in ophthalmology training programmes (T1-T3), and two unspecialised (i.e. not in ophthalmology training) junior doctors (J1-J2) first answered the 90-question mock examination independently, without reference to textbooks, the internet, or LLMs’ recorded answers. As with the LLMs, doctors’ performance was calculated with reference to the correct answers provided by the textbook. After completing the examination, ophthalmologists graded the whole output of GPT-3.5 and GPT-4 on a Likert scale from 1–5 (very bad, bad, neutral, good, very good) to qualitatively appraise accuracy of information provided and relevance of outputs to the question used as an input prompt. For these appraisals, ophthalmologists were blind to the LLM source (which was presented in a randomised order) and to their previous answers to the same questions, but they could refer to the question text and correct answer and explanation provided by the textbook. Procedures are comprehensively described in the protocol issued to the ophthalmologists (S1 Protocol).

Our null hypothesis was that LLMs and doctors would exhibit similar performance, supported by results in a wide range of medical examinations [ 3 , 6 ]. Prospective power analysis was conducted which indicated that 63 questions were required to identify a 10% superior performance of an LLM to human performance at a 5% significance level (type 1 error rate) with 80% power (20% type 2 error rate). This indicated that the 90-question examination in our experiments was more than sufficient to detect ~10% differences in overall performance. The whole 90-question mock examination was used to avoid over- or under-sampling certain question types with respect to actual FRCOphth papers. To verify that the mock examination was representative of the FRCOphth Part 2 examination, expert ophthalmologists were asked to rate the difficulty of questions used here in comparison to official examinations on a 5-point Likert scale (“much easier”, “somewhat easier”, “similar”, “somewhat more difficult”, “much more difficult”).

Statistical analysis

Performance of doctors and LLMs was compared using chi-squared (χ2) tests. Agreement between answers provided by doctors and LLMs was quantified through calculation of Kappa statistics, interpreted in accordance with McHugh’s recommendations [19]. To further explore the strengths and weaknesses of the answer providers, performance was stratified by question type (first order fact recall or higher order reasoning) and subject using a chi-squared or Fisher’s exact test where appropriate. Likert scale data corresponding to the accuracy and relevance of GPT-3.5 and GPT-4 responses to the same questions were analysed with paired t-tests, with the Bonferroni correction applied to mitigate the risk of false positive results due to multiple testing; parametric testing was justified by a sufficient sample size [20]. A chi-squared test was used to quantify the significance of any difference in ophthalmologists’ overall preference when choosing between GPT-3.5 and GPT-4 responses. Statistical significance was concluded where p < 0.05. For additional contextualisation, examination statistics corresponding to FRCOphth Part 2 written examinations taken between July 2017 and December 2022 were collected from Royal College of Ophthalmologists examiners’ reports [21]. These statistics facilitated comparison of human and LLM performance in the mock examination with the performance of actual candidates in recent examinations. Failure cases, where all LLMs provided an incorrect answer, were appraised qualitatively to explore any specific weaknesses of the technology.

Statistical analysis was conducted in R (version 4.1.2; R Foundation for Statistical Computing, Vienna, Austria), and figures were produced in Affinity Designer (version 1.10.6; Serif Ltd, West Bridgford, Nottinghamshire, United Kingdom).
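The original analysis was performed in R; purely as an illustration, the short Python sketch below shows how the main test families described above map onto standard library calls. All numbers are invented toy values, not study data, and the choice of scipy functions is an assumption of this sketch rather than a description of the authors’ code.

```python
# Toy illustration of the test families described above (the study used R).
from scipy.stats import chi2_contingency, fisher_exact, ttest_rel

# 1) Comparing two answer providers' examination scores (correct vs incorrect
#    counts out of 87 questions); Fisher's exact test would replace the
#    chi-squared test when expected cell counts are small.
table = [[60, 87 - 60],   # provider A: correct, incorrect
         [42, 87 - 42]]   # provider B: correct, incorrect
chi2, p, dof, _ = chi2_contingency(table, correction=False)
_, p_fisher = fisher_exact(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}; Fisher's exact p = {p_fisher:.3f}")

# 2) Paired comparison of one rater's Likert scores (1-5) for GPT-3.5 and GPT-4
#    responses to the same questions, Bonferroni-corrected for the number of
#    comparisons (assumed here to be five, one per rater).
gpt35_ratings = [3, 4, 2, 5, 3, 4, 2, 3]
gpt4_ratings = [4, 5, 4, 5, 4, 5, 3, 4]
t, p_raw = ttest_rel(gpt4_ratings, gpt35_ratings)
p_bonferroni = min(1.0, p_raw * 5)
print(f"t = {t:.2f}, Bonferroni-corrected p = {p_bonferroni:.4f}")
```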

Question sources

Of 360 questions in the textbook, 347 questions (including 87 of the 90 questions from the mock examination chapter) were included [ 17 ]. Exclusions were all due to non-text elements such as images and tables which could not be inputted into LLM chatbot interfaces. The distribution of question types and subjects within the whole set and mock examination set of questions is summarised in Table 1 and S1 Table alongside performance.

Table 1. Question subject and type distributions presented alongside scores attained by LLMs (GPT-3.5, GPT-4, LLaMA, and PaLM 2), expert ophthalmologists (E1-E5), ophthalmology trainees (T1-T3), and unspecialised junior doctors (J1-J2). Median scores do not necessarily sum to the overall median score, as fractional scores are impossible.

https://doi.org/10.1371/journal.pdig.0000341.t001

GPT-4 represents a significant advance on GPT-3.5 in ophthalmological knowledge and reasoning.

Overall performance over 347 questions was significantly higher for GPT-4 (61.7%) than GPT-3.5 (48.41%; χ2 = 12.32, p < 0.01), with results detailed in S1 Fig and S1 Table. ChatGPT performance was consistent across question types and subjects (S1 Table). For GPT-4, no significant variation was observed with respect to first order and higher order questions (χ2 = 0.22, p = 0.64), or subjects defined by the Royal College of Ophthalmologists (Fisher’s exact test over 2000 iterations, p = 0.23). Similar results were observed for GPT-3.5 with respect to first order and higher order questions (χ2 = 0.08, p = 0.77), and subjects (Fisher’s exact test over 2000 iterations, p = 0.28). Performance and variation within the 87-question mock examination were very similar to the overall performance over 347 questions, and subsequent experiments were therefore restricted to that representative set of questions.
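The headline statistic can be checked against the quoted percentages: 61.7% of 347 questions corresponds to roughly 214 correct answers for GPT-4, and 48.41% to roughly 168 for GPT-3.5. A chi-squared test on those inferred counts, without continuity correction, reproduces the reported value; note that the counts are back-calculated from the text rather than taken from the paper’s data files.

```python
# Sanity check of the reported chi-squared statistic using counts inferred from
# the quoted percentages (an assumption; the paper's raw data are not used here).
from scipy.stats import chi2_contingency

table = [[214, 347 - 214],   # GPT-4: correct, incorrect (~61.7% of 347)
         [168, 347 - 168]]   # GPT-3.5: correct, incorrect (~48.4% of 347)
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-squared = {chi2:.2f}, p = {p:.5f}")  # ~12.32, p < 0.01, as reported
```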

GPT-4 compares well with other LLMs, junior and trainee doctors, and ophthalmology experts.

Performance in the mock examination is summarised in Fig 1: GPT-4 (69%) was the top-scoring model, performing to a significantly higher standard than GPT-3.5 (48%; χ2 = 7.33, p < 0.01) and LLaMA (32%; χ2 = 22.77, p < 0.01), but statistically similarly to PaLM 2 (56%) despite a superior score (χ2 = 2.81, p = 0.09). LLaMA exhibited the lowest examination score, significantly weaker than GPT-3.5 (χ2 = 4.58, p = 0.03) and PaLM 2 (χ2 = 10.01, p < 0.01) as well as GPT-4.

Fig 1. Examination performance in the 87-question mock examination used to trial LLMs (GPT-3.5, GPT-4, LLaMA, and PaLM 2), expert ophthalmologists (E1-E5), ophthalmology trainees (T1-T3), and unspecialised junior doctors (J1-J2). Dotted lines depict the mean performance of expert ophthalmologists (66/87; 76%), ophthalmology trainees (60/87; 69%), and unspecialised junior doctors (37/87; 43%). The performance of GPT-4 lay within the range of expert ophthalmologists and ophthalmology trainees.

https://doi.org/10.1371/journal.pdig.0000341.g001

The performance of GPT-4 was statistically similar to the mean score attained by expert ophthalmologists (Fig 1; χ2 = 1.18, p = 0.28). Moreover, GPT-4’s performance exceeded the mean mark attained across FRCOphth Part 2 written examination candidates between 2017–2022 (66.06%), the mean pass mark according to standard setting (61.31%), and the mean official mark required to pass the examination after adjustment (63.75%), as detailed in S2 Table. In individual comparisons with expert ophthalmologists, GPT-4 was equivalent in 3 cases (χ2 tests, p > 0.05, S3 Table), and inferior in 2 cases (χ2 tests, p < 0.05; Table 2). In comparisons with ophthalmology trainees, GPT-4 was equivalent to all three ophthalmology trainees (χ2 tests, p > 0.05; Table 2). GPT-4 was significantly superior to both unspecialised junior doctors (χ2 tests, p < 0.05; Table 2). Doctors were anonymised in analysis, but their ophthalmological experience is summarised in S3 Table. Unsurprisingly, junior doctors (J1-J2) attained lower scores than expert ophthalmologists (E1-E5; t = 7.18, p < 0.01) and ophthalmology trainees (T1-T3; t = 11.18, p < 0.01), as illustrated in Fig 1. Ophthalmology trainees approached expert-level scores with no significant difference between the groups (t = 1.55, p = 0.18). None of the other LLMs matched any of the expert ophthalmologists, the mean mark of real examination candidates, or the FRCOphth Part 2 pass mark.

Expert ophthalmologists agreed that the mock examination was a faithful representation of actual FRCOphth Part 2 Written Examination papers with a mean and median score of 3/5 (range 2-4/5).

Table 2. Results of pair-wise comparisons of examination performance between GPT-4 and the other answer providers. Significantly greater performance for GPT-4 is highlighted green; significantly inferior performance for GPT-4 is highlighted orange. GPT-4 was superior to all other LLMs and unspecialised junior doctors, and equivalent to most expert ophthalmologists and all ophthalmology trainees.

https://doi.org/10.1371/journal.pdig.0000341.t002

LLM strengths and weaknesses are similar to doctors.

Agreement between answers given by LLMs, expert ophthalmologists, and trainee doctors was generally absent (0 ≤ κ < 0.2), minimal (0.2 ≤ κ < 0.4), or weak (0.4 ≤ κ < 0.6), with moderate agreement recorded only for one pairing between the two highest performing ophthalmologists (Fig 2; κ = 0.64) [19]. Disagreement was primarily the result of general differences in knowledge and reasoning ability, illustrated by a strong negative correlation between the Kappa statistic (quantifying agreement) and the difference in examination performance (Pearson’s r = -0.63, p < 0.01). Answer providers with more similar scores exhibited greater agreement overall, irrespective of their category (LLM, expert ophthalmologist, ophthalmology trainee, or junior doctor).
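A short sketch of how such an agreement analysis can be computed. The answer vectors and pairwise summaries below are invented for illustration, and the interpretation bands follow the intervals quoted above.

```python
# Toy example: Cohen's kappa between two answer vectors, bucketed using the
# agreement bands quoted in the text, plus the correlation between pairwise
# agreement and difference in overall score. All values are invented.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

answers_a = ["A", "C", "B", "D", "A", "B", "C", "D", "A", "B"]
answers_b = ["A", "C", "D", "D", "A", "B", "A", "D", "B", "B"]
kappa = cohen_kappa_score(answers_a, answers_b)

def agreement_band(k: float) -> str:
    """Interpretation bands as quoted above (after McHugh)."""
    if k < 0.2:
        return "absent"
    if k < 0.4:
        return "minimal"
    if k < 0.6:
        return "weak"
    return "moderate or better"

print(f"kappa = {kappa:.2f} ({agreement_band(kappa)} agreement)")

# Correlating pairwise kappa with the absolute difference in examination score
# (percentage points) across several invented pairings.
pair_kappas = [0.64, 0.41, 0.35, 0.22, 0.15]
score_differences = [2, 7, 9, 15, 25]
r, p = pearsonr(pair_kappas, score_differences)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported trend
```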

Fig 2. Agreement correlates strongly with overall performance. Stratification analysis found no particular question type or subject was associated with better performance of LLMs or doctors, indicating that LLM knowledge and reasoning ability is general across ophthalmology rather than restricted to particular subspecialties or question types.

https://doi.org/10.1371/journal.pdig.0000341.g002

Stratification analysis was undertaken to identify any specific strengths and weaknesses of LLMs with respect to expert ophthalmologists and trainee doctors (Table 1 and S4 Table). No significant difference between performance in first order fact recall and higher order reasoning questions was observed among any of the LLMs, expert ophthalmologists, ophthalmology trainees, or unspecialised junior doctors (S4 Table; χ2 tests, p > 0.05). Similarly, only J1 (junior doctor yet to commence ophthalmology training) exhibited statistically significant variation in performance between subjects (S4 Table; Fisher’s exact tests over 2000 iterations, p = 0.02); all other doctors and LLMs exhibited no significant variation (Fisher’s exact tests over 2000 iterations, p > 0.05). To explore whether consistency was due to an insufficient sample size, similar analyses were run for GPT-3.5 and GPT-4 performance over the larger set of 347 questions (S1 Table; S4 Table). As with the mock examination, no significant differences in performance across question types (S4 Table; χ2 tests, p > 0.05) or subjects (S4 Table; Fisher’s exact tests over 2000 iterations, p > 0.05) were observed.

LLM examination performance translates to subjective preference indicated by expert ophthalmologists.

Ophthalmologists’ appraisal of GPT-4 and GPT-3.5 outputs indicated a marked preference for the former over the latter, mirroring objective performance in the mock examination and over the whole textbook. GPT-4 exhibited significantly (t-test with Bonferroni correction, p < 0.05) higher accuracy and relevance than GPT-3.5 according to all five ophthalmologists’ grading (Table 3). Differences were visually obvious, with GPT-4 exhibiting much higher rates of attaining the highest scores for accuracy and relevance than GPT-3.5 (Fig 3). This superiority was reflected in ophthalmologists’ qualitative preference indications: GPT-4 responses were preferred to GPT-3.5 responses by every ophthalmologist, with statistically significant skew in favour of GPT-4 (χ2 test, p < 0.05; Table 3).

Fig 3. Accuracy (A) and relevance (B) ratings were provided by five expert ophthalmologists for ChatGPT (powered by GPT-3.5 and GPT-4) responses to 87 FRCOphth Part 2 mock examination questions. In every case, the accuracy and relevance of GPT-4 is significantly superior to GPT-3.5 (t-test with Bonferroni correction applied, p < 0.05). Pooled scores for accuracy (C) and relevance (D) from all five raters are presented in the bottom two plots, with GPT-3.5 (left bars) compared directly with GPT-4 (right bars).

https://doi.org/10.1371/journal.pdig.0000341.g003

Table 3. t-test results with Bonferroni correction applied showing the superior accuracy and relevance of GPT-4 responses relative to GPT-3.5 responses in the opinion of five fully trained ophthalmologists (positive mean differences favour GPT-4), and χ2 test showing that GPT-4 responses were preferred to GPT-3.5 responses by every ophthalmologist in their blinded qualitative appraisals.

https://doi.org/10.1371/journal.pdig.0000341.t003

Failure cases exhibit no association with subject, complexity, or human answers.

The LLM failure cases—where every LLM provided an incorrect answer—are summarised in Table 4 . While errors made by LLMs were occasionally similar to those made by trainee ophthalmologists and junior doctors, this association was not consistent ( Table 4 ). There was no preponderance of ophthalmological subject or first or higher order questions in the failure cases, and questions did not share a common theme, sentence structure, or grammatical construct ( Table 4 ). Examination questions are redacted here to avoid breaching copyright and prevent future LLMs accessing the test data during pretraining but can be provided on request.

Table 4. Summary of LLM failure cases, where all models provided an incorrect answer to the FRCOphth Part 2 mock examination question. No associations were found with human answers, complexity, subject, theme, sentence structure, or grammatical constructs.

https://doi.org/10.1371/journal.pdig.0000341.t004

Here, we present a clinical benchmark to gauge the ophthalmological performance of LLMs, using a source of questions with very low risk of contamination as the utilised textbook is not freely available online [ 17 ]. Previous studies have suggested that ChatGPT can provide useful responses to ophthalmological queries, but often use online question sources which may have featured in LLMs’ pretraining datasets [ 7 , 12 , 15 , 22 ]. In addition, our employment of multiple LLMs as well as fully qualified and training doctors provides novel insight into the potential and limitations of state-of-the-art LLMs through head-to-head comparisons which provide clinical context and quantitative benchmarks of competence in ophthalmology. Subsequent research may leverage our questions and results to gauge the performance of new LLMs and applications as they emerge.

We make three primary observations. First, performance of GPT-4 compares well to expert ophthalmologists and ophthalmology trainees, and exhibits pass-worthy performance in an FRCOphth Part 2 mock examination. PaLM 2 did not attain pass-worthy performance or match expert ophthalmologists’ scores but was within the spread of trainee doctors’ performance. LLMs are approaching human expert-level knowledge and reasoning in ophthalmology, and significantly exceed the ability of non-specialist clinicians (represented here by unspecialised junior doctors) to answer ophthalmology questions. Second, clinician grading of model outputs suggests that GPT-4 exhibits improved accuracy and relevance when compared with GPT-3.5. Development is producing models which generate better outputs to ophthalmological queries in the opinion of expert human clinicians, which suggests that models are becoming more capable of providing useful assistance in clinical settings. Third, LLM performance was consistent across question subjects and types, distributed similarly to human performance, and exhibited comparable agreement between other LLMs and doctors when corrected for differences in overall performance. Together, this indicates that the ophthalmological knowledge and reasoning capability of LLMs is general rather than limited to certain subspecialties or tasks. LLM-driven natural language processing seems to facilitate similar—although idiosyncratic—clinical knowledge and reasoning to human clinicians, with no obvious blind spots precluding clinical use.

Similarly dramatic improvements in the performance of GPT-4 relative to GPT-3.5 have been reported in the context of the North American Ophthalmology Knowledge Assessment Program (OKAP) [ 13 , 15 ]. State-of-the-art models exhibit far more clinical promise than their predecessors, and expectations and development should be tailored accordingly. Results from the OKAP also suggest that improvement in performance is due to GPT-4 being more well-rounded than GPT-3.5 [ 13 ]. This increases the scope for potential applications of LLMs in ophthalmology, as development is eliminating weaknesses rather than optimising in narrow domains. This study shows that well-rounded LLM performance compares well with expert ophthalmologists, providing clinically relevant evidence that LLMs may be used to provide medical advice and assistance. Further improvement is expected as multimodal foundation models, perhaps based on LLMs such as GPT-4, emerge and facilitate compatibility with image-rich ophthalmological data [ 3 , 23 , 24 ].

Limitations

This study was limited by three factors. First, examination performance is an unvalidated indicator of clinical aptitude. We sought to ameliorate this limitation by employing expert ophthalmologists, ophthalmology trainees, and unspecialised junior doctors answering the same questions as clinical benchmarks; and compared LLM performance to real cohorts of candidates in recent FRCOphth examinations. However, it remains an issue that comparable performance to clinical experts in an examination does not necessarily demonstrate that an LLM can communicate with patients and practitioners or contribute to clinical decision making accurately and safely. Early trials of LLM chatbots have suggested that LLM responses may be equivalent or even superior to human doctors in terms of accuracy and empathy, and experiments using complicated case studies suggest that LLMs operate well even outside typical presentations and more common medical conditions [ 4 , 25 , 26 ]. In ophthalmology, GPT-3.5 and GPT-4 have been shown to be capable of providing precise and suitable triage decisions when queried with eye-related symptoms [ 22 , 27 ]. Further work is now warranted in conventional clinical settings.

Second, while the study was sufficiently powered to detect a less than 10% difference in overall performance, the relatively small number of questions in certain categories used for stratification analysis may mask significant differences in performance. Testing LLMs and clinicians with more questions may help establish where LLMs exhibit greater or lesser ability in ophthalmology. Furthermore, researchers using different ways to categorise questions may be able to identify specific strengths and weaknesses of LLMs and doctors which could help guide design of clinical LLM interventions.

Finally, experimental tasks were ‘zero-shot’ in that LLMs were not provided with any examples of correctly answered questions before they were queried with FRCOphth questions from the textbook. This mode of interrogation entails the maximal level of difficulty for LLMs, so it is conceivable that the ophthalmological knowledge and reasoning encoded within these models is actually even greater than indicated by the results here [1]. Future research may seek to fine-tune LLMs using more domain-specific text during pretraining and fine-tuning, or to provide examples of successfully completed tasks (few-shot prompting) to further improve performance in the clinical task [3].
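As an illustration of the few-shot idea, a prompt can be assembled by prepending worked examples to the target question so the model sees the expected format. The example questions below are invented for demonstration and are not drawn from the FRCOphth textbook used in the study.

```python
# Sketch of few-shot prompt construction; the worked examples are invented.
FEW_SHOT_EXAMPLES = [
    ("Which cranial nerve supplies the lateral rectus muscle?\n"
     "A. III  B. IV  C. VI  D. VII", "C"),
    ("Which retinal cells are primarily affected in retinitis pigmentosa?\n"
     "A. Ganglion cells  B. Photoreceptors  C. Bipolar cells  D. Muller cells", "B"),
]

def build_few_shot_prompt(question: str) -> str:
    """Prepend worked examples so the model sees the expected answer format."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in FEW_SHOT_EXAMPLES]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

print(build_few_shot_prompt(
    "Which class of glaucoma drops is contraindicated in asthma?\n"
    "A. Beta-blockers  B. Prostaglandin analogues  C. Alpha-agonists  D. Carbonic anhydrase inhibitors"
))
```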

Future directions

Autonomous deployment of LLMs is currently precluded by inaccuracy and fact fabrication. Our study found that despite meeting expert standards, state-of-the-art LLMs such as GPT-4 do not match top-performing ophthalmologists [ 28 ]. Moreover, there remain controversial ethical questions about what roles should and should not be assigned to inanimate AI models, and to what extent human clinicians must remain responsible for their patients [ 3 ]. However, the remarkable performance of GPT-4 in ophthalmology examination questions suggests that LLMs may be able to provide useful input in clinical contexts, either to assist clinicians in their day-to-day work or with their education or preparation for examinations [ 3 , 13 , 14 , 27 ]. Further improvement in performance may be obtained by specific fine-tuning of models with high quality ophthalmological text data, requiring curation and deidentification [ 29 ]. GPT-4 may prove especially useful where access to ophthalmologists is limited: provision of advice, diagnosis, and management suggestions by a model with FRCOphth Part 2-level knowledge and reasoning ability is likely to be superior to non-specialist doctors and allied healthcare professionals working without support, as their exposure to and knowledge of eye care is limited [ 27 , 30 , 31 ].

However, close monitoring is essential to avoid mistakes caused by inaccuracy or fact fabrication [ 32 ]. Clinical applications would also benefit from an uncertainty indicator reducing the risk of erroneous decisions [ 7 ]. As LLM performance often correlates with the frequency of query terms’ representation in the model’s training dataset, a simple indicator of ‘familiarity’ could be engineered by calculating the relative frequency of query term representation in the training data [ 7 , 33 ]. Users could appraise familiarity to temper their confidence in answers provided by the LLM, perhaps reducing error. Moreover, ophthalmological applications require extensive validation, preferably with high quality randomised controlled trials to conclusively demonstrate benefit (or lack thereof) conferred to patients by LLM interventions [ 34 ]. Trials should be pragmatic so as not to inflate effect sizes beyond what may generalise to patients once interventions are implemented at scale [ 34 , 35 ]. In addition to patient outcomes, practitioner-related variables should also be considered: interventions aiming to improve efficiency should be specifically tested to ensure that they reduce rather than increase clinicians’ workload [ 3 ].
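A minimal sketch of the proposed familiarity indicator, assuming a reference corpus is available to stand in for the (inaccessible) training data; the corpus, tokenisation, and scoring rule here are illustrative assumptions rather than a published method.

```python
# Hypothetical 'familiarity' score: mean relative frequency of the query's terms
# in a reference corpus standing in for the model's training data.
from collections import Counter
import re

def familiarity(query: str, corpus_term_counts: Counter, corpus_size: int) -> float:
    terms = re.findall(r"[a-z]+", query.lower())
    if not terms:
        return 0.0
    return sum(corpus_term_counts[t] / corpus_size for t in terms) / len(terms)

# Toy reference corpus.
corpus = "the optic nerve carries visual signals from the retina to the brain".split()
counts, size = Counter(corpus), len(corpus)

print(f"familiarity = {familiarity('optic nerve drusen', counts, size):.3f}")
# A low score could prompt users to temper their confidence in the model's answer.
```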

According to comparisons with expert and trainee doctors, state-of-the-art LLMs are approaching expert-level performance in advanced ophthalmology questions. GPT-4 attains pass-worthy performance in FRCOphth Part 2 questions and exceeds the scores of some expert ophthalmologists. As top-performing doctors exhibit superior scores, LLMs do not appear capable of replacing ophthalmologists, but state-of-the-art models could provide useful advice and assistance to non-specialists or patients where access to eye care professionals is limited [ 27 , 28 ]. Further research is required to design LLM-based interventions which may improve eye health outcomes, validate interventions in clinical trials, and engineer governance structures to regulate LLM applications as they begin to be deployed in clinical settings [ 36 ].

Supporting information

S1 Fig. ChatGPT performance in questions taken from the whole textbook.

Mosaic plot depicting the overall performance of ChatGPT versions powered by GPT-3.5 and GPT-4 in 360 FRCOphth Part 2 written examination questions. Performance was significantly higher for GPT-4 than GPT-3.5, and was close to mean human examination candidate performance and pass mark set by standard setting and after adjustment.

https://doi.org/10.1371/journal.pdig.0000341.s001

S1 Table. Question characteristics and performance of GPT-3.5 and GPT-4 over the whole textbook.

Observations here were similar to those in the smaller mock examination used for subsequent experiments. GPT-4 performs to a significantly higher standard than GPT-3.5.

https://doi.org/10.1371/journal.pdig.0000341.s002

S2 Table. Examination statistics corresponding to FRCOphth Part 2 written examinations sat between July 2017-December 2022.

https://doi.org/10.1371/journal.pdig.0000341.s003

S3 Table. Experience of expert ophthalmologists (E1-E5), ophthalmology trainees (T1-T3), and unspecialised junior doctors (J1-J2) involved in experiments.

https://doi.org/10.1371/journal.pdig.0000341.s004

S4 Table. Results of statistical tests of variation in performance between question subjects and types, for each trialled LLM, expert ophthalmologist, and trainee doctor.

Statistically significant results are highlighted in green.

https://doi.org/10.1371/journal.pdig.0000341.s005

S1 Protocol. Procedures followed by ophthalmologists to grade the output of GPT-3.5 and GPT-4 in terms of accuracy, relevance, and rater-preference of model outputs.

https://doi.org/10.1371/journal.pdig.0000341.s006

Acknowledgments

The authors extend their thanks to Mr Arunachalam Thirunavukarasu (Betsi Cadwaladr University Health Board) for his advice and assistance with recruitment.

  • 1. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, et al. Language Models are Few-Shot Learners. In: Advances in Neural Information Processing Systems [Internet]. Curran Associates, Inc.; 2020 [cited 2023 Jan 30]. p. 1877–901. Available from: https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
  • 2. OpenAI. GPT-4 Technical Report [Internet]. arXiv; 2023 [cited 2023 Apr 11]. Available from: http://arxiv.org/abs/2303.08774
  • 9. Google. PaLM 2 Technical Report [Internet]. 2023 [cited 2023 May 11]. Available from: https://ai.google/static/documents/palm2techreport.pdf
  • 17. Ting DSJ, Steel D. MCQs for FRCOphth Part 2. Oxford University Press; 2020. 253 p.
  • 21. Part 2 Written FRCOphth Exam [Internet]. The Royal College of Ophthalmologists. [cited 2023 Jan 30]. Available from: https://www.rcophth.ac.uk/examinations/rcophth-exams/part-2-written-frcophth-exam/



  13. Ophthalmology Review: A Case-Study Approach.

    Ophthalmology Review: A Case-Study Approach. Kuldev Singh, William E. Smiddy, Andrew G. Lee. New York: Thieme, 2002. Pages: 416. Price: $79.00. ISBN -86577-982-1. The textbook entitled Ophthalmology Review: A Case-Study Approach is a clinically oriented, case-driven book that discusses the evaluation, management, and treatment of various ...

  14. Ophthalmology Review: A Case Study Approach

    A practical, case-based manual for all residents and professionals, this new book covers key topics in cornea and external disease, cataract, glaucoma, retina, uveitis, tumors, neuro-ophthalmology, pediatrics, oculoplastics, and more! You will find 99 concise sample cases that take you step-by-step through the entire process, from history, examination, and differential diagnosis to treatment ...

  15. Download PDF

    Download PDF - Ophthalmology Review: A Case Study Approach [PDF] [1mo1qavvfqbg]. Improve your diagnostic skills with this unique case-based guide! With more than 99 fully illustrated cases, this new bo...

  16. Ophthalmology Review: A Case-Study Approach by Kuldev Singh et al

    Request PDF | On Mar 13, 2019, Bradley T. Smith published Ophthalmology Review: A Case-Study Approach by Kuldev Singh et al. (2018) 330pp., 223 illustrations, paperback/softback, ISBN ...

  17. Ophthalmology Review

    Herein, lies the value of Ophthalmology Review. This text is invaluable to those of us who wer... Ophthalmology Review - A Case-study Approach: Kuldev Singh, William Smiddy and Andrew Lee, New York, Thieme, 2018, 308 pp., £119.99 (softcover), 223 Illustrations, ISBN: 978-1-62623-176-4: Neuro-Ophthalmology: Vol 43, No 5

  18. Ophthalmology Review: A Case-Study Approach by Kuldev Singh ...

    Ophthalmology Review: A Case-Study Approach (2nd Edition) is true to its title and geared toward ophthalmology residents. Ninety-eight cases are divided into 11 parts covering Cornea and External Disease, Lens, Glaucoma, Retina, Uveitis, Tumors, Posterior Segment Complications, Trauma, Neuro-Ophthalmology, Pediatrics, and Orbit/Oculoplastics.

  19. Clinical Ophthalmic Echography: A Case Study Approach

    Dr. Roger P. Harrie is an adjunct professor in the Department of Ophthalmology/Visual Sciences at the University of Utah. Cynthia Kendall has provided didactic, clinical and technical training to physicians, ophthalmic personnel, and bio-engineers in principles, examination techniques and design of diagnostic ultrasound for ophthalmology.

  20. Ophthalmology Review: A Case-Study Approach, 2ed (PDF)

    Ophthalmology Review: A Case-Study Approach, 2ed (PDF) $ 119.99 $ 7.00 119.99 $ 7.00

  21. Ophthalmic Case Studies

    MCW Ophthalmic Case Studies For Medical Students. This is a collection of case studies to help you get an insight on the typical history and initial examination of various ophthalmic disorders. The discussion, although brief, is intended to give you a simple overview of each disease. The questions at the end of each case are a good review for ...

  22. Large language models approach expert-level clinical knowledge and

    Author summary Large language models (LLMs) are the most sophisticated form of language-based artificial intelligence. LLMs have the potential to improve healthcare, and experiments and trials are ongoing to explore potential avenues for LLMs to improve patient care. Here, we test state-of-the-art LLMs on challenging questions used to assess the aptitude of eye doctors (ophthalmologists) in ...

  23. Download PDF

    Download as PDF. Download Original PDF. This document was uploaded by user and they confirmed that they have the permission to shareit. If you are author or own the copyright of this book, please report to us by using this DMCAreport form. Report DMCA. CONTACT. 1243 Schamberger Freeway Apt. 502Port Orvilleville, ON H8J-6M9. (719) 696-2375 x665.

  24. Download PDF

    CONTACT. 1243 Schamberger Freeway Apt. 502Port Orvilleville, ON H8J-6M9 (719) 696-2375 x665 [email protected]