Purpose: Nearly all published ophthalmology-related Big Data studies rely exclusively on International Classification of Diseases (ICD) billing codes to identify patients with particular ocular conditions. However, inaccurate or nonspecific codes may be used. We assessed whether natural language processing (NLP), as an alternative approach, could more accurately identify lens pathology.
Design: Database study comparing the accuracy of NLP versus ICD billing codes to properly identify lens pathology.
Methods: We developed an NLP algorithm capable of searching free-text lens exam data in the electronic health record (EHR) to identify the type(s) of cataract present, cataract density, presence of intraocular lenses, and other lens pathology. We applied our algorithm to 17.5 million lens exam records in the Sight Outcomes Research Collaborative (SOURCE) repository. We selected 4314 unique lens-exam entries and asked 11 clinicians to assess whether all pathology present in the entries had been correctly identified in the NLP algorithm output. The algorithm's sensitivity at accurately identifying lens pathology was compared with that of the ICD codes.
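The abstract does not specify the algorithm's internals; as a minimal sketch only, the general approach of mapping free-text lens-exam entries to structured findings can be illustrated with keyword/regex matching. All pattern names, abbreviations, and the sample note text below are illustrative assumptions, not the authors' actual implementation.

```python
import re

# Hypothetical, simplified lens-pathology patterns (NOT the published algorithm).
# Abbreviations such as NS (nuclear sclerosis) and PCIOL (posterior chamber
# intraocular lens) are common in eye-exam free text but are assumed here.
LENS_PATTERNS = {
    "nuclear_sclerosis": re.compile(r"\bNS\b|nuclear scleros", re.IGNORECASE),
    "posterior_subcapsular": re.compile(r"\bPSC\b|posterior subcapsular", re.IGNORECASE),
    "cortical": re.compile(r"cortical", re.IGNORECASE),
    "pseudophakia": re.compile(r"\bPCIOL\b|\bIOL\b|pseudophak", re.IGNORECASE),
    "pseudoexfoliation": re.compile(r"pseudoexfoliation|\bPXF\b", re.IGNORECASE),
    "subluxation": re.compile(r"sublux", re.IGNORECASE),
}

def classify_lens_exam(text: str) -> set:
    """Return the set of lens findings whose patterns match a free-text entry."""
    return {name for name, pattern in LENS_PATTERNS.items() if pattern.search(text)}
```

In practice, a production algorithm applied to millions of EHR records would also need to handle negation, laterality (OD/OS/OU), and grading (e.g., cataract density), which this sketch omits.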
Results: The NLP algorithm correctly identified all lens pathology present in 4104 of the 4314 lens-exam entries (95.1%). For less common lens pathology, algorithm findings were corroborated by the reviewing clinicians for 100% of mentions of pseudoexfoliation material and 99.7% of mentions of phimosis, subluxation, and synechia. Sensitivity at identifying lens pathology was better for NLP (0.98 [0.96-0.99]) than for billing codes (0.49 [0.46-0.53]).
Conclusions: Our NLP algorithm identifies and classifies lens abnormalities routinely documented by eye-care professionals with high accuracy. Such algorithms will help researchers to properly identify and classify ocular pathology, broadening the scope of feasible research using real-world data.
Copyright © 2024 Elsevier Inc. All rights reserved.