Experts have long explored computer algorithms meant to improve healthcare, and some have been shown to make valuable clinical predictions. However, few are in routine use, because computers process information best when it is laid out in neat tables, while physicians typically write in creative, individualized language that reflects how humans think.
Cumbersome data reorganization has been an obstacle, researchers say, but a new type of AI, the large language model (LLM), can "learn" from text without needing specially formatted data.
In a study published online June 7 in the journal Nature, the research team designed an LLM called NYUTron that can be trained on unaltered text from electronic health records to make useful assessments about patient health status. The results revealed that the program identified 80% of patients who were readmitted, a roughly 5% improvement over a standard, non-LLM computer model that required reformatting of medical data.
"Our findings highlight the potential for using large language models to guide physicians about patient care," said study lead author Lavender Jiang, BSc, a doctoral student at NYU’s Center for Data Science. "Programs like NYUTron can alert healthcare providers in real time about factors that might lead to readmission and other concerns so they can be swiftly addressed or even averted."
Jiang adds that by automating basic tasks, the technology may speed up workflow and allow physicians to spend more time speaking with their patients.
Large language models use specialized computer algorithms to predict the best word to fill in a sentence based on how likely real people are to use a particular term in that context. The more data used to “teach” the computer to recognize such word patterns, the more accurate its guesses become over time, adds Jiang.
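To make the next-word idea concrete, here is a minimal sketch in Python. It uses a toy bigram counter rather than the transformer network NYUTron actually uses, and the tiny "corpus" and function names are invented for illustration; only the objective, predicting the likeliest next word from observed text, matches the description above.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count how often each word follows another in a
# small "training" corpus, then predict the most frequent follower.
# Real LLMs learn these statistics with a neural network over billions of
# words, but the underlying prediction objective is the same.

corpus = (
    "patient denies chest pain . "
    "patient reports chest tightness . "
    "patient denies shortness of breath ."
).split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("patient"))  # -> "denies" (seen twice vs. once for "reports")
print(predict_next("chest"))    # -> "pain" (tied with "tightness"; first seen wins)
```

The more text fed into `follower_counts`, the better the guesses become, which is the scaling behavior Jiang describes, realized here in the simplest possible form.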
For their study, the researchers trained NYUTron using millions of clinical notes collected from the electronic health records of 336,000 men and women who had received care within the NYU Langone hospital system between January 2011 and May 2020. The resulting 4.1-billion-word language “cloud” included any record written by a doctor, such as radiology reports, patient progress notes, and discharge instructions. Notably, language was not standardized among physicians, and the program could even interpret abbreviations unique to a particular writer.
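The study's core recipe, pretraining a language model on raw notes and then fine-tuning it for a prediction task such as readmission, can be sketched roughly as follows. This is a minimal illustration using the Hugging Face transformers library with a generic public checkpoint as a stand-in; NYUTron's own pretrained weights, data, and training details are not reproduced here, and the example notes and labels are invented.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Stand-in checkpoint; the actual study pretrains a BERT-style model on
# NYU Langone clinical notes before a fine-tuning step like this one.
checkpoint = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Invented example data: raw discharge-note text paired with a binary label
# (1 = readmitted within 30 days, 0 = not readmitted).
notes = [
    "Pt stable on discharge, tolerating PO intake, f/u with PCP in 2 weeks.",
    "Discharged AMA, poor med adherence, CHF exacerbation likely to recur.",
]
labels = torch.tensor([0, 1])

# Tokenize the unaltered note text; no reformatting into tables is needed,
# and writer-specific abbreviations pass through as ordinary tokens.
batch = tokenizer(notes, padding=True, truncation=True, return_tensors="pt")

# One gradient step of fine-tuning on the readmission objective.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

# At inference time, the model scores a note for readmission risk.
with torch.no_grad():
    logits = model(**batch).logits
    readmission_prob = torch.softmax(logits, dim=-1)[:, 1]
print(readmission_prob)
```

The point of the sketch is the one highlighted by the researchers: the input is the clinician's text exactly as written, so the same pipeline can be retargeted to other labels, such as mortality or length of stay, without redesigning the data.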
According to the findings, NYUTron identified 85% of those who died in the hospital (a 7% improvement over standard methods) and correctly estimated length of stay for 79% of patients (a 12% improvement over the standard model). The tool also successfully assessed the likelihood of additional conditions accompanying a primary disease (comorbidity index), as well as the chances of an insurance denial.
"These results demonstrate that large language models make the development of 'smart hospitals' not only a possibility, but a reality," said study senior author and neurosurgeon Eric Oermann, MD. "Since NYUTron reads information taken directly from the electronic health record, its predictive models can be easily built and quickly implemented through the healthcare system."
Jiang LY, Liu XC, Nejatian NP, Nasir-Moin M, Wang D, Abidin A, Eaton K, Riina HA, Laufer I, Punjabi P, Miceli M, Kim NC, Orillac C, Schnurman Z, Livia C, Weiss H, Kurland D, Neifert S, Dastagirzada Y, Kondziolka D, Cheung ATM, Yang G, Cao M, Flores M, Costa AB, Aphinyanaphongs Y, Cho K, Oermann EK. Health system-scale language models are all-purpose prediction engines. Nature. 2023 Jun 7. doi: 10.1038/s41586-023-06160-y