George Mason University

AI's blind spots, one PhD student's clear vision


When artificial intelligence makes a mistake, the consequences can be annoying, or they can be life-altering. For one George Mason computer science PhD student, the stakes are clear: biased algorithms in health care aren't just a technical flaw, they're a human risk. And with internship experience teaching him crucial skills beyond the classroom, Fardin Ahsan Sakib is ready to make a difference in health care.

Sakib's research in natural language processing (NLP) addresses concerns that many may have when working with emerging large language models (LLMs), but when these models are applied to health decisions, the risks run deeper. Tools like ChatGPT pull from vast amounts of information, but they have been known to "hallucinate," or make up information.

"Sometimes these systems are taking a shortcut to get you an answer," said Sakib. "But regardless of where it comes from, it can provide you incorrect information." In the health care space, these LLMs could aid doctors, but not if they are driving improper care.

Electronic health records (EHRs) store and organize patients' diagnoses, medical history, and demographic data. This information includes social determinants of health, like employment status, family situations, or housing conditions, hidden within the records.

Fardin Ahsan Sakib is seeking to bring accurate and reliable large language model capabilities to health care.

"These factors can affect up to 80% of health outcomes," Sakib said. "So it's very important that we can extract them correctly from clinical notes and that the model is not introducing any bias."

Sakib poses a scenario: A doctor is using an NLP tool to quickly assess a new patient's health. The LLM is pulling from the patient's records, and the physician asks the system, "Is this patient a smoker?" The system quickly sifts through the data and says that yes, this patient is a smoker. The physician then proceeds to make care recommendations based on this information. Maybe they order a lung cancer screening, or they speak with the patient about smoking cessation resources.

But what if that answer was incorrect? The LLM took a shortcut: based on the information it has, most people with demographic data similar to the patient's are indeed smokers, but this patient isn't.

"There are two things at play: bias and hallucination," said Sakib. "First, algorithms can only use the information they have, and if that information is missing an entire racial or ethnic group, it can be biased. Second, these models can use this biased information to take shortcuts, leading to them delivering false information."
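The smoking-status scenario can be illustrated with a toy sketch. Everything here is invented for illustration: the "model," the ZIP-code correlation, and the clinical notes are hypothetical, not Sakib's actual system. The probe at the end is a common way to surface shortcut learning: change only a demographic field and see whether the prediction flips even though the clinical evidence is unchanged.

```python
# Toy illustration of shortcut learning (all names and data hypothetical).

def note_mentions_smoking(note: str) -> bool:
    # Evidence-based signal: trust only explicit mentions in the note.
    text = note.lower()
    return "smoker" in text and "non-smoker" not in text

def shortcut_model(note: str, zip_code: str) -> bool:
    # A biased shortcut: when the note says nothing about smoking,
    # fall back on a demographic correlation ("most patients from
    # this area smoke") instead of admitting uncertainty.
    high_smoking_zips = {"20110"}  # invented correlation
    if "smok" in note.lower():
        return note_mentions_smoking(note)
    return zip_code in high_smoking_zips  # the shortcut

# Counterfactual probe: same clinical note, different demographic field.
note = "Patient reports occasional cough. No other history recorded."
pred_a = shortcut_model(note, zip_code="20110")
pred_b = shortcut_model(note, zip_code="90210")

# If predictions differ when only the demographic field changes, the
# model is leaning on a shortcut rather than clinical evidence.
print(pred_a, pred_b)  # True False -> shortcut detected
```

In a real system the "model" is a learned one and the shortcut is buried in its weights, but the same counterfactual logic applies: perturb the protected attribute, hold the evidence fixed, and watch for prediction changes.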

Sakib isn't simply identifying these problems; he's developing solutions. His recent research on detecting and mitigating shortcut learning in health record processing was accepted for presentation at the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), one of the premier NLP conferences. 

That desire to build reliable and trustworthy NLP tools has driven Sakib's academic work and his professional pursuits.

At Brillient Corporation, where he interned last summer, Sakib worked on creating a retrieval-augmented generation (RAG) system that connected an LLM to the Food and Drug Administration's (FDA) knowledge base to make accessing and retrieving information faster and grounded in facts. He and his colleagues submitted a patent application for this effort.
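The core RAG idea, retrieve relevant passages first, then ground the generated answer in them, can be shown with a minimal sketch. The corpus, the word-overlap scoring, and the answer assembly below are simplified stand-ins, not the system Sakib built or any real FDA interface.

```python
# Minimal retrieval-augmented generation sketch (all data hypothetical).

corpus = {
    "doc1": "Drug X was approved in 2019 for treatment of condition Y.",
    "doc2": "Labeling for Drug X warns against use during pregnancy.",
}

def retrieve(query: str, k: int = 1) -> list:
    # Rank documents by naive word overlap with the query; real systems
    # use dense embeddings or a search index instead.
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    # A real system would pass the retrieved passages to an LLM as
    # context; returning them directly still shows how retrieval
    # grounds the answer in source documents rather than model memory.
    passages = retrieve(query)
    return "Based on retrieved documents: " + " ".join(passages)

print(answer("When was Drug X approved?"))
```

The design point is that the model's answer is constrained by retrieved text, which is what makes the output auditable and reduces hallucination.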

This summer, he's deep into another high-impact internship at Amazon. "Amazon wants to automate support so that when you ask for help, a large language model can try to solve the problem before handing it off to a human," he said.

Despite the different settings, he sees clear continuity between his internships and academic work. "The industry experience helps me in my research, and my research experience helps in industry. It goes both ways," he said. "In academia, collaboration is usually focused within a lab or research group. In industry, I've worked with engineers, product managers, and domain experts all at once, spanning from health regulators to AWS cloud architects. That diversity of perspectives changes how you solve problems, and I've brought that mindset back into my research collaborations."

It's that dual perspective, academic precision paired with industry scale, that he plans to take with him into industry after graduation.

"George Mason has prepared me for a lot. From day one, all of the professors have helped me grow as a researcher, a person, and as a team member," said Sakib.