Dana-Farber Researchers Create Experimental AI-based Oncologist’s Assistant

Researchers at Dana-Farber have developed an AI-based assistant to help oncologists identify approved precision medicines matched to a patient’s tumor profile. In a study published in Cancer Cell, the tool correctly identified appropriate therapies in 93% of about 100 realistic clinical queries.

Precision medicines — more than 100 of which are approved for cancer treatment — target the effects of specific mutations. Matching patients to these therapies is complex and constantly evolving, and clinicians report difficulty keeping up with new approvals. “The clinicians we engaged with said that catching up with FDA approvals is far from seamless,” said Helena Jun, PhD, a computational biologist in the lab of Eliezer Van Allen, MD.

Jun trained a large language model (LLM), the kind of AI that powers ChatGPT, with domain-specific knowledge in precision oncology and augmented it with a curated database called the Molecular Oncology Almanac (MOAlmanac), created by Van Allen’s team. MOAlmanac contains expert-curated information on targeted therapies and molecular biomarkers and is updated regularly.

Initial tests of commercially available LLMs showed the GPT-4o model achieved between 85.9% and 89.3% accuracy depending on prompt complexity. To improve performance, the team used retrieval-augmented generation (RAG) to combine the LLM with MOAlmanac, producing higher accuracy on their test prompts.
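For readers curious about the underlying pattern, the sketch below shows what retrieval-augmented generation can look like in code. It is a hypothetical illustration only, not the team’s implementation: the toy almanac records, the keyword retriever, and the prompt wording are assumptions, and the OpenAI Python client stands in for whichever hosted LLM is used. The key idea is that the model is instructed to answer only from retrieved curated records, which is what grounds its answers in MOAlmanac-style data.

```python
# Minimal RAG sketch (hypothetical): retrieve curated records, then ask the LLM to
# answer only from them. Requires the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

# Toy stand-in for curated biomarker-to-therapy records (not the real MOAlmanac schema).
almanac = [
    {"biomarker": "EGFR L858R", "cancer": "non-small cell lung cancer",
     "therapy": "osimertinib", "status": "FDA-approved"},
    {"biomarker": "BRAF V600E", "cancer": "melanoma",
     "therapy": "dabrafenib + trametinib", "status": "FDA-approved"},
]

def retrieve(query: str, k: int = 3) -> list[dict]:
    """Naive keyword retrieval: rank records by how many query words they contain."""
    words = query.lower().split()
    scored = sorted(
        almanac,
        key=lambda rec: -sum(w in str(rec).lower() for w in words),
    )
    return scored[:k]

def answer(query: str) -> str:
    # Concatenate the retrieved records into the prompt as grounding context.
    context = "\n".join(str(rec) for rec in retrieve(query))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the records below. "
                        "If no approved therapy matches, say so explicitly.\n" + context},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

print(answer("Is there an approved therapy for EGFR L858R-mutant lung cancer?"))
```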

The RAG-LLM was evaluated using realistic queries solicited from oncologists at Dana-Farber and other Boston-area hospitals and reached a 93% accuracy rate. The researchers identified recurring error patterns, including instances where the model hallucinated an approved option when none existed. Jun adjusted the model’s settings to reduce creativity so that it reports when no approved options are found rather than inventing one.
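In practice, “reducing creativity” typically means lowering the sampling temperature and instructing the model to abstain rather than guess. The snippet below is a minimal sketch under those assumptions; the specific settings and wording Jun used are not described in the article.

```python
# Hypothetical sketch: low-temperature sampling plus an explicit abstention instruction.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,  # deterministic, least "creative" sampling (assumed value)
    messages=[
        {"role": "system",
         "content": "If no FDA-approved therapy matches the tumor profile, "
                    "reply exactly: 'No approved option found.' Do not speculate."},
        {"role": "user",
         "content": "KRAS G12D-mutant pancreatic cancer: any approved targeted therapy?"},
    ],
)
print(response.choices[0].message.content)
```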

The model is currently intended for research use only and is available online with features such as region selection to reflect differing approvals across jurisdictions. Further work is required before the tool can be used in clinical practice.

“We wanted to see if we could create an assistant that can be helpful to an oncologist without taking away the autonomy, decision making, and relationship between the patient and provider, and we’ve learned both that it is possible and that there is more work to be done,” Van Allen said. He added that proper clinical trials are needed to assess the safety, effectiveness, and utility of such decision-support tools for providers and patients.
