Artificial Intelligence in Medicine: Paving the Way for the Future

February 2025, Vol 2, No 2

Artificial intelligence (AI) has become an integral force in every aspect of society, including medicine.

A session on AI at the 66th American Society of Hematology (ASH) Annual Meeting & Exposition, held in San Diego, CA, explored AI’s potentially transformative impact, underscoring not only the promise of generative AI in improving patient care but also the ethical challenges that may accompany its integration into healthcare.1

One of the session’s presentations came from James Zou, PhD, Associate Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering at Stanford University, who spoke about the evolution of AI from a passive tool to a proactive agent capable of communication and decision-making.

Dr Zou discussed the “Virtual Lab,” a system in which AI agents, each specializing in a different aspect of medical science, collaborate to conduct medical research. An example of this technology was seen recently with the development of nanobodies that exhibited improved binding to 2 SARS-CoV-2 variants, Dr Zou said.2 The Virtual Lab accelerated the identification of nanobody candidates by analyzing vast data sets, significantly reducing the time traditionally required for drug discovery.

“We should think of this AI, not just simply as tools, but also really as partners in research,” Dr Zou said. “We actually give quite a lot of flexibility in what the Virtual Lab can do, right? We didn’t tell it to design nanobodies. It made its own decision to design the nanobodies, as opposed to the more standard, common antibodies. And the virtual lab also created by themselves into an interesting computational workflow for optimizing these nanobodies, which is, I think, quite different from anything that we’ve seen in previous literature.”
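As a rough illustration of the multi-agent pattern Dr Zou described, the toy sketch below has specialist "agents" propose and score candidates inside a coordinator loop. The agent roles, candidate names, and scoring rule are hypothetical stand-ins for LLM-backed agents; this is a minimal sketch of the orchestration pattern only, not the Virtual Lab implementation.

```python
# Toy sketch of a multi-agent research loop: one agent proposes candidates,
# another critiques/scores them, and a coordinator keeps the best proposal.
# All roles and scores here are ILLUSTRATIVE placeholders.

def immunology_agent(seed: int) -> str:
    """Propose a candidate molecule (hypothetical naming scheme)."""
    return f"nanobody-variant-{seed}"

def ml_agent(candidate: str) -> int:
    """Score a candidate; in this toy, higher-numbered variants score higher."""
    return int(candidate.rsplit("-", 1)[1])

def coordinator(rounds: int = 5) -> str:
    """Run propose-then-score rounds and return the best candidate."""
    best, best_score = None, float("-inf")
    for seed in range(rounds):
        candidate = immunology_agent(seed)  # proposal step
        score = ml_agent(candidate)         # critique/scoring step
        if score > best_score:
            best, best_score = candidate, score
    return best

best_candidate = coordinator()
```

In a real system the proposal and scoring steps would be backed by language models or physics-based tools; the coordinator loop is what lets the agents iterate without human direction at each step.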

A separate AI model, the Logistic Regression-based Immunotherapy-response Score (LORIS), created by researchers at the National Institutes of Health, demonstrated the ability to predict which groups of patients with cancer are most likely to benefit from certain immunotherapy treatments. This form of predictive modeling, increasingly important in cancer treatment, mirrors the process seen in the Virtual Lab for COVID-19, where AI predicts and optimizes therapeutic molecules for more targeted drug discovery.3
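As its name indicates, LORIS is built on logistic regression: weighted clinical features are passed through a sigmoid to yield a 0-to-1 response probability. The sketch below illustrates only that scoring step; the feature set, weights, and intercept are invented placeholders, not the published model's coefficients.

```python
import math

# Minimal sketch of a logistic-regression response score in the spirit of
# LORIS. Feature names, weights, and intercept are ILLUSTRATIVE only.

def response_score(features, weights, intercept):
    """Map clinical features to a 0-1 response probability via a sigmoid."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized inputs (e.g., tumor mutational burden, albumin,
# a blood-count ratio, age) with made-up coefficients.
weights = [0.8, 0.5, 0.6, -0.1]
intercept = -0.4
patient = [1.2, 0.3, 0.5, -0.2]

score = response_score(patient, weights, intercept)  # higher = more likely to benefit
```

The appeal of a logistic model in this setting is interpretability: each coefficient directly shows how a feature shifts the predicted probability, which is easier to audit clinically than a black-box model.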

Ethical Considerations: Balancing Innovation With Responsibility

During the same presentation at ASH, Camille Nebeker, EdD, who is Professor of Public Health at the University of California (UC) San Diego and Director of the UC San Diego Research Ethics Program, discussed how AI is changing hematologic diagnostics, enhancing the accuracy and speed of diagnostic processes, and offering improvements over traditional methods.

“We really have to step up and think through how do we use AI, what is our role and how do we accept that responsibility,” said Dr Nebeker. She started her discussion by noting the limitations of institutional review boards when it comes to AI, and outlined some ethical tools for AI research, including health sheets. She emphasized the importance of consent frameworks in AI, noting that informed consent may not be required, which puts additional responsibility on data stewards and data users to manage patient data responsibly.

The integration of AI into clinical practice requires careful planning, she said. High-quality data collection, model validation, and clinician feedback are essential to ensure that AI tools are reliable and effective.

There are already numerous cases of people receiving incorrect, and potentially dangerous, diagnoses after interacting with popular large language models. One study published in JAMA Pediatrics in 2024 found that OpenAI’s ChatGPT incorrectly diagnosed 83% of the pediatric case studies with which it was presented.4

AI’s reliance on large data sets, often containing sensitive health information, presents risks regarding data security and patient privacy. As noted by the American Medical Association, safeguarding patient data is paramount, especially as AI technologies become more embedded in routine clinical practice.5

Bias is another significant challenge in the ethical deployment of AI. Dr Nebeker stressed the need for diversity in the data used to train AI models to ensure equitable outcomes for all populations. This issue is particularly relevant in healthcare, where certain groups—particularly racial minorities—may face disparities in care if AI systems are not sufficiently trained on representative data sets. AI’s ability to learn from historical data can inadvertently perpetuate existing biases in medical decision-making unless proper safeguards are in place.

Informed consent, a cornerstone of medical ethics, becomes even more complex when AI is involved. Dr Nebeker pointed out that the pace at which AI technology is advancing may outstrip the ability of institutional review boards to fully evaluate its ethical implications. As AI becomes more autonomous, healthcare providers must be prepared to educate patients about how their data are being used and how AI systems contribute to their care. This calls for updated frameworks for patient and provider communications that account for the unique features of AI in healthcare.

The American Society of Clinical Oncology (ASCO) has provided guidelines for the responsible use of AI in oncology, emphasizing transparency, patient autonomy, and fairness.6

Looking Toward the Future of AI in Federal Medicine

The session concluded with an optimistic outlook on the future of AI. With the potential to accelerate drug discovery and enhance personalized patient care, AI is poised to revolutionize the field. However, as with all transformative technologies, it is crucial to address the ethical challenges that accompany its implementation. Ensuring data privacy, minimizing bias, and maintaining transparency are essential to the responsible use of AI in medical practice.

Looking ahead to 2025, officials with the Department of Veterans Affairs (VA) and the FDA have noted that they hope to construct a Health AI Lab, which would give users access to HIPAA-compliant, deidentified data for testing products.7 This initiative is the latest in ongoing federal efforts to facilitate the use of trustworthy AI across medical centers.

References

  1. Elemento O, Zou J, Nebeker C, Haferlach C. Artificial intelligence in hematology: from generative AI to ethics and applications. Presented at: 66th ASH Annual Meeting & Exposition; December 7-10, 2024; San Diego, CA.
  2. Swanson K, Wu W, Bulaon NL, et al. The Virtual Lab: AI agents design new SARS-CoV-2 nanobodies with experimental validation. bioRxiv. 2024;11:623004.
  3. Chang TG, Cao Y, Sfreddo HJ, et al. LORIS robustly predicts patient outcomes with immune checkpoint blockade therapy using common clinical, pathologic and genomic features. Nat Cancer. 2024;5:1158-1175.
  4. Barile J, Margolis A, Cason G, et al. Diagnostic accuracy of a large language model in pediatric case studies. JAMA Pediatr. 2024;178(3):313-315.
  5. American Medical Association. Principles for the responsible use of AI in healthcare. Accessed January 31, 2025. https://society.asco.org/sites/new-www.asco.org/files/ASCO-AI-Principles-2024.pdf
  6. American Society of Clinical Oncology. Six guiding principles for AI in oncology. Accessed January 31, 2025. https://society.asco.org/news-initiatives/policy-news-analysis/asco-sets-six-guiding-principles-ai-oncology
  7. VA. VA chooses finalists in $1M AI Tech Sprint to reduce health care worker burnout [press release]. Published March 13, 2024. Accessed November 18, 2024. https://news.va.gov/press-room/va-artificial-intelligence-tech-sprint-competition-finalists/

Usage Disclosure: This article was created with assistance from AI tools. The content has been reviewed and edited by a human.
