
Get to Know: Dr. Katherine E. Goodman

February 27, 2026

Katherine E. Goodman, PhD, JD, has carved out a unique niche for herself at the University of Maryland School of Medicine. An Assistant Professor of Epidemiology and Public Health, Dr. Goodman originally trained as a lawyer and frequently collaborates with colleagues at the UM Francis King Carey School of Law. She also serves on the faculty of the University of Maryland Institute for Health Computing (UM-IHC), where her work focuses on artificial intelligence (AI) tools and their applications in the medical field and in patient encounters.

In an edited interview, Dr. Goodman discussed the promise and perils of AI, how her legal training informs her research, and how AI tools are deployed in the clinic today.  

Q: Acknowledging that AI is a rapidly evolving field, what’s the current AI landscape in terms of clinical applications and potential pitfalls?

The newest AI technologies are what we call "generative AI," such as large language models (LLMs), or chatbots like ChatGPT. In medicine, we're seeing the most uptake of these technologies as so-called "administrative" tools: tools that, for example, record a clinical visit and generate a summary note, or help obtain a prior authorization or approval from an insurer. Clinicians are really clamoring for these AI administrative tools because they make their day jobs easier.

But I would highlight three fundamental perils and pitfalls regarding these tools.

First, calling them "administrative tools" implies they will only be used for rote tasks that are not clinically substantive in nature. But in many cases, these are hybrid tasks involving both routine paperwork and clinically important documentation. It really matters if the AI documents clinical information incorrectly. It's also a short hop from generating a draft note summarizing a clinical visit to using that information as the input for, say, a future risk score algorithm. We need to understand where to draw the line between purely administrative functions and diagnostic ones.

Second, since the FDA does not currently consider these tools medical devices, it is not overseeing how well they actually work. So caution and further study are warranted.

Lastly, many people have heard about LLM hallucinations, where these tools fabricate information. That is a challenge unique to LLMs and a risk we don't see with older technologies, and it requires developing different approaches to testing these tools.


Q: Tell us about your journey from lawyer to epidemiology researcher and how your legal background informs your work today.

After two years as a private practice FDA lawyer helping scientists interpret FDA standards, I realized I wanted to work on scientific issues through a research lens rather than a regulatory lens. I was accepted into a PhD program in infectious disease epidemiology at Johns Hopkins and absolutely loved it. It turned out to be the perfect career for me.

I joined the University of Maryland in 2019, and I research and write on legal issues now more than I ever anticipated. Those legal critical thinking skills I learned in law school are very helpful in my research. They are also the skills I believe future physicians will need to use AI effectively — these tools can give you a lot of information, and you need the skills to reason through that information and decide what it means for your patients.


Q: What can you tell us about AI medical scribes and how legal uncertainties are complicating the use and research on these technologies?

“AI scribe” is an informal name for an app on a clinician's phone that records a patient visit and creates a transcript from the audio. An LLM scans the transcript and creates a short summary, which, in theory, includes all clinically relevant information. The clinician then reviews that draft note, finalizes it, and pushes it into the electronic health record.

The adoption of these AI scribes is by far the fastest uptake of any digital health tool seen in clinical medicine. They are now being used for millions of clinical visits every year across the US, and two and a half years ago they did not exist at all.

AI scribes are exciting for researchers because they provide, for the first time, a record of the exact words exchanged between patient and clinician in each visit. That’s powerful if you're interested in, for example, patient safety and diagnostic error, which is where a lot of my work has shifted. It's also useful if you want to see whether there are early indications of illnesses that patients later develop that humans might miss.

That said, there are often legal hurdles to making full use of these AI scribe transcripts and recordings. Many healthcare systems are deleting these records after the scribes generate the draft notes. They are, understandably, concerned about the potential malpractice liability that retaining those records could create, which we describe in a recent paper published in The New England Journal of Medicine.


Q: Tell us a bit more about how you currently make use of AI in your own infectious disease and emerging pathogens epidemiology research.

In a recent paper published in the journal Clinical Infectious Diseases, we used LLMs to conduct infectious disease surveillance around avian influenza. Avian influenza shares symptoms with regular flu, and you cannot test everyone who comes into an emergency department. Our goal was to identify patients at the highest risk of avian flu exposure. Healthcare personnel could then order special testing and possibly isolate those patients from others.

We trained an LLM to scan the admission histories of 14,000 emergency department visits within the University of Maryland Medical System, a task that would never have been cost-effective for humans to perform. The LLM found that about one in every 1,000 patients showed a risk factor for avian influenza, such as recent exposure to waterfowl.

This was a proof-of-concept study using retrospective data — the tool we developed has not been deployed for real clinical care yet — but I believe this could be an ideal application for these tools when properly vetted and trained. 
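The screening workflow Dr. Goodman describes can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration (the prompt wording, the risk-factor list, the constrained FLAG/NO-FLAG reply format, and the simulated model replies); it is not the study's actual protocol or code.

```python
# Hypothetical sketch of LLM-based exposure screening of ED admission notes.
# Prompt text, risk factors, and reply format are illustrative assumptions.

RISK_FACTORS = ["poultry", "waterfowl", "dairy cattle", "wild birds"]

def build_screening_prompt(note: str) -> str:
    """Assemble a constrained yes/no screening prompt for one admission history."""
    factors = ", ".join(RISK_FACTORS)
    return (
        "You are screening emergency department admission histories for "
        f"avian influenza exposure risk (e.g., contact with {factors}).\n"
        "Answer exactly FLAG or NO-FLAG.\n\n"
        f"Note: {note}"
    )

def parse_llm_reply(reply: str) -> bool:
    """Map the model's constrained reply onto a boolean exposure flag."""
    return reply.strip().upper().startswith("FLAG")

# In place of a live model call, simulate the constrained replies such a
# pipeline might receive for two notes:
notes = [
    "Fever and cough; patient works on a duck farm with daily waterfowl contact.",
    "Ankle sprain after a fall; no relevant exposures reported.",
]
simulated_replies = ["FLAG", "NO-FLAG"]
flags = [parse_llm_reply(r) for r in simulated_replies]
print(flags)  # [True, False]
```

Flagged visits would then go to a human for review, as with any screening step intended to precede special testing or isolation decisions.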


Q: Are there any projects or lines of inquiry you’re most interested in pursuing in the future?

One study I'm excited about uses LLMs to extract information on symptoms for patients diagnosed with early-onset colorectal cancer, the incidence of which is rising about 2% each year in the US. The idea is to use AI models and machine learning approaches to identify early indications that patients are developing cancer. There may be subtle or non-specific symptoms mentioned in medical visits that, on their own, might be easily overlooked. By combining sophisticated models with other data, we could uncover early symptomatic patterns or currently unappreciated signals. Eventually, we could better predict who is at high risk and possibly intervene earlier.

More broadly, I'm trying to adapt that approach to other cancers and would eventually like to expand the scope to have patients documenting their own words and symptoms throughout the day, outside of regularly scheduled clinical care, using wearable devices.


Q: Are there any particular types of collaborations or collaborators you are looking for in your work?

I'm really fortunate that I work with amazing clinicians in my group, but most of these clinicians are focused on inpatient care. I would love to find clinical collaborators in outpatient care or primary care settings.

Additionally, my research is increasingly shifting from evaluating off-the-shelf LLM models to actually building LLM-based tools. We are looking at developing chatbots to interact with patients directly, for example, to take longer histories than is possible during time-constrained visits, or to engage in preventative cancer screening discussions with patients. I would welcome technical collaborators for that work.
