
Artificial Intelligence Versus Sir William Osler

Note-taking in the electronic health record still leaves much to be desired

Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

Much has been written about changes to the electronic health record over the past few years, especially updates that attempt to make our clinical notes reflect more of their original intentions.

The standard SOAP note was created to document what was going on with our patients, what we learned from our history, what we gleaned from our physical examination, and the clues that lab tests and imaging pointed us towards. The SOAP note then offered up a differential diagnosis that often served as a starting point for initiating treatment and providing ongoing care.

But over many decades, the electronic health record morphed into something way beyond this: a lumbering creature built more to serve as a document for billing, compliance, and medicolegal purposes.

A Gradual Evolution

Many years ago, chart notes were much shorter, capturing essential details, putting down our impressions and our thoughts, guided by our clinical acumen. But as things evolved, they became a repository for everything, a place to park data, full of rules and requirements that needed to be satisfied before we could send out a bill.

Recently, there have been a number of legislative changes that have aimed to improve note-taking, to simplify things, to get us (and our notes) back to our roots. It used to be that if you mentioned that a patient had chest pain, the auditors would look through your note and see whether you had documented every possible characteristic of the chest pain one could expect to find therein: onset, duration, location, radiation, severity, exacerbating and alleviating factors, and so on. And if even a single one of those checkboxes went unchecked, then you were not allowed to bill for the chest pain that day.

We were left with huge menus of items that needed to be included -- shopping lists for the review of systems, tallies of how many items in each organ system had to be noted to count towards a complex physical examination -- all of which really just got in the way of taking care of patients more than they helped us come to a definitive diagnosis.

There is a famous quote often attributed to Sir William Osler (though apparently never actually found in his writings) that goes something like, "Listen to your patient; they are telling you their diagnosis." Sometimes this is true; there are often classic findings on history that help guide us quickly to what we think is going on with the patient, even before we do a physical exam or any additional testing. Sometimes, no matter how many questions we ask, things are still vague, and we need to go further.

No matter what, we're then left with the task of documenting all of this in the medical record, and the burden of doing so, whether with a pen on paper in an old chart or typing away inside the newer electronic health records, adds up to a lot of time and effort, especially for those of us who still hunt and peck at the keyboard.

Are Scribes Part of the Solution?

Scribes are one relatively new innovation meant to ease the work of doctoring and also boost productivity. There are several versions of this, including an actual scribe who comes into the room with the doctor and stands there typing out what everyone is saying, converting the conversation into a viable note. A more virtual version also exists, where someone listens in through a microphone and types away off-site. And now there are even newer, fancier versions that combine voice recognition with artificial intelligence, giving us a virtual electronic scribe that listens in on the doctor-patient interaction and crafts that conversation into a viable and usable history of present illness.

So far, I'm not that impressed. As these systems have been tested at various places, I've heard about a lot of situations where things got mangled or misinterpreted. And many of the people who have told me they've been using these systems say that the work they need to put in to massage the output into an accurate and relevant note is more than the effort of typing it themselves.

A colleague at another institution told me about a conversation he'd had with a patient, who mentioned during their talk that her beloved Lionel (not his real name) had passed away after a long battle with cancer and diabetes. The note sent to this provider afterwards described how the patient was grieving over the loss of her beloved husband. Actually, it was her cat.

When I've used one of these systems, it often feels like it's recording and putting down something that isn't exactly what we talked about; it isn't exactly wrong, either, but it just doesn't seem to catch the flavor of what's going on in the room.

The back-and-forth that we have with our patients is often so much more than the actual words alone; there is nuance in how they are sitting, how they are talking, how they are holding their hands, the look in their eyes.

And there certainly are times when a patient goes on and on about something that sounds really critical, only for it to seem to evaporate in the end; it is really hard for any voice recognition and artificial intelligence system to realistically capture what this might actually mean.

What the Future May Hold

None of us, with our own non-artificial intelligence, are perfect at this: we get things wrong, we misinterpret, we miss the boat, we fail to truly listen. I'm also not saying that the answer is 100% capture of every single word we say. Perhaps someday the note will be an actual video/audio compilation of everything that goes on in the exam room, but we're definitely not there yet (and maybe we never want to be).

Maybe one way to improve these systems is to have the AI read all of our previous notes, all of the HPIs we've written on a particular patient and on many others, to capture our style and how we like to talk about things in our charts. Or maybe there is just so much subtlety contained within the doctor-patient interaction that we should save the brute computing power of these systems for documenting and analyzing the kinds of things they're probably much better at. Perhaps after we feed the history, physical exam, vital signs, lab data, and everything else from a visit into an AI system, it can help us with the differential diagnosis, spot trends we may be missing, look at the forest for the trees and the trees for the forest.

I'm happy to keep working with the folks building these systems to try and make them better, as we all should. If they can help prevent "note bloat" and keep all of us from having to write our notes at night in our pajamas, when we should be spending time with our families or falling asleep on the couch watching TV, then we should be all for them.

But just as we listen to our patients, listen to those of us who are taking care of them -- the answers are all there.