6 Comments

I think this is a great post; it also applies to primary care. I wrote, and then deleted, a post because it was too wonky: basically, I took a complicated patient history and fed it into ChatGPT to see how it would manage a chronic care visit for a patient with something like 20 medical problems. It was pretty terrible. I hope to use ChatGPT with challenging diagnoses to give me more ideas, and perhaps as a personal assistant of sorts. But in terms of juggling diagnoses, seeing the whole person/animal, translating and counseling regarding treatment options, and just being a decent, compassionate human being, I think AI will be a partner, not a captain, for at least the next 30-40 years.

Thanks Ryan, totally agree! I think a lot of people default to either “skynet apocalypse is imminent” or “these tools are garbage, don’t work, and never will,” and have a harder time sitting with the messy contradictions of the reality in the middle. These are helpful tools that will probably change practice, but they’re very far from perfect and have a ton of limitations!

Agree with both of you on that. In Europe, the Innovative Health Initiative has a project called Big Picture that plans to harness these tools to support pathologists, so that histopathological slides can be pre-read with machine-learning-based approaches. It's fascinating!

What I love about this article is how well thought out and full of critical thinking it is. That’s something AI cannot realistically do YET, although I’m sure it’s around the corner. Humans still don’t seem to have learned that shortcuts always mean more work!

My biggest AI fear is that some of the, shall we say, lazier youngsters (or perhaps those just ignorant of critical thinking) will rely on AI far too much. Thankfully the majority of my surgeries are behind me - but I will say this: human and robot did my last very difficult abdominal surgery (a Sugarbaker parastomal hernia repair with the DaVinci), and recovery was a lot easier and faster, with far less severe pain!

Jim, you raise an excellent concern about the possibility of over-relying on AI and not being able to distinguish when it is wrong or fallible. An interesting study in one of the Nature open-access journals tested this very problem. From the discussion:

"This observed over-reliance has important implications for automated advice systems. While physicians are currently able to ask for advice from colleagues, they typically ask for advice after their initial review of the case. Clinical support systems based on AI or more traditional methods could prime physicians to search for confirmatory information in place of conducting a thorough and critical evaluation."

The whole study is worth a read!

Gaube, S., Suresh, H., Raue, M. et al. Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digit. Med. 4, 31 (2021).

https://www.nature.com/articles/s41746-021-00385-9

Thanks! On it!