Rather than taking the wheel, I find that AI is helpful for generating ideas, brainstorming, rough drafts, and the like. I will sometimes use it while seeing patients, when I need answers or ideas that would take 10X the time to find in reference books or studies. But you still need experience, a good specialty-specific knowledge base, and a scalpel to take to much of the responses.
I don’t see LLMs pushing the needle of progress forward that fast… more like productivity tools that end up costing us a lot of time to clean up, cross-reference, and check sources.
That's been my experience, too. For factual queries, I rely on Perplexity over the others. Whatever they have done with the training data and tuning, it seems to rarely, if ever, hallucinate. And it links all of its sources by specific claim, which is great. It has honestly replaced Google searches for me a lot of the time, since it curates more relevant links and it's not bogged down by commercial sales URLs.
I have a paid subscription to ChatGPT, which I use very differently. I will have it proofread drafts, ask it to poke holes in arguments, and use it for brainstorming and refining vague early ideas. It's also good for annoying repetitive tasks like "come up with 20 versions of a title for this." I rarely use one word for word, but it can be useful when I hit a wall. I also have to admit it's done a good job massaging language in sensitive texts and emails...
I have not directly used any of the pathology or radiology AI tools on the clinic floor (I'm sure it's coming as I return to practice!), but I was pretty impressed by demos at a conference. I have also used voice dictation software like Dragon and PowerScribe, which speaks to your point: there are some efficiency gains, but I end up correcting a lot of misquotes anyway.
All good points, and I also find Perplexity the best for answering pithy questions backed by references. Sometimes I’ll ask it “using top tier journals” and that seems to help weed out random studies. I pay for Claude to crunch stuff too. I should probably practice more with ChatGPT, but how do you navigate all the multiplying GPTs they offer?
It will be interesting to see what the next few years look like for OpenAI. They were the clear market leader in the LLM space, but a ton of boneheaded product decisions weakened their appeal, and now they are reportedly facing aggressive talent poaching from xAI, Anthropic, and others.
I have used Anthropic's free version of Claude and been really impressed. It is certainly a better writing companion than GPT-4o. My main hang-up on paying for it is that it doesn't offer the same breadth of features: internet search, image generation, and advanced data and file analysis. Once Claude goes multi-modal, I'll probably make the switch from ChatGPT.
Great post. Thank you!