Weekend Roundup: Artificial Intelligence
A selection of articles, videos, and books that have influenced my thinking on AI
Weekend Roundup: This is a loose post format I will be using in the future to curate thought-provoking articles and media on selected topics. This weekend’s focus will be on machine learning and AI, particularly how they intersect with science and healthcare.
The third magic: A meditation on history, science, and AI
This excellent blog post by economist Noah Smith places the current boom in AI (really "deep learning") in the context of the history of scientific breakthroughs in theory and methods since the 1600s. One of the best things he does is break down the critique that we don't fundamentally know how these models work and offer a way of understanding AI on its own terms:
A big knock on AI is that because it doesn’t really let you understand the things you’re predicting, it’s unscientific. And in a formal sense, I think this is true. But instead of spending our effort on a neverending (and probably fruitless) quest to make AI fully interpretable, I think we should recognize that science is only one possible tool for predicting and controlling the world. Compared to science, black-box prediction has both strengths and weaknesses.
One weakness — the downside of being “unscientific” — is that without simple laws, it’s harder to anticipate when the power of AI will fail us. Our lack of knowledge about AI’s internal workings means that we’re always in danger of overfitting and edge cases. In other words, the “third magic” may be more like actual magic than the previous two — AI may always be powerful yet ineffable, performing frequent wonders, but prone to failure at fundamentally unpredictable times.
But even wild, occasionally-uncontrollable power is real power.
I’m an ER doctor: Here’s what I found when I asked ChatGPT to diagnose my patients
Radiologists have long fretted over whether AI will replace them (spoiler alert: probably not any time soon), but with the introduction of large language model tools like ChatGPT, people have speculated that a wider swath of healthcare practice may be affected. This article, written by an ER physician, explains what he found when he tried to use ChatGPT to diagnose his patients:
In short, ChatGPT worked pretty well as a diagnostic tool when I fed it perfect information and the patient had a classic presentation.
This is likely why ChatGPT “passed” the case vignettes in the Medical Licensing Exam. Not because it’s “smart,” but because the classic cases in the exam have a deterministic answer that already exists in its database. ChatGPT rapidly presents answers in a natural language format (that’s the genuinely impressive part), but underneath that is a knowledge retrieval process similar to Google Search. And most actual patient cases are not classic.
My experiment illustrated how the vast majority of any medical encounter is figuring out the correct patient narrative. If someone comes into my ER saying their wrist hurts, but not due to any recent accident, it could be a psychosomatic reaction after the patient’s grandson fell down, or it could be due to a sexually transmitted disease, or something else entirely. The art of medicine is extracting all the necessary information required to create the right narrative.
The A.I. Dilemma - Tristan Harris & Aza Raskin - Center for Humane Technology - March 9, 2023
This is a video lecture given by multiple AI researchers—certainly no Luddites—presenting their concerns about the risks of AI and what can be done to mitigate them. They explain why deep learning AI took off like a rocket starting 2-3 years ago and continues to smash performance records and benchmarks faster than we can create them. They also explain the unsettling phenomenon of "emergent" properties: AI models that develop novel skills they weren't trained for, like a chatbot that suddenly learns how to do chemistry.
2-for-1 Bonus: Book and podcast
I began reading Meghan O'Gieblyn's thought-provoking book God, Human, Animal, Machine after it was discussed on a recent Ezra Klein Show podcast episode, "Why AI might not take your job or supercharge the economy" (which you should definitely check out). As a former student of theology and a philosopher with an interest in technology, Meghan O'Gieblyn brings a unique perspective to the AI conversation. Through this book I learned that many ostensibly secular futurists, tech founders, and computer scientists—knowingly or unknowingly—share a philosophical tradition, conceptual framework, and linguistic metaphors with Christian and Jewish eschatology. She discusses how AI is changing society and what it will mean to be human when AI becomes superior at the feats we've used to justify our dominance over animals and the natural world: consciousness and complex cognition. A terrific book that stuck with me long after reading.