There are more future uses for artificial intelligence than humans can probably comprehend at the moment. There is also a litany of terrible use cases for the technology that will be prototyped, tried, and abandoned in the coming months, years and decades. That’s the nature of pushing the frontier forward aggressively; it kinda comes with the territory. But one area you can all but guarantee will be a match made in heaven? Medicine.
There are some special circumstances that go into making this “match made in heaven”, but a powerful match it remains. For one thing, medicine — modern medicine in America especially — produces mountains of data. Every scan, every test, every consultation, every symptom, every side effect, every everything in medicine is a datapoint in some form or fashion. And what do neural networks, machine learning and A.I. thrive on?
Mountains of data, that’s what. Specifically, mountains of data that hold unseen lessons and conclusions, because there’s so much information that humans can’t draw all the useful connections on their own.
The other side of the same coin is the sheer amount of medical research going on at any given time. The number of clinical trials, research papers, scholarly articles, etc. that are published in any given year is staggering. And most doctors are too busy practicing, ya know, medicine to sit down and read them in a thorough way.
That’s where A.I. comes in.
The right version of a machine learning platform could scour all the relevant publications and alert doctors to interesting or valuable advancements in their particular specialty, so the doctors could then synthesize that data in a useful way. Or, barring that, something like IBM’s Watson could connect to a clinic’s electronic medical record and simply alert the doctor to the best course of treatment in any given case, based on the data it pulls and analyzes from all the clinical trials and scholarly publications recently released.
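To make the literature-alert idea concrete, here is a minimal sketch of the filtering step, assuming a hypothetical feed of publications represented as dictionaries; the function name, the feed, and the keyword lists are all invented for illustration, not a real API.

```python
# Toy sketch of a "literature alert": filter a feed of new publications
# down to the ones relevant to a doctor's specialty via keyword matching.
# All names and data here are hypothetical.

def relevant_papers(papers, specialty_keywords):
    """Return the papers whose abstract mentions any specialty keyword."""
    keywords = [k.lower() for k in specialty_keywords]
    return [
        p for p in papers
        if any(k in p["abstract"].lower() for k in keywords)
    ]

# Hypothetical feed of newly published work
feed = [
    {"title": "Advances in stent design",
     "abstract": "A trial of coronary stents in older patients..."},
    {"title": "Gut flora and mood",
     "abstract": "Microbiome effects on depression symptoms..."},
]

cardiology_alerts = relevant_papers(feed, ["coronary", "stent", "arrhythmia"])
print([p["title"] for p in cardiology_alerts])  # → ['Advances in stent design']
```

A production system would obviously need something far smarter than keyword overlap (topic models, embeddings, ranking by trial quality), but the shape of the pipeline — feed in, specialty filter, alert out — is the same.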
Doctors have a unique relationship with failure. In most professions, if you mess up or fail, the worst that happens is that you lose your company some money. Or you may lose your job. But for many doctors, the price of failure is a human life (even if the doc didn’t make an overt mistake, not making the most correct decision could mean they didn’t give their patient the best possible chance to survive).
Because doctors are in the trenches, battling life and death with their bare hands, their inclination is often to trust their intuition or experience when they find themselves in tricky situations. You can give many doctors all the data you want, and they still might ignore it in favor of a statistically worse option because their conclusion about the right course of action coincides with their lived anecdotal experience.
That’s a hard obstacle to overcome in the abstract. Getting a doctor to read the study is hard enough, but getting that same doctor to heed its conclusions without testing it out for themselves? That’s a tall ask. But A.I.’s ability to really crunch data and present it in a useful and compelling way could very much change the oft-adversarial relationship many docs have with data.
A.I. won’t solve every healthcare problem on its own. For one, it does a really bad job (currently, anyway) of distinguishing between primary, secondary and tertiary symptoms. If you enter your symptoms into an online form with a neural network on the other end of it, the network has no reliable way of determining which symptom is the most important based on what you’ve told it. Many symptoms are noise obscuring the signal, and human doctors are generally quite good at filtering out that noise to determine which symptoms matter most to a given diagnosis.
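The failure mode described above can be sketched in a few lines: a naive matcher that scores every reported symptom equally lets common, low-information symptoms swamp the one symptom that actually matters. The conditions, symptoms, and weights below are invented for illustration and are not medical data.

```python
# Toy illustration of the primary-vs-secondary-symptom problem.
# Weights are made up: a higher number marks a more telling symptom.

conditions = {
    "cold":       {"cough": 1, "fatigue": 1, "headache": 1, "runny nose": 1},
    "meningitis": {"stiff neck": 5, "fever": 2},
}

def naive_score(reported, condition):
    # Pure overlap: every matching symptom counts the same.
    return sum(1 for s in reported if s in conditions[condition])

def weighted_score(reported, condition):
    # Weighting lets a primary symptom outrank the background noise.
    return sum(conditions[condition].get(s, 0) for s in reported)

reported = ["stiff neck", "fatigue", "headache", "cough"]

print(max(conditions, key=lambda c: naive_score(reported, c)))     # → cold
print(max(conditions, key=lambda c: weighted_score(reported, c)))  # → meningitis
```

The naive matcher picks the common cold because three mild symptoms outvote one alarming one; the weighted version surfaces the condition flagged by its primary symptom. Assigning those weights well is exactly the judgment human doctors supply and current systems lack.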
That’s why doctors working in concert with A.I. really does look to be the future. Each partner is strong where the other is weak, making for quite the formidable team.