Artificial Intelligence, or AI, has become the official buzzword of the healthcare industry in recent years, inspiring billions of dollars in investment. When the lives and well-being of patients hang in the balance, do the benefits of using AI outweigh the risks?

According to Google CEO Sundar Pichai, “AI is probably the most important thing humanity has ever worked on.”

In light of this, we may find ourselves asking, “Is healthcare ready for AI?”

Rushing headlong into a technology whose results we have yet to fully understand could spell disaster, and we have already seen signs of this danger in the recent shortcomings of IBM’s Watson – an AI meant to assist clinicians by recommending therapeutic options for cancer patients.

Where IBM’s Watson Went Wrong

IBM has invested nearly $5 billion in medical data acquisition efforts for Watson, but as Daniel Hernandez and Ted Greenwald reported in a recent Wall Street Journal (WSJ) article, it has changed the course of treatment in only 2-10% of cases, and often offers recommendations no different from what the oncologists themselves had initially recommended. Much more alarming is that Watson was reported to offer inaccurate and even unsafe recommendations in some instances (cited above and in another article by Casey Ross and Ike Swetlitz of STAT News).

When we consider these results in light of the extensive and optimistic marketing campaign that IBM has employed for Watson, it seems as if Watson has at least fallen short of what was promised, if not done more harm than good.

However, as Dr. Michael Kelley, a VA oncologist, stated in his interview with the WSJ, Watson is useful “at finding relevant medical articles, saving time and sometimes surfacing information doctors aren’t aware of”. But AI is not at all necessary to serve that end, and $5 billion is a steep price to pay for what could be produced through simpler and less costly approaches.

The most meaningful mistake IBM made appears not to be an irredeemable failure of the Watson algorithm, nor even the high price tag, but rather its premature claims about Watson’s revolutionary abilities. An article by Catherine Stinson for the Globe and Mail appeals to the philosophical side of this dilemma, asserting that the current “build-it-first, fix-it-later ethos isn’t cutting it”. This is especially true when the predictions and recommendations of an AI directly affect something as sensitive as patient care.

Healthcare’s Responsibility in Using AI

The use of AI in healthcare demands that we understand the consequences of the technology we build, and that we recognize that an AI, just like any person, can be misled. After all, it can only make assertions based upon the data we provide it. The example of Microsoft’s chatbot Tay, which began spewing neo-Nazi content as a result of its interactions with users, attests to the importance of the source and diversity of such initial training datasets.

The need for properly organized and diverse datasets, paired with a more attainable goal, takes on greater importance within the healthcare field. Decisions such as IBM’s choice to train Watson on a small number of synthetic patient cases rather than a larger number of real cases can have significant consequences for the patients whose care may be affected by potentially dangerous inaccuracies. In light of this, the answer to the original question – Is healthcare ready for AI? – may seem to be absolutely not. I think the question may be better put –

Is AI ready for healthcare?

There have been some missteps and spectacular failures in using AI in the healthcare sector, but there is hope! I believe the mistake has not been the use of AI itself, but moving too fast and getting ahead of what is possible at this stage of its development.

When It Comes to AI, Think SMALL


Given the scale and scope of the AI challenge, our goal of improving genetic sequence interpretation in the era of genomic medicine may seem to be overly ambitious and potentially perilous…but that’s where it gets interesting.

Our team at Genomenon is employing computational intelligence and machine learning techniques to solve the problem of organizing unstructured genetic information in the medical literature with the Mastermind database and Genomic Search Engine. These AI programs bring crucial structure to previously hard-to-find information on diseases, genes, and variants, allowing searchers to find what they are looking for five times faster (based on user feedback).
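
As a deliberately simplified illustration of what “bringing structure to unstructured literature” can mean, consider the hypothetical sketch below, which indexes gene and variant mentions in abstract text back to their source articles. This is not Mastermind’s actual pipeline; the patterns, function, and PMID are illustrative assumptions only.

```python
import re
from collections import defaultdict

# Hypothetical illustration only: a real literature-mining pipeline uses
# trained models and extensive disambiguation. These naive patterns
# over-match in general text.

# HUGO-style gene symbols, e.g. "BRAF" (lookbehind avoids matching the
# "V600E" portion of an HGVS variant as a gene).
GENE_PATTERN = re.compile(r"\b(?<!\.)[A-Z][A-Z0-9]{1,5}\b")
# HGVS-style protein variants, e.g. "p.V600E" or "p.Val600Glu".
VARIANT_PATTERN = re.compile(r"p\.[A-Z][a-z]{0,2}\d+[A-Z][a-z]{0,2}\b")

def index_abstract(pmid, text, index):
    """Record which PMIDs mention each gene or variant."""
    for gene in GENE_PATTERN.findall(text):
        index[("gene", gene)].add(pmid)
    for variant in VARIANT_PATTERN.findall(text):
        index[("variant", variant)].add(pmid)

index = defaultdict(set)
index_abstract(
    "12345678",  # fabricated PMID for illustration
    "The BRAF p.V600E variant is a common driver in melanoma.",
    index,
)
print(index[("gene", "BRAF")])        # {'12345678'}
print(index[("variant", "p.V600E")])  # {'12345678'}
```

However crude the pattern matching here, the underlying idea is the same: map every mention of a gene or variant back to the articles that discuss it, so that structure can be searched rather than read for.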

Discover the powerful AI of the Mastermind Genomic Search Engine. Sign up for free.

In contrast to typical approaches to using AI in healthcare, we always seek to use AI to improve our solutions rather than to have the AI become the solution itself.

Our AI attack strategy at Genomenon invokes a useful mnemonic – think SMALL.

S – Specific
M – Measurable
A – Accessible
L – Limited
L – Likely

AI should be Specific. AI projects should not focus on vague and wildly ambitious concepts such as ‘curing cancer’. Instead, we focus on a specific outcome for a specific type or subtype of cancer, predicated on specific and defined data inputs.
AI should be Measurable. The output or results of an AI project (and therefore its success or failure) should be measurable, as shown in the sketch after this list. A claim that AI will “save 600,000 lives” cannot be properly validated, and thus is in no way a measurable effect.
AI should be Accessible. The output from AI should be readily interpretable by the clinicians/researchers who utilize it. If you do not know how or why the AI algorithm makes the decisions it does, then placing your confidence and your patient’s well-being in the AI may prove perilous.
AI should be Limited. AI algorithms should not be so “greedy” that they require a vast trove of patient data to be effective. Such an endeavor is likely to come with a $5 billion price tag, and if that level of comprehensiveness is required, the algorithm is unlikely to be effective to begin with.
AI should be Likely to produce a useful outcome. AI that outputs the same recommendations as the clinician who is asking the question is of little value. But of equally limited value would be an AI that outputs a recommendation that the clinician cannot understand or independently verify for herself.
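
As promised above, here is a minimal, hypothetical sketch of what a measurable output can look like in practice: scoring a search result against a small curated benchmark rather than claiming unverifiable outcomes. The PMIDs and function are illustrative assumptions, not a description of Mastermind’s internal evaluation.

```python
# Hypothetical sketch: measure retrieval quality against a curated gold
# standard. The PMIDs below are fabricated for illustration.

def precision_recall(retrieved, relevant):
    """Standard set-based precision and recall."""
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {"111", "222", "333", "444"}  # PMIDs a search returned
relevant = {"222", "333", "555"}          # curator's gold standard
p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```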

As big as our goals are here at Genomenon, we are constantly thinking SMALL as we approach the challenge of extracting and interpreting genetic information from the medical literature, a task that previously required significant manual effort. These specific, grounded principles offer a much higher likelihood of success than the contrasting approach of ‘gathering all the data in the world to cure cancer’.

As we continue to develop our Mastermind AI technology to approach new problems such as identifying and quantifying gene-disease relationships, our approach will remain SMALL no matter how large our ambitions.

What are your thoughts on using AI in healthcare? Please share in the comments.
