
Point-Counterpoint, Part 1: How to Overcome the Complexities of Treatment Decisions for Your Patients with Type 2 Diabetes

Feb 10, 2018
 

In the last four issues, we discussed new software that can sift through more than 6 million possible treatment regimens, built from anywhere from 1 to 5 drugs, to find an effective treatment for type 2 diabetes. Along with determining the best treatment for the patient, it can also take into account what the patient can afford in their budget.

Dr. John Interlandi, one of our readers, has submitted a counterpoint to using this software to decide on the best treatment options. In Part 1, Dr. Interlandi shares his concerns about the use of AI in medical decision-making.

Counterpoint: View from the Diabetes Trenches: Slow It Down — Part 1

By Dr. John Interlandi

I am circumspect about claims that treatment choices for patients with diabetes should be made by AI. We have recently seen it claimed that AI (and, soon to follow, Big Data) is more effective, is cheaper, and can reduce the decision-making burden on providers. I would like to offer some counterarguments to the recent viewpoints published in Diabetes in Control. Firstly, helping patients decide on treatment choices is not a burden; it is what endocrinologists do. The real burden is the dysfunctional health care system we work and live in.

The Markov decision analysis paper by Bennett and Hauser, reviewed by Steve Freed, indicates that twice the treatment effectiveness can be obtained at half the cost by using AI. Regarding effectiveness, their study fails due to methodological weaknesses. Their simulation of Markov decision-making was compared to actual treatment decisions made on 500 patients by one physician. I salute their effort in using actual clinical decisions made on real patients as a control group, but most clinicians would say this analysis suffers from the same weakness seen in any study missing a control group, the difference here being that the treatment group is not real; it is just a simulation. Also, Markov decision-making achieves its effectiveness by accelerating steps in treatment according to certain mathematical assumptions. In real life, patients with diabetes cannot speed up the pace of their efforts when they are trying to hold down a job, raise a family, and just survive diabetes, so compressing the time taken up by implementing various treatment choices is not possible in the 3-D world. And Markov is still trial and error; it does not bypass any of the hiccups in treatment we all experience.

Also, the EMR database used to represent real patient data was not selected from an actual diabetes population, but from a multicenter database of patients in a psychiatric hospital system. We have no idea how many of the patients were actually diabetic, or if they were even seen for diabetes treatment, so any claims for effectiveness on diabetes are exaggerated. And ONE physician? This is not a fair comparison.

I also have my doubts about the claim that it is cheaper. The Markov decision-analysis paper is internally inconsistent on that point. Although in their conclusion and abstract they claim that AI would lower costs, they admit that the data points required for the Markov analysis still have to be acquired by a provider for the AI program to work. Practically speaking, this means a provider has to be present. Can a computer palpate an insulin injection site in a patient, thereby determining whether insulin will be effective, or look at a skin rash to see if it really is a drug allergy, or evaluate the risk of foot ulceration by checking for inflammation, heat, or monofilament response? Doubtful. So we are back to using clinicians.

In the paper they indicate that “…combining AI with human clinicians may serve as the most effective long term path.” I do not agree, since that changes the math in their conclusions. Using a clinician alone for an episode of care costs $497 in their scenario, whereas using a clinician plus AI would cost $497 + $189 = $686. One could argue that having 30-35% more CPU (their acronym for bang for your buck) is worth the extra cost, but that extra bang actually costs 38% more using their own math. This estimate does not include implementation costs, training costs, or software upgrades for providers who would use such a system.
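For readers who want to check the cost arithmetic above, here is a minimal sketch using only the figures quoted from the Bennett and Hauser scenario ($497 per episode of care for a clinician alone, $189 for the AI component, and the reported 30-35% effectiveness gain); the variable names are mine, not the paper's:

```python
# Cost figures quoted from the Bennett and Hauser scenario discussed above.
clinician_only = 497.0   # cost of an episode of care, clinician alone ($)
ai_component = 189.0     # additional cost of the AI component ($)

combined = clinician_only + ai_component
print(combined)  # 686.0

# Relative cost increase of clinician + AI over clinician alone.
increase = ai_component / clinician_only
print(round(increase * 100, 1))  # 38.0 (percent)

# If the combination yields 30-35% more effectiveness, the cost per
# unit of effectiveness still rises, by roughly 2-6%:
for gain in (0.30, 0.35):
    per_unit_change = combined / (1 + gain) / clinician_only - 1
    print(round(per_unit_change * 100, 1))  # 6.2 then 2.2 (percent)
```

Even normalized per unit of added effectiveness, the combined approach comes out slightly more expensive than the clinician alone, consistent with the point above that the extra bang carries an extra cost.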

Finally, the AI system offers no replacement for negotiating with patients, which is always part of the formula.

I welcome replies from the authors.

 

In Part 2, Dr. Interlandi will provide further comments on our four-part series, along with a response from series author Bradley Eilerman, MD, MHI.

 


John Interlandi, MD, is a practicing endocrinologist in Hermitage, TN. Dr. Interlandi graduated from the University of Oklahoma College of Medicine in 1976 and has been in practice for 42 years. He completed a residency at the Medical College of Wisconsin and also specializes in internal medicine. He is affiliated with TriStar Summit Medical Center, Saint Thomas Midtown Hospital, and Sumner Regional Medical Center, and is board certified in Endocrinology, Diabetes and Metabolism.