Saturday, July 15, 2006

Do docs-to-be feel threatened by technology?

“Medicine is two things, the human thing and informatics.” Thus spoke my attending this past Thursday. During a powwow session at the VA with him, me, two interns (first-year residents), a sub-intern (a fourth-year med student), and our upper-level resident, the attending asked, “Has anyone here heard of Big Blue?”

While that's the nickname given to IBM itself, one of us correctly guessed he meant the IBM supercomputer that beat the world's best chess player, Garry Kasparov, in 1997. (That computer's name is actually Deep Blue.)

The attending then posed this question: Does computer “brainpower” already exceed human brainpower? The consensus was a qualified yes. Yes, the computer can process vast amounts of data and information faster than a person can. But "instinct" (as he called it, or "intuition" as I call it), built on hundreds if not thousands of instances of a particular experience, allows our brains in some cases to work faster than a computer chip.

Here's a hypothetical to explain that. A pulmonologist sees hundreds of COPD cases over his 30 years of practice and develops a diagnostic sense (or instinct, or intuition) for COPD, so that he no longer has to consider all the data made available from labs, images, history, and physical exam, fit the relevant pieces into an algorithm, and weigh all the possibilities before picking COPD from the differential diagnoses. He does it instantaneously. A computer, on the other hand, gets mired in crunching everything it's given before making the correct diagnosis.

But to me this notion doesn't hold up well. Advances in computing speed and artificial intelligence (which aims to mimic such shortcuts in human thinking) may soon render it obsolete.

The attending said there's another advantage we human physicians have over the future robots that would be competing for our jobs: a computer can’t give comfort to people, because it lacks what we have, that nebulous thing named “the human connection.” This argument stands on firmer ground. Even the anthropomorphized robots in films like “I, Robot” and “A.I.,” two visions of what robots will be like, gave people the creeps.

Let me now shift the focus to the reaction of my fellow doctors-to-be during this 15-minute chat. Basically, some felt threatened, and perhaps became more comfortable when they saw that technology doesn't necessarily have to replace the doctor. Let me explain.

One intern said it's entirely conceivable that at some point tubes could be plugged into a patient's body and all the labs drawn automatically, providing a quick diagnosis that considers a more thorough picture of the body than we're capable of. Think of the OBD reader a mechanic plugs into the port under your car's dashboard: within a few minutes it returns diagnostic codes that tell the mechanic whether anything is wrong, and if so, where.

So in such a world, is there a real need for human diagnosticians? Probably not, if this vision comes true. The upper-level resident asserted how bad this would be for people, but didn't say why. The sub-intern, considering a more realistic vision, said that if we doctors carry around tablet PCs that receive all the pertinent history and physical exam information electronically, perhaps after a nurse examines the patient and types in what she heard and saw, won't we just stand outside a patient’s room, download the info onto our tablets, and never take a single step into the room or exchange a single word with the patient?

Technology, he implied, as is often the case in other arenas, would disrupt the doctor-patient relationship. It would get in the way of the human connection. So I asked why we couldn't use technology, like simple diagnostic software, as a tool. If we did, we could become better doctors.

When put that way, the upper-level resident, seemingly the most upset by the mere notion that information technology could replicate the work of doctors, agreed. Technology, if used by humans to augment their abilities instead of replacing them, would be a good thing.

We had two lovely pharmacists with us, and their perspective was interesting. Apparently mechanized robots already dispense medications and have replaced pharmacists who worked as mere dispensers, not only at the VA but also at mail-order pharmacy companies and at some high-volume retail spots like CVS, which find advantages in automating the filling of thousands of prescriptions.

So what have pharmacists done to deal with this? “We’ve had to change our roles.”

“Y’all don’t just dispense medicines any more, right?” I asked.

“No actually, we counsel patients and help doctors get the right drugs to their patients.”

At this point I mentioned Dan Pink’s book A Whole New Mind to make the point that automation of low-end routine work frees people to do more high-end, less routine work, work that's more valuable. (If you want to read his thesis, here's his Wired essay "Revenge of the Right Brain" from early 2005.)

One form of routine work is diagnosing common ailments. In fact, Mr. Pink writes about software already in use that walks patients through decision trees: by answering a series of questions, they arrive at a preliminary diagnosis. "Health care consumers have begun to use such tools both to 'figure out their risk of serious diseases -- such as heart failure, coronary artery disease and some of the most common cancers -- [and] to make life-and-death treatment decisions once they are diagnosed,' reports the Wall Street Journal" (45).
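To make concrete what such decision-tree software does under the hood, here's a minimal sketch in Python. This isn't from Pink's book or any real product; every question, branch, and suggested next step is a made-up placeholder, not medical advice.

# A minimal sketch of decision-tree triage software of the sort Pink describes.
# Every question, branch, and suggestion here is a hypothetical placeholder,
# not medical advice.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A yes/no question; a node with no children is a leaf carrying a suggestion."""
    text: str
    yes: Optional["Node"] = None
    no: Optional["Node"] = None

    def is_leaf(self) -> bool:
        return self.yes is None and self.no is None

# Hypothetical chest-pain triage tree; every non-leaf node has both branches.
TREE = Node(
    "Are you having chest pain right now?",
    yes=Node(
        "Is the pain crushing, or spreading to your arm or jaw?",
        yes=Node("Preliminary result: possible cardiac event; seek emergency care."),
        no=Node("Preliminary result: see a physician today to evaluate the pain."),
    ),
    no=Node(
        "Have you had chest pain with exertion in the past week?",
        yes=Node("Preliminary result: schedule a visit to discuss possible angina."),
        no=Node("Preliminary result: no urgent action indicated; monitor symptoms."),
    ),
)

def run(node: Node) -> None:
    # Walk the tree by asking yes/no questions until a leaf is reached.
    while not node.is_leaf():
        answer = input(node.text + " (y/n) ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no
    print(node.text)

if __name__ == "__main__":
    run(TREE)

Running it asks the questions one at a time and prints whatever suggestion sits at the leaf it reaches, which is exactly the sort of low-end routine work that can be handed off.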

During our powwow I briefly mentioned the narrative medicine movement, which aims to train future physicians to see patients' histories as literary narratives that follow an arc, helping them better read between the lines and thus get more out of patient interviews.

I’m not yet sure I accept the premise that reading a patient history as a short story will make one a better doctor, but the movement certainly exists because some people passionately believe it will. Mr. Pink writes, "Several leaders in the medical field are urging that the profession shift its overarching approach from 'detached concern to empathy,'... The detached scientific model isn't inappropriate, they say. It's insufficient... Rules-based medicine builds on the accumulated evidence of hundreds, and sometimes thousands, of cases... But the truth is, computers could [do] some of this work. What they can't do -- remember, when it comes to human relations, computers are 'autistic' -- is to be empathetic" (168).

The idea struck a chord with my attending. He said, “So what you, Raj, and I guess Dan Pink actually, are trying to say is that technology, instead of getting in the way of the human relationship as [our sub-intern] believes, actually allows physicians to build on it.”

I think so. Again quoting Mr. Pink, "According to Newsweek, '72 percent of Americans say they would welcome a conversation with their physician about faith'" (223). But medical students don't learn how to do this, and, as importantly, there are so many biomedically relevant facts and numbers to get out of patients and their charts, and then to consider and play with and throw into algorithms, that there is almost no time left to talk about spiritual and other social matters, despite desires to do so and the medical acceptance that such matters do make a difference in recovery. It's likely that if doctors were trained to use new technology to help make proper diagnoses, they'd be able to spend more time getting to know their patients.

In the end, my attending was undecided (or perhaps kept his opinions to himself) as to how technology would influence medicine and the role of physicians.

But technology and its cultural effects are already changing medical education, as "the curriculum at American medical schools [is] undergoing its greatest change in a generation" (52). Mr. Pink documents the narrative medicine program at Columbia; how Yale med students take art classes to hone their observation skills; that "more than fifty medical schools across the United States have incorporated spirituality in their coursework"; how UCLA established a "Hospital Overnight Program" that has its students playact as patients going through the entire admission process to learn what patients experience; and a new measure of physician effectiveness centered on empathy developed by Jefferson Medical School.

While these changes in medical education aren't centered on learning new technologies per se (probably unnecessary for students who grew up playing on computers and surfing the web), it's definitely advances in technology that have allowed, nay pushed, the humanistic aspect of being a doctor into greater prominence, perhaps even eye-to-eye with the eternally desired aspect of being supremely knowledgeable.

And so, as with pharmacists and many other professionals and blue-collar workers, technology will force physicians to change the way they do things. Is technology a threat? Yes, it's a threat to the status quo.

But it's not a threat to physicians who are willing to change. Technology, used as a tool, will allow competent physicians to be faster, more accurate, and more empathetic. Better able to manage people's overall well-being, physicians will be as important in people's lives as they have always been. Technology will make good doctors better, not obsolete. After all, a computer may gain the upper hand in scientific competence compared to the human mind, but it can never replicate the compassion that pours forth from the human heart.

6 Comments:

At 7/24/2006, Blogger Bob G said...

I absolutely loved this book, Raj. Your post and reflection encourage me even more. When we reach the tipping point in the status quo, our culture will explode quickly. My hope is humankind will be all the better for it.

At 7/24/2006, Anonymous Anonymous said...

Great post Raj. I'm a medical oncologist and we're looking at various electronic medical record (EMR) products. As one of them says: "Spend time on exceptions, not the routine."

Lots of things I do in making sure patients are being treated according to plan can be done better by a computer than by me. I can't wait to be relieved of those duties.

One key thing physicians will be doing more of in the future (and the essence of medical oncology) is shared decision making. That is, there will be more-and-more "toss-up" decisions in which the physician's duty will be to help the patient make an informed decision in which that individual's preferences are critical: Do I take six months of chemotherapy to improve my chance of being alive 10 years from now by 3%? Do I take a test to see if my chance of getting prostate cancer is 30% or 5%? Should I go on a vegan diet?

Automation is just going to get more and more exciting. I envy you the 15 years you're likely to see when I'm retired.

-Ted

At 7/25/2006, Blogger Niraj "Raj" Patel said...

Thank you for writing! It is really exciting to be on the cusp of seeing how the insights in Dan Pink’s book (the 3 A’s – Asia, abundance, and especially automation) are transforming medicine.

Bob, I agree. And I don’t think it’s preposterous to say we’re living through a renaissance of human thought and achievement that will influence all disciplines and many aspects of life thanks to Internet technologies.

And Ted, I see what you mean. Physicians are so burdened with data and information, the paperwork involved in managing care, bureaucratic hassles and disagreements, and having to make decisions on patients’ behalf, that there's little energy and time left to help guide our patients. And that's a travesty.

Personally I see automation (when used to make diagnoses and manage patient care more efficiently) helping doctors do more complex work like fully managing a patient case.

Here’s what I mean. Physicians on every team I’ve been on are singularly concerned with the acute problem (i.e., the chief complaint) of each patient, for instance, angina (chest pain).

We look at the cardiac enzymes, make sure his oxygen saturation is adequate, and give him nitroglycerin, aspirin, and Lovenox for prophylaxis. But what about his glycemic instability? His increasing obesity? And oh yeah, how should we deal with his obstructive sleep apnea, since it’s making him more prone to heart attacks?

All these conditions are part of a patient’s case, yet we don’t worry about them in the modern hospital. With an excess of information, data, problems, bureaucracy, and patients, it’s difficult to fully manage a case. In this situation, once we rule out MI we give the patient some instructions and the proper meds, then send him home.

And because we failed to address and treat the other conditions, the patient is likely to be back in the hospital within months. It shouldn’t be this way, and I believe information technology will change things for the better.

How else do y’all or anyone else see information technologies changing medicine?

At 7/30/2006, Anonymous Anonymous said...

I see 2 uses of automation that seem to be addressed here: automation of the healthcare process and automation of the clinical decision-making process. Ted, I believe, was speaking of the former and Niraj of the latter.

I think we are all eagerly awaiting a well-implemented EMR to address the automation of the healthcare process. A few things come to mind that I would expect a good system to do:
1. implement an order entry system with preconditions and postconditions that must be met (e.g. document allergies, list other meds, check for contraindications, maybe even document a justification, make sure results get to the right people; a minimal sketch of such a check follows this list);
2. enforce institutional policy (e.g. physician note is signed within 24 hrs of admission, AM blood drawn by 6am, standard labs done by 8am);
3. when policy exceptions occur, escalate to management;
4. track resource utilization (e.g. patients/team, physician or nurse, days/patient, cost/patient).
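
Here's a minimal sketch of the kind of order-entry precondition check described in item 1, in Python. The patient fields, drug names, and interaction table are hypothetical placeholders, not a real EMR or formulary API.

# A minimal sketch of an order-entry precondition check (item 1 above).
# All fields, drug names, and rules are hypothetical placeholders.

from dataclasses import dataclass, field
from typing import Set, List

@dataclass
class Patient:
    name: str
    allergies_documented: bool = False
    allergies: Set[str] = field(default_factory=set)
    current_meds: Set[str] = field(default_factory=set)

# Hypothetical interaction table: ordering the key drug flags a problem
# if the patient is already taking any of the listed drugs.
INTERACTIONS = {
    "warfarin": {"aspirin"},
}

def place_order(patient: Patient, drug: str, justification: str) -> List[str]:
    """Return precondition violations; an empty list means the order may proceed."""
    problems = []
    if not patient.allergies_documented:
        problems.append("Allergies must be documented before ordering.")
    if drug in patient.allergies:
        problems.append("Patient has a documented allergy to " + drug + ".")
    for other in INTERACTIONS.get(drug, set()):
        if other in patient.current_meds:
            problems.append(drug + " may interact with " + other + ".")
    if not justification.strip():
        problems.append("A justification for this order is required.")
    return problems

if __name__ == "__main__":
    pt = Patient("J. Doe", allergies_documented=True,
                 allergies={"penicillin"}, current_meds={"aspirin"})
    for issue in place_order(pt, "warfarin", justification=""):
        print("BLOCKED:", issue)

The escalation in item 3 could hang off the same hook: any unresolved violation gets routed to a supervisor instead of being silently dropped.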

Automating the clinical decision-making process is significantly more difficult and seems to be the one that elicits fear from practitioners. I'm gonna collect my thoughts a bit and try to post on this later. Here are a few questions to ponder, assuming a program were available that could automate much of clinical practice.
1. What if a provider chooses not to use it and his/her decisions differ from the decisions of the computer system? Can he/she be held liable for extra costs or adverse outcomes that may have only occurred by chance? Will he/she be rewarded for outcomes that exceed those of the automated system?
2. If the program makes a mistake, who will be held responsible? The makers of the program, or the overseers who have been lulled into complacency because the program does all the "thinking"?

-Bala

At 7/30/2006, Anonymous Anonymous said...

Doh! I forgot:
5. billing.
under the EMR paragraph.

-Bala

At 8/07/2006, Blogger Gordon said...

Clear thinking in a muddled profession. Admirable!
