I wasn’t sure right away what didn’t feel right, but upon reflection I realize that part of my viscerally negative reaction at the Health 2.0 expo today was the number of vendors that talked down to me the minute they realized I was a patient advocate rather than a doctor or tech representative. I guess the good thing that can be said is that they didn’t talk down to me as a woman; rather, they waited until they found out I was a patient advocate.
Then there were other vendors developing applications that doctors would “prescribe” to patients in order to increase “compliance”. My challenge: when I questioned them, it turned out they were developing their solutions based upon what doctors thought patients needed to know. They were addressing the reasons doctors think patients aren’t complying with directions. They never thought to get together a focus group of patients and ask them. Maybe they are afraid that they will find out they are solving the wrong problem; that the hospitals and insurers won’t pay for a tool that patients actually need, but rather will pay for tools that doctors think patients need (that is the pessimist in me speaking).
To be fair, at one demo the folks at the booth (the CEO and CTO, I think) were quite receptive to my questions and suggestions. Now, in writing this post, I think I might be conflating more than one booth – regardless, my point holds.
The tool allowed doctors to prescribe the avatar for certain chronic medical conditions, like diabetes or heart disease (they didn’t have cancer yet). The patient could then interact with the avatar, asking medical questions and getting medical answers. Further, the doctor is informed of the questions the patient asks. It could also be used to allow the doctor to get information from the patient, such as blood pressure (assuming home monitoring via a Bluetooth device). My first reaction was that doctors already get more information than they can deal with – adding a way to give them more doesn’t sound like it is solving a problem. But I did realize a problem that I think their avatar could help with.
I think the chemo situation is an interesting one. We are told right away to “tell our care team” and “don’t needlessly suffer”, but then when we do tell our care team they appear to completely ignore what we are telling them. At first you report everything. Then after a while you stop. You learn that your doctor isn’t going to do anything about it, or that there is nothing they can do about it, so you stop telling them. In psychology terms this is called “learned helplessness”. It can be really dangerous, especially for chemo patients, because some of the side effects are life threatening. I saw their tool and thought: if it had a way to tell the patient “I hear you” in a believable way, it might help with the learned helplessness. Sometimes all the patient needs is validation. Personally, I think learned helplessness is a big problem that often gets confused with a lack of compliance.
The conversations highlighted to me that in some cases the tech companies are trying to solve what they perceive to be a patient problem by asking what doctors think patients need, rather than asking patients what they need. They seem to miss that if the end user of their tool is a patient community, then perhaps the patient community should be consulted as the tool is being designed. I don’t mean after-the-fact usability testing. I mean asking patients during the early design / concept phases whether they would actually use a tool, whether the tool would help solve the problem, and what the patients see as the problem rather than what the medical team sees as the problem.
Tech can solve many real problems, but too often tech is thrown at a problem as if it is the solution without really analyzing what the problem actually is.
Ya, that and don’t talk down to me.