Monday, December 23, 2013
It is really hard to see a randomized controlled trial conclusion that does not support your previously held view of medical practice or education.* Yet it's vitally important to pay close attention when this happens, and try to deeply understand what's going on so you can change your practice, if necessary. So I'm here to pay close attention (and be rather long-winded in the process).
But the JAMA RCT took it a step further, attempting to determine whether the educational intervention would positively impact patient-centered outcomes.
In short: No benefits found in this study. No increase in patient, family, or other clinician rating of quality of communication surrounding end of life care. No improvement in mental health outcomes. In fact, patients exposed to the intervention group's trainees experienced a small but statistically significant increase in depressive symptoms. Let's take a closer look at the purpose and methods of the study.
What's the purpose of the educational intervention?
The educational intervention is designed to give trainees the skills necessary to conduct pivotal conversations: to ensure that patients and their families understand where things stand medically (and where things might be going), to empathize and provide support when the news is serious, and to align treatments with patient values and goals. While I'm not familiar with the specific simulations used in the intervention, from other experiences with this method, the simulations usually feature patients who are at a pivotal point with their health. A new or relapsed serious illness. Worsening of a condition, indicating that treatments aren't working very well. A transition to the actively dying phase.
Was patient selection appropriate for this study?
All of the patients included in the study had serious illnesses (25% outpatients and the rest inpatients), and one might anticipate that each patient would need to have conversations about serious news, goals of care, and end of life care at some point along their disease trajectory. But at every visit? Likely not. Some of the skills taught could be applied at any visit, but would the full benefit of these skills be felt if the patient was stable without troubling new issues? Could patient stability reduce the importance of applying these skills at many of the visits?
So, how many of these patients were actually at a pivotal point in their disease trajectory during the study encounters? We don't know, exactly. Here's what we know:
- Patients in hospice care were significantly less likely to return surveys (only 26% did). We don't know how many patients in each study group were in hospice or on the verge of hospice referral, which represents one of many possible pivots in a patient's health and care.
- When there was documented communication about end of life issues, patients were significantly less likely to return surveys (34% estimated return rate).
- Families of the 16% of patients who died during a hospitalization were dramatically and significantly less likely to return the survey (a measly 29% when the patient died vs. 78% for families when the patient survived hospitalization).
- Among respondent patients who rated their health status as "poor", quality of communication scores were higher in the intervention group.
We have no idea what the subject of the conversation was during each visit. Perhaps the control group had sufficient communication skills to navigate routine patient visits and achieve scores comparable to the intervention group. Reviewing the QOC tool, many of the skills on which patients/families were asked to evaluate trainees are basic skills (making eye contact, etc). Maybe there were enough routine visits to drown out the pivotal conversation visits in the study. There's no way to know.
Confounders for the measurement of trainee skill
Another variable which was not reported was the timing of when patient/family/clinician evaluators filled out their surveys. It's entirely possible that by the time the evaluations were filled out, patients and families might not even have remembered the face of the trainee, and as time elapses from the visit, recall bias only grows. Think of the number of factors which could impact the rating: the plethora of other clinicians the patient could have seen during the same hospitalization or in the interim. One inclusion criterion for the study was palliative care consultation, but it's not known how many of the patients received this. You could imagine that being a confounder for patient satisfaction! Also, consider the conflicting messages patients and families receive, and the illness experience itself, as other confounders.
The depression result
First off, it's not clinically significant by the pre-defined minimal clinically significant difference of a 5-point change on the PHQ-8. Granted, it was statistically significant, with a covariate-adjusted 2.2-point increase between the groups.
What to make of this clinically insignificant increase in depressive symptoms? We know sadness comes with some of the messages we have to give to patients and their families. But none of us want to make someone more depressed, if we can help it.
The PHQ-8 was measured at a single time point. We don't know when that was, or whether timing varied for some reason between the intervention and control groups. But let's assume measurements occurred at exactly the same time in each group.
We have empirical evidence to suggest that the elements of grief have "average" trajectories, with depressive symptoms being an expected element of that grief. Assuming that trainees in the intervention group were having more end of life discussions (a leap, because we don't know for sure), is it possible that the "arc" of the anticipatory grief experience was modulated somehow? More depression earlier on, with greater acceptance to come when the control group was hitting the peak of their depression?
This is merely a hypothesis. But I think that while we ponder the possibility of harm, we need to consider alternative hypotheses as well.
How might this mesh with another highly regarded communication intervention trial?
Previously, Lautrette (and Curtis) et al. demonstrated the effectiveness of a communication strategy at reducing depression, anxiety, and PTSD symptoms in bereaved family members three months after the loss of a loved one in an intensive care unit. One of the core interventions was a structured family meeting that involved clinicians using the VALUE mnemonic. While this particular mnemonic was not central to the educational intervention in the JAMA study, the intervention helps trainees gain the skills necessary to accomplish the goals central to the mnemonic. Why the difference in outcomes, then? I hypothesize that it comes back to patient selection. All of the patients in Lautrette were clearly at a pivotal point in their care when the intervention was applied.
Anecdotes remain powerful
A critical care fellow sends an email noting that after an impromptu late night family meeting, the nurse commented on how smoothly the meeting went and the resident told her he hoped he could communicate like that one day. Another fellow sends a text with a report that in a difficult conversation, he made a statement admiring the care the family gave to the patient, and the emotionally charged family calmed down, becoming more able to focus on the difficult conversation at hand. You observe one of your fellows find a new way to align herself with a patient asking for treatments that won't be beneficial, and she remarks afterwards about how she wasn't doing that just a few weeks before, and how useful she has recently found the new technique.
All of the above fellows participated in a communication course conducted using the Vital Talk teaching methods studied in the JAMA RCT. Multiple studies (including the recent JPM study linked with this RCT) have now demonstrated change in behavior from this intervention. It's possible that the change is not long lasting. If we were to set aside some of the methodological issues of the JAMA RCT and believe the conclusions (no improvement in patient outcomes), it's possible that the trainees didn't have adequate reinforcement to change their behavior permanently. It's also possible that some of the anecdotal success comes from viewing this as an iterative process with high level learners who are a different audience than the intervention group of the study. For more on that and further commentary, here's Drew...
Like Lyle, I have been trained in and use the techniques used by the Oncotalk/Vitaltalk crowd, and, via first-hand experience, strongly believe they are effective in improving the quality of communication. I have watched communication skill acquisition happen rapidly, in a way that is nearly giddying for an educator to see. In my own fellows, I perceive that these improvements have a lasting effect on the level and quality of their communication skills. So, for me, it was disheartening to see such a well-designed study of these techniques be so 'negative.' Admittedly, my first-hand experience with this has been so positive that I find it nearly impossible to believe that the techniques aren't effective.
That said, as I have thought about this investigation, one of my conclusions is that perhaps the reason I perceive the techniques to be so effective is that I have been using them, myself, to train palliative care fellows (not medical interns). My fellows 1) are physicians who are very, very motivated towards excellence in physician-patient/family communication, and 2) after the Vitaltalk-style workshop they continue to have close supervision and feedback from me and my faculty on their communication. Effective, empathetic, patient-centered communication is a huge, explicit part of their curriculum, as opposed to ineffective, aloof, and pathology-centered communication being a huge part of the "hidden curriculum" as it is for too many other medical trainees. I think it's possible that those phenomena may, in part, explain the disconnect between the apparent ineffectiveness seen in the Curtis trial and my own experience with these training methods.
I'll also note that I don't necessarily believe patients are the best judges of physician communication quality. Certainly, patient satisfaction with physician communication cannot be the sole measure of quality (note: I'm not suggesting that the Curtis trial judged the quality of communication uni-dimensionally). I am reminded of the CanCORS study, which showed that patients with metastatic lung or colon cancer who did not understand that their chemotherapy was not going to cure them were more satisfied with their oncologists' communication than patients who were better informed. Sobering stuff.
While the results of this trial are disappointing, the negative results may reflect how challenging it is to study patient centered outcomes of educational interventions. This study also gives us an opportunity to reflect on how to define the patients who benefit most from communication training interventions, when the benefit is most likely to occur, and which trainees/practitioners might be in the best position to receive the intervention. No doubt, there are other interventions which may improve clinician communication in pivotal conversations, and we should also reflect on ways to improve the studied intervention. As a palliative care community, we should be interested in a multi-faceted approach that involves changing both practitioner behaviors and early patient/family preparation for pivotal moments near the end of life. Evidence supports the notion that we can help trainees grow their communication skills. Fortunately, we don't need FDA approval to disseminate medical education interventions which help trainees develop skills that most would agree are valuable. This intervention remains an integral part of the equation.
For other perspectives on this study, see the JAMA editorial by Chi and Verghese, Vital Talk's commentary, and Geripal's commentary.
*Disclosure: Lyle currently receives funding from an IU Health Values Grant to use these teaching methods with various fellowships at IU School of Medicine. All opinions expressed in this post are solely those of the authors.