Wednesday, March 31, 2021

Advance Care Planning? Meh. - Part 2

by Drew Rosielle (@drosielle)

This is Part 2/2 of a couple of posts about advance care planning (ACP). The last post outlined why there are really good reasons to believe that ACP -- completion of health care directives (HCDs), plus the healthcare conversations that occur around HCD completion, implemented on a broad scale -- does not lead to better, patient-centered outcomes, particularly when evaluated as a health intervention to be applied across a population (which is how ACP is typically conceptualized and researched).

In the prior post, I perhaps obnoxiously promised that one of the most important ACP research studies ever done had just been published, although I think it's important for reasons that weren't necessarily intended by its investigators.

The study is in JAMA Internal Medicine and was published a couple months ago.

If you recall from my first post on ACP, one of the challenges with ACP research has been confounding in observational studies. Take an observational study showing that people with HCDs are more likely to have a longer-than-average hospice length of stay -- that may be true, but we don't know why it's true. It could be true because completion of the HCD causally led to the longer hospice LOS. It could also be true because patients who are inclined towards HCD completion are also inclined towards wanting less 'invasive' EOL care; both of those measurable phenomena stem from underlying personal/social characteristics which presumably emphasize planning, acceptance, surrender, and specific notions of dignity at the EOL which center preferences around being at home (etc) and not being on machines (etc), 'quality over quantity,' what have you.

One of the tricky things is that this sort of confounding may even creep its way into ACP randomized trials, because many patients who are skeptical of ACP, or otherwise disinclined to engage in ACP-like activities, won't even agree to participate in an ACP trial in the first place. Ie, the current ACP research is enriched with subjects who are already sort of ACP-accepting -- because why else would you put yourself into such a study? This seems like a legit problem, and one could even wonder whether one of the reasons the prior ACP RCT research showed so little in the way of positive patient-centered outcomes is that such research only investigates pro-ACP patients, who as a group are disinclined towards 'invasive' EOL treatments at baseline, so it's tough to show any 'benefit' between the group who got an active ACP intervention and those who didn't.

The JAMA IM study utilized a design to try to mitigate that sort of confounding: they prerandomized subjects before obtaining consent, but still compared the 2 groups on the basis of initial randomization, not whether the patients actually consented and received the intervention (this, I learned, is called a Zelen design). This is tricky, so I'll unpack the Zelen design a bit. The idea is that you want to study the effects of a real-world ACP intervention -- say, the impact of implementing an ACP program aimed at high-risk patients in a healthcare system. If you approach patients about the study to get consent, a whole bunch will decline to participate, for all sorts of reasons, including because some are actively disinclined to do ACP. So those folks never enter the study, you only study 'pro-ACP' people, and you can't really understand how your intervention would affect the entire at-risk population. With prerandomization, you randomize half the at-risk population to the active group initially, then approach the active group subjects for consent. Within the active treatment group some will consent to receive the ACP intervention and some won't, but when you analyze the data you compare the entire active treatment group (both those who consented to the intervention and those who declined it) as a whole vs those randomized to routine care, because that mimics the actual effects of your intervention in 'real life.' All this is ethical because the data essentially come from administrative health care records, and everyone just gets routine care unless they actively agree to the ACP intervention.
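For the statistically inclined, the Zelen analysis rule -- group patients by initial randomization, not by whether they consented -- can be sketched in a few lines of purely illustrative Python. The patient counts and the 50% consent rate below are made up, not taken from the study:

```python
import random

random.seed(1)

patients = list(range(1000))  # a hypothetical at-risk population

# Step 1 (Zelen design): prerandomize BEFORE approaching anyone for consent.
random.shuffle(patients)
intervention_arm = set(patients[:500])
control_arm = set(patients[500:])

# Step 2: only the intervention arm is approached; some consent, some decline
# (a toy 50/50 split here).
consented = {p for p in intervention_arm if random.random() < 0.5}

def analysis_group(patient_id):
    """Analyze by initial randomization, NOT by whether the patient
    actually consented to / received the ACP intervention."""
    return "intervention" if patient_id in intervention_arm else "control"

# A consenter and a decliner from the same arm are analyzed together,
# which is what makes the comparison mimic real-world implementation:
decliner = next(iter(intervention_arm - consented))
consenter = next(iter(consented))
assert analysis_group(decliner) == analysis_group(consenter) == "intervention"
```

This is just the intention-to-treat principle applied to a trial where consent happens after randomization.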

The study itself investigated community-dwelling older patients in a 5-county area of North Carolina, and looked only at patients with significant comorbidity/frailty. The ACP intervention was a Respecting Choices ACP program involving a telephone-based, nurse-led ACP education visit, followed by the nurse accompanying the patient and a caregiver to a primary care visit where the PCP completed an ACP discussion with prompts to discuss prognosis, disease understanding, unacceptable states at the EOL, and patients' preferred treatment limitations (eg, whether they wanted a DNR order, etc). Providers had an EHR template/platform to document all this. Subjects in the control group received usual care; their providers had access to the EHR ACP platform, but of course no specific actions were taken encouraging/enabling them to use it.

This is a really, really strong study design compared to some ACP trials. I read the methods section and was like, 'Damn, they did this one right' -- in particular because it paired a nurse-led education intervention with a staged prognosis/'goals' discussion with a provider.

The primary outcome was new documentation of ACP discussions in the EHR after randomization. They measured a variety of secondary outcomes too, including healthcare utilization.

They randomized 765 patients. Of the 379 randomized to the intervention group, about 90 couldn't be approached for consent (eg, they couldn't be located). Of the 294 who were approached, 146 consented to the intervention and 139 completed it. Patients had a mean age of 78 years; 60% were women, 17% were Black, and 80% were white.

They crushed their primary outcome: 42% of patients randomized to the intervention group had ACP documentation in the chart vs 4% in the usual care group. They also collected some data on patient ratings of the quality of ACP communication, and patients generally rated it very highly.

Nothing else, though, seemed to differ between the groups. About 10% of the entire studied population died during the study (median follow-up of 304 days), and there were no differences in the care they received (as measured in the study: ED visits, hospitalizations, etc).

(In the Part 1 post I wondered about studying ACP as a suffering-reducing intervention; those sorts of outcomes weren't investigated in this study.)

So, top-level summary: this was a really well-designed, well-done RCT of an ACP intervention which showed it led to more completion of ACP but did not impact any other measured outcome about the care patients actually received.

The question remains for all of us, though: so what? If you think ACP is intrinsically a patient good, then this is a great outcome. But as I discussed before, there is no clear evidence from the well-done ACP trials that completing ACP changes things our patients actually care about: reducing suffering, reducing unwanted/burdensome healthcare interventions at the EOL. This current study is another top-notch, well-done study of a comprehensive ACP intervention which could demonstrate no change in these patients' lives apart from having more ACP documented. Yes, they did not measure every patient-centered outcome they could have, but that is not an argument that we should be treating ACP like some standard of care. Instead, at best, it should be an investigational intervention, not ready for widespread implementation. If ACP were some sort of drug or medical device, and you did a study aimed at getting more people the device, and showed that your well-designed intervention successfully led to more patients getting the device, but nothing else measured about the patients' well-being changed, we'd all be like, 'Ok, fine study, but why should I do this in the absence of patient benefit?' Same with ACP.

So why do I think this study was one of the most important ones in recent memory? Fundamentally, because it's the best evidence we have yet that having done ACP is a good proxy for patients wanting 'less invasive' care at the EOL, but doesn't in itself change anything.

This is because of their Zelen design and an incredibly damning (to ACP) exploratory analysis they mention, which compared patients within the randomized-to-being-offered-the-intervention group who received the intervention to those who didn't. They found that those patients in the active group who agreed to go through with the ACP intervention had fewer hospitalizations and ED visits than those who declined it. Ie, in the randomized population about half the subjects did not consent to the ACP intervention, and yes, those subjects' EOL care looked different from that of those who consented. This makes you think maybe the ACP intervention did something. But you need to remember that there was an entire other control group! The people who refused the intervention were not the actual control group! That is, when you compare the entire randomized group to the entire control group (which itself presumably contained the same mix of ACP-interested and ACP-uninterested patients as the randomized group), there were no differences in overall outcomes. Ie, looking across the entire population, the ACP intervention made no difference, presumably because the control group contained just as many ACP-interested patients as the randomized group. I can't think of any better available evidence that ACP interventions don't change patient outcomes than this.
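To see why the within-arm comparison is so misleading, here's a toy Python simulation (all numbers invented, not from the study). In this model, 'ACP inclination' drives both consent and lower hospital use, while the intervention itself does nothing: consenters still look dramatically better than decliners, but the intention-to-treat comparison correctly comes out null.

```python
import random

random.seed(0)

def rate(outcomes):
    """Fraction of a group who were hospitalized."""
    return sum(outcomes) / len(outcomes)

def simulate(n=200_000):
    # Toy model (all numbers are made-up assumptions): half of patients are
    # 'ACP-inclined'; inclined patients always consent AND are hospitalized
    # less (20% vs 40%). The intervention has ZERO effect on outcomes.
    arm = {"intervention": [], "control": []}
    within = {"consented": [], "declined": []}
    for i in range(n):
        inclined = random.random() < 0.5
        hospitalized = random.random() < (0.2 if inclined else 0.4)
        group = "intervention" if i % 2 == 0 else "control"
        arm[group].append(hospitalized)
        if group == "intervention":
            within["consented" if inclined else "declined"].append(hospitalized)
    return ({k: rate(v) for k, v in arm.items()},
            {k: rate(v) for k, v in within.items()})

itt, within = simulate()
# The within-arm comparison looks like a big 'effect' of ACP...
print(within)   # consenters ~0.20 hospitalized vs decliners ~0.40
# ...but the honest intention-to-treat comparison is null.
print(itt)      # both arms ~0.30
```

The point of the sketch: selection into the intervention, not the intervention, produces the apparent difference -- which is exactly why the real control group matters.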

All we're doing is capturing a group of patients who weren't going to allow themselves to die in the hospital regardless.

No one is saying that ACP activities don't benefit some patients. I've met some patients myself who have benefitted. But millions of dollars are spent each year getting people to fill out these forms, and the opportunity cost of that is gigantic. Imagine if, 25 years ago, when ACP was really kicking off, people had done high-quality research which came up short, abandoned ACP, and instead spent all the resources we've devoted to it over the last generation investigating other, better ways to mitigate the suffering of our patients with advanced illness, and to provide them with the sort of care they actually want. That's what is at stake here.

For more Pallimed posts about advance care planning, click here.
For more Pallimed posts by Dr. Rosielle click here.

Drew Rosielle, MD is a palliative care physician at the University of Minnesota and M Health Fairview in Minneapolis. He founded Pallimed in 2005. You can occasionally find him on Twitter at @drosielle.
