Pallimed: research issues

Monday, January 18, 2021

Social Media Stats for Palliative Care Journals 2020

by Christian Sinclair (@ctsinclair)

Over the past two years I have been working to increase the profile of the Journal of Pain and Symptom Management as the associate editor of social media. In that time, I have come to make a few observations on the current state of social media use by palliative care journals and researchers that I would like to share with you, dear readers, along with some statistics. Could I make all of this into a paper, published in one of said journals? Possibly. But curiously enough I am looking to effect positive change quickly, so for now we will go with a blog, some Tweet threads, and data visualizations. The article can come later!

Over time I will be looking at the social media ecosystem including research organizations, researchers and academic programs, but today the focus will be on the journals.

First off, do you follow any of the main palliative care journals on Twitter, Facebook or Instagram? How many are there even to follow?

I will tell you it is not necessarily easy to find them. There is no unified place to find them all, no organizing hashtag that appears in all of the bios. I have been following 14 Twitter accounts, 10 Facebook Pages, and 3 Instagram accounts for palliative care journals. So far, to my knowledge, there are none on Snapchat or TikTok. I have not been collecting data on podcasts or YouTube either, but those areas are good for exploration. If I am missing any or my stats are off, please let me know.

If you want to start following any of them, here are links to make it simple:

A Twitter List maintained by the JPSM Twitter account.



I have been taking some publicly available stats over the past few months. My hope is to check in every once in a while here and likely on Twitter to see what the different accounts are doing that is helping to promote palliative care research online. Let’s take those good social media practices and replicate!



While follower numbers do not equal engagement or influence, they are a fair proxy for measuring who is getting people’s attention. The clear leader on each platform is the journal Palliative Medicine. The editorial team has consistently published good content on each of the platforms, has an easy-to-find journal title, and appears to get good engagement from researchers. While Supportive Care in Cancer has been around for a while, it is new to Twitter, and has already been gaining followers at a rapid pace since debuting in Fall 2020. A new journal, Palliative Medicine Reports, also joined Twitter in May 2020 and has been gaining ground on some of the more established accounts, now ranking 10th out of 14.

As I am creating social media posts for the Journal of Pain and Symptom Management, it can be surprisingly difficult to find researchers on Twitter to tag them and help promote their work. In a later post focusing on researchers and research organizations, I will share why we need to remedy this absence from the digital public square. (but here is a quick summary to show you why it matters!)





What is interesting to me is that not all the journals follow each other on Twitter. Above is a table showing which journals follow other journals. Start on the left-hand side and ask “Does _____…” then move to the top and complete the question “…follow _____?” It is important for the journals to follow each other, and doing so may help promote a healthy environment for more researchers to participate online. Of course there is natural competition in terms of authors and publications, but I feel there is benefit to demonstrating relationships of mutual respect and support online.


Of the 14 Twitter accounts, 5 have posted fewer than 30 tweets over the past 90 days. So if they are not that active, will you get that much from following them? Probably not. But it does not cost anything to follow them, and maybe this post getting them a lot of new followers will reinvigorate their work.

For all 3 platforms I would propose that the journals consider using a unifying hashtag. #hapc (hospice and palliative care) is a natural one, as it already has a built-in audience that would be interested in the content and is short on characters. I have flirted with #hapcResearch, but I am not confident that it needs a separate hashtag on Twitter. Yet #hapc may not be enough, since on Facebook and Instagram #hapc is not well defined and is often cross-populated with lots of irrelevant content. So maybe #hapcResearch is a good one to bridge across all three platforms. The journal social media editors need to hash this one out.

I’m not quite sure what qualifies as a palliative care journal. I included JAGS mostly because they have some very relevant research to the field of hospice and palliative care, and their social media editor is Eric Widera of GeriPal, so there is a natural overlap. It also serves as a good benchmark. Additionally, I have included the Cochrane Pain, Palliative and Supportive Care Review Group. Is Cochrane a journal? Kind of. Should they be classified as a research group instead? Maybe. I probably need to ask them how they see themselves fitting best.

There are two journals that have palliative care in the title but I have chosen not to list them, because they may be associated with predatory publishers. I keep track of them to see how they operate and use them as a benchmark, but I am not actively promoting them by including them in the rankings above.

Well, I hope you enjoyed this glimpse into the social media stats of palliative care journals. I have some more thoughts, calculations, and stats, but I am waiting to gather more data before I share them widely. If you do have a moment, please go follow @JPSMjournal on Twitter and Facebook! If you are interested in helping with these stats, writing a paper, or learning how to do social media for a journal, I would be happy to hear from you.

For more Pallimed posts about social media, click here.
For more Pallimed posts by Dr. Sinclair click here.

Christian Sinclair, MD, FAAHPM, is an associate professor of palliative medicine at the University of Kansas Health System. He is editor-in-chief of Pallimed, and cannot wait to play board games in person again.


Friday, November 2, 2018

Nope. We STILL Shouldn't Claim Prolonged Survival in Hospice and Palliative Care

by Drew Rosielle (@drosielle)

A group of investigators from Tulane recently published a meta-analysis in Annals of Behavioral Medicine indicating that outpatient palliative care improves survival and quality of life in advanced cancer patients (free full-text available here, although I'm not sure if that's permanent).

Perhaps you'll remember in June of this year when I pleaded with our community to stop claiming that palliative care prolongs survival (my little Twitter rant about this starts here).

My basic plea was this:
Hospice and palliative care community, I'm calling for a moratorium on all blanket, unqualified claims that hospice and palliative care improve survival.
I still, 100%, stand-by that plea as stated.

However, this meta-analysis does, I think, show the way forward for future investigators to continue to pool survival data across trials: some of the trials show trends toward a survival benefit which aren't statistically significant, and pooling data in a meta-analysis is a reasonable way to test those trends. I just don't think this one really adds much to the discussion, unfortunately.

We've been able for a while now, and still should, I think, continue to be able to make basically unqualified claims that hospice and palliative care programs improve the quality of life of patients living with serious illness, because the preponderance of the evidence continues to show that. This meta-analysis also looked at QOL outcomes and showed, not surprisingly, broad and consistent improvements in QOL (I won't really talk about this in this post because, while awesome, it's also not really news).

So - great news! However, we'll look through the meta-analysis, and I hope you'll conclude with me that it's hardly the final story on this.

Nor does it change my concern that our (over-)exuberant trumpeting of such outcomes as a community may not be in our best interest long-term. At the very least, I think we should continually be at the forefront of articulating, in any and all venues available to us, the desperately necessary moral position that longevity alone is not the fundamental goal of medicine.

Living longer is great, as long as it's living well (whatever that may mean to someone), and we have to be one of the key groups of professionals clearly, proudly articulating the idea that living well is as important as living longer. We all know too well the physical and emotional/existential devastation that longevity-obsessed, technology & organ-focused medical 'care' brings to too many of our dying patients and their families. Helping those patients receive medical care which actually helps them is our gig, and the longevity bonus we may bring sometimes is of course totally swell, especially as it lowers barriers to us being able to see the patients we can actually help.

On to the meta-analysis. Basically, the analysis confines itself to high-quality, randomized (either by patient or cluster-randomization) trials of specialist, outpatient palliative care services for patients with advanced solid tumors. There's a lot they looked at, but it basically is a meta-analysis of the 2 big Temel studies (NEJM 2010, JCO 2017), 2 of the Bakitas ENABLE studies (JAMA 2009 and JCO 2015), and Camilla Zimmermann's 2014 Lancet study. (They sort of partially included 3 more preliminary and lower-quality studies, but most of the headline findings here are restricted to the 5 larger, higher-quality studies listed above.)

The Temel studies were basically looking at early (near the time of diagnosis), automatic, ambulatory palliative care for patients with metastatic lung (2010), or lung and GI cancers (2017) vs usual care. Both studies showed some QOL improvements, the 2010 study famously showed a survival benefit, the 2017 one did not. Both studies took place at Massachusetts General Hospital.

The Bakitas studies are a little complicated, but look at a package of interventions which include a visit with a specialist clinician (an APRN in 2009, unclear who they were in the 2015 study) and then regular phone calls for support and coaching around a variety of important concerns - life review, coping, end-of-life planning, etc. The 2009 study showed improvements in QOL and a non-statistically significant trend towards improvement in survival. The 2015 study showed improvements in longevity but not QOL. Perplexing, right?

The Zimmerman study is a Toronto cluster randomized (by different cancer clinics all within the same organization) trial of automatic clinic-based palliative care for advanced cancer patients which basically showed improvements in QOL (but not longevity that I know of).

So, to be clear, the meta-analysis involves 3 studies involving palliative care clinics within 2 organizations, and the 2 ENABLE studies involve a sort of intervention which, while scalable and probably beneficial to patients, is sort of its own thing, and not something most people would recognize as typical ambulatory palliative care services.

But, as it turns out, the meta-analysis doesn't even really include all these studies in the actual primary analysis (they are included in the QOL analysis and secondary analyses), because they chose survival at 1 year as their primary outcome, and only the Bakitas studies and Temel 2010 have 1-year survival data available. I.e., Temel 2017 and Zimmerman 2014, neither of which showed a survival benefit, weren't even included in the primary analysis (and they also happened to be the largest of the 5 studies).

I.e., this is a meta-analysis of the famous Temel 2010 study and the 2 Bakitas studies, one of which bizarrely showed a survival benefit and no QOL benefit (which, again, is interesting, and I honestly don't know what to make of it, but it's not the sort of trial I am going to make broad generalizations about, especially given that it doesn't involve a clinical model that is routinely used in ambulatory palliative care). If you're interested, the pooled survival benefit was 14.1% at 1 year (56.2% surviving vs 42.1% with usual care) in the Bakitas and Temel 2010 studies.
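As a back-of-the-envelope check on that headline number (and emphatically not a reproduction of the paper's actual weighted meta-analytic model), the pooled proportions quoted above work out as a simple risk difference:

```python
# Illustrative arithmetic only: a real meta-analysis weights each study
# (e.g., inverse-variance weighting); this just checks the quoted figures.

def risk_difference(p_treatment: float, p_control: float) -> float:
    """Absolute difference in 1-year survival proportions."""
    return p_treatment - p_control

# 56.2% surviving with palliative care vs 42.1% with usual care (from the post)
benefit = risk_difference(0.562, 0.421)
print(f"{benefit:.1%}")  # prints "14.1%"
```

The point of the exercise is just that an absolute difference this size rests entirely on the 3 trials that had 1-year data, which is why the primary outcome is less impressive than it sounds.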

In their secondary analysis, which included survival at 3 months (and thus data from all 5 high-quality trials), there was no survival benefit. There was a tiny benefit at 6 months (using data from all the trials except Zimmerman which, by the way, was the largest of the trials), and they looked at 9-, 15-, and 18-month survival, but these basically drew on the same trials as the primary outcome.

So, to try to summarize all this, the actual headline finding here is that when you combine data from the 2 studies that actually showed a survival benefit with one which showed a non-statistically-significant trend toward a survival benefit, you end up with a result that shows a statistically significant survival benefit.

This is why I don't think this paper should change how we talk about survival benefits from palliative care. It's still unclear: a lot of studies have shown no benefit, a few have, maybe we do sometimes, in narrow circumstances, but that's still not clearly been shown in any generalizable way, and that's about all you can say about it. We can continue to say that we don't worsen survival, because only one study I know of has shown that, and we should continue to shout from the fucking mountaintops that we improve the quality of life of our very sick, very much suffering, seriously ill patient population. And I'm damn proud of that.

I do look forward to more meta-analyses in the future, and think it's a good approach to pool data from heterogenous trials to try to get some clarity on the possibility that there's a sort of 'generalized' survival benefit to palliative specialist services or not, and I'm glad these authors tried this approach, although I don't think it ends up adding much to the survival debate. So, onwards we go....

Drew Rosielle, MD is a palliative care physician at the University of Minnesota Health in Minnesota. He founded Pallimed in 2005. You can occasionally find him on Twitter at @drosielle. For more Pallimed posts by Drew click here.


Friday, July 6, 2018

True Confessions On Why I Prescribe Things Without 'Evidence'

by Drew Rosielle

We have a 'required reading' list for our fellowship, which includes a bunch of what I think are landmark or otherwise really important studies. One of them is this very well done RCT of continuous ketamine infusions for patients with cancer pain, which showed it to be ineffective (and toxic).

We also recently saw another high-quality study published with negative results for ketamine. This was a Scottish, multi-center, randomized, placebo-controlled, intention-to-treat, double-blinded study of oral ketamine for neuropathic pain in cancer patients. The study involved 214 patients, 75% of whom were through with their cancer treatments and had chemotherapy-induced peripheral neuropathy (CIPN), and the median opioid dose was 0 mg. They received oral ketamine (or placebo), starting at 40 mg a day, with a titration protocol, and were followed for 16 days.

There were exactly zero measurable differences in outcomes between the groups (on pain, mood, or adverse effects). Zip.

All this got me thinking about a conversation I had with a palliative fellow this year, who, upon reading the continuous infusion study, confronted me with the question - Why do you even still use ketamine, then? The answer to this has a lot to do with the nature of evidence and how that is different for symptom management than it is for other outcomes, as well as the challenging reality of the placebo effect in everything we do.

I should note that you can 'dismiss' these studies based on generalizability (and plenty of people do), i.e., "The infusion study was well-done, but it's a protocol that I don't use, therefore I can ignore it." This very detailed letter to the editor about the infusion study does just that, for instance. Or, that the oral ketamine study was really a study about CIPN, and virtually nothing has been shown to be effective for CIPN, except maybe duloxetine (barely), and so it's not generalizable beyond that, and can be summarily ignored.

All this is valid, to be sure -- it's always important to not extrapolate research findings inappropriately, but honestly the reason I still prescribe ketamine sometimes has little to do with this, and has everything to do with the fact that I have observed ketamine to work and believe it works despite the evidence. Which is a pretty uncomfortable thing to admit, what with my beliefs in science, data, and evidence-based medicine.

Perhaps.

The challenge here is that when it comes to symptom treatments, we clinicians are constantly faced with immediate and specific data from our patients as to whether our treatments are working. This is a very different situation from a lot of other clinical scenarios, for which we lean heavily on research statistics to guide us. Note that it's not a bad thing we're confronted with this data (!), it just makes it difficult to interpret research sometimes.

Let's start with research which involves outcomes which are not immediate. E.g., does statin X, when given to patients for secondary prevention of myocardial infarction, actually reduce the number of myocardial infarctions (MIs) or prolong survival? We can only answer that question with research data, because when I give statin X to an actual patient, I have zero way of knowing if it is 'working.' If they don't ever have another MI I'll feel good, but that may take years or decades to even find out. In fact, it's nearly incoherent to even talk about that outcome for my patient, because we think about those sorts of outcomes as population outcomes, since that's how they are studied. E.g., we know that if we give statin X or placebo to a population of patients (who meet certain criteria) for Y number of years, the statin X group will have some fewer number of MIs in it than the placebo group. That's what we know. And because some patients in the placebo group don't have MIs and some in the statin X group do have MIs, we actually cannot even conclude for our own patient whether statin X helped them, even if they never have another MI, because maybe they wouldn't have had an MI anyway. That is, it's a population-based treatment, with outcomes that only make sense on the population level, even though of course we and our patients very much hope that they individually are helped by the drug. Supposedly precision/personalized medicine is going to revolutionize all this, and maybe it will, but it hasn't yet.

Contrast this to symptom management. My patient is on chemotherapy and they are constantly nauseated. I prescribe a new antiemetic -- let's call it Vitamin O just for fun. Two days later I call them up, and they tell me: "Thanks doc, I feel a lot better, no more vomiting and I'm not having any side effects from the med." Or they tell me: "Doc, the Vitamin O just made me sleep all day and it didn't help the nausea one bit."  I have immediate, actionable, patient-specific, and patient-centered data at my fingertips to help me judge if the treatment is effective/tolerable/worth it. It feels very different than prescribing statin X, in which all I have is the population data to go by.

So then why do symptom research at all if all we have to do is just ask our patients?

Obviously, it's not that simple, and research is critically important. For one, placebo-effects are hugely important for symptom research, in fact, they dominate symptom research. Blinded and controlled studies are critical in helping us understand if interventions are helpful above and beyond placebo effects (we should all be skeptical/agnostic about any symptom intervention which is not studied in a blinded and adequately controlled manner). Research also helps us get a general idea of the magnitude of clinical effects of certain interventions. Comparative research (of which there's very little, but it's really important) helps guide us towards which interventions are most likely to be the most helpful to our patients. E.g., which antiemetic is most likely to help the largest number of my patients going through a certain situation (so as to avoid painful delays as we try out ineffective therapies)? Research also obviously helps us understand side effects, toxicities -- hugely important.

But...if I thought all of the above were sufficient, I'd still never prescribe ketamine, or for that matter methylphenidate, because the placebo-controlled, blinded studies don't actually indicate they are effective over placebo (let's be honest palliative people, when we actually read the high-quality methylphenidate studies, there's very little there to suggest we should ever prescribe it).

That leaves me though with this belief, based on patient observation, that it still works, damn the data. What do I make of that? I want to be clear, I don't prescribe ketamine a lot, just the opposite, but there are times when you are desperate, you are faced with a patient in an intractable, painful situation, and you're running out of moves to make to improve the patient's life, and the reality is I sometimes will prescribe ketamine then, and my observation is that it's sometimes hugely helpful, enough so that I keep on using it.

And I honestly don't know what this represents. Is it that complex phenomenon called the placebo effect deciding to show up every now and then (although for these patients you wonder why the placebo effect didn't show up with the 5 prior treatments you threw at them)? Is it that I'm 'just' making them euphoric and I'm not actually helping their pain (although honestly, I think it's impossible to draw a hard line between the two)? Or is it the fact that, for presumably complex genetic and neurobiological reasons, while ketamine is ineffective/toxic for the majority of patients out there, it is also really effective/well-tolerated for a minority of our patients, and that's the sort of thing that is tough to parse out in trials, because the small number of responders is overwhelmed by the strong majority of non-responders?

I like to tell myself it's the latter, although I need to admit that probably a lot of the time it is placebo effects. None of us should be happy about prescribing drugs with real side effects, and we must recognize the possibility of patient harm 'just' for placebo effects. (Which, incidentally, is why I'm perfectly OK using lidocaine patches sometimes even when I just assume it's a placebo, because of the near-zero chance of harm to the patient. True confessions.)

But, to emphasize my point, if it is the latter (some drugs like ketamine and methylphenidate do actually really help a minority of patients but are toxic to most, and so it's tough to appreciate the impact based on clinical trial research), that underscores the critical reason why high-quality clinical research is important: it helps us know which interventions we should be doing routinely and early, and which should be at the bottom of the bag, to be used rarely, and with great consideration.

But, given that this is true confessions day, I still don't think methylphenidate is something to be rarely used. In fact, it's one of the few things I do for which patients/families routinely and enthusiastically tell me, "Thank you, that made a huge difference." (If you're curious, those things are 1) talking with them empathetically and clearly about what's going on and what to expect with their serious illness, 2) starting or adjusting opioids for out-of-control pain, 3) olanzapine for nausea, and 4) methylphenidate.) Like, all the time. Like, they come back to see me in a couple weeks with a big smile on their face, so glad I started the methylphenidate. Happens a lot (not all the time, but enough of the time). A lot more than with gabapentin or duloxetine or many other things I also prescribe all the time which have 'good evidence' behind them. It happens enough that I've asked myself: What data would convince me to stop prescribing it to my patients? And I don't have an answer for that, apart from data suggesting serious harm/toxicity (which none of the RCTs have shown).

I'm very curious as to people's thoughts about all this and look forward to hearing from you in the comments!

Drew Rosielle, MD is a palliative care physician at the University of Minnesota Health in Minnesota. He founded Pallimed in 2005. You can occasionally find him on Twitter at @drosielle. For more Pallimed posts by Drew click here.


Tuesday, September 26, 2017

Lorazepam, Haloperidol, and Delirium

by Drew Rosielle

JAMA Internal Medicine has published a double-blind, randomized, placebo-controlled trial of adding lorazepam to haloperidol in patients with advanced cancer and agitated delirium. (We had a heads up about this trial because it was presented at ASCO earlier this year.) If there ever was a sort of consensus in HPM about how we should be treating delirium, my sense is that it’s been shattered by the recent RCT of low-dose haloperidol vs risperidone for delirium in Australian palliative care unit patients, showing those drugs worsened delirium symptoms. So, it seems like we should all see what we can learn from this newly published investigation.

The authors note that, to the best of their knowledge, there has never been an RCT comparing a benzodiazepine with placebo for delirium. The one kind-of-famous (if you are a delirium geek) trial which looked at benzos, the trial I was directed to when I asked during training why not benzos for delirium, was a 3-way comparison of lorazepam, haloperidol, and chlorpromazine for delirium in hospitalized patients with AIDS, and it registered 6 (!) patients in the lorazepam arm (lorazepam patients did worse). It had no placebo arm.

In fact, there is a lack of high-quality drug trials in the delirium world which involve genuine placebo arms, i.e., an arm in which there was no active pharmacologic treatment. I have wondered if we’ve made a huge mistake by doing trials which assumed haloperidol was the standard of care, without robust data to actually support that, and so have just done comparison trials of haloperidol with other agents or, like this study, of a benzo or placebo added to haloperidol, when the underlying question (do haloperidol or other antipsychotics actually help, compared with placebo?) remains open. See for instance this recent review for a nice summary of what’s out there (noting that this even predated the damning low-dose haloperidol/risperidone/placebo trial): it’s not convincing, it’s not the sort of thing you’d read and say the book is closed on this question, that we can no longer have equipoise about comparing antipsychotics to placebo in delirium trials.

I was going to get to this point later in the blog post, but I realize I’m already there, so I’ll say it now: delirium is an international health crisis. It is real, it can be devastating (leading to permanent cognitive changes), leads to far worse outcomes for our patients (longer hospital stays, not being discharged home), costs billions of dollars, sucks shit for the patients and families going through it, and we don’t have a real inkling about actual, effective drug treatment for it. There are some inklings about chemoprophylaxis, but nothing definitive. Multidimensional prevention programs seem good, I like those, I’m 100% for those, but we need a lot more. If we can do a bloody RCT of simvastatin for chemoprophylaxis of delirium, surely we can do large, multicenter, patient-diverse (dementia, surgical patients, ICU patients, advanced cancer patients, dying patients), high-quality, placebo-controlled trials of a variety of drugs and drug classes (especially the ones people are actually using) and dosing strategies, to see if there are any effective disease-modifying agents out there!

Which is why I’m just delighted our friends at MD Anderson did this study as it was well-done, although small, and adds to our understanding.

The subjects (N~60) were patients on MD Anderson’s palliative care unit; all had advanced cancer, and all had agitated delirium (RASS score of 2 or more; it looks like they changed the protocol to 1 or more mid-study; there’s a lengthy supplement covering the protocol changes). All had received haloperidol as a primary treatment for delirium; it looks like they used a protocol of 2 mg IV q4h scheduled + 2 mg IV q1h as needed. The patients were followed with q2-hourly RASS assessments and then received 3 mg IV lorazepam or placebo if they continued to have an agitated RASS (mean RASS score prior to being given the study drug was 1.6). Importantly, the patients also received a 2 mg prn dose of haloperidol regardless of which group they were assigned to. Of note, all the patients studied had had delirium for at least 2 days prior to enrollment; these were patients with persistent delirium who didn’t get better quickly after routine interventions. After enrollment the median observation time was 6.4 h (i.e., after a median of 6.4 h, the patients had an agitation episode leading to being given the study drug). Median haloperidol doses prior to receiving the study drug were 5-7 mg in the prior 24 h.

The primary outcome was RASS score 8 h after the single dose of study drug.

The results are fascinating as hell. To begin with, lorazepam looks really, really good when you are looking simply at the primary outcome of reduction in the RASS score. The RASS score went down rapidly as you’d expect – it was markedly lower in the lorazepam group by half an hour – and remained markedly lower than the placebo group at hour 8: -2.5 for lorazepam, -0.5 for the placebo group (all this was very statistically significant by all the usual tests). Of note, -2 on the RASS is light sedation (briefly awakens with eye contact to voice, but less than 10 seconds); -3 is movement or eye opening to voice but no eye contact.

I.e., these patients were pretty sedated, and much more so than the haloperidol/placebo-only patients.

One may also observe that the placebo patients having a RASS of -0.5 at 8 h means the median state of the placebo patients at 8 hours was something between ‘alert & calm’ and ‘mildly drowsy.’

Got that? While the RASS was a lot lower in lorazepam, the placebo patients’ median RASS could be considered, in fact, a really good outcome, and arguably, a better outcome, than the lorazepam group.

Of course, the reality is much more complicated than that (most delirious patients just don’t go back to normal and stay there), but it’s a good reminder that when using something like the RASS as an outcome, lower is not necessarily better: 0 is normal, and for many patients would be the most desirable outcome. (Notably, the authors address all this explicitly in their discussion.)
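Since so much of the interpretation here turns on what the RASS numbers actually mean, here is a minimal lookup of the levels discussed in this post. The descriptions are paraphrased from the trial discussion above; the full published scale runs from +4 (combative) down to -5 (unarousable), and levels not mentioned in the post are deliberately omitted.

```python
# Quick reference for the RASS levels discussed in this post.
# Descriptions paraphrased from the post; this is a subset, not the
# full published Richmond Agitation-Sedation Scale.
RASS_LEVELS = {
    1: "restless/agitated (the trial's enrollment threshold after the protocol change)",
    0: "alert and calm",
    -1: "mildly drowsy",
    -2: "light sedation (briefly awakens with eye contact to voice, <10 s)",
    -3: "moderate sedation (movement or eye opening to voice, no eye contact)",
}

def describe(rass: int) -> str:
    """Return the description for a RASS level, if this post covered it."""
    return RASS_LEVELS.get(rass, "not covered in this post")

print(describe(-2))  # roughly where the lorazepam group's median landed at 8 h
```

Reading the trial's -2.5 vs -0.5 result against this table is what makes the "lower is not necessarily better" point concrete: 0, not a deeply negative number, is the target for many patients.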

Figure 2b in the study is probably more helpful in understanding what happened to these patients than the actual primary outcome graph. What you can see is that at 8 h, in the lorazepam group, basically half the patients were deeply sedated (RASS -3 or less), and half were mildly sedated. In the placebo group, about a third were agitated, and most of the rest were in the mildly to moderately sedated range. If you want, you can actually see what happened to individual patients in the supplement online.

In the secondary findings, they note that many more nurses and caregivers in the lorazepam group than the placebo group judged the patient to be comfortable after the study drug was administered (~80% vs ~30%). The lorazepam group also had fewer ongoing doses of rescue medicines. Median survival for both groups was ~70 h. I.e., you can essentially understand this to be a study of patients with terminal delirium.

What does all this mean?

It means that lorazepam effectively and rapidly sedates people, better than haloperidol, at the doses studied.

We’ve discussed this on the blog a little before, but this study helps us think about delirium and delirium outcomes better - what outcome we are actually aiming for in these patients. As I implied in the opening discussion of this blog post, I myself am at a point at which I do not consider there to be any active, disease-modifying drug treatment for delirium that I can get behind. By disease-modifying I mean drugs which would return patients to a more normal state of consciousness (i.e. push people closer to 0 on the RASS), and/or reduce the duration of delirium, and/or its long-term adverse outcomes. I think there is hope for antipsychotics, especially used in higher doses than the Australian study, but I don’t think there are any available data which convince me these should be a routine part of our care for delirious patients. We need well-powered, meticulously designed, placebo-controlled, and multi-institutional studies of haloperidol/other antipsychotics.

Until then, it’s just hope, and I honestly don’t know what to do.

I want to be clear, though – with the above I am talking about disease-modifying treatments for delirium. We clearly have, however, rapid & effective ways for reducing the distressing behaviors of agitation by sedating patients.

And I think it’s important that we keep in mind there is a difference between these two goals (sedation vs disease-modification). For our patients near the end of life sedation is often appropriate & acceptable. For some of our patients and their families it is in fact desirable - as this study showed, caregivers as a whole really preferred the moderately to deeply sedated state lorazepam gave these agitated, dying patients. It’s what I would want for myself, or my close loved ones, when close to death.

Sedation however is just not a ‘treatment for delirium,’ in the way that I used to hope low-dose haloperidol was.

Lorazepam has had a bad rap for ‘delirium’ historically. All of us have seen patients become agitated after getting it. Undoubtedly it is a common cause of delirium in hospitalized patients. While I don’t have any data to support this claim, I think much of the bad rap comes from patients who were given small, anxiolytic doses of lorazepam, leading to confusion and disinhibition. They weren’t given sedating doses; giving a truly sedating dose is not a hard thing to do, and it quite predictably sedates most people. To me, what this study does is help clarify that lorazepam very much does have a role in agitated delirium in patients near the end of life, when the immediate therapeutic goal is sedation. When we do it, however, we should do it right, and use the 3 mg dose like they did in this study, after of course clarifying prognosis and treatment goals with appropriate surrogates.

Tuesday, September 26, 2017 by Drew Rosielle MD ·

Monday, September 25, 2017

Moving From Research to Implementation to Research in Palliative Care, Part 1

by Christian Sinclair

In 2003, I began my hospice and palliative medicine (HPM) fellowship in Winston-Salem, NC. I was a solo fellow in a new program, and as luck would have it, I had loads of time to dedicate myself to learning. Since my wife, Kelly, was beginning her pediatric emergency medicine fellowship in Kansas City at the same time, I only had my dog and my fellowship to worry about. I always enjoyed reading articles, imagining how they would apply in my own practice. But when it came down to it, I was never really able to implement much of what I was reading, let alone have the numbers to benchmark against the research.

Fast forward to Spring of 2016. With years of experience across multiple care settings, I finally had an opportunity to implement research tools into everyday clinical practice by using the Edmonton Symptom Assessment Scale (ESAS) in each visit and tracking how patients do over time. I had used the ESAS in a few visits over the years, but could never seem to use it reliably at every visit with every patient.

At KU, we have been using a modified ESAS (with Mild, Moderate, Severe) on the inpatient side for a long time, but never the numbers-based ESAS that would be most applicable to research. In practice, my symptom assessments were always driven more by the narrative of the patient, winding indirectly as the patient told their story. I never pressed hard on getting the mild, moderate, or severe rating directly from the patient's mouth, but would interpret their story into the scale and document it. Eventually I would get a comprehensive view of symptoms and make a good clinical plan, but I was never going to be able to use that to demonstrate quality or publish research.

Even admitting this publicly has taken me some time to do. I figured that everybody was already getting patient-reported outcomes. Frankly, it feels kind of embarrassing to admit. But as I talked to more people, I realized that other HPM clinicians were also not able to apply tools like the ESAS universally. Sure, they might get a few numbers or severity scores, but to do that at EVERY visit and for EVERY patient takes more than just clinician will. It takes a system-based approach to change. And that is not easy.

So in 2016, I talked with the outpatient nurse navigators, Amy and Wendy, and I asked them to help make sure that EVERY patient at EVERY visit was getting an ESAS form and that we were documenting it in the chart. They were both game, which I look back on and count my blessings. In all my previous attempts at moving from research to implementation, the culture change step always worked for a week or two and then regressed to the baseline. Someone gets too busy or falls behind, and then the standardized thing you are trying to do feels like 'extra work' for no good reason.

To help ensure our success, we made it a focus to talk about the ESAS at the beginning of the clinic day, in between patients and a debrief at the end of clinic. At first, our language was probably inelegant as we introduced the ESAS concept. When people rebelled against 'one more form' or 'hating those damn numbers', we initially backed down, but we persisted and it paid off, because we refined our language and we discovered how to overcome the hesitation of our patients. We helped our patients see the ESAS numbers as a demonstration of their voice and experience. After one interesting conversation with a patient, we began to call these numbers 'our palliative care labs' because 'no blood draw is going to tell me that your nausea was awful last night.'

It took a while, but we also recognized that just 'getting the numbers' was not enough. Going back to get these numbers after the visit was over and the plan was made showed the patients that the symptoms were not necessarily driving the plan. So we adjusted and worked to make sure the ESAS was one of the first things we discussed with the patient, which in turn became the spine of the visit and therefore drove the plan.

Once we began to get consecutive visits with ESAS scores, we were able to show the patients their numbers over time. The feedback was tremendous in demonstrating that we cared about their symptom experience, and as we have become more facile in applying the ESAS, we have noticed the objections diminish greatly.

And now we have lots of ESAS numbers over lots of visits, but (and this is a BIG BUT) they are all buried in the narrative/free-text part of the chart. So we needed to find a way to get this data exported from the Electronic Health Record. I'll share how we did that in part two tomorrow, because when I tried to figure out how to accomplish that, there was no guidance online I found helpful. My hope is that these stories of my clinical transformation from research wanna-be to providing the building blocks of research and quality improvement may help someone else see that it is possible.
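In the meantime, here is a generic sketch of the kind of extraction free-text charting forces on you. It assumes, purely hypothetically, that scores are documented in a consistent phrase like "ESAS pain: 7"; any real note format, and therefore the pattern, will differ:

```python
import re

# Hypothetical note convention: scores charted as "ESAS <symptom>: <0-10>"
ESAS_PATTERN = re.compile(
    r"ESAS\s+(?P<symptom>[a-z ]+):\s*(?P<score>10|[0-9])\b",
    re.IGNORECASE,
)

def extract_esas(note_text):
    """Return {symptom: score} for every ESAS score found in a free-text note."""
    scores = {}
    for match in ESAS_PATTERN.finditer(note_text):
        scores[match.group("symptom").strip().lower()] = int(match.group("score"))
    return scores

note = "Pt seen in clinic. ESAS pain: 7, ESAS nausea: 2, ESAS fatigue: 10."
print(extract_esas(note))  # {'pain': 7, 'nausea': 2, 'fatigue': 10}
```

The hard part in real life is not the pattern match; it is getting clinicians to chart the scores consistently enough that any pattern exists to match.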

If you want to join in the conversation, this Wednesday we will be hosting the September #hpm Tweet Chat on the topic of "Moving from Research to Implementation to Research in HPM." #hpm Tweet Chats are held on the last Wednesday of the month at 9p ET/6p PT. Sign up on hpmchat.org to be updated on the monthly topic.

Christian Sinclair, MD, FAAHPM is immediate past president of AAHPM, editor-in-chief of Pallimed and a palliative care doctor at the University of Kansas Health System. If he isn't reading about HPM research, you can find him reading board game rules.

Monday, September 25, 2017 by Christian Sinclair ·

Friday, July 14, 2017

Palliative Care & CHF: PAL-HF trial



The main results of PAL-HF - a randomized, controlled trial of specialty palliative care team involvement in advanced heart failure patients -  have just been published in the Journal of the American College of Cardiology (DOI: 10.1016/j.jacc.2017.05.030. Clinicaltrials.gov registration here). 

This is an important, well-done study with encouraging results - specialty PC improved the quality of life of patients with HF. I'll discuss the results in more detail in this post.

The study was done by a multi-disciplinary team of palliative & cardiology investigators at Duke. This week's publication looks at the QOL results, which were the main, pre-specified outcomes. Of note, in the clinicaltrials.gov registration for the study, they do pre-specify healthcare resource utilization outcomes as one of their secondary outcomes, but this paper doesn't present those data - presumably those will take them longer to collect and will be forthcoming.

PAL-HF enrolled 150 hospitalized patients with HF at high risk of mortality or rehospitalization, and randomized them to specialty palliative care vs usual care. They identified patients using the ESCAPE risk model, which I hadn't heard of - key reference is here. The inclusion criteria required an elevated ESCAPE-model-predicted 6-month mortality - notably, the actual 6-month mortality in this study was 30%.

Intervention-arm patients received a palliative care visit, I think by a palliative NP - the methods section is a little cagey about who exactly saw the patient apart from a palliative NP, if anyone. The NP did a comprehensive palliative evaluation (physical, psychoemotional, spiritual evaluation), had a goals of care discussion, did advance care planning, and presumably made recs about what to do. The patients were followed for 6 months, which is the length of the data collected for this study. The methods say PC remained involved in the patients' care, although the exact nature of that involvement is opaque to me - eg it's not clear the patients actually saw PC in clinic or anything, and the role of PC may have been advisory to the cardiology team. The full methods of the trial were published in a different paper, but even reading that it's not entirely clear to me who from the palliative team saw the patient in addition to the NP, nor the real nature of the 6-month planned follow-up.
The primary outcomes were QOL on the Kansas City Cardiomyopathy Questionnaire (KCCQ) and the FACIT-Pal QOL scales at 6 months. I need to say: 6 months, whoa. It is rare to see a study of this kind look at outcomes over such a long period, and it's one of the reasons I think PAL-HF will set a new standard in these sorts of complex palliative-intervention trials. In the methods they note that a 5-point improvement in the KCCQ and a 10-point improvement in the FACIT-Pal are believed to be clinically relevant outcomes. Their power estimate showed they needed 200 patients to show such a difference - they ended up enrolling only 150 (but still showed a difference). It's not totally clear to me why they capped enrollment at 150 – they mention survival was better than anticipated. Nonetheless they met their primary outcome with fewer than the estimated number of patients (which is an argument for the robustness of their findings). Key here is that unlike a lot of lower-quality research, they designed their outcomes to explicitly be patient-relevant, and powered their study as such.
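As an aside, that power estimate is easy to sanity-check with the standard two-sample formula. This is strictly a back-of-the-envelope sketch: the 12.5-point KCCQ standard deviation below is my assumption for illustration, not a figure from the paper:

```python
from math import ceil
from statistics import NormalDist  # Python 3.8+ standard library

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm to detect a difference
    in means of `delta`, given a common standard deviation `sd`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

# Detect a 5-point KCCQ difference, assuming (hypothetically) SD = 12.5:
n = n_per_arm(delta=5, sd=12.5)
print(n, "per arm,", 2 * n, "total")  # 99 per arm, 198 total - close to the ~200 cited
```

Note that doubling the assumed SD (or halving the detectable difference) quadruples the required n, which is why pre-specifying a clinically relevant threshold matters so much in trial design.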

Patients had a mean age of about 70 years, ~50% were women, and ~40% Black. 

The outcomes at 6 months were good: improvement of 9-10 on the KCCQ and 11 on the FACIT-Pal in the intervention group compared to the usual care group. These exceeded the thresholds for what is considered clinically relevant. Some secondary outcomes were improved too: depression and anxiety symptoms, and spiritual well-being (as measured using FACIT).

As above, I am assuming the resource utilization outcomes are forthcoming. They do mention that survival and rehospitalization rates were similar between the groups (30% for each outcome at 6 months). 

My summary thoughts:

This is a well-designed and executed study - the sort of thing I read and say Gosh, we need a lot more of these. Eg, fewer retrospective chart reviews and case-control studies; more well-thought-out, well-designed, prospective studies testing if and how palliative care improves our patients' lives. Besides wishing the methods write-up was a little clearer about the exact nature of the palliative intervention, I have few complaints about the paper.

The limits of the study are intrinsic to its methods - and I want to talk about one of the major challenges we face in our research base, which is the challenge of generalization. Unlike a drug trial, team-based palliative care interventions are inherently complex, and presumably very sensitive to very local factors. Eg, what the good folks at the Duke palliative care program do may be somewhat different than what my teams do, and any other team in the country. It's well documented that while palliative care has become ubiquitous in larger American hospitals, that does not mean every program is populated by well-trained, competent, interprofessional teams. In fact, we know that many are not. What this means is that we know what the Duke team did really did improve these patients' quality of life. Generalizing, exporting, what they did to other programs is difficult. Ideally the next steps in research like this would be to do a similar study that is national, involving many regions and types of hospital populations (not just academic referral centers). This is not to criticize the PAL-HF trial, it's freaking great, but more to acknowledge that we can't just all go around claiming Palliative Care Improves QOL for HF Patients, Full Stop. As if palliative care is one simple thing, simple intervention, the same everywhere, etc. This is in contradistinction to drug trials. Generalization is a big issue with drug trials too, but it's mostly an issue of extrapolating results to different patient populations (eg community patients to academic center patients, etc). But it's not really a matter of thinking that there's something importantly different about, say, enoxaparin, administered in Loma Linda vs Durham NC. There may very well be important differences in palliative care between Loma Linda and Durham, however. 

I hope those sorts of multi-center trials become more common.

At the end of the day, this is a landmark study. I am really, really glad it was published in a heart journal. We have over a decade of decent research showing the improvements palliative care provides in cancer patients' QOL, but far less in other patient populations, including HF. PAL-HF is a big step towards making the belief that PC benefits patients with HF less open to debate.

Friday, July 14, 2017 by Drew Rosielle MD ·

Thursday, June 8, 2017

Perusing ASCO 2017 - AKA Time for Lorazepam

by Drew Rosielle


The Annual Meeting of the American Society of Clinical Oncology was last week. It’s been my observation over the years that much of the best palliative-oncology and supportive-oncology research is presented at ASCO each year, before it’s actually published (if it ever gets published).  So I always dig through the palliative/EOL/supportive/psychooncology abstracts each year to see what's happening. Below is a gently annotated list of the abstracts that caught my eye the most, for your perusal and edification. Undoubtedly, these are my idiosyncratic choices, and if you want to dig through all of them you can browse the abstracts by category here. A couple additional comments first.

One of the big headline trials was a supportive oncology trial showing that regular tablet-based symptom assessment in cancer patients prolongs survival.  Christian promises me he's going to do a deep dive into the symptom tablet trial so I won't really talk about it here.

It’s interesting, however, to compare it to one of the other major headlines, which was about abiraterone for advanced prostate cancer. People went nuts for this study, although if you dig into the results they’re pretty modest (3-year survival 83% in the abiraterone vs 76% in the placebo group), but in cancer trials that’s typical. I’m not knocking the study, they are good results, I’d undoubtedly do abiraterone myself, but there’s often a big disconnect between the headline findings in cancer research and the actual, real, patient-relevant results. Lots of money to be made and spent on abiraterone, which is why it’s gotten so much press. Full paper here: http://www.nejm.org/doi/full/10.1056/NEJMoa1702900#t=abstract


The symptom-assessment trial got great press, to be fair, but far less than abiraterone (see this WaPo puff piece which totally ignores the symptom trial, but does talk about abiraterone and the gobs of industry money slushing around ASCO, which makes me, and I hope many, many oncologists, nauseated).  


Here are the other abstracts which caught my eye, loosely organized, and mildly annotated. (I should note that my annotations are summaries of the findings - keep in mind these are abstracts, not full publications that have been through peer review, we can't really look at the methods, so when I say that the abstract shows that X is effective for Y, that's me summarizing the abstract, not endorsing the veracity of the findings.) Also, if you're an author, and I misrepresented your findings, shame me in the comments and I'll append edits in the permanent post. 

1. A RCT of pretty high doses of lorazepam vs placebo, plus haloperidol for EOL agitation, showing that the addition of lorazepam helped. This got a lot of chatter on Twitter, especially about how it compared to the RCT of low-dose haloperidol/risperidone for delirium. I think it’s validation of the idea that it’s imperative to keep in mind the therapeutic goals with regard to delirium and agitation. I.e., are we trying to sedate someone (=suppress the agitation behavior) or are we trying to improve the delirium? The first we can do, as this abstract shows, quite easily with a good dose of a benzodiazepine; for the second we still lack any convincing data about any effective strategy in our late-stage patients, despite the widespread observation (belief?) that haloperidol & similar agents help. Good stuff and I hope it's published in full soon: http://abstracts.asco.org/199/AbstView_199_181607.html
  
2. A study looking at chemotherapy and palliative consultation in ICUs:  http://abstracts.asco.org/199/AbstView_199_192587.html 

3. Another study showing helpful effects of early palliative consultation in hospitalized cancer patients:   http://abstracts.asco.org/199/AbstView_199_190938.html 

4. A study looking at the relative stability of treatment preferences in advanced cancer patients over time:  http://abstracts.asco.org/199/AbstView_199_192725.html 

5. A study looking at Latinos & EOL preferences, including the generational effects after immigration: http://abstracts.asco.org/199/AbstView_199_193461.html 

6. A study about patient-caregiver agreement about goals: http://abstracts.asco.org/199/AbstView_199_192587.html 

7. A study looking at the natural history of fatigue in breast cancer survivors over 6 months. I wish they had followed patients even longer and hope they come out with data at years 1, 2, 3 and beyond: http://abstracts.asco.org/199/AbstView_199_182648.html 

8. A mobile CBT app for anxiety in cancer patients does very little: http://abstracts.asco.org/199/AbstView_199_194370.html 

9. A study looking at what healthy people say about whether they'd want 'palliative' vs curative chemo for AML, hypothetically speaking. Interestingly, responses seemed to be more fixed (fixed beliefs about whether chemo is worth it or not) than based on the information provided about different levels of side effects. This sort of research is fascinating, but I always worry that what healthy people say in a survey about a hypothetical question is very different from what they do when actually facing a life-threatening disease. The same problem applies to statements people make when they are healthy, even ones put into health care directives. "Uncle Joe would never want to go to a nursing home." That sort of stuff - i.e., does it actually mean Uncle Joe would rather choose to die this month than go to a nursing home; how do we actually interpret the prior statements, etc. Anyway: http://abstracts.asco.org/199/AbstView_199_192439.html 

10. A fascinating study about potential interactions between depression, and depression treatment, and length of stay in hospitalized cancer patients: http://abstracts.asco.org/199/AbstView_199_188306.html 

11. A cocoa-based balm for onycholysis in chemo patients. There were 2 onycholysis abstracts this year. Why not? http://abstracts.asco.org/199/AbstView_199_186790.html 

12. A mildly promising pilot study of lactoferrin for chemo dysgeusia: http://abstracts.asco.org/199/AbstView_199_191178.html 

13. Several studies of olanzapine for chemo nausea/vomiting (CINV). One showing it's more effective for emesis than nausea?: http://abstracts.asco.org/199/AbstView_199_185558.html. More data for olanzapine: http://abstracts.asco.org/199/AbstView_199_181353.html. And in case there was any doubt, here's a metaanalysis of olanzapine for CINV demonstrating its effectiveness: http://abstracts.asco.org/199/AbstView_199_187470.html 

14. A follow-up, with longer-term data, from the RCT of palliative care for stem cell transplant patients showing improvements in depression and PTSD, but not other outcomes, at 6 mo: http://abstracts.asco.org/199/AbstView_199_188285.html. Earlier publication here: https://www.ncbi.nlm.nih.gov/pubmed/27893130 

15. Predictors of aberrant drug behavior in a cancer center population (helpful, and it’s exactly what you’d expect it to be, because they are the same predictors as in the healthy population): http://abstracts.asco.org/199/AbstView_199_192505.html  

16. Yes, transbuccal fentanyl helps for dyspnea: http://abstracts.asco.org/199/AbstView_199_181614.html 

17. A RCT of minocycline for chronic myeloma pain (!) which showed promising results (phase II, better trials are needed). I vaguely had a sense minocycline had antiinflammatory effects, but apparently it could have analgesic effects too. Really looking forward to a study which hopefully looks at long-term safety and efficacy: http://abstracts.asco.org/199/AbstView_199_186197.html 

18. I hadn’t known this, but there is actually a RCT showing that l-carnitine WORSENS taxane CIPN. Ugh. I have never used it due to lack of data showing efficacy, but hadn't realized it was probably toxic, and I still see patients on it sometimes. If one needed reminding that all these herbs, supplements, and so-called alternative treatments aren't bland, safe anodynes, this is a good reminder. Science-based medicine is what our patients need and deserve. This abstract is a follow-up to the study showing it was poison: http://abstracts.asco.org/199/AbstView_199_184547.html 

19. A deeper look at the truly nasty neurotoxicities of anti-PDL1 drugs (the major class of cancer immunotherapies). Little is known about this (I've now seen one case) and we will see more and more of it as these drugs are more widely used: http://abstracts.asco.org/199/AbstView_199_191534.html 

20. Finally, and whoa -- single-fraction radiation is as good as multi-fraction radiation for cord compression, at least in patients with poor long-term survival (median survival was 12 weeks in this cohort). I look forward to discussing this with my rad onc colleagues, as it would be a very welcome option for patients with less than 3 months to live, so they wouldn't have to spend 2+ weeks of that time getting radiation: http://abstracts.asco.org/199/AbstView_199_186591.html

Thursday, June 8, 2017 by Drew Rosielle MD ·

Wednesday, October 5, 2016

Five Tips for Effective Quality Improvement in Palliative Care (#3 will blow you away)

by Arif Kamal

Apologies for the “clickbait” title to the blog post; scouring the internet it seems that hyperbole works to get readers’ attention, certainly among entertainment sites and maybe increasingly within presidential politics. But it seems I had little choice; the fifth word of my title is “Quality”, which excites very few people. Bear with me, I promise this will get good.

Quality improvement is critical for palliative care organizations to build and sustain success within their clinical missions. Those who are watching and evaluating us, including patients, caregivers, health systems, regulators, and payers, are increasingly expecting a consistent product, delivered in close alignment with our growing evidence base. Further, rapid evolutions in the health care delivery and payment ecosystem require palliative care organizations to masterfully deploy quality improvement initiatives to solve problems. This requires a facile understanding of key steps needed to transition from identifying a problem to sustaining the change.

I’ve spent much of the past five years working as a Quality Improvement Coach for the American Society of Clinical Oncology’s (ASCO) Quality Training Program and ASCO/AAHPM Virtual Learning Collaborative and have come away with a few pearls that may be helpful. I also highly recommend “The Improvement Guide” by Langley et al., seen by many as the definitive textbook for healthcare quality improvement.

Below I offer Five Tips, by no means an exhaustive review, but a decent place to start.

Tip #1: Define the problem – Have a problem statement. This is one or two sentences that covers the Who, What, When, and Where of the problem (but not Why or How). Add a Harm to this statement to give it some punch. For example, “At the Mustard Clinic, from January through July 2016 the outpatient palliative care clinic no-show rate was 40%, missing critical opportunities for patients to receive timely symptom management, goals of care discussions, and possibly reduce time in the hospital during an unwanted readmission.”

Tip #2: Define the problem, again – Quality improvement committees are like family meetings, everyone’s inherently and not unexpectedly on different pages. When starting a quality improvement committee meeting, go around the room and ask everyone what they think the problem is you’re trying to solve. Marvel at the variation, and the “scope creep” and “scope drift” that occur over time. And then insert your excellent family meeting skills to get everyone on the same page. Lack of consensus on the specific problem will sink you.

Tip #3: Problem first, solutions (much) later – If your problem statement sounds something like this, “Because of high 30-day readmission rates at our hospital, we need more palliative care,” then you’ve put the cart before the horse. All quality improvement starts with a specific, agreed-upon problem – not a solution. Starting a palliative care clinic, growing a palliative care service, applying disease-based triggers for consultation, etc. are all solutions. The point of quality improvement is not implementing your solution; it is solving a problem. Our practice is to not speak of solutions until at least the fourth meeting of our quality improvement committee.

Tip #4: Have an aim statement. What is the goal of your quality improvement project? Be specific, and think of the Who, What, When, and Where (but not How). For example, “By July 2017 we will decrease the outpatient palliative care clinic no-show rate to 25% at the Mustard Clinic.” You cannot yet know the “How”, because it’s dependent on the “Why”. And you can’t understand the “Why” without exploring the drivers of the problem, and the process by which the problem occurs.

Tip #5: Explore the “Why”. Be curious, open-minded, and solicit opinions of all stakeholders. The above fictitious problem of clinic no-show rates is complex, and not easy to solve (or people would have solved it already). If any part of your brain is saying, “Isn’t it obvious, what they need to do is….” then you’re like the 99% of us (very much including me) who must practice exploring the process, and getting input from all stakeholders. I could imagine this committee would solicit input from patients, caregivers, front desk staff, phone triage personnel, appointment schedulers, nurses, physicians, and financial counselors. Can you think of others? Drawing the process from start to finish is also very helpful. How does a patient go from being referred to the clinic to successfully coming? Where are all the places the process could go wrong? What data are needed to quantify the shortfalls? The point here is to try not to go down a path of implementing solutions until you’re confident of the problem, have an understanding of where the process is breaking down, and have tailored the solution to that breakdown.

I’ll be speaking more about this topic, and will be joined by several other national leaders including Drs. Diane Meier, Steve Pantilat, and Phil Rodgers during the 2nd Annual Palliative Care Quality Matters Conference on October 20th from 12-5PM EST. The Conference is hosted by the Global Palliative Care Quality Alliance (www.gpcqa.org), a multi-site, volunteer collaboration of healthcare organizations with a passion for improving quality in palliative care.

The virtual conference presented via Webex is open to all colleagues with complimentary registration and CME/CNE. Register at www.gpcqa.org/qmc

Additionally, this Wednesday evening October 5th at 9PM EST/6PM PST I’ll be hosting a Tweetchat. Would love your input on the following questions:

T1: What makes performing quality improvement challenging in palliative care and hospice?

T2: Most quality improvement projects don’t work. Name an epic failure you were part of. What did you learn?

T3: Change my Top Five to a Top Ten list. What tips could you add?

T4: How could we help each other in our field do better quality improvement? What’s the role of AAHPM, HPNA, NHPCO and other membership societies?

What: #hpm (hospice and palliative med/care) chat on Twitter
When: Wed 10/5/2016 - 9p ET/ 6p PT
Host: Dr. Arif Kamal @arifkamalMD

Follow the #hpm hashtag and go to www.hpmchat.org for up-to-date info.

If you are new to Tweetchats, you do not need a Twitter account to follow along. Try using the search function on Twitter. If you do have a Twitter account, we recommend using tchat.io for ease of following. You can also check out the new site dedicated to #hpm chat - www.hpmchat.org

For more on past tweetchats, see our archive here.

Arif Kamal MD MBA MHS is the Physician Quality and Outcomes Officer and Assistant Professor of Medicine (Oncology and Palliative Care) at Duke University. He is a diehard Kansas City Chiefs football fan, which has prepared him for discussions regarding futility and complicated grief with his patients.

Wednesday, October 5, 2016 by Meredith MacMartin ·

Tuesday, May 19, 2015

Research and Mentorship in Palliative Care

by Tom LeBlanc

Each time I attend an AAHPM Annual Assembly, I’m amazed at the growing number of attendees. Amid those thousands of people, it’s easy to forget that we have a pretty serious workforce problem on our hands. But then I attend the Research Special Interest Group (or “SIG”) meeting, and I’m quickly brought down from my blissfully ignorant orbit to face another striking reality: clinical workforce issues also signal challenges for research and mentorship in our field.

We don’t often talk about this, but shortages in the workforce limit our ability to do high-impact, innovative, important research. Even the largest, most research-oriented centers may have just one senior researcher on faculty, and some prominent institutions have none. In this restrictive environment, how can a student, or trainee, or junior faculty member even get started doing palliative care research? How can we meet the mentorship and “start-up” needs of developing researchers in our field?

In our Research SIG discussions, much of our time is spent talking about how to find mentors or how to get involved in palliative care research. This is clearly an area of need in our field. So as this year’s Research SIG Chair, and as someone who spends much of my time thinking about (and trying to do!) research in palliative care, I thought it might be helpful to host this TweetChat on research-related issues in palliative care. Let’s talk about challenges, opportunities, how to get started, how to find a mentor, and even discuss ideas about important research priorities for the field.

Join me @tomleblancMD this Wednesday night at 6pm PST to explore and discuss issues in #hpm research!

Topics for the chat

T1 – What advice would you give to someone who wants to get started in #hpm research?

T2 – What are the most important priorities for research in #hpm?

T3 – In what clinical scenarios do you find yourself wishing you had good data? #hpm

Some useful resources:
AAHPM Research page
AAHPM Research Scholars program
AAHPM year-long mentoring program
AAHPM Research SIG page

A few interesting papers about mentorship:
“Having the right chemistry: a qualitative study of mentoring in academic medicine”
“Making the most of mentors: a guide for mentees”

Some resources on research priorities in our field:
NINR’s “Innovative Questions” Project
NINR’s “Innovative Questions in End-of-Life and Palliative Care”
“Palliative care research—priorities and the way forward”
“Research priorities in pediatric palliative care: a Delphi study”
“A national agenda for social work research in palliative and end-of-life care”
“Research priorities for palliative and end-of-life care in the emergency setting”
“Research priorities in geriatric palliative care: an introduction to a new series”  (note that there are several articles in this series)

What: #hpm chat on Twitter
When: Wed 5/20/2015 - 9p ET/ 6p PT
Host: Tom LeBlanc 
Facebook Event Listing: https://www.facebook.com/events/584716504995684

If you are new to Tweetchats, you do not need a Twitter account to follow along. Try using the search function on Twitter. If you do have a Twitter account, we recommend using tchat.io for ease of following.

You can find Chat Transcript and Chat Analytics courtesy of @Symplur


Tuesday, May 19, 2015 by Pallimed Editor ·
