Monday, February 27, 2006
Two articles about research methods, both of which relate to recent blog posts:
Journal of Clinical Oncology has an intriguing article about using an administrative database to define who is "dying of" breast cancer. The context is an ongoing debate about using administrative data, retrospectively, to study the services used by dying patients. Example: you're studying where people with breast cancer die. You get a database of 1000 breast cancer deaths using administrative data and find that 75% of those patients died in the hospital. Knowing that most people want to die at home, you conclude that "palliative care" wasn't employed soon or well enough, or that patients' wishes aren't being honored, etc. One of the many problems here is that you can't tell whether the people in your study were terminally ill per se when they died, and so for some number of these patients dying in the acute setting was entirely appropriate. (This situation is similar to what is discussed in this blog post.)

What this study in JCO tries to do is develop an algorithm, using only administrative data (i.e., insurance records, hospital discharge codes, etc.), to divide a group of women with breast cancer into those who died of breast cancer versus those who died with breast cancer. The algorithm essentially split the women into localized vs. disseminated disease and then further analyzed the disseminated group, based on a number of other diagnoses, to distinguish the died-of's from the died-with's. The authors then validated the algorithm against a chart review and found it had a sensitivity of 95% for classifying those who died of cancer. There are major problems with their effort that I'm not going to belabor (in particular, the way they performed the chart review used to validate the algorithm was not very convincing). Still, this is a strong first step, and with further validation, methods like these may become quite useful.
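For readers who want a feel for what "validating against chart review" and "sensitivity of 95%" mean in practice, here is a minimal toy sketch. Everything in it is hypothetical: the field names, the data, and the simplified two-rule classifier stand in for the study's actual (much more elaborate) diagnosis-code logic.

```python
# Toy sketch of validating a "died of" vs. "died with" classifier against
# a chart-review gold standard. All field names and records are invented
# for illustration; the real algorithm used many more diagnosis codes.

def classify(record):
    """Classify a breast cancer death using only administrative data."""
    if not record["disseminated"]:
        # Localized disease: assume death was not from breast cancer.
        return "died_with"
    if record["other_terminal_dx"]:
        # Disseminated disease but another terminal diagnosis coded.
        return "died_with"
    return "died_of"

# Hypothetical validation set: (administrative record, chart-review verdict).
cases = [
    ({"disseminated": True,  "other_terminal_dx": False}, "died_of"),
    ({"disseminated": True,  "other_terminal_dx": False}, "died_of"),
    ({"disseminated": True,  "other_terminal_dx": True},  "died_with"),
    ({"disseminated": False, "other_terminal_dx": False}, "died_with"),
    ({"disseminated": False, "other_terminal_dx": False}, "died_of"),  # a miss
]

# Sensitivity for "died of": of the patients the chart review says died
# of breast cancer, what fraction did the algorithm correctly flag?
tp = sum(1 for r, truth in cases
         if truth == "died_of" and classify(r) == "died_of")
fn = sum(1 for r, truth in cases
         if truth == "died_of" and classify(r) != "died_of")
sensitivity = tp / (tp + fn)
print(f"sensitivity = {sensitivity:.0%}")  # 2 of 3 caught -> 67%
```

Note that sensitivity alone says nothing about how often the algorithm wrongly labels a "died with" patient as "died of" -- which is part of why the quality of the chart-review gold standard matters so much.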
There's also an editorial about this, which appears to be available in free full-text.
(The same issue of JCO also looks at race, communication, and trust in the doctor-patient relationship in a group of veterans with lung cancer.)
Annals of Internal Medicine has published a trial about side-effect assessment in clinical trials. Not surprisingly, it showed that if you use symptom checklists (as opposed to open-ended questions), patients will report adverse events at much higher rates (77% of patients reported adverse events with a checklist vs. 14% with open-ended inquiry). It's the whole question of symptom incidentalomas all over again.