Medical students are socialized to feel like we don't understand clinical practice well enough to have strong opinions about it. This happens despite the wisdom, thoughtfulness, and good intentions of medical educators; it happens because of basic features of medical education.

First, the structure of medical school makes medical students feel younger—and correspondingly less competent, reasonable, and mature—than we are. Two-thirds of medical students take gap years between college and medical school, so many of us go from living in apartments to living in dorms; from working full-time jobs to attending mandatory 8am lectures; from freely scheduling doctors' appointments to being unable to make plans because we haven't received our schedules. I once found myself moving into a new apartment at 11pm because my lease ended that day, but I had been denied permission to leave class early.

Professionalism assessments also play a role. "Professionalism" is not well defined, and as a result, "behaving professionally" has more or less come to mean "adhering to the norms of the medical profession," or even just "adhering to the norms the people evaluating you have decided to enforce." These include ethical norms, behavioral norms, etiquette norms, and any other norms you might imagine. For instance, a few months into medical school, my class received this email: "Moving the bedframes violates the lease agreement that you signed upon entering [your dorm]. You may have heard a rumor from more senior students that this is an acceptable practice. Unfortunately, it is not... If you have moved or accepted a bed, and we do not hear from you it will be seen as a professionalism issue and be referred to the appropriate body." (emphasis added)

When someone tells you early in your training that something is a professionalism issue, your reaction may be "hm, I don't really see why moving beds is an issue that's relevant to the medical profession, but maybe I'll come to understand." First-year medical students are inclined to be deferential because we recognize how little we know about the medical profession. We do not understand the logic behind, for instance, rounds, patient presentations, and 28-hour shifts.

Many of these norms eventually start to make sense. I've gone from wondering why preceptors harped so emphatically on being able to describe a patient in a single sentence to appreciating the efficiency and clarity of a perfect one-liner.

But plenty of norms in medicine are just bad. Some practices are manifestations of paternalism (e.g., answering patients' questions in a vague, non-committal way), racism or sexism (e.g., undertreating Black patients' pain), antiquated traditions (e.g., wearing coats that may transmit disease), or the brokenness of the US health care system (e.g., not telling patients the cost of their care).

The bad practices are often subtle, and even when they aren't, it can take a long time to realize they aren't justifiable. It took me seeing multiple women faint from pain during gynecologic procedures before I felt confident enough to tentatively suggest that we do things differently. My default stance was "there must be some reason they're doing things this way," and it required an overwhelming amount of evidence to change my mind.
Other professions undoubtedly have a similar problem: new professionals in any field may not feel that they can question established professional norms until they've been around long enough that the norms have become, well, normalized. As a result, it may often be outsiders who push for change. For instance, it was initially parents—not teachers—who lobbied for the abolition of corporal punishment in schools. Similarly, advocacy groups have created helplines to support patients appealing surprise medical bills, even as hospitals have illegally kept prices opaque.

The challenge for actors outside of medicine, though, is that medicine is a complicated and technical field, and it is hard to challenge norms that you do not fully understand. Before I was a medical student, I had a doctor who repeatedly "examined" my injured ankle over my shoe. I didn't realize until my first year of medical school that you can't reliably examine an ankle this way.

In some ways, medical students are uniquely well-positioned to form opinions about which practices are good and bad. This is because we are both insiders and outsiders. We have some understanding of how medicine works, but haven't yet internalized all of its norms. We're expected to learn, so we can ask "Why is this done that way?" and evaluate the rationale. And we rotate through different specialties, so we can compare across them, assessing which practices are pervasive and which are specific to a given context. Our insider/outsider status could be both a weakness and a strength: we may not know medicine, but our time spent working outside of medicine has left us with other knowledge; we may not understand clinical practice yet, but we haven't been numbed to it, either.

One of the hardest things medical students have to do is remain open and humble enough to recognize that many practices will one day make sense, while remaining clear-eyed and critical about those that won't. But the concept of professionalism blurs our vision. It gives us a strong incentive not to form our own opinions because we are being graded on how well we emulate norms. Assuming there are good reasons for these norms resolves cognitive dissonance, while asking hard questions about them risks calling other doctors' professionalism into question.

Thus, professionalism makes us less likely to trust our opinions about the behaviors we witness in the following way. First, professionalism is defined so broadly that norms only weakly tied to medicine fall under its purview. Second, we know we do not understand the rationales underlying many professional norms, so are inclined to defer to more senior clinicians about them. Taken together, these factors set us up to place little stock in the opinions we form about the things we observe in clinical settings, including those we're well-positioned to form opinions about. In the absence of criticism and pushback, entrenched norms are liable to remain entrenched.
This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
There has been too little evaluation of ethics courses in medical education, in part because there is no consensus on what these courses should be trying to achieve. Recently, I argued that medical school ethics courses should help trainees to make more ethical decisions. I also reviewed evidence suggesting that we do not know whether these courses improve decision making in clinical practice. Here, I consider ways to assess the impact of ethics education on real-world decision making and the implications these assessments might have for ethics education.

The Association of American Medical Colleges (AAMC) includes “adherence to ethical principles” among the “clinical activities that residents are expected to perform on day one of residency.” Notably, the AAMC does not say graduates should merely understand ethical principles; rather, they should be able to abide by them. This means that if ethics classes impart knowledge and skills — say, an understanding of ethical principles or improved moral reasoning — but don’t prepare trainees to behave ethically in practice, they have failed to accomplish their overriding objective. Indeed, a 2022 review on the impact of ethics education concludes that there is a “moral obligation” to show that ethics curricula affect clinical practice. Unfortunately, we have little sense of whether ethics courses improve physicians’ ethical decision-making in practice.

Ideally, assessments of ethics curricula should focus on outcomes that are clinically relevant, ethically important, and measurable. Identifying such outcomes is hard, primarily because many of the goals of ethics curricula cannot be easily measured. For instance, ethics curricula may improve ethical decision-making by increasing clinicians’ awareness of the ethical issues they encounter, enabling them to either directly address these dilemmas or seek help. Unfortunately, this skill cannot be readily assessed in clinical settings.

But other real-world outcomes are more measurable. Consider the following example: Physicians regularly make decisions about which patients have decision-making capacity (“capacity”). This determination matters both clinically and ethically, as it establishes whether patients can make medical decisions for themselves. (Notably, capacity is not a binary: patients can retain capacity to make some decisions but not others, or can retain capacity to make decisions with the support of a surrogate.) Incorrectly determining that a patient has or lacks capacity can strip them of fundamental rights or put them at risk of receiving care they do not want. It is thus important that clinicians correctly determine which patients possess capacity and which do not.

However, although a large percentage of hospitalized patients lack capacity, physicians often do not feel confident in their ability to assess capacity, fail to recognize most patients who lack it, and often disagree on which patients have capacity. Finally, although capacity is challenging to assess, there are relatively clear and widely agreed-upon criteria for assessing it, and evaluation tools with high interrater reliability. Given this, it would be both possible and worthwhile to determine whether medical trainees’ ability to assess capacity in clinical settings is enhanced by ethics education.
Here are two potential approaches to evaluating this. First, medical students might perform observed capacity assessments on their psychiatry rotations, just as they perform observed neurological exams on their neurology rotations. Students’ capacity assessments could be compared to a “gold standard,” or the assessments of physicians who have substantial training and experience in evaluating capacity using structured interviewing tools. Second, residents who consult psychiatry for capacity assessments could be asked to first determine whether they think a patient has capacity and why. This determination could be compared with the psychiatrist’s subsequent assessment.

Programs could then randomize trainees to ethics training — or to a given type of ethics training — to determine the effect of ethics education on the quality and accuracy of trainees’ capacity assessments.

Of course, ethics curricula should do much more than make trainees good at assessing capacity. But measuring one clinically and ethically significant endpoint could provide insight into other aspects of ethics education in two important ways. First, if researchers were to determine that trainees do a poor job of assessing capacity because they have too little time, or cannot remember the right questions to ask, or fail to check capacity in the first place, this would point to different solutions — some of which education could help with, and others of which it likely would not. Second, if researchers were to determine that trainees generally do a poor job of assessing capacity because of a given barrier, this could have implications for other kinds of ethical decisions. For instance, if researchers were to find that trainees fail to perform thorough capacity assessments primarily because of time constraints, other ethical decisions would likely be impacted as well. Moreover, this insight could be used to improve ethics curricula. After all, ethics classes should teach clinicians how to respond to the challenges they most often face.

Not all (or perhaps even most) aspects of clinicians’ ethical decision-making are amenable to these kinds of evaluations in clinical settings, so other types of evaluations will play an important role as well. But many routine practices — assessing capacity, obtaining informed consent, advance care planning, and allocating resources, for instance — are amenable to them. And given the importance of these endpoints, it is worth determining whether ethics education improves clinicians’ decision making across these domains.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
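As a rough illustration of the evaluation described above (trainee determinations checked against gold-standard assessments, split by randomized training arm), here is a minimal Python sketch. The data structure, field names, and sample values are assumptions for illustration, not an existing instrument or dataset.

```python
# Hypothetical sketch: compare trainees' capacity determinations against a
# "gold standard" assessment, grouped by randomized ethics-training arm.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CapacityAssessment:
    trainee_id: str
    arm: str                      # randomized arm, e.g. "ethics_course" or "control"
    trainee_determination: bool   # trainee's call: does the patient have capacity?
    gold_standard: bool           # structured assessment by an experienced evaluator

def agreement_by_arm(assessments: list[CapacityAssessment]) -> dict[str, float]:
    """Share of trainee determinations that match the gold standard, per arm."""
    agree: defaultdict[str, int] = defaultdict(int)
    total: defaultdict[str, int] = defaultdict(int)
    for a in assessments:
        agree[a.arm] += a.trainee_determination == a.gold_standard
        total[a.arm] += 1
    return {arm: agree[arm] / total[arm] for arm in total}

# Made-up example data:
sample = [
    CapacityAssessment("t1", "ethics_course", True, True),
    CapacityAssessment("t2", "ethics_course", False, True),
    CapacityAssessment("t3", "control", True, False),
    CapacityAssessment("t4", "control", False, False),
]
print(agreement_by_arm(sample))  # {'ethics_course': 0.5, 'control': 0.5}
```

A real evaluation would of course need many more assessments per arm and an appropriate statistical comparison; the sketch only shows the shape of the endpoint.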
Medical students spend a lot of time learning about conditions they will likely never treat. This weak relationship between what students are taught and what they will treat has negative implications for patient care.

Recently, I looked into discrepancies between U.S. disease burden in 2016 and how often conditions are mentioned in the 2020 edition of First Aid for the USMLE Step 1, an 832-page book sometimes referred to as the medical student’s bible. The content of First Aid provides insight into the material emphasized on Step 1 — the first licensing exam medical students take, and one that is famous for testing doctors on Googleable minutiae. This test shapes medical curricula and students’ independent studying efforts — before Step 1 became pass-fail, students would typically study for it for 70 hours a week for seven weeks, in addition to all the time they spent studying before this dedicated period.

My review identified broad discrepancies between disease burden and the relative frequency with which conditions were mentioned in First Aid. For example, pheochromocytoma, a rare tumor that occurs in about one out of every 150,000 people per year, is mentioned 16 times in First Aid. By contrast, low back pain — the fifth leading cause of disability-adjusted life years, or DALYs, in the U.S., and a condition that one in four Americans has experienced in the past three months — is mentioned only nine times. (Disease burden is commonly measured in DALYs, which combine morbidity and mortality into one metric. The leading causes of DALYs in the U.S. include major contributors to mortality, like ischemic heart disease and lung cancer, as well as major causes of morbidity, like low back pain.)

Similarly, neck pain, which is the eleventh leading cause of DALYs, is mentioned just twice. Both neck and back pain are also often mentioned as symptoms of other conditions (e.g., multiple sclerosis and prostatitis), rather than as issues in and of themselves. Opioid use disorder, the seventh leading cause of DALYs in 2016 and a condition that killed more than 75,000 Americans last year, is mentioned only three times. Motor vehicle accidents are mentioned only four times, despite being the fifteenth leading cause of DALYs.

There are some good reasons why Step 1 content is not closely tied to disease burden. The purpose of the exam is to assess students’ understanding and application of basic science principles to clinical practice. This means that several public health problems that cause significant disease burden — like motor vehicle accidents or gun violence — are barely tested. But it is not clear Step 2, an exam meant to “emphasize health promotion and disease prevention,” does much better. Indeed, in First Aid for the USMLE Step 2, back pain again is mentioned fewer times than pheochromocytoma. Similarly, despite dietary risks posing a greater threat to Americans’ health than any other risk factor (including smoking), First Aid for the USMLE Step 2 says next to nothing about how to reduce these risks.

More broadly, there may also be good reasons why medical curricula should not perfectly align with disease burden. First, more time should be devoted to topics that are challenging to understand or that teach broader physiologic lessons. Just as researchers can gain insights about common diseases by studying rare ones, students can learn broader lessons by studying diseases that cause relatively little disease burden.
Second, after students begin their clinical training, their education will be more closely tied to disease burden. When completing a primary care rotation, students will meet plenty of patients with back and neck pain.

But the reasons some diseases are emphasized and taught about more than others may often be indefensible. Medical curricula seem to be greatly influenced by how well understood different conditions are, meaning curricula can wind up reflecting research funding disparities. For instance, although eating disorders cause substantial morbidity and mortality, research into them has been underfunded. As a result, no highly effective treatments targeting anorexia or bulimia nervosa have emerged, and remission rates are relatively low. Medical schools may not want to emphasize the limitations of medicine or devote resources to teaching about conditions that are multifactorial and resist neat packaging, meaning these disorders are often barely mentioned. But, although eating disorders are not well understood, thousands of papers have been written about them, meaning devoting a few hours to teaching medical students about them would still barely scratch the surface.

And even when a condition is understudied or not well understood, it is worth explaining why. For instance, if heart failure with reduced ejection fraction is discussed more than heart failure with preserved ejection fraction, students may wrongly conclude this has to do with the relative seriousness of these conditions, rather than with the inherent challenge of conducting clinical trials with the latter population (because their condition is less amenable to objective inclusion criteria). Other reasons for curricular disparities may be even more insidious: for instance, the lack of attention to certain diseases may reflect how important the medical community perceives these conditions to be, or whether they tend to affect more empowered or marginalized populations.

The weak link between medical training and disease burden matters: if medical students are not taught about certain conditions, they will be less equipped to treat these conditions. They may also be less inclined to specialize in treating them or to conduct research on them. Thus, although students will encounter patients who have back pain or who face dietary risks, if they and the physicians supervising them have not been taught much about caring for these patients, these patients likely will not receive optimal treatment. And indeed, there is substantial evidence that physicians feel poorly prepared to counsel patients on nutrition, despite this being one of the most common topics patients inquire about. If the lack of curricular attention reflects research and health disparities, failing to emphasize certain conditions may also compound these disparities.

Addressing this problem requires understanding it. Researchers could start by assessing the link between disease burden and Step exam questions, curricular time, and other resources medical students rely on (like the UWorld Step exam question banks). Organizations that influence medical curricula — like the Association of American Medical Colleges and the Liaison Committee on Medical Education — should do the same. Medical schools should also incorporate outside resources to cover topics their curricula do not explore in depth, as several medical schools have done with nutrition education.
But continuing to ignore the relationship between disease burden and curricular time does a disservice to medical students and to the patients they will one day care for.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
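As a rough illustration of the kind of analysis suggested above (lining up how often conditions appear in a study resource against their share of U.S. disease burden), here is a minimal Python sketch. It reuses the mention counts and DALY ranks cited in the post; presenting them side by side as a crude mismatch signal is my own simplification, not an established metric.

```python
# Illustrative sketch: tabulate First Aid mention counts against U.S. DALY ranks
# for the conditions discussed in the post. Figures are those cited above.
conditions = {
    # condition: (mentions_in_first_aid, us_daly_rank)
    "low back pain": (9, 5),
    "opioid use disorder": (3, 7),
    "neck pain": (2, 11),
    "motor vehicle accidents": (4, 15),
    "pheochromocytoma": (16, None),  # rare tumor; not a leading cause of DALYs
}

# Sort by how rarely each condition is mentioned, then print burden alongside.
for name, (mentions, rank) in sorted(conditions.items(), key=lambda kv: kv[1][0]):
    burden = f"DALY rank {rank}" if rank else "not a leading DALY cause"
    print(f"{name:>24}: {mentions:>2} mentions ({burden})")
```

Extending the table to full question banks and curricular hours, as the post recommends, would just mean swapping in larger datasets for the hand-entered dictionary.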
I recently argued that we need to evaluate medical school ethics curricula. Here, I explore how ethics courses became a key component of medical education and what we do know about them.

Although ethics has been a recognized component of medical practice since Hippocrates’ time, ethics education is a more recent innovation. In the 1970s, the medical community was shaken by several high-profile lawsuits alleging unethical behavior by physicians. As medical care advanced — and categories like “brain death” emerged — doctors found themselves facing challenging new dilemmas, and facing old ones more often. In response, in 1977, The Johns Hopkins University School of Medicine became the first medical school to incorporate ethics education into its curriculum.

Throughout the 1980s and 1990s, medical schools increasingly began to incorporate ethics education into their curricula. By 2002, approximately 79 percent of U.S. medical schools offered a formal ethics course. Today, the Association of American Medical Colleges (AAMC) includes “adherence to ethical principles” among the competencies required of medical school graduates. As a result, all U.S. medical schools — and many medical schools around the world — require ethics training.

There is some consensus on the content ethics courses should cover. The AAMC requires medical school graduates to “demonstrate a commitment to ethical principles pertaining to provision or withholding of care, confidentiality, informed consent.” Correspondingly, most medical school ethics courses review issues related to consent, end-of-life care, and confidentiality. But beyond this, the scope of these courses varies immensely (in part because many combine teaching in ethics and professionalism, and there is little consensus on what “professionalism” means).

The format and design of medical school ethics courses also vary. A wide array of pedagogical approaches is employed: most courses rely on some combination of lectures, case-based learning, and small group discussions, but others employ readings, debates, or simulations with standardized patients. These courses also receive differing degrees of emphasis within medical curricula, with some schools spending less than a dozen hours on ethics education and others spending hundreds. (Notably, much of the research on the state of ethics education in U.S. medical schools is nearly twenty years old, though there is little reason to suspect that ethics education has converged during that time, given that medical curricula have in many ways become more diverse.) Finally, what can seem like consensus in approaches to ethics education can mask underlying differences. For instance, although many medical schools describe their ethics courses as “integrated,” schools mean different things by this (e.g., in some cases “integrated” means “interdisciplinary,” and in other cases it means “incorporated into other parts of the curriculum”).

A study from this year reviewed evidence on interventions aimed at improving ethical decision-making in clinical practice. The authors identified eight studies of medical students. Of these, five used written tools to evaluate students’ ethical reasoning and decision-making, while three assessed students’ interactions with standardized patients or used objective structured clinical examinations (OSCEs). Three of these eight studies assessed U.S. students, the most recent of which was published in 1998. These studies found mixed results.
One study found that an ethics course led recipients to engage in more thorough — but not necessarily better — reasoning, while another found that evaluators disagreed so often that it was nearly impossible to achieve consensus about students’ performances. The authors of a 2017 review assessing the effectiveness of ethics education note that it is hard to draw conclusions from the existing data, describing the studies as “vastly heterogeneous” and as bearing “a definite lack of consistency in teaching methods and curriculum.” The authors conclude, “With such an array, the true effectiveness of these methods of ethics teaching cannot currently be well assessed especially with a lack of replication studies.”

The literature on ethics education thus has several gaps. First, many of the studies assessing ethics education in the U.S. are decades old. This matters because medical education has changed significantly during the 21st century. (For instance, many medical schools have substantially restructured their curricula and many students do not regularly attend class in person.) These changes may have implications for the efficacy of ethics curricula. Second, there are very few head-to-head comparisons of ethics education interventions. This is notable because ethics curricula are diverse. Finally, and most importantly, there is almost no evidence that these curricula lead to better decision-making in clinical settings — where it matters.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
Academia often treats all areas of research as important, and two projects that will publish equally well as equally worth exploring. I believe this is misguided. Instead, we should strive to create an academic culture where researchers consider and discuss a project’s likely impact on the world when deciding what to work on.

Though this view is at odds with current norms in academia, there are four reasons why a project’s potential to improve the world should be an explicit consideration. First, research projects can have massively different impacts, ranging from altering the course of human history to collecting dust. To the extent that we can do work that does more to improve people’s lives without imposing major burdens on ourselves, we should. Second, the choice of a research project affects a researcher’s career trajectory, and as some have argued, deciding how to spend one’s career is the most important ethical decision many will ever make. Third, most academic researchers are supported by public research dollars or work at tax-exempt institutions. To the extent that researchers are benefitting from public resources, they have an obligation to use those resources in socially valuable ways. Fourth, most researchers come from advantaged backgrounds. If researchers pick projects primarily based on their own interests, the research agenda will provide too few benefits to populations underrepresented in academia.

One might push back on this view by arguing that the research enterprise functions as an effective market. Perhaps academic researchers already have strong incentives to choose projects that do more to improve the world, given these projects will yield more funding, publications, and job opportunities. On this view, researchers have no reason to consider a project’s likely positive impact; journal editors, grant reviewers, and hiring committees will do this for them. But the academic marketplace is riddled with market failures: some diseases receive far more research funding than other, comparably severe ones; negative findings and replication studies are less likely to get published; research funders don’t always consider the magnitude of a problem in deciding which grants to fund; and so on. And although these problems warrant structural solutions, individual researchers can also help mitigate their effects.

One might also argue that pursuing a career in academia is hard enough even when you’re studying the thing you are most passionate about. (I did a year of Zoom PhD; you don’t have to convince me of this.) On this view, working on the project you’re most interested in should crowd out all other considerations. But while this may be the case for some people, this view doesn’t accord with most conversations I’ve had with fellow graduate students. Many students enter their PhDs unsure about what to work on, and wind up deciding between multiple research areas or projects. Given that most have only worked in a few research areas prior to embarking on a PhD, the advice “choose something you’re passionate about” often isn’t very useful. For many students, adding “choose a project that you think can do more to improve the world” to the list of criteria for selecting a research topic would represent a helpful boundary, rather than an oppressive constraint.

Of course, people will have different understandings of what constitutes an impactful project. Some may aim to address certain inequities; others may want to help as many people as possible.
People also will disagree about which projects matter most even according to the same metrics: a decade ago, many scientists thought developing an mRNA vaccine platform was a good idea, but not one of the most important projects of our time. But the inherent uncertainty about which research is more impactful does not leave us completely in the dark: most would agree that curing a disease that kills millions is more important than curing a disease that causes mild symptoms in a few.

In practice, identifying more beneficial research questions involves guessing at several hard-to-estimate parameters — e.g., How many people are affected by a given problem, and how badly? Are there already enough talented people working on this such that my contributions will be less important? And will the people who read my paper be able to do anything about this? The more basic the research, the harder these kinds of questions are to answer.

My goal here, though, is not to provide practical advice. (Fortunately, groups like Effective Thesis do.) My point is that researchers should approach the question of how much a given project will improve the world with the same rigor they bring to the work of the project itself. Researchers do not need to arrive at the best or most precise answer every time: over the course of their careers, seriously considering which projects are more important and, to some extent, picking projects on this basis will produce a body of work that does more good.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
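To make the kind of back-of-envelope reasoning described above concrete, here is a minimal sketch built around the post's three questions (scale, crowdedness, and whether readers can act on the findings). The multiplicative scoring and every number in it are illustrative assumptions, not a method the post endorses; the point is only that rough, explicit estimates can be written down and compared.

```python
# Hypothetical back-of-envelope impact comparison between two candidate projects.
def rough_project_value(people_affected: float,
                        severity_0_to_1: float,
                        chance_findings_get_used: float,
                        counterfactual_weight: float) -> float:
    """Crude score: scale x severity x usability x how neglected the area is."""
    return people_affected * severity_0_to_1 * chance_findings_get_used * counterfactual_weight

# Two made-up projects a student might be choosing between:
widespread_but_crowded = rough_project_value(5_000_000, 0.3, 0.2, 0.1)
smaller_but_neglected = rough_project_value(200_000, 0.6, 0.4, 0.8)
print(round(widespread_but_crowded), round(smaller_but_neglected))  # 30000 38400
```

Even with made-up inputs, writing the guesses down makes the disagreement explicit: someone who thinks the second project is less tractable can change one number and see whether the ranking flips.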
Health professions students are often required to complete training in ethics. But these curricula vary immensely in their stated objectives, the time devoted to them, when during training students complete them, who teaches them, the content covered, how students are assessed, and the instructional model used. Evaluating these curricula on a common set of standards could help make them more effective.

In general, it is good to evaluate curricula. But there are several reasons to think it may be particularly important to evaluate ethics curricula. The first is that these curricula are incredibly diverse, with one professor noting that the approximately 140 medical schools that offer ethics training do so “in just about 140 different ways.” This suggests there is no consensus on the best way to teach ethics to health professions students. The second is that curricular time is limited and costly, so it is important to make these curricula efficient. Third, when these curricula do work, it would be helpful to identify exactly how and why they work, as this could have broader implications for applied ethics training. Finally, it is possible that some ethics curricula simply don’t work very well.

In order to conclude ethics curricula work, at least two things would have to be true: first, students would have to make ethically suboptimal decisions without these curricula, and second, these curricula would have to cause students to make more ethical decisions. But it’s not obvious both these criteria are satisfied. After all, ethics training is different from other kinds of training health professions students receive. Because most students come in with no background in managing cardiovascular disease, effectively teaching students how to do this will almost certainly lead them to provide better care. But students do enter training with ideas about how to approach ethical issues. If some students’ approaches are reasonable, these students may not benefit much from further training (and indeed, bad training could lead them to make worse decisions). Additionally, multiple studies have found that even professional ethicists do not behave more morally than non-ethicists. If a deep understanding of ethics does not translate into more ethical behavior, providing a few weeks of ethics training to health professions students may not lead them to make more ethical decisions in practice — a primary goal of these curricula.

One challenge in evaluating ethics curricula is that people often disagree on their purpose. For instance, some have emphasized “[improving] students’ moral reasoning about value issues regardless of what their particular set of moral values happens to be.” Others have focused on a variety of goals, from increasing students’ awareness of ethical issues, to learning fundamental concepts in bioethics, to instilling certain virtues. Many of these objectives would be challenging to evaluate: for instance, how does one assess whether an ethics curriculum has increased a student’s “commitment to clinical competence and lifelong education”? And if the goals of ethics curricula differ across institutions, would it even be possible to develop a standardized assessment tool that administrators across institutions would be willing to use?

These are undoubtedly challenges. But educators likely would agree upon at least one straightforward and assessable objective: these curricula should cause health professions students to make more ethical decisions more of the time.
This, too, may seem like an impossible standard to assess: after all, if people agreed on the “more ethical” answers to ethical dilemmas, would these classes need to exist in the first place? But while medical ethicists disagree in certain cases about what these “more ethical” decisions are, in most common cases, there is consensus.

For instance, the overwhelming majority of medical ethicists agree that, in general, capacitated patients should be allowed to make decisions about what care they want, people should be told about the major risks and benefits of medical procedures, patients should not be denied care because of past unrelated behavior, resources should not be allocated in ways that primarily benefit advantaged patients, and so on. In other words, there is consensus on how clinicians should resolve many of the issues they will regularly encounter, and trainees’ understanding of this consensus can be assessed. (Of course, clinicians also may encounter niche or particularly challenging cases over their careers, but building and evaluating ethics curricula on the basis of these rare cases would be akin to building an introductory class on cardiac physiology around rare congenital anomalies.)

Ideally, ethics curricula would be evaluated via randomized controlled trials, but it would be challenging to randomize some students to take a course and others not to. However, at some schools, students could be randomized to complete ethics training at different times of year, and assessments could be administered after some students had completed the training but before others had.

There are also questions about how to assess whether students will make more ethical decisions in practice. More schools could consider using simulations of common ethical scenarios, in which students are asked to perform capacity assessments or obtain informed consent for procedures. But simulations are expensive and time-consuming, so some schools could start by simply conducting a standard pre- and post-course survey assessing how students plan to respond to ethical situations they are likely to face. Of course, saying you will do something on a survey does not necessarily mean you will do that thing in practice, but this could at least give programs a general sense of whether their ethics curricula work and how they compare to other schools’.

Most health professions programs provide training in ethics. But simply providing this training does not ensure it will lead students to make more ethical decisions in practice. Thus, health professions programs should evaluate their curricula using a common set of standards.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
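As one way the pre- and post-course survey described above might be scored, here is a minimal sketch that grades vignette responses against positions the post treats as consensus (for example, respecting a capacitated patient's refusal). The vignettes, answer labels, and scoring scheme are assumptions for illustration, not a validated instrument.

```python
# Hypothetical consensus-based scoring of survey responses before and after an ethics course.
CONSENSUS_ANSWERS = {
    "capacitated_patient_refuses_treatment": "respect_refusal",
    "disclose_major_procedure_risks": "disclose",
    "allocate_scarce_resource_by_ability_to_pay": "do_not",
}

def consensus_score(responses: dict[str, str]) -> float:
    """Fraction of vignette responses matching the consensus position."""
    matches = sum(responses.get(k) == v for k, v in CONSENSUS_ANSWERS.items())
    return matches / len(CONSENSUS_ANSWERS)

pre = {"capacitated_patient_refuses_treatment": "respect_refusal",
       "disclose_major_procedure_risks": "omit_risks",
       "allocate_scarce_resource_by_ability_to_pay": "do_not"}
post = {"capacitated_patient_refuses_treatment": "respect_refusal",
        "disclose_major_procedure_risks": "disclose",
        "allocate_scarce_resource_by_ability_to_pay": "do_not"}
print(consensus_score(pre), consensus_score(post))  # e.g., 0.666... then 1.0
```

With randomized course timing, the same scores collected at a single point in time could be compared between students who have and have not yet completed the training.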
The financial barriers associated with becoming a bioethicist make the field less accessible, undermining the quality and relevance of bioethics research.

Because the boundaries of the field are poorly defined, credentials often serve as a gatekeeping mechanism. For instance, the recent creation of the Healthcare Ethics Consultant-Certified (HEC-C) program, which “identifies and assesses a national standard for the professional practice of clinical healthcare ethics consulting,” is a good idea in theory. But the cost of the exam starts at $495. There is no fee assistance. Given that 99 percent of those who have taken the exam have passed, the exam seems to largely serve as a financial barrier to becoming an ethics consultant.

A related problem arises with Master of Bioethics programs. These degrees can cost more than $80,000 just for tuition — often more than tuition at the same universities’ medical schools. And although many programs entice students with partial scholarships, these degrees likely still cost tens of thousands of dollars. Students who earn these degrees may correspondingly come from advantaged backgrounds or take on substantial debt pursuing them. We don’t actually know, as most of these programs provide little information about their students’ backgrounds, graduates’ debt burden, or the proportion of students who attain jobs in bioethics after earning their degrees. It is possible that some of these programs do provide substantial support for disadvantaged students, but because few programs publicize this information, it is impossible to tell. This lack of transparency is worrisome given that many humanities master’s programs enrich universities while driving students into debt they cannot escape.

These financial barriers aren’t unique to bioethics: students routinely take on significant debt to pursue careers in other humanities disciplines. But financial barriers in bioethics are particularly concerning for three reasons.

First, the pervasiveness of financial barriers is hypocritical. Bioethics is a field that is supposed to be concerned with inequities. Many bioethicists conduct research on the fair distribution of health resources, health disparities, and barriers to healthcare access. If bioethics itself is inaccessible to people who aren’t wealthy, this is at odds with the field’s core tenets. People who want to pursue careers in bioethics should be able to do so, regardless of their financial backgrounds.

Second, people’s backgrounds influence what they choose to work on. Bioethics has long been criticized for focusing on “issues of affluence,” or niche problems that affect only a handful of advantaged people. But if the field is composed primarily of wealthy people, then it is unsurprising that it devotes too little attention to the problems of people who are not wealthy.

Finally, bioethicists often adopt paternalistic attitudes towards people who are poor. For instance, many bioethicists have argued against allowing risky research (e.g., human challenge studies) or compensating people who take on certain risks (e.g., by donating kidneys) because poor people would feel pressure to participate. But it’s odd for bioethicists — most of whom are advantaged — to make claims, usually without citing empirical evidence, about how poor people would feel about or react to policies pertaining to them.
Putting aside the strength of the arguments on both sides of these debates, we should be concerned that the people affected by the policies bioethicists help shape are not well represented in bioethics.

These problems are entrenched, but they are solvable. A first step would be to provide fee assistance for all bioethics exams, conferences, applications, and events. For instance, students can attend the American Society for Bioethics and Humanities (ASBH) Annual Conference for only $60 and should similarly be able to take the HEC-C exam at a discount.

A second step would be to create greater financial transparency around bioethics training programs, and in particular, Master of Bioethics degrees. Programs should publicize information about their students’ backgrounds, how students finance these programs, how much debt students typically incur because of these programs, and the job prospects of graduates. This would put pressure on programs with few disadvantaged students to recruit and support those students. Bioethics organizations — like ASBH and the Hastings Center — should ask universities to release financial information about their Master of Bioethics programs and should consolidate this information so students interested in bioethics can make informed decisions about where to pursue training.

Finally, there are several paid fellowships that provide excellent, hands-on training for people interested in bioethics. The field should work to create and fund more programs like these. Large hospitals — which often have ethics committees and revenues in the billions — could help finance them.

Ultimately, bioethics will be a better field if it becomes a more accessible one. Bioethics should thus work to address financial barriers to entry.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
The Biden administration plans to greatly increase funding for the National Institutes of Health (NIH) in 2022, presenting the agency with new opportunities to better align research funding with public health needs.

The NIH has long been criticized for disproportionately devoting its research dollars to the study of conditions that affect a small and advantaged portion of the global population. For instance, three times as many people have sickle cell disease — which disproportionately affects Black people — as have cystic fibrosis — which disproportionately affects white people. Despite this, the NIH devotes comparable research funding to both diseases. These disparities are further compounded by differences in research funding from non-governmental organizations, with philanthropies spending seventy-five times more per patient on cystic fibrosis research than on sickle cell disease research.

Diseases that disproportionately affect men also receive more NIH funding than those that primarily affect women. This disparity can be seen in the lagging funding for research on gynecologic cancers. The NIH presently spends eighteen times as much on prostate cancer as on ovarian cancer per person-year of life lost per case, and although this difference is partly explained by the fact that prostate cancer is far more prevalent than ovarian cancer, the disparity persists even after prevalence is accounted for. Making matters even worse, funding for research on gynecological cancers has fallen, even as overall NIH funding has increased.

Disparities in what research is funded are further compounded by disparities in who gets funded. Black scientists are about half as likely to receive NIH funding as white scientists, and this discrepancy holds constant across academic ranks (e.g., between Black and white scientists who are full professors). This disparity is partly driven by topic choice, with grant applications from Black scientists focusing more frequently on “health disparities and patient-focused interventions,” which are topics that are less likely to be funded. Recent calls to address structural racism in research funding have led the NIH to commit $90 million to combatting health disparities and researching the health effects of discrimination, although this would represent less than two percent of the Biden administration’s proposed NIH budget.

The disconnect between research funding and public health needs is also driven by the fact that the NIH tends to fund relatively little social science research. For instance, police violence is a pressing public health problem: in 2019, more American men were killed by police violence than by Hodgkin lymphoma or testicular cancer. But while Hodgkin lymphoma and testicular cancer receive tens of millions of dollars of research funding from the NIH every year, along with additional funding from non-governmental organizations and private companies, the NIH funds little research on police violence. In 2021, only six NIH-funded projects mentioned “police violence,” “police shooting,” or “police force” in their title, abstract, or project terms, while 119 mentioned “Hodgkin lymphoma” and 24 mentioned “testicular cancer.”

While many view the NIH as an organization focused exclusively on basic science research, its mandate is much broader.
Indeed, the NIH’s mission is “to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability.” Epidemiologists, health economists, and other social science researchers studying how societies promote or undermine health should thus receive NIH funding that is more proportionate to the magnitude of the health problems they research.

Research funding disparities have multiple causes and warrant different solutions, from prioritizing work conducted by scientists from underrepresented backgrounds, to ensuring that there is gender parity in the size of NIH grants awarded to first-time Principal Investigators. To address the broader problem of scientific priorities not reflecting the size of health problems, the NIH should instruct grant reviewers to consider how many people are affected by a health problem, how serious that health problem is for each person affected by it, and whether a disease primarily affects marginalized populations. In addition, the NIH should commit to funding more research on public health problems — like police violence — that cause substantial harm but receive relatively little attention from the health research enterprise.

As the NIH prepares for a massive influx of funding, it must follow through on its commitment to address health research funding disparities.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
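As a rough illustration of the normalization behind comparisons like the prostate versus ovarian cancer figure above, here is a minimal sketch that divides research funding by a burden measure. The dollar and burden values are placeholders (assumptions), not NIH data; only the ratio logic comes from the post.

```python
# Hypothetical sketch: normalize annual research funding by disease burden so
# that conditions of very different sizes can be compared on the same scale.
def funding_per_burden(annual_funding_usd: float, person_years_lost: float) -> float:
    """Dollars of research funding per person-year of life lost."""
    return annual_funding_usd / person_years_lost

portfolio = {
    # condition: (hypothetical annual funding in USD, hypothetical person-years of life lost)
    "condition A": (300_000_000, 150_000),
    "condition B": (50_000_000, 400_000),
}

for condition, (funding, burden) in portfolio.items():
    print(f"{condition}: ${funding_per_burden(funding, burden):,.0f} per person-year lost")
```

The same normalization could use DALYs or prevalence instead of person-years of life lost; the choice of denominator is itself a value judgment about which burdens count.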
Clinicians across medical settings commonly euphemize or understate pain — a practice that has concerning implications for patient trust, consent, and care quality. Intrauterine device (IUD) placement offers an instructive case study, and highlights the need for transparency in describing painful medical procedures.

IUDs are one of the most effective forms of birth control, and tens of millions of women have them placed every year. Clinicians typically describe the pain associated with IUD insertion as “uncomfortable… but short lived” or “three quick cramps.” But patients who have undergone IUD insertion have described it as “on a cosmic level,” “such blinding agony I could barely see,” and “like someone shocking my cervix with a taser.” Although some patients experience little pain with IUD insertion, most patients experience moderate to severe pain. Clinicians’ language thus paints a picture of pain that is less severe than what many patients experience. And this is not unique to IUD placement: clinicians often refer to pain as “pressure” or call a painful procedure “uncomfortable.”

There are likely several reasons why physicians downplay pain. First, clinicians may not know how much a procedure hurts. Second, clinicians may understate pain to alleviate patients’ anxiety or reduce the pain they experience. Third, if clinicians believe a procedure is in a patient’s best interest, they may downplay the pain associated with it to increase a patient’s likelihood of giving consent.

In some cases, clinicians may not know how painful a procedure is. For instance, one study found that physicians assess IUD insertion to be about half as painful as patients report it to be. But after performing a procedure many times, clinicians should have a reasonable sense of the range of reactions patients typically have. If they’re unsure, they should ask patients, or read accounts of patients who have undergone that procedure.

There is also evidence that describing the pain associated with a procedure can increase patients’ reported pain and anxiety. Downplaying expectations may thus reduce pain. This is an important consideration and justifies not disclosing how painful a procedure is for patients who do not want to know. But there are several problems with this approach. For one, patients can talk to their friends or access information online about other patients’ experiences of a procedure, undermining any potential analgesic effects of downplaying pain. In addition, there are other ways to reduce pain that do not involve misleading patients. Finally, while failing to disclose how painful a procedure is may marginally reduce how much it hurts, there are often substantial discrepancies between patients’ and clinicians’ assessments of pain. Indeed, most studies that have looked at this question have found that clinicians underestimate pain relative to patients, and that for more painful procedures, clinicians’ underestimation is even more pronounced. So patients may still experience more severe pain than expected, undermining their long-term trust in their clinicians.

Clinicians may also recognize that telling a patient how painful a procedure is decreases the likelihood the patient will consent to that procedure. Clinicians may accordingly understate the unpleasantness of procedures that they believe are in a patient’s best interest. For instance, IUDs work extremely well (and are the most commonly used birth control among physicians).
For most patients, the pain associated with IUD insertion lasts for minutes, while IUDs last for years. Clinicians may thus feel that IUDs are the best option for many patients and may downplay the pain of having one placed to increase the likelihood that patients will choose to get one. While this approach is understandable, patients are entitled to choose what care is in their interest. Some may reasonably decide that they do not want to risk incurring severe short-term pain, even if the long-term benefits are substantial.

In addition to undermining trust and the validity of a patient’s consent, downplaying pain may lead clinicians to undertreat it. Indeed, clinicians frequently underrecognize and undertreat pain, and women and people of color are particularly likely to have their pain overlooked.

Admittedly, the pain associated with IUD insertion has proven challenging to treat. For instance, many patients are advised to take Advil before the procedure, despite evidence that it doesn’t help. Other interventions also don’t work very well, leading the American College of Obstetricians and Gynecologists to conclude that “more research is needed to identify effective options to reduce pain for IUD insertion.” But in highly resourced settings, where anesthesia, narcotics, and anti-anxiety medications are available, clinicians can control patients’ procedural pain. For instance, the default is to sedate patients for colonoscopies, even though many patients who have received colonoscopies without sedation report that it is not very painful. Conversely, patients are rarely offered sedation for IUD insertion, even though this would eliminate their pain. Sedation comes with its own risks, but patients are generally given the option of taking on these risks when pain is viewed as sufficiently severe. When pain can be managed and it is not, this reflects value judgments (i.e., about a patient’s ability to tolerate a given level of pain).

Being able to place IUDs in an office without sedation works well for many patients and makes IUDs more accessible. But fear of pain may lead some people to choose a less effective form of birth control or to forgo birth control entirely. Given how well IUDs work, this is unfortunate and preventable.

Ultimately, for IUDs and other important medical procedures, clinicians should ask patients how much they want to know about a procedure and, for those who wish to know, describe the range of pain most patients experience. If patients are concerned about the level of pain they might experience, clinicians should provide them with a range of effective options for managing that pain. Failing to engage in these conversations risks undermining trust, compromising the validity of consent, and undertreating pain.