Where is it?
Biounethical.com, or search Bio(un)ethical on major podcasting platforms (Apple, Spotify, Google).

What is it?
On each episode, we interview an expert on an important and controversial issue in bioethics. For instance, in our first few episodes, we consider the following questions: Should there be risk limits on research involving consenting participants? Does the IRB system do more harm than good? Should patients with intractable mental illness have access to medical aid in dying? What is the point of teaching ethics to professional school students?

Who are we?
I (Leah) am an MD/PhD candidate at Harvard Medical School and the Harvard T.H. Chan School of Public Health. Sophie is a PhD candidate in philosophy at MIT. Previously, we both spent two years as pre-doctoral fellows in the NIH Department of Bioethics. We have different academic backgrounds, different research interests, and different ethical commitments. Despite these differences, in our own conversations, we often wind up agreeing on how we should approach various bioethical issues. And when we continue to disagree, we are generally able to identify the specific sources of disagreement while finding substantial common ground. We hope to do something similar on our podcast, and thereby to advance debates on controversial issues in bioethics.

Who is the podcast for?
We aim to have substantive, in-depth conversations that meaningfully engage with the issues we discuss, which is why the episodes are ~90 minutes long. That said, we hope the podcast will be accessible to anyone interested in bioethical issues, including those without a background in any given subject area. To this end, we provide some context at the start of every episode and define concepts and terms as they arise. You might especially enjoy the podcast if you follow bioethics, philosophy, or effective altruism.

Why are we doing this?
Generally speaking, we hope to improve discourse around bioethical issues in three ways.

First, we are skeptical that certain norms in medicine, science, and public health are the right ones. We want to question them, and hopefully spark further discourse about them.

Second, we aim to build bridges between different communities working independently on related issues. For instance, people in the bioethics community and people in the effective altruism community have researched and written extensively on human challenge trials, resource allocation, pandemic risk, and so on, but there has been relatively limited productive discourse between these communities. We hope the podcast will encourage members of different camps to talk to each other (by, for instance, showing that individuals in each share many of the same ethical commitments and empirical assumptions, even when they arrive at different conclusions about what policies we should implement or actions we should take). Meanwhile, many bioethicists believe philosophers have no meaningful role to play in contemporary bioethics, and many philosophers look down on bioethics as a sub-discipline that requires little philosophical skill. We hope the podcast will demonstrate that philosophy is vital to making progress in bioethics, and that bioethics involves more than the "simple" application of existing concepts and principles to real-world cases.

Third, we want to make high-quality bioethics discourse more accessible to non-bioethicists. Currently, bioethics discourse exists in two main forms.
First, bioethicists often give short quotes to journalists or write brief op-eds on hot-button issues. You can only say so much in a few hundred words, which makes it challenging to discuss bioethical issues in a nuanced way in this format. While important, this kind of black-and-white discussion of bioethical issues ("we shouldn't do challenge trials because they're too risky") can contribute to polarization. Second, bioethicists often convey the deeper versions of their ideas in lengthy journal articles. But non-bioethicists may not have the access, desire, or time to read thousands of words on a narrow bioethical issue. And we don't think someone should have to delve deeply into the bioethics literature to understand a bioethicist's perspective on an issue, or what the bioethics community thinks about it. We thus hope to create discourse at a "medium" depth (deeper than a newspaper article, but shallower than a journal article) for people who are thinking seriously about bioethical issues but want to engage with them more casually.

Logistical stuff
In Season 1, we will release episodes every other Tuesday between August and December, for ten episodes in all. You can sign up for our mailing list to be notified when new episodes are released, or you can follow us on Twitter (leah_pierson and sophiehgibert). Our podcast is supported by a grant from Amplify Creative Grants.

Final thoughts
There's a learning curve associated with figuring out how to do a podcast well, so you'll hear us learn and grow as hosts as the episodes progress. We'd be eager to hear your feedback and suggestions (including for topics and guests), which you can submit at biounethical.com. Please also rate and review the podcast on major podcasting platforms. Thanks in advance for your support. We look forward to sharing these conversations with you.
There is a new editorial in Nature arguing that discussions of the risks posed by AI—including biased decision-making, the elimination of jobs, autocratic states' use of facial recognition, and algorithmic bias in how social goods are distributed—are "being starved of oxygen" by debates about AI existential risk. Specifically, the authors assert that, "like a magician's sleight of hand, [attention to AI x-risk] draws attention away from the real issue." The authors encourage us to "stop talking about tomorrow's AI doomsday when AI poses risks today."
People have been making this claim a lot, and while different versions have been put forward, the general take seems to be something like this: talking about the existential risks AI might pose in the future draws attention away from the risks AI poses today, so we should talk less about the former and more about the latter.
Versions of this claim rely on the assumption that there is a specific kind of relationship between discussions about AI x-risk and other AI risks, namely:

A parasitic relationship: Discussing AI x-risk causes us to discuss other AI risks less.

However, the relationship could instead be:

A mutual relationship: Discussing AI x-risk causes us to discuss other AI risks more.

Or:

A commensal relationship: Discussing AI x-risk doesn't cause us to discuss other AI risks less or more.

It's not clear that we generally starve issues of oxygen by discussing related ones: for instance, the Nature editorial does not suggest that discussions about privacy risks come at the expense of discussions about misinformation risks. And there are reasons to think there could be a mutual relationship between discussions about different kinds of AI risks. Some of the risks discussed in the Nature editorial may lie on the pathway to AI x-risk. Talking about AI x-risk may help direct attention and resources to mitigating present risks, and addressing certain present risks may help mitigate x-risks. Indeed, there are significant overlaps in the regulatory solutions to these issues: that is why Bruce and I suggest that we should discuss and address the existing and emerging risks posed by misaligned AI in tandem.

There are also reasons to think that the relationship between discussions of AI x-risk and other risks could be commensal. Public discourse is messy and complicated. It isn't a limited resource in the way that kidneys or taxpayer dollars are. Writing a blog post about AI x-risk doesn't lead to one fewer blog post about other AI risks. While discourse is partly made up of limited resources (e.g., article space in a print newspaper), even here, there may not be direct tradeoffs between discussions of AI x-risk and discussions of other AI risks. Indeed, many of the articles I have read about AI x-risk also discuss other risks posed by AI. To assess this, I did a quick Google search for "AI existential risk" and reviewed the first five articles that came up under "News," all of which mention risks that are not existential risks—those described above in Nature, algorithmic bias in Wired, weapons that are "not an existential threat" in Barron's, mass unemployment in the Register, and discriminatory hiring and misinformation in MIT Tech Review. So even if newspapers are devoting more article space to AI x-risk, and this leaves less article space for everything else, it's not clear that fewer words are being written about other AI risks.

My point is that we should not accept at face value the claim that discussing AI x-risk causes us to pay insufficient attention to other, extremely important risks related to AI. We should aim to resolve uncertainty about these (potential) tradeoffs with data—i.e., by tracking what articles have been published on each issue over time, looking at what articles are getting cited and shared, assessing what government bodies are debating and what policies they are issuing, and so on. Asserting that there are direct tradeoffs where there may be none risks undermining efforts to have productive discussions about the important, intersecting risks posed by AI.
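To give a flavor of what resolving this with data could look like, here is a minimal sketch. It assumes a hand-labeled dataset of AI news articles; the file name and topic labels are invented for illustration, and nothing here comes from an actual analysis.

```python
# Sketch: test whether x-risk coverage crowds out coverage of other AI risks.
# Assumes a hand-labeled CSV (hypothetical) with one row per article:
#   date,topic
#   2023-05-02,x-risk
#   2023-05-03,bias
import pandas as pd

articles = pd.read_csv("ai_articles.csv", parse_dates=["date"])
articles["month"] = articles["date"].dt.to_period("M")
articles["kind"] = articles["topic"].map(
    lambda t: "x_risk" if t == "x-risk" else "other_risk"
)

# Monthly article counts for each kind of coverage.
monthly = articles.groupby(["month", "kind"]).size().unstack(fill_value=0)

# A parasitic relationship predicts the two series move in opposite
# directions (negative correlation); a mutual one predicts they move
# together (positive correlation).
print(monthly.corr().loc["x_risk", "other_risk"])
```

A negative correlation would be weak evidence for the parasitic view and a positive one for the mutual view; in practice, one would also want to normalize by total AI coverage, since both series could simply rise with overall interest in AI.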
Medical students are socialized to feel like we don't understand clinical practice well enough to have strong opinions about it. This happens despite the wisdom, thoughtfulness, and good intentions of medical educators; it happens because of basic features of medical education.

First, the structure of medical school makes medical students feel younger—and correspondingly less competent, reasonable, and mature—than we are. Two-thirds of medical students take gap years between college and medical school, so many of us go from living in apartments to living in dorms; from working full-time jobs to attending mandatory 8am lectures; from freely scheduling doctors' appointments to being unable to make plans because we haven't received our schedules. I once found myself moving into a new apartment at 11pm because my lease ended that day, but I had been denied permission to leave class early.

Professionalism assessments also play a role. "Professionalism" is not well defined, and as a result, "behaving professionally" has more or less come to mean "adhering to the norms of the medical profession," or even just "adhering to the norms the people evaluating you have decided to enforce." These include ethical norms, behavioral norms, etiquette norms, and any other norms you might imagine. For instance, a few months into medical school, my class received this email: "Moving the bedframes violates the lease agreement that you signed upon entering [your dorm]. You may have heard a rumor from more senior students that this is an acceptable practice. Unfortunately, it is not... If you have moved or accepted a bed, and we do not hear from you it will be seen as a professionalism issue and be referred to the appropriate body." (emphasis added)

When someone tells you early in your training that something is a professionalism issue, your reaction may be "hm, I don't really see why moving beds is an issue that's relevant to the medical profession, but maybe I'll come to understand." First-year medical students are inclined to be deferential because we recognize how little we know about the medical profession. We do not understand the logic behind, for instance, rounds, patient presentations, and 28-hour shifts.

Many of these norms eventually start to make sense. I've gone from wondering why preceptors harped so emphatically on being able to describe a patient in a single sentence to appreciating the efficiency and clarity of a perfect one-liner. But plenty of norms in medicine are just bad. Some practices are manifestations of paternalism (e.g., answering patients' questions in a vague, non-committal way), racism or sexism (e.g., undertreating Black patients' pain), antiquated traditions (e.g., wearing coats that may transmit disease), or the brokenness of the US health care system (e.g., not telling patients the cost of their care). The bad practices are often subtle, and even when they aren't, it can take a long time to realize they aren't justifiable. It took me seeing multiple women faint from pain during gynecologic procedures before I felt confident enough to tentatively suggest that we do things differently. My default stance was "there must be some reason they're doing things this way," and it required an overwhelming amount of evidence to change my mind.
Other professions undoubtedly have a similar problem: new professionals in any field may not feel that they can question established professional norms until they've been around long enough that the norms have become, well, normalized. As a result, it may often be outsiders who push for change. For instance, it was initially parents—not teachers—who lobbied for the abolition of corporal punishment in schools. Similarly, advocacy groups have created helplines to support patients appealing surprise medical bills, even as hospitals have illegally kept prices opaque. The challenge for actors outside of medicine, though, is that medicine is a complicated and technical field, and it is hard to challenge norms that you do not fully understand. Before I was a medical student, I had a doctor who repeatedly "examined" my injured ankle over my shoe. I didn't realize until my first year of medical school that you can't reliably examine an ankle this way.

In some ways, medical students are uniquely well positioned to form opinions about which practices are good and bad. This is because we are both insiders and outsiders. We have some understanding of how medicine works, but haven't yet internalized all of its norms. We're expected to learn, so we can ask "Why is this done that way?" and evaluate the rationale. And we rotate through different specialties, so we can compare across them, assessing which practices are pervasive and which are specific to a given context. Our insider/outsider status could be both a weakness and a strength: we may not know medicine, but our time spent working outside of medicine has left us with other knowledge; we may not understand clinical practice yet, but we haven't been numbed to it, either.

One of the hardest things medical students have to do is remain open and humble enough to recognize that many practices will one day make sense, while remaining clear-eyed and critical about those that won't. But the concept of professionalism blurs our vision. It gives us a strong incentive not to form our own opinions, because we are being graded on how well we emulate norms. Assuming there are good reasons for these norms resolves cognitive dissonance, while asking hard questions about them risks calling other doctors' professionalism into question.

Professionalism thus makes us less likely to trust our opinions about the behaviors we witness, in the following way. First, professionalism is defined so broadly that norms only weakly tied to medicine fall under its purview. Second, we know we do not understand the rationales underlying many professional norms, so we are inclined to defer to more senior clinicians about them. In combination, these factors lead us to place little stock in the opinions we form about what we observe in clinical settings, including the things we're well positioned to form opinions about. In the absence of criticism and pushback, entrenched norms are liable to remain entrenched.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
There has been too little evaluation of ethics courses in medical education, in part because there is no consensus on what these courses should be trying to achieve. Recently, I argued that medical school ethics courses should help trainees make more ethical decisions. I also reviewed evidence suggesting that we do not know whether these courses improve decision-making in clinical practice. Here, I consider ways to assess the impact of ethics education on real-world decision-making, and the implications these assessments might have for ethics education.

The Association of American Medical Colleges (AAMC) includes "adherence to ethical principles" among the "clinical activities that residents are expected to perform on day one of residency." Notably, the AAMC does not say graduates should merely understand ethical principles; rather, they should be able to abide by them. This means that if ethics classes impart knowledge and skills — say, an understanding of ethical principles or improved moral reasoning — but don't prepare trainees to behave ethically in practice, they have failed to accomplish their overriding objective. Indeed, a 2022 review on the impact of ethics education concludes that there is a "moral obligation" to show that ethics curricula affect clinical practice. Unfortunately, we have little sense of whether ethics courses improve physicians' ethical decision-making in practice.

Ideally, assessments of ethics curricula should focus on outcomes that are clinically relevant, ethically important, and measurable. Identifying such outcomes is hard, primarily because many of the goals of ethics curricula cannot be easily measured. For instance, ethics curricula may improve ethical decision-making by increasing clinicians' awareness of the ethical issues they encounter, enabling them to either directly address these dilemmas or seek help. Unfortunately, this skill cannot be readily assessed in clinical settings. But other real-world outcomes are more measurable.

Consider the following example: physicians regularly make decisions about which patients have decision-making capacity ("capacity"). This determination matters both clinically and ethically, as it establishes whether patients can make medical decisions for themselves. (Notably, capacity is not binary: patients can retain capacity to make some decisions but not others, or can retain capacity to make decisions with the support of a surrogate.) Incorrectly determining that a patient has or lacks capacity can strip them of fundamental rights or put them at risk of receiving care they do not want. It is thus important that clinicians correctly determine which patients possess capacity and which do not. However, although a large percentage of hospitalized patients lack capacity, physicians often do not feel confident in their ability to assess capacity, fail to identify most of the patients who lack capacity, and often disagree on which patients have capacity. Finally, although capacity is challenging to assess, there are relatively clear and widely agreed upon criteria for assessing it, and evaluation tools with high interrater reliability. Given this, it would be both possible and worthwhile to determine whether medical trainees' ability to assess capacity in clinical settings is enhanced by ethics education. (A toy sketch of how such assessments could be scored follows.)
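To make "accuracy" concrete before turning to study designs: below is a minimal sketch, with entirely hypothetical data not drawn from any study, of how trainees' capacity determinations could be scored against a gold-standard evaluator's, using Cohen's kappa, a standard interrater agreement statistic.

```python
# Sketch: score trainees' capacity determinations against a gold standard.
# Data are hypothetical: 1 = "has capacity", 0 = "lacks capacity".
trainee = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
gold    = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]

n = len(trainee)
observed = sum(t == g for t, g in zip(trainee, gold)) / n

# Chance agreement: the probability both raters say "capacity" plus the
# probability both say "no capacity", assuming independent ratings.
p_t, p_g = sum(trainee) / n, sum(gold) / n
expected = p_t * p_g + (1 - p_t) * (1 - p_g)

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = (observed - expected) / (1 - expected)
print(f"raw agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

Unlike raw agreement, kappa discounts the agreement two raters would reach by guessing, which matters when most patients on a given service either clearly have or clearly lack capacity.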
Here are two potential approaches to evaluating this. First, medical students might perform observed capacity assessments on their psychiatry rotations, just as they perform observed neurological exams on their neurology rotations. Students' capacity assessments could be compared to a "gold standard": the assessments of physicians who have substantial training and experience in evaluating capacity using structured interviewing tools. Second, residents who consult psychiatry for capacity assessments could be asked to first determine whether they think a patient has capacity and why. This determination could be compared with the psychiatrist's subsequent assessment. Programs could then randomize trainees to ethics training — or to a given type of ethics training — to determine the effect of ethics education on the quality and accuracy of trainees' capacity assessments.

Of course, ethics curricula should do much more than make trainees good at assessing capacity. But measuring one clinically and ethically significant endpoint could provide insight into other aspects of ethics education in two important ways. First, if researchers were to determine that trainees do a poor job of assessing capacity because they have too little time, or cannot remember the right questions to ask, or fail to check capacity in the first place, this would point to different solutions — some of which education could help with, and others of which it likely could not. Second, if researchers were to determine that trainees generally do a poor job of assessing capacity because of a given barrier, this could have implications for other kinds of ethical decisions. For instance, if researchers were to find that trainees fail to perform thorough capacity assessments primarily because of time constraints, other ethical decisions would likely be affected as well. Moreover, this insight could be used to improve ethics curricula. After all, ethics classes should teach clinicians how to respond to the challenges they most often face.

Not all (or perhaps even most) aspects of clinicians' ethical decision-making are amenable to these kinds of evaluations in clinical settings, meaning other types of evaluation will play an important role as well. But many routine practices — assessing capacity, obtaining informed consent, advance care planning, and allocating resources, for instance — are amenable. And given the importance of these endpoints, it is worth determining whether ethics education improves clinicians' decision-making across these domains.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
Medical students spend a lot of time learning about conditions they will likely never treat. This weak relationship between what students are taught and what they will treat has negative implications for patient care.

Recently, I looked into discrepancies between U.S. disease burden in 2016 and how often conditions are mentioned in the 2020 edition of First Aid for the USMLE Step 1, an 832-page book sometimes referred to as the medical student's bible. The content of First Aid provides insight into the material emphasized on Step 1 — the first licensing exam medical students take, and one that is famous for testing doctors on Googleable minutiae. This test shapes medical curricula and students' independent studying efforts — before Step 1 became pass-fail, students would typically study for it for 70 hours a week for seven weeks, in addition to all the time they spent studying before this dedicated period.

My review identified broad discrepancies between disease burden and the relative frequency with which conditions are mentioned in First Aid. For example, pheochromocytoma — a rare tumor that occurs in about one out of every 150,000 people per year — is mentioned 16 times in First Aid. By contrast, low back pain — the fifth leading cause of disability-adjusted life years, or DALYs, in the U.S., and a condition that one in four Americans has experienced in the last three months — is mentioned only nine times. (Disease burden is commonly measured in DALYs, which combine morbidity and mortality into one metric. The leading causes of DALYs in the U.S. include major contributors to mortality, like ischemic heart disease and lung cancer, as well as major causes of morbidity, like low back pain.) Similarly, neck pain, the eleventh leading cause of DALYs, is mentioned just twice. Both neck and back pain are also often mentioned as symptoms of other conditions (e.g., multiple sclerosis and prostatitis), rather than as issues in and of themselves. Opioid use disorder, the seventh leading cause of DALYs in 2016 and a condition that killed more than 75,000 Americans last year, is mentioned only three times. Motor vehicle accidents are mentioned only four times, despite being the fifteenth leading cause of DALYs.

There are some good reasons why Step 1 content is not closely tied to disease burden. The purpose of the exam is to assess students' understanding and application of basic science principles to clinical practice. This means that several public health problems that cause significant disease burden — like motor vehicle accidents or gun violence — are barely tested. But it is not clear that Step 2, an exam meant to "emphasize health promotion and disease prevention," does much better. Indeed, in First Aid for the USMLE Step 2, back pain is again mentioned fewer times than pheochromocytoma. Similarly, despite dietary risks posing the greatest health threat to Americans (greater even than smoking), First Aid for the USMLE Step 2 says next to nothing about how to reduce these risks.

More broadly, there may also be good reasons why medical curricula should not perfectly align with disease burden. First, more time should be devoted to topics that are challenging to understand or that teach broader physiologic lessons. Just as researchers can gain insights about common diseases by studying rare ones, students can learn broader lessons by studying diseases that cause relatively little disease burden.
Second, after students begin their clinical training, their education will be more closely tied to disease burden. When completing a primary care rotation, students will meet plenty of patients with back and neck pain.

But the reasons some diseases are emphasized and taught about more than others may often be indefensible. Medical curricula seem to be greatly influenced by how well understood different conditions are, meaning curricula can wind up reflecting research funding disparities. For instance, although eating disorders cause substantial morbidity and mortality, research into them has been underfunded. As a result, no highly effective treatments targeting anorexia or bulimia nervosa have emerged, and remission rates are relatively low. Medical schools may not want to emphasize the limitations of medicine or devote resources to teaching about conditions that are multifactorial and resist neat packaging, so these disorders are often barely mentioned. But although eating disorders are not well understood, thousands of papers have been written about them; devoting a few hours to teaching medical students about them would still barely scratch the surface. And even when a condition is understudied or not well understood, it is worth explaining why. For instance, if heart failure with reduced ejection fraction is discussed more than heart failure with preserved ejection fraction, students may wrongly conclude this has to do with the relative seriousness of these conditions, rather than with the inherent challenge of conducting clinical trials with the latter population (because their condition is less amenable to objective inclusion criteria). Other reasons for curricular disparities may be even more insidious: for instance, the lack of attention to certain diseases may reflect how important the medical community perceives these conditions to be, or whether they tend to affect more empowered or marginalized populations.

The weak link between medical training and disease burden matters: if medical students are not taught about certain conditions, they will be less equipped to treat them. They may also be less inclined to specialize in treating them or to conduct research on them. Thus, although students will encounter patients who have back pain or face dietary risks, if they and the physicians supervising them have not been taught much about caring for these patients, these patients likely will not receive optimal treatment. And indeed, there is substantial evidence that physicians feel poorly prepared to counsel patients on nutrition, despite this being one of the most common topics patients ask about. If the lack of curricular attention reflects research and health disparities, failing to emphasize certain conditions may also compound these disparities.

Addressing this problem requires understanding it. Researchers could start by assessing the link between disease burden and Step exam questions, curricular time, and other resources medical students rely on (like the UWorld Step exam question banks); a toy version of this kind of audit appears below. Organizations that influence medical curricula — like the Association of American Medical Colleges and the Liaison Committee on Medical Education — should do the same. Medical schools should also incorporate outside resources to cover topics their curricula do not explore in depth, as several medical schools have done with nutrition education.
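As a toy illustration of what such an audit could look like, the sketch below uses only the handful of figures quoted in this post, plus an invented DALY rank for pheochromocytoma (which falls far down the list); it is meant to show the shape of the analysis, not to report a result.

```python
# Sketch: quantify the (mis)match between U.S. disease burden and
# curricular emphasis. Mention counts are the First Aid figures cited
# in this post; DALY ranks for the four ranked conditions are as cited,
# and pheochromocytoma's rank of 200 is a hypothetical stand-in.
from scipy.stats import spearmanr

conditions = ["low back pain", "opioid use disorder", "neck pain",
              "motor vehicle accidents", "pheochromocytoma"]
daly_rank = [5, 7, 11, 15, 200]   # lower rank = more disease burden
mentions  = [9, 3, 2, 4, 16]      # First Aid mention counts

# If emphasis tracked burden, the highest-burden (lowest-rank) conditions
# would get the most mentions, i.e., a strongly negative correlation.
# With these toy numbers, rho comes out positive (0.3): mentions track
# burden in the wrong direction, which is the mismatch described above.
rho, p = spearmanr(daly_rank, mentions)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```

A real audit would of course use the full DALY rankings and systematically counted mentions across question banks and curricula, not five cherry-picked conditions.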
But continuing to ignore the relationship between disease burden and curricular time does a disservice to medical students and to the patients they will one day care for.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
I recently argued that we need to evaluate medical school ethics curricula. Here, I explore how ethics courses became a key component of medical education and what we do know about them.

Although ethics has been a recognized component of medical practice since Hippocrates' time, ethics education is a more recent innovation. In the 1970s, the medical community was shaken by several high-profile lawsuits alleging unethical behavior by physicians. As medical care advanced — and categories like "brain death" emerged — doctors found themselves facing challenging new dilemmas, and old ones more often. In response, in 1977, the Johns Hopkins University School of Medicine became the first medical school to incorporate ethics education into its curriculum. Throughout the 1980s and 1990s, medical schools increasingly followed suit. By 2002, approximately 79 percent of U.S. medical schools offered a formal ethics course. Today, the Association of American Medical Colleges (AAMC) includes "adherence to ethical principles" among the competencies required of medical school graduates. As a result, all U.S. medical schools — and many medical schools around the world — require ethics training.

There is some consensus on the content ethics courses should cover. The AAMC requires medical school graduates to "demonstrate a commitment to ethical principles pertaining to provision or withholding of care, confidentiality, informed consent." Correspondingly, most medical school ethics courses review issues related to consent, end-of-life care, and confidentiality. But beyond this, the scope of these courses varies immensely (in part because many combine teaching in ethics and professionalism, and there is little consensus on what "professionalism" means).

The format and design of medical school ethics courses also vary. Schools employ a wide array of pedagogical approaches: most courses rely on some combination of lectures, case-based learning, and small group discussions, but others employ readings, debates, or simulations with standardized patients. These courses also receive differing degrees of emphasis within medical curricula, with some schools spending less than a dozen hours on ethics education and others spending hundreds. (Notably, much of the research on the state of ethics education in U.S. medical schools is nearly twenty years old, though there is little reason to suspect that ethics education has converged during that time, given that medical curricula have in many ways become more diverse.) Finally, what can seem like consensus in approaches to ethics education can mask underlying differences. For instance, although many medical schools describe their ethics courses as "integrated," schools mean different things by this (e.g., in some cases "integrated" means "interdisciplinary," and in other cases it means "incorporated into other parts of the curriculum").

A study from this year reviewed evidence on interventions aimed at improving ethical decision-making in clinical practice. The authors identified eight studies of medical students. Of these, five used written tools to evaluate students' ethical reasoning and decision-making, while three assessed students' interactions with standardized patients or used objective structured clinical examinations (OSCEs). Three of these eight studies assessed U.S. students, the most recent of which was published in 1998. These studies found mixed results.
One study found that an ethics course led recipients to engage in more thorough — but not necessarily better — reasoning, while another found that evaluators disagreed so often that it was nearly impossible to reach consensus about students' performances. The authors of a 2017 review assessing the effectiveness of ethics education note that it is hard to draw conclusions from the existing data, describing the studies as "vastly heterogeneous" and as bearing "a definite lack of consistency in teaching methods and curriculum." The authors conclude: "With such an array, the true effectiveness of these methods of ethics teaching cannot currently be well assessed especially with a lack of replication studies."

The literature on ethics education thus has several gaps. First, many of the studies assessing ethics education in the U.S. are decades old. This matters because medical education has changed significantly during the 21st century. (For instance, many medical schools have substantially restructured their curricula, and many students do not regularly attend class in person.) These changes may have implications for the efficacy of ethics curricula. Second, there are very few head-to-head comparisons of ethics education interventions. This is notable because ethics curricula are diverse. Finally, and most importantly, there is almost no evidence that these curricula lead to better decision-making in clinical settings — where it matters.

A slightly longer version of this piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
Recently, Derek Thompson pointed out in the Atlantic that the U.S. has adopted myriad policies that limit the supply of doctors, despite the fact that there aren't enough of them. And the maldistribution of physicians — with far too few pursuing primary care or working in rural areas — is arguably an even bigger problem.

The American Medical Association (AMA) bears substantial responsibility for the policies that led to physician shortages. Twenty years ago, the AMA lobbied for reducing the number of medical schools, capping federal funding for residencies, and cutting a quarter of all residency positions. Promoting these policies was a mistake, but an understandable one: the AMA believed an influential report that warned of an impending physician surplus. To its credit, in recent years, the AMA has largely reversed course. For instance, in 2019, the AMA urged Congress to remove the very caps on Medicare-funded residency slots it helped create.

But the AMA has held out in one important respect. It continues to lobby intensely against laws that would allow other clinicians to perform tasks traditionally performed by physicians, commonly called "scope of practice" laws. Indeed, in 2020 and 2021, the AMA touted more advocacy efforts related to scope of practice than it did for any other issue, including COVID-19.

The AMA's stated justification for its aggressive scope of practice lobbying is, roughly, that allowing patients to be cared for by providers with less than a decade of training compromises patient safety and increases health care costs. But while it may be reasonable for the AMA to lobby against some legislation expanding the scope of non-physicians, the AMA is currently playing whack-a-mole with these laws, fighting them indiscriminately as they come up. This general approach isn't well supported by data — the removal of scope-of-practice restrictions has not been linked to worse care — and it undermines the AMA's credibility. The AMA's own scope of practice website hardly bolsters its case. Under a heading that states "scope expansion does not equal expanding access to care," the AMA claims only that "nonphysician providers (such as [nurse practitioners]) are more likely to practice in the same geographic locations as physicians" and that "despite the rising number of [nurse practitioners] across the country, health care shortages still persist." But unsurprisingly, both physicians and nurse practitioners (NPs) are more likely to be found in geographic locations with more people, and NPs in fact represent a larger share of the primary care workforce in rural areas.

The AMA's scope of practice lobbying is particularly frustrating because the Association could improve both the supply and allocation of physicians in a more evidence-based way: by reforming U.S. medical education. In other countries, physicians receive fewer years of training but provide comparable care. Instead of insisting that NPs and other clinicians get more training, the AMA should be working to make U.S. medical education more efficient by pushing for the creation of more three-year medical degrees, more combined undergraduate and medical school programs, and shorter pathways into highly needed specialties. The AMA could also make careers in primary care and rural areas more accessible by lobbying for more loan forgiveness programs, scholarships, and training opportunities for medical students interested in these paths.

Now is a pivotal time for the AMA to reconsider its aggressive scope of practice lobbying.
Temporary regulations allowing NPs and other clinicians to do more during the COVID-19 pandemic could be evaluated and, if safe and cost-effective, expanded. At a time when the health care workforce is facing an unprecedented crisis, the AMA should fully atone for the workforce shortages it helped create.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
Academia often treats all areas of research as important, and two projects that will publish equally well as equally worth pursuing. I believe this is misguided. Instead, we should strive to create an academic culture where researchers consider and discuss a project's likely impact on the world when deciding what to work on.

Though this view is at odds with current norms in academia, there are four reasons why a project's potential to improve the world should be an explicit consideration. First, research projects can have massively different impacts, ranging from altering the course of human history to collecting dust. To the extent that we can do work that does more to improve people's lives without imposing major burdens on ourselves, we should. Second, the choice of a research project affects a researcher's career trajectory, and as some have argued, deciding how to spend one's career is the most important ethical decision many people will ever make. Third, most academic researchers are supported by public research dollars or work at tax-exempt institutions. To the extent that researchers benefit from public resources, they have an obligation to use those resources in socially valuable ways. Fourth, most researchers come from advantaged backgrounds. If researchers pick projects primarily based on their own interests, the research agenda will provide too few benefits to populations underrepresented in academia.

One might push back on this view by arguing that the research enterprise functions as an effective market. Perhaps academic researchers already have strong incentives to choose projects that do more to improve the world, given that these projects will yield more funding, publications, and job opportunities. On this view, researchers have no reason to consider a project's likely positive impact; journal editors, grant reviewers, and hiring committees will do it for them. But the academic marketplace is riddled with market failures: some diseases receive far more research funding than other, comparably severe ones; negative findings and replication studies are less likely to get published; research funders don't always consider the magnitude of a problem in deciding which grants to fund; and so on. And although these problems warrant structural solutions, individual researchers can also help mitigate their effects.

One might also argue that pursuing a career in academia is hard enough even when you're studying the thing you are most passionate about. (I did a year of Zoom PhD; you don't have to convince me of this.) On this view, working on the project you're most interested in should crowd out all other considerations. But while this may be true for some people, it doesn't accord with most conversations I've had with fellow graduate students. Many students enter their PhDs unsure about what to work on, and wind up deciding between multiple research areas or projects. Given that most have only worked in a few research areas before embarking on a PhD, the advice "choose something you're passionate about" often isn't very useful. For many students, adding "choose a project that you think can do more to improve the world" to the list of criteria for selecting a research topic would represent a helpful boundary, rather than an oppressive constraint.

Of course, people will have different understandings of what constitutes an impactful project. Some may aim to address certain inequities; others may want to help as many people as possible.
People also will disagree about which projects matter most, even according to the same metrics: a decade ago, many scientists thought developing an mRNA vaccine platform was a good idea, but not one of the most important projects of our time. But the inherent uncertainty about which research is more impactful does not leave us completely in the dark: most would agree that curing a disease that kills millions is more important than curing a disease that causes mild symptoms in a few.

In practice, identifying more beneficial research questions involves guessing at several hard-to-estimate parameters — e.g., how many people are affected by a given problem, and how badly? Are there already enough talented people working on it that my contributions will be less important? And will the people who read my paper be able to do anything about the problem? The more basic the research, the harder these kinds of questions are to answer. (A toy numerical version of this kind of estimate appears at the end of this post.)

My goal here, though, is not to provide practical advice. (Fortunately, groups like Effective Thesis do.) My point is that researchers should approach the question of how much a given project will improve the world with the same rigor they bring to the work of the project itself. Researchers do not need to arrive at the best or most precise answer every time: over the course of their careers, seriously considering which projects are more important and, to some extent, picking projects on this basis will produce a body of work that does more good.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
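As promised, here is a toy numerical version of the estimate described above. Every number is invented, and the multiplicative model is a deliberate oversimplification; the point is only that even rough guesses at these parameters can rank projects.

```python
# Sketch: a back-of-the-envelope comparison of two hypothetical projects,
# using the parameters discussed in the post. All numbers are invented.
def expected_impact(people_affected, severity, tractability, neglectedness):
    """Crude multiplicative estimate: who is affected and how badly,
    how likely the work is to help, and how much it adds at the margin."""
    return people_affected * severity * tractability * neglectedness

# Project A: common, moderately severe problem; crowded field.
a = expected_impact(people_affected=10_000_000, severity=0.3,
                    tractability=0.1, neglectedness=0.05)

# Project B: rarer but severe problem that few people work on.
b = expected_impact(people_affected=500_000, severity=0.8,
                    tractability=0.1, neglectedness=0.5)

print(f"A: {a:,.0f}  B: {b:,.0f}")  # B comes out ~1.3x higher here
```

The precise outputs mean nothing; what matters is that writing the guesses down makes the disagreements (about severity, crowdedness, and so on) explicit and discussable.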
Health professions students are often required to complete training in ethics. But these curricula vary immensely in their stated objectives, the time devoted to them, when during training students complete them, who teaches them, what content they cover, how students are assessed, and what instructional models they use. Evaluating these curricula against a common set of standards could help make them more effective.

In general, it is good to evaluate curricula. But there are several reasons to think it may be particularly important to evaluate ethics curricula. First, these curricula are incredibly diverse, with one professor noting that the approximately 140 medical schools that offer ethics training do so "in just about 140 different ways." This suggests there is no consensus on the best way to teach ethics to health professions students. Second, curricular time is limited and costly, so it is important to use it efficiently. Third, when these curricula do work, it would be helpful to identify exactly how and why, as this could have broader implications for applied ethics training. Finally, it is possible that some ethics curricula simply don't work very well.

In order to conclude that ethics curricula work, at least two things would have to be true: first, students would have to make ethically suboptimal decisions without these curricula, and second, these curricula would have to cause students to make more ethical decisions. But it's not obvious that both criteria are satisfied. After all, ethics training is different from other kinds of training health professions students receive. Because most students come in with no background in managing cardiovascular disease, effectively teaching students how to do this will almost certainly lead them to provide better care. But students do enter training with ideas about how to approach ethical issues. If some students' approaches are reasonable, those students may not benefit much from further training (and indeed, bad training could lead them to make worse decisions). Additionally, multiple studies have found that even professional ethicists do not behave more morally than non-ethicists. If a deep understanding of ethics does not translate into more ethical behavior, providing a few weeks of ethics training to health professions students may not lead them to make more ethical decisions in practice — a primary goal of these curricula.

One challenge in evaluating ethics curricula is that people often disagree about their purpose. For instance, some have emphasized "[improving] students' moral reasoning about value issues regardless of what their particular set of moral values happens to be." Others have focused on a variety of goals, from increasing students' awareness of ethical issues, to teaching fundamental concepts in bioethics, to instilling certain virtues. Many of these objectives would be challenging to evaluate: for instance, how does one assess whether an ethics curriculum has increased a student's "commitment to clinical competence and lifelong education"? And if the goals of ethics curricula differ across institutions, would it even be possible to develop a standardized assessment tool that administrators across institutions would be willing to use? These are undoubtedly challenges. But educators likely would agree on at least one straightforward and assessable objective: these curricula should cause health professions students to make more ethical decisions more of the time.
This, too, may seem like an impossible standard to assess: after all, if people agreed on the "more ethical" answers to ethical dilemmas, would these classes need to exist in the first place? But while medical ethicists disagree about what the "more ethical" decisions are in certain cases, in most common cases there is consensus. For instance, the overwhelming majority of medical ethicists agree that, in general, capacitated patients should be allowed to make decisions about what care they want, people should be told about the major risks and benefits of medical procedures, patients should not be denied care because of past unrelated behavior, resources should not be allocated in ways that primarily benefit advantaged patients, and so on. In other words, there is consensus on how clinicians should resolve many of the issues they will regularly encounter, and trainees' understanding of this consensus can be assessed. (Of course, clinicians also may encounter niche or particularly challenging cases over their careers, but building and evaluating ethics curricula on the basis of these rare cases would be akin to building an introductory class on cardiac physiology around rare congenital anomalies.)

Ideally, ethics curricula would be evaluated via randomized controlled trials, but it would be challenging to randomize some students to take a course and others not to. However, at some schools, students could be randomized to complete ethics training at different times of year, and assessments could be done before any students had completed the training and again after some students had completed it. (A sketch of how the resulting data could be analyzed appears at the end of this post.)

There are also questions about how to assess whether students will make more ethical decisions in practice. More schools could consider using simulations of common ethical scenarios, in which students might be asked to perform capacity assessments or obtain informed consent for procedures. But simulations are expensive and time-consuming, so some schools could start by simply conducting a standard pre- and post-course survey assessing how students plan to respond to ethical situations they are likely to face. Of course, saying you will do something on a survey does not necessarily mean you will do it in practice, but this approach could at least give programs a general sense of whether their ethics curricula work and how they compare to other schools'.

Most health professions programs provide training in ethics. But simply providing this training does not ensure it will lead students to make more ethical decisions in practice. Thus, health professions programs should evaluate their curricula using a common set of standards.

This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
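As promised, a sketch of how the randomized-timing data could be analyzed. At mid-year, students randomized to early training have completed the course and the rest have not, so comparing the two groups' survey scores estimates the course's effect. All scores below are invented for illustration.

```python
# Sketch: analysis for the randomized-timing design described above.
# Group A completed ethics training in the fall; group B has not yet.
# Survey scores are hypothetical (0-100 scale).
from statistics import mean
from scipy.stats import ttest_ind

trained   = [72, 80, 65, 78, 74, 69, 81, 77]   # group A: course done
untrained = [70, 66, 61, 73, 68, 64, 71, 69]   # group B: not yet

# Because group assignment was randomized, a between-group difference
# at mid-year can be attributed to the training itself.
t, p = ttest_ind(trained, untrained)
print(f"mean difference = {mean(trained) - mean(untrained):.1f} points")
print(f"t = {t:.2f}, p = {p:.3f}")
```

Real cohorts would be far larger, and schools would likely also adjust for baseline scores; the point is only that randomizing the timing of a required course turns an ordinary pre/post survey into something closer to a controlled trial.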
Today, the average medical student graduates with more than $215,000 of debt from medical school alone. The root cause of this problem — rising medical school tuition — can and must be addressed.

In real dollars, a medical degree costs 750 percent more today than it did seventy years ago, and more than twice as much as it did in 1992. These rising costs are closely linked to rising debt, which has more than quadrupled since 1978 after accounting for inflation. Physicians with more debt are more likely to experience burnout, substance use disorders, and worse mental health. And, as the cost of medical education has risen, the share of medical students hailing from low-income backgrounds has fallen precipitously, compounding inequities in medical education.

These changes are bad for patients, who benefit from having doctors who hail from diverse backgrounds and who aren't burned out. But the high cost of medical education is bad for patients in other ways too, as physicians who graduate with more debt are more likely to pursue lucrative specialties, rather than lower-paying but badly needed ones, such as primary care. Doctors with more debt are also less likely to practice in underserved areas.

The high cost of medical education is also bad for the public. A substantial portion of medical school loans are financed by the government, and nearly 40 percent of medical students plan to pursue programs like Public Service Loan Forgiveness (PSLF). When students succeed at having their loans forgiven, taxpayers wind up footing a portion of the bill, and the higher these loans are, the larger the bill is. The public benefits from programs like PSLF to the extent that such programs incentivize physicians to pursue more socially valuable careers. But these programs don't address the underlying cause of rising debt: rising medical school tuition.

Despite the detrimental effects of the rising cost of medical education, little has been done to address the issue. There are several reasons for this. First, most physicians make a lot of money. Policymakers may correspondingly view medical students' debt — which most students can repay — as a relatively minor problem, particularly when compared to other students' debt. The Association of American Medical Colleges (AAMC) employed similar logic last year when it advised prospective medical students not to "let debt stop your dreams," writing: "Despite the expense, medical school remains an outstanding investment. The average salary for physicians is around $313,000, up from roughly $210,000 in 2011." Although this guidance may make medical students feel better, it should hardly reassure the public, as to some extent, doctors' salaries contribute to high health care costs.

Another challenge to reducing the cost of medical education is the lack of transparency about how much it costs to educate medical students. Policymakers tend to defer to medical experts on issues related to medicine, meaning medical schools and medical organizations are largely responsible for regulating medical training. Unsurprisingly, medical schools — the institutions that set tuition and benefit from tuition increases — have taken relatively few steps to justify or contain rising costs.
Perhaps more surprisingly, the organization responsible for accrediting medical schools, the Liaison Committee on Medical Education (LCME), requires medical schools to provide students with "effective financial aid and debt management counseling," but does not require schools to limit tuition increases or to demonstrate that tuition reflects the cost of training students. This is worrisome, as some scholars have noted that the price students pay may not reflect the cost of educating them. After all, medical schools have tremendous power to set prices, as most prospective students will borrow as much money as they need to in order to attend: college students spend years preparing to apply to medical school, most applicants are rejected, and many earn admission to only one school. And although some medical school faculty claim that medical schools lose money on medical students, experts dispute this, with one dean suggesting that it costs far less to educate students than students presently pay, and that the tuition students pay instead "supports unproductive faculty."

Medical schools should take several steps to reduce students' debt burdens. First, schools could reduce tuition by reducing training costs. They could do so by relying more on external curricular resources, rather than generating all resources internally. More than a third of medical students already "almost never" attend lectures, instead favoring resources that are orders of magnitude cheaper than medical school tuition. The fact that students opt to use these resources — often instead of attending classes they paid tens of thousands of dollars for — suggests students find them to be effective teaching tools. Schools should thus replace more expensive and inefficient internal resources with outside ones.

Schools could also reduce the cost of a medical degree by decreasing the time it takes to earn one. More schools could give students the option of pursuing a three-year medical degree, as many medical students do very little during their fourth year. A second possibility would be to shift more of the medical school curriculum into students' undergraduate education. For instance, instead of requiring pre-medical students to take two semesters of physics, medical schools could require students to take one semester of physics and one semester of physiology, as some schools have done.

Finally, medical schools could simply reduce the amount they charge students, as the medical schools affiliated with NYU, Cornell, and Columbia have done. Because tuition represents only a tiny fraction of medical schools' revenues — as one dean put it, a mere "rounding error" — reducing the cost of attendance would only marginally affect schools' bottom lines. Rather than eliminating tuition across the board, medical schools should focus on reducing the tuition of students who commit to pursuing lower-paying but valuable specialties or working in underserved areas.

Unfortunately, most medical schools have demonstrated little willingness to take these steps. It is therefore likely that outside actors, like the LCME and the government, will need to intervene to improve financial transparency, ensure tuition matches the cost of training, and contain rising debt.