This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
Academia often treats all areas of research as important, and two projects that will publish equally well as equally worthy of exploring. I believe this is misguided. Instead, we should strive to create an academic culture where researchers consider and discuss a project’s likely impact on the world when deciding what to work on.

Though this view is at odds with current norms in academia, there are four reasons why a project’s potential to improve the world should be an explicit consideration. First, research projects can have massively different impacts, ranging from altering the course of human history to collecting dust. To the extent that we can do work that does more to improve people’s lives without imposing major burdens on ourselves, we should. Second, the choice of a research project affects a researcher’s career trajectory, and as some have argued, deciding how to spend one’s career is the most important ethical decision many will ever make. Third, most academic researchers are supported by public research dollars or work at tax-exempt institutions. To the extent that researchers benefit from public resources, they have an obligation to use those resources in socially valuable ways. Fourth, most researchers come from advantaged backgrounds. If researchers pick projects primarily based on their own interests, the research agenda will provide too few benefits to populations underrepresented in academia.

One might push back on this view by arguing that the research enterprise functions as an effective market. Perhaps academic researchers already have strong incentives to choose projects that do more to improve the world, given that these projects will yield more funding, publications, and job opportunities. On this view, researchers have no reason to consider a project’s likely positive impact; journal editors, grant reviewers, and hiring committees will do this for them. But the academic marketplace is riddled with market failures: some diseases receive far more research funding than others of comparable severity; negative findings and replication studies are less likely to be published; research funders don’t always consider the magnitude of a problem when deciding which grants to fund; and so on. And although these problems warrant structural solutions, individual researchers can also help mitigate their effects.

One might also argue that pursuing a career in academia is hard enough even when you’re studying the thing you are most passionate about. (I did a year of Zoom PhD; you don’t have to convince me of this.) On this view, working on the project you’re most interested in should crowd out all other considerations. While this may be true for some people, it doesn’t accord with most conversations I’ve had with fellow graduate students. Many students enter their PhDs unsure of what to work on, and wind up deciding among multiple research areas or projects. Given that most have worked in only a few research areas before embarking on a PhD, the advice “choose something you’re passionate about” often isn’t very useful. For many students, adding “choose a project that you think can do more to improve the world” to the list of criteria for selecting a research topic would helpfully narrow the search rather than oppressively constrain it.

Of course, people will have different understandings of what constitutes an impactful project. Some may aim to address certain inequities; others may want to help as many people as possible.
People also will disagree about which projects matter most even according to the same metrics: a decade ago, many scientists thought developing an mRNA vaccine platform was a good idea, but not one of the most important projects of our time. Still, the inherent uncertainty about which research is more impactful does not leave us completely in the dark: most would agree that curing a disease that kills millions is more important than curing one that causes mild symptoms in a few.

In practice, identifying more beneficial research questions involves guessing at several hard-to-estimate parameters: How many people are affected by a given problem, and how badly? Are there already enough talented people working on this such that my contributions will be less important? And will the people who read my paper be able to do anything about it? The more basic the research, the harder these questions are to answer.

My goal here, though, is not to provide practical advice. (Fortunately, groups like Effective Thesis do.) My point is that researchers should approach the question of how much a given project will improve the world with the same rigor they bring to the work of the project itself. Researchers do not need to arrive at the best or most precise answer every time: over the course of their careers, seriously considering which projects are more important and, to some extent, picking projects on this basis will produce a body of work that does more good.
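To illustrate the kind of guesswork involved, here is a minimal back-of-envelope sketch in Python. Every number and the scoring function itself are invented for illustration; the point is only that rough, explicit estimates of how many people are affected, how badly, how crowded the area is, and how actionable the findings would be can still rank projects under deep uncertainty.

```python
# Back-of-envelope comparison of two hypothetical research projects.
# Every number below is an illustrative guess, not a real estimate.

def expected_impact(people_affected, severity, neglectedness, tractability):
    """Rough score: scale times severity, weighted by how neglected the
    area is and how actionable the findings would be (all on 0-1 scales
    except people_affected)."""
    return people_affected * severity * neglectedness * tractability

# Project A: huge problem, but a crowded field with hard-to-use findings.
project_a = expected_impact(
    people_affected=50_000_000,  # guess at how many people the problem touches
    severity=0.2,                # guess at average harm per affected person
    neglectedness=0.1,           # many talented people already work on this
    tractability=0.3,            # readers could do little with the results
)

# Project B: smaller problem, but neglected and readily actionable.
project_b = expected_impact(
    people_affected=2_000_000,
    severity=0.8,
    neglectedness=0.9,
    tractability=0.7,
)

print(f"Project A score: {project_a:,.0f}")  # 300,000
print(f"Project B score: {project_b:,.0f}")  # 1,008,000 -- B ranks higher
```

The precise weights matter less than the exercise of writing them down: explicit guesses can be debated, revised, and compared, whereas unexamined intuitions cannot.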
This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
Health professions students are often required to complete training in ethics. But these curricula vary immensely in their stated objectives, the time devoted to them, when during training students complete them, who teaches them, what content they cover, how students are assessed, and what instructional model they use. Evaluating these curricula against a common set of standards could help make them more effective.

In general, it is good to evaluate curricula. But there are several reasons to think it may be particularly important to evaluate ethics curricula. First, these curricula are incredibly diverse, with one professor noting that the approximately 140 medical schools that offer ethics training do so “in just about 140 different ways.” This suggests there is no consensus on the best way to teach ethics to health professions students. Second, time in these curricula is often quite limited and costly, so it is important to make them efficient. Third, when these curricula do work, it would be helpful to identify exactly how and why, as this could have broader implications for applied ethics training. Finally, it is possible that some ethics curricula simply don’t work very well.

To conclude that ethics curricula work, at least two things would have to be true: first, students would have to make ethically suboptimal decisions without these curricula, and second, these curricula would have to cause students to make more ethical decisions. But it’s not obvious that both criteria are satisfied. After all, ethics training differs from other kinds of training health professions students receive. Because most students arrive with no background in managing cardiovascular disease, effectively teaching them how to do so will almost certainly lead them to provide better care. But students do enter training with ideas about how to approach ethical issues. If some students’ approaches are reasonable, those students may not benefit much from further training (and indeed, bad training could lead them to make worse decisions). Additionally, multiple studies have found that even professional ethicists do not behave more morally than non-ethicists. If a deep understanding of ethics does not translate into more ethical behavior, a few weeks of ethics training may not lead health professions students to make more ethical decisions in practice — a primary goal of these curricula.

One challenge in evaluating ethics curricula is that people often disagree on their purpose. For instance, some have emphasized “[improving] students’ moral reasoning about value issues regardless of what their particular set of moral values happens to be.” Others have focused on a variety of goals, from increasing students’ awareness of ethical issues, to teaching fundamental concepts in bioethics, to instilling certain virtues. Many of these objectives would be challenging to evaluate: for instance, how does one assess whether an ethics curriculum has increased a student’s “commitment to clinical competence and lifelong education”? And if the goals of ethics curricula differ across institutions, would it even be possible to develop a standardized assessment tool that administrators across institutions would be willing to use?

These are undoubtedly challenges. But educators likely would agree on at least one straightforward and assessable objective: these curricula should cause health professions students to make more ethical decisions more of the time.
This, too, may seem like an impossible standard to assess: after all, if people agreed on the “more ethical” answers to ethical dilemmas, would these classes need to exist in the first place? But while medical ethicists disagree in certain cases about what the “more ethical” decisions are, in most common cases there is consensus. For instance, the overwhelming majority of medical ethicists agree that, in general, capacitated patients should be allowed to make decisions about what care they want, people should be told about the major risks and benefits of medical procedures, patients should not be denied care because of past unrelated behavior, resources should not be allocated in ways that primarily benefit advantaged patients, and so on. In other words, there is consensus on how clinicians should resolve many of the issues they will regularly encounter, and trainees’ understanding of this consensus can be assessed. (Of course, clinicians may also encounter niche or particularly challenging cases over their careers, but building and evaluating ethics curricula around these rare cases would be akin to building an introductory class on cardiac physiology around rare congenital anomalies.)

Ideally, ethics curricula would be evaluated via randomized controlled trials, but it would be challenging to randomize some students to take a course and others not to. However, at some schools, students could be randomized to complete ethics training at different times of year, and assessments could be administered at a point when some students had completed the training and others had not yet (a simple simulation of this design is sketched below).

There are also questions about how to assess whether students will make more ethical decisions in practice. More schools could consider using simulations of common ethical scenarios, in which they might ask students to perform capacity assessments or seek informed consent for procedures. But simulations are expensive and time-consuming, so some schools could start by simply conducting a standard pre- and post-course survey assessing how students plan to respond to ethical situations they are likely to face. Of course, saying you will do something on a survey does not necessarily mean you will do it in practice, but this could at least give programs a general sense of whether their ethics curricula work and how they compare to other schools’.

Most health professions programs provide training in ethics. But simply providing this training does not ensure it will lead students to make more ethical decisions in practice. Thus, health professions programs across schools should evaluate their curricula using a common set of standards.
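To make the staggered-randomization design concrete, here is a minimal simulation in Python. The cohort size, score scale, and training effect are entirely hypothetical, and the midyear_score function is an invented stand-in for a real survey instrument; the point is only to show how comparing already-trained and not-yet-trained students at midyear could estimate a curriculum’s effect.

```python
# Minimal simulation of a staggered-rollout evaluation of an ethics curriculum.
# All effect sizes and score distributions are invented for illustration.
import random
import statistics

random.seed(0)

N = 200                # students in the cohort
TRAINING_EFFECT = 0.5  # hypothetical bump in survey score from the training

# Randomize half the students to complete the training in the fall ("early")
# and half in the spring ("late").
students = list(range(N))
random.shuffle(students)
early, late = students[: N // 2], students[N // 2 :]

def midyear_score(trained: bool) -> float:
    """Score on a survey of planned responses to common ethical scenarios
    (e.g., capacity assessment, informed consent), on an arbitrary scale."""
    baseline = random.gauss(3.0, 0.6)  # students arrive with prior intuitions
    return baseline + (TRAINING_EFFECT if trained else 0.0)

# At midyear, the early group has completed the training; the late group has not.
early_scores = [midyear_score(trained=True) for _ in early]
late_scores = [midyear_score(trained=False) for _ in late]

diff = statistics.mean(early_scores) - statistics.mean(late_scores)
print(f"Midyear difference (trained - untrained): {diff:.2f}")
# A difference near TRAINING_EFFECT suggests the curriculum shifts planned
# decisions; a difference near zero suggests it does not.
```

In a real evaluation, the same comparison would be run on actual survey scores rather than simulated ones, and because students are randomized rather than self-selected into the early and late groups, the midyear difference can be attributed to the training itself. Schools using the same instrument could also pool or compare their estimates.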