This piece was originally published on Bill of Health, the blog of the Petrie-Flom Center at Harvard Law School.
Health professions students are often required to complete training in ethics. But these curricula vary immensely in their stated objectives, the time devoted to them, when during training students complete them, who teaches them, what content is covered, how students are assessed, and what instruction model is used. Evaluating these curricula against a common set of standards could help make them more effective.

In general, it is good to evaluate curricula. But there are several reasons to think it may be particularly important to evaluate ethics curricula. The first is that these curricula are incredibly diverse, with one professor noting that the approximately 140 medical schools that offer ethics training do so “in just about 140 different ways.” This suggests there is no consensus on the best way to teach ethics to health professions students. The second is that curricular time is limited and costly, so it is important to make these curricula efficient. Third, when these curricula do work, it would be helpful to identify exactly how and why they work, as this could have broader implications for applied ethics training. Finally, it is possible that some ethics curricula simply don’t work very well.

To conclude that ethics curricula work, at least two things would have to be true: first, students would have to make ethically suboptimal decisions without these curricula, and second, these curricula would have to cause students to make more ethical decisions. But it’s not obvious both these criteria are satisfied. After all, ethics training is different from other kinds of training health professions students receive. Because most students come in with no background in managing cardiovascular disease, effectively teaching them how to do this will almost certainly lead them to provide better care. But students do enter training with ideas about how to approach ethical issues.
If some students’ approaches are reasonable, these students may not benefit much from further training (and indeed, bad training could lead them to make worse decisions). Additionally, multiple studies have found that even professional ethicists do not behave more morally than non-ethicists. If a deep understanding of ethics does not translate into more ethical behavior, providing a few weeks of ethics training to health professions students may not lead them to make more ethical decisions in practice — a primary goal of these curricula. One challenge in evaluating ethics curricula is that people often disagree on their purpose. For instance, some have emphasized “[improving] students’ moral reasoning about value issues regardless of what their particular set of moral values happens to be.” Others have focused on a variety of goals, from increasing students’ awareness of ethical issues, to learning fundamental concepts in bioethics, to instilling certain virtues. Many of these objectives would be challenging to evaluate: for instance, how does one assess whether an ethics curriculum has increased a student’s “commitment to clinical competence and lifelong education”? And if the goals of ethics curricula differ across institutions, would it even be possible to develop a standardized assessment tool that administrators across institutions would be willing to use? These are undoubtedly challenges. But educators likely would agree upon at least one straightforward and assessable objective: these curricula should cause health professions students to make more ethical decisions more of the time. This, too, may seem like an impossible standard to assess: after all, if people agreed on the “more ethical” answers to ethical dilemmas, would these classes need to exist in the first place? But while medical ethicists disagree in certain cases about what these “more ethical” decisions are, in most common cases, there is consensus. 
For instance, the overwhelming majority of medical ethicists agree that, in general, capacitated patients should be allowed to make decisions about what care they want, people should be told about the major risks and benefits of medical procedures, patients should not be denied care because of past unrelated behavior, resources should not be allocated in ways that primarily benefit advantaged patients, and so on. In other words, there is consensus on how clinicians should resolve many of the issues they will regularly encounter, and trainees’ understanding of this consensus can be assessed. (Of course, clinicians also may encounter niche or particularly challenging cases over their careers, but building and evaluating ethics curricula on the basis of these rare cases would be akin to building an introductory class on cardiac physiology around rare congenital anomalies.) Ideally, ethics curricula could be evaluated via randomized controlled trials, but it would be challenging to randomize some students to take a course and others not to. However, at some schools, students could be randomized to complete ethics training at different times of year, and assessments could be conducted at a point when some students had completed the training and others had not. There are also questions about how to assess whether students will make more ethical decisions in practice. More schools could consider using simulations of common ethical scenarios, where they might ask students to perform capacity assessments or seek informed consent for procedures. But simulations are expensive and time-consuming, so some schools could start by simply conducting a standard pre- and post-course survey assessing how students plan to respond to ethical situations they are likely to face.
Of course, saying you will do something on a survey does not necessarily mean you will do that thing in practice, but this could at least give programs a general sense of whether their ethics curricula work and how they compare to other schools’. Most health professions programs provide training in ethics. But simply providing this training does not ensure it will lead students to make more ethical decisions in practice. Thus, health professions programs across schools should evaluate their curricula using a common set of standards.
Today, the average medical student graduates with more than $215,000 of debt from medical school alone. The root cause of this problem — rising medical school tuitions — can and must be addressed. In real dollars, a medical degree costs 750 percent more today than it did seventy years ago, and more than twice as much as it did in 1992. These rising costs are closely linked to rising debt, which has more than quadrupled since 1978 after accounting for inflation. Physicians with more debt are more likely to experience burnout, substance use disorders, and worse mental health. And, as the cost of medical education has risen, the share of medical students hailing from low-income backgrounds has fallen precipitously, compounding inequities in medical education. These changes are bad for patients, who benefit from having doctors who hail from diverse backgrounds and who aren’t burned out. But the high cost of medical education is bad for patients in other ways too, as physicians who graduate with more debt are more likely to pursue lucrative specialties, rather than lower-paying but badly needed ones, such as primary care. Doctors with more debt are also less likely to practice in underserved areas. The high cost of medical education also is bad for the public. A substantial portion of medical school loans are financed by the government, and nearly 40 percent of medical students plan to pursue programs like Public Service Loan Forgiveness (PSLF). When students succeed in having their loans forgiven, taxpayers wind up footing a portion of the bill, and the higher these loans are, the larger the bill is. The public benefits from programs like PSLF to the extent that such programs incentivize physicians to pursue more socially valuable careers. But these programs don’t address the underlying cause of rising debt: rising medical school tuitions. Despite the detrimental effects of the rising cost of medical education, little has been done to address the issue.
There are several reasons for this. First, most physicians make a lot of money. Policymakers may correspondingly view medical students’ debt — which most students can repay — as a relatively minor problem, particularly when compared to other students’ debt. The Association of American Medical Colleges (AAMC) employed similar logic last year when it advised prospective medical students not to “let debt stop your dreams,” writing: “Despite the expense, medical school remains an outstanding investment. The average salary for physicians is around $313,000, up from roughly $210,000 in 2011.” Although this guidance may make medical students feel better, it should hardly reassure the public, as doctors’ salaries contribute, to some extent, to high health care costs. Another challenge to reducing the cost of medical education is the lack of transparency about how much it costs to educate medical students. Policymakers tend to defer to medical experts about issues related to medicine, meaning medical schools and medical organizations are largely responsible for regulating medical training. Unsurprisingly, medical schools — the institutions that set tuitions and benefit from tuition increases — have taken relatively few steps to justify or contain rising costs. Perhaps more surprisingly, the organization responsible for accrediting medical schools, the Liaison Committee on Medical Education (LCME), requires medical schools to provide students with “effective financial aid and debt management counseling,” but does not require medical schools to limit tuition increases or to demonstrate that tuitions reflect the cost of training students. This is worrisome, as some scholars have noted that the price students pay may not reflect the cost of educating them.
After all, medical schools have tremendous power to set prices, as most prospective students will borrow as much money as they need to attend: college students spend years preparing to apply to medical school, most applicants are rejected, and many earn admission to only one school. And although some medical school faculty claim that medical schools lose money on medical students, experts dispute this, with one dean suggesting that it costs far less to educate students than students presently pay, and that the tuition students pay instead “supports unproductive faculty.” Medical schools should take several steps to reduce students’ debt burdens. First, schools could reduce tuitions by reducing training costs. Schools could do so by relying more on external curricular resources, rather than generating all resources internally. More than a third of medical students already “almost never” attend lectures, instead favoring resources that are orders of magnitude cheaper than medical school tuitions. The fact that students opt to use these resources — often instead of attending classes they paid tens of thousands of dollars for — suggests students find these resources to be effective teaching tools. Schools should thus replace more expensive and inefficient internal resources with outside ones. Schools could also reduce the cost of a medical degree by decreasing the time it takes to earn one. More schools could give students the option of pursuing a three-year medical degree, as many medical students do very little during their fourth year. A second possibility would be to shift more of the medical school curriculum into students’ undergraduate educations. For instance, instead of requiring pre-medical students to take two semesters of physics, medical schools could require students to take one semester of physics and one semester of physiology, as some schools have done.
Finally, medical schools could simply reduce the amount they charge students, as the medical schools affiliated with NYU, Cornell, and Columbia have done. Because tuition represents only a tiny fraction of medical schools’ revenues — as one dean put it, a mere “rounding error” — reducing the cost of attendance would only marginally affect schools’ bottom lines. Rather than eliminating tuition across the board, medical schools should focus on reducing the tuitions of students who commit to pursuing lower-paying but valuable specialties or working in underserved areas. Unfortunately, most medical schools have demonstrated little willingness to take these steps. It is therefore likely that outside actors, like the LCME and the government, will need to intervene to improve financial transparency, ensure tuitions match the cost of training, and contain rising debt.
The financial barriers associated with becoming a bioethicist make the field less accessible, undermining the quality and relevance of bioethics research. Because the boundaries of the field are poorly defined, credentials often serve as a gatekeeping mechanism. For instance, the recent creation of the Healthcare Ethics Consultant-Certified (HEC-C) program, which “identifies and assesses a national standard for the professional practice of clinical healthcare ethics consulting,” is a good idea in theory. But the cost of the exam starts at $495. There is no fee assistance. Given that 99 percent of those who have taken the exam have passed, the exam seems largely to serve as a financial barrier to becoming an ethics consultant. A related problem arises with Master of Bioethics programs. These degrees can cost more than $80,000 just for tuition — often more than the tuitions of medical schools at the same universities. And although many programs entice students with partial scholarships, these degrees likely still cost tens of thousands of dollars. Students who earn these degrees may correspondingly come from advantaged backgrounds or take on substantial debt pursuing them. We don’t actually know, as most of these programs provide little information about their students’ backgrounds, graduates’ debt burden, or the proportion of students who attain jobs in bioethics after earning their degrees. It is possible that some of these programs do provide substantial support for disadvantaged students, but because few programs publicize this information, it is impossible to tell. This lack of transparency is worrisome given that many humanities master’s programs enrich universities while driving students into debt they cannot escape. These financial barriers aren’t unique to bioethics: students routinely take on significant debt to pursue careers in other humanities disciplines. But financial barriers in bioethics are particularly concerning for three reasons.
First, the pervasiveness of financial barriers is hypocritical. Bioethics is a field that is supposed to be concerned with inequities. Many bioethicists conduct research on the fair distribution of health resources, health disparities, and barriers to healthcare access. If bioethics itself is inaccessible to people who aren’t wealthy, this is at odds with the field’s core tenets. People who want to pursue careers in bioethics should be able to do so, regardless of their financial backgrounds. Second, people’s backgrounds influence what they choose to work on. Bioethics has long been criticized for focusing on “issues of affluence,” or niche problems that affect only a handful of advantaged people. But if bioethics is primarily composed of wealthy people, then it is unsurprising that the field devotes too little attention to the problems of people who are not wealthy. Finally, bioethicists often adopt paternalistic attitudes towards people who are poor. For instance, many bioethicists have argued against allowing risky research (e.g., human challenge studies) or compensating people who take on certain risks (e.g., by donating kidneys) because poor people would feel pressure to participate. But it’s odd for bioethicists — most of whom are advantaged — to make claims, usually without citing empirical evidence, about how poor people would feel about or react to policies pertaining to them. Putting aside the strength of the arguments on both sides of these debates, we should be concerned that the people affected by the policies bioethicists help shape are not well represented in bioethics. These problems are entrenched, but they are solvable. A first step would be to provide fee assistance for all bioethics exams, conferences, applications, and events. For instance, students can attend the American Society for Bioethics and Humanities (ASBH) Annual Conference for only $60 and should similarly be able to take the HEC-C exam at a discount.
A second step would be to create greater financial transparency around bioethics training programs, and in particular, Master of Bioethics degrees. Programs should publicize information about their students’ backgrounds, how students finance these programs, how much debt students typically incur because of these programs, and the job prospects of graduates. This would put pressure on programs with few disadvantaged students to recruit and support those students. Bioethics organizations — like ASBH and the Hastings Center — should ask universities to release financial information about their Master of Bioethics programs and should consolidate this information so students interested in bioethics can make informed decisions about where to pursue training. Finally, there are several paid fellowships that provide excellent, hands-on training for people interested in bioethics. The field should work to create and fund more programs like these. Large hospitals — which often have ethics committees and revenues in the billions — could help finance them. Ultimately, bioethics will be a better field if it becomes a more accessible one. Bioethics should thus work to address financial barriers to entry.
The Biden administration plans to greatly increase funding for the National Institutes of Health (NIH) in 2022, presenting the agency with new opportunities to better align research funding with public health needs. The NIH has long been criticized for disproportionately devoting its research dollars to the study of conditions that affect a small and advantaged portion of the global population. For instance, three times as many people have sickle cell disease, which disproportionately affects Black people, as have cystic fibrosis, which disproportionately affects white people. Despite this, the NIH devotes comparable research funding to both diseases. These disparities are further compounded by differences in research funding from non-governmental organizations, with philanthropies spending seventy-five times more per patient on cystic fibrosis research than on sickle cell disease research. Diseases that disproportionately affect men also receive more NIH funding than those that primarily affect women. This disparity can be seen in the lagging funding for research on gynecologic cancers. The NIH presently spends eighteen times as much on prostate cancer as on ovarian cancer per person-year of life lost per case, and although this difference is partly explained by the fact that prostate cancer is far more prevalent than ovarian cancer, the disparity persists even after prevalence is accounted for. Making matters even worse, funding for research on gynecologic cancers has fallen, even as overall NIH funding has increased. Disparities in what research is funded are further compounded by disparities in who gets funded. Black scientists are about half as likely to receive NIH funding as white scientists, and this discrepancy holds across academic ranks (e.g., between Black and white scientists who are full professors).
This disparity is partly driven by topic choice, with grant applications from Black scientists focusing more frequently on “health disparities and patient-focused interventions,” topics that are less likely to be funded. Recent calls to address structural racism in research funding have led the NIH to commit $90 million to combating health disparities and researching the health effects of discrimination, although this would represent less than two percent of the Biden administration’s proposed NIH budget. The disconnect between research funding and public health needs is also driven by the fact that the NIH tends to fund relatively little social science research. For instance, police violence is a pressing public health problem: in 2019, more American men were killed by police than died of Hodgkin lymphoma or testicular cancer. But unlike Hodgkin lymphoma and testicular cancer, which receive tens of millions of dollars of research funding from the NIH every year and additional funding from non-governmental organizations and private companies, police violence receives little NIH research funding. In 2021, only six NIH-funded projects mentioned “police violence,” “police shooting,” or “police force” in their title, abstract, or project terms, while 119 mentioned “Hodgkin lymphoma” and 24 mentioned “testicular cancer.” While many view the NIH as an organization focused exclusively on basic science research, its mandate is much broader. Indeed, the NIH’s mission is “to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability.” Epidemiologists, health economists, and other social science researchers studying how societies promote or undermine health should thus receive NIH funding that is more proportionate to the magnitude of the health problems they research.
Research funding disparities have multiple causes and warrant different solutions, from prioritizing work conducted by scientists from underrepresented backgrounds, to ensuring that there is gender parity in the size of NIH grants awarded to first-time Principal Investigators. To address the broader problem of scientific priorities not reflecting the size of health problems, the NIH should instruct grant reviewers to consider how many people are affected by a health problem, how serious that health problem is for each person affected by it, and whether a disease primarily affects marginalized populations. In addition, the NIH should commit to funding more research on public health problems — like police violence — that cause substantial harm but receive relatively little attention from the health research enterprise. As the NIH prepares for a massive influx of funding, it must follow through on its commitment to address health research funding disparities.
Clinicians across medical settings commonly euphemize or understate pain — a practice that has concerning implications for patient trust, consent, and care quality. Intrauterine device (IUD) placement offers an instructive case study and highlights the need for transparency in describing painful medical procedures. IUDs are one of the most effective forms of birth control, and tens of millions of women have them placed every year. Clinicians typically describe the pain associated with IUD insertion as “uncomfortable… but short lived” or “three quick cramps.” But patients who have undergone IUD insertion have described it as “on a cosmic level,” “such blinding agony I could barely see,” and “like someone shocking my cervix with a taser.” Although some patients experience little pain with IUD insertion, most patients experience moderate to severe pain. Clinicians’ language thus paints a picture of pain that is less severe than what many patients experience. And this is not unique to IUD placement: clinicians often refer to pain as “pressure” or call a painful procedure “uncomfortable.” There are likely several reasons why clinicians downplay pain. First, clinicians may not know how much a procedure hurts. Second, clinicians may understate pain to alleviate patients’ anxiety or reduce the pain they experience. Third, if clinicians believe a procedure is in a patient’s best interest, they may downplay the pain associated with it to increase a patient’s likelihood of giving consent. In some cases, clinicians may not know how painful a procedure is. For instance, one study found that physicians assess IUD insertion to be about half as painful as patients report it to be. But after performing a procedure many times, clinicians should have a reasonable sense of the range of reactions patients typically have. If they’re unsure, they should ask patients or read accounts from patients who have undergone that procedure.
There is also evidence that describing the pain associated with a procedure can increase patients’ reported pain and anxiety. Downplaying expectations may thus reduce pain. This is an important consideration and justifies not disclosing how painful a procedure is for patients who do not want to know. But there are several problems with this approach. For one, patients can talk to their friends or access information online about other patients’ experiences of a procedure, undermining any potential analgesic effects of downplaying pain. In addition, there are other ways to reduce pain that do not involve misleading patients. Finally, while failing to disclose how painful a procedure is may marginally reduce how much it hurts, there are often substantial discrepancies between patients’ and clinicians’ assessments of pain. Indeed, most studies that have looked at this question have found that clinicians underestimate pain relative to patients, and that for more painful procedures, clinicians’ underestimation is even more pronounced. So patients may still experience more severe pain than expected, undermining their long-term trust in their clinicians. Clinicians may also recognize that telling a patient how painful a procedure is decreases the likelihood a patient will consent to that procedure. Clinicians may accordingly understate the unpleasantness of procedures that they believe are in a patient’s best interest. For instance, IUDs work extremely well (and are the most commonly used birth control among physicians). For most patients, the pain associated with IUD insertion lasts for minutes, while IUDs last for years. Clinicians may thus feel that IUDs are the best option for many patients and may downplay the pain of having one placed to increase the likelihood that patients will choose to get one. While this approach is understandable, patients are entitled to choose what care is in their interest. 
Some may reasonably decide that they do not want to risk incurring severe short-term pain, even if the long-term benefits are substantial. In addition to undermining trust and the validity of a patient’s consent, downplaying pain may lead clinicians to undertreat it. Indeed, clinicians frequently underrecognize and undertreat pain, and women and people of color are particularly likely to have their pain overlooked. Admittedly, the pain associated with IUD insertion has proven challenging to treat. For instance, many patients are advised to take Advil before the procedure, despite evidence that it doesn’t help. Other interventions also don’t work very well, leading the American College of Obstetricians and Gynecologists to conclude that “more research is needed to identify effective options to reduce pain for IUD insertion.” But in highly resourced settings, where anesthesia, narcotics, and anti-anxiety medications are available, clinicians can control patients’ procedural pain. For instance, the default is to sedate patients for colonoscopies, even though many patients who have received colonoscopies without sedation report that it is not very painful. Conversely, patients are rarely offered sedation for IUD insertion, even though this would eliminate their pain. Sedation comes with its own risks, but patients are generally given the option of taking on these risks when pain is viewed as sufficiently severe. When pain can be managed and it is not, this reflects value judgments (i.e., about a patient’s ability to tolerate a given level of pain). Being able to place IUDs in an office without sedation works well for many patients and makes IUDs more accessible. But fear of pain may lead some people to choose a less effective form of birth control or forgo it entirely. Given how well IUDs work, this is unfortunate and preventable. 
Ultimately, for IUDs and other important medical procedures, clinicians should ask patients how much they want to know about a procedure and describe the range of pain most patients experience for those who wish to know. If patients are concerned about the level of pain they might experience, clinicians should provide them with a range of effective options for managing that pain. Failing to engage in these conversations risks undermining trust, compromising the validity of consent, and undertreating pain.
In my junior year of college, my pre-medical advisor instructed me to take time off after graduating and before applying to medical school. I was caught off guard. At 21, it had already occurred to me that completing four years of medical school, at least three years of residency, several more years of fellowship, and a PhD, would impact my ability to start a family. I was wary of letting my training expand even further, but this worry felt so vague and distant that I feared expressing it would signal a lack of commitment to my career. I now see that this worry was well-founded: the length of medical training unnecessarily compromises trainees’ ability to balance their careers with starting families. In the United States, medical training has progressively lengthened. At the beginning of the 20th century, medical schools increased their standards for admission, requiring students to take premedical courses prior to matriculating, either as undergraduates or during post-baccalaureate years. Around the same time, the length of medical school increased from two to four years. In recent decades, the percentage of physicians who pursue training after residency has increased in many fields. And, like me, a growing proportion of trainees are taking time off between college and medical school, pursuing dual degrees, or taking non-degree research years, further prolonging training. The expansion of medical training is understandable. We know far more about medicine than we did a hundred years ago, and there is correspondingly more to teach. The increased structure and regulation of medical training have made medical training safer for patients and ensured a standard of competency for physicians. Getting into medical school and residency have also become more competitive, meaning many trainees feel they must spend additional time bolstering their resumes. However, medical training is now longer than it needs to be. 
Many medical students do very little during their fourth year of medical school. Residency programs also require trainees to serve for a set number of years, rather than until they have mastered the skills their fields require, inflating training times. And many medical schools and residency programs require trainees to spend time conducting research, whether or not they are interested in academic careers. While many trainees appreciate these opportunities, they should not be compulsory. There are many arguments for shortening medical training. In other parts of the world, trainees complete six-year medical degrees, compared to the eight years required of most trainees in the U.S. (four years of college and four years of medical school). The length of medical training increases physician debt and healthcare costs. In addition, it decreases the supply of fully trained physicians, a serious problem in a country facing physician shortages. Since burnout is prevalent among trainees, shortening training could also mitigate burnout. And critically, the length of medical training makes it challenging for physicians to start families. Because medical trainees work long hours, have physically demanding jobs, and are burdened by substantial debt, they are sometimes advised or pressured to wait until they complete their training before having children. But high rates of infertility and pregnancy complications among female physicians who defer childbearing suggest this is a treacherous path. Starting a family during training hardly feels like a safer option. In my first year of medical school, I attended a panel of surgical subspecialty residents. I asked whether their residencies would be compatible with starting a family. The four men looked at each other before passing the microphone to the sole woman resident, who told us she could not imagine having a child during residency, as she didn’t even have time to do laundry, so instead ordered new socks each week. 
In my second year of medical school, a physician spoke to us about having a child during residency. I asked how she and her husband afforded full-time child care on resident salaries. She confided that they had maxed out their credit cards and added this debt to their medical school debt to pay for daycare. There have been encouraging anecdotes, too, but these have often involved healthy parents relocating, significant financial resources, or partners with less intense careers. Admittedly, shortening medical training would not be a magic bullet. Physicians who have completed their training and have children still face myriad challenges, and these challenges disproportionately affect women. Maternal discrimination is common, and compared to men, women physicians with children spend more than eight hours per week on parenting and domestic work and correspondingly spend seven fewer hours on their paid work. Nearly 40 percent of women physicians leave full-time practice within six years of completing residency, with most citing family as the reason why. However, shortening medical training would enable more trainees to defer childbearing until the completion of their training. This would help in several ways. For instance, while most trainees are required to work certain rotations (e.g., night shifts), attending physicians often have more flexibility in choosing when and how much they work. In addition, attending physicians earn much more money than trainees, expanding the child care options they can afford. In recent years, many changes have been made to support trainees with children, from expanding access to lactation rooms, to increasing parental leave, to creating more child care facilities at hospitals. But long training times remain a persistent and reduceable barrier. To address this, medical schools could only require students to take highly relevant coursework, reducing the number of applicants who would need to complete additional coursework after college. 
Medical schools could also increase the number of three-year medical degree pathways and make research requirements optional. Residency and fellowship programs could create more opportunities for integrated residency and fellowship training and could similarly make research time optional. It will be challenging to create efficient paths that provide excellent training without creating impossibly grueling schedules. But this is a challenge that must be confronted: physicians should be able to balance starting a family with pursuing their careers, and streamlining medical training will facilitate this. Haley Sullivan, Emma Pierson, Niko Adamstein, and Sam Doernberg gave helpful feedback on this post.
Many medical journals will publish ~1,000-word opinion pieces written by medical students. There is a lot of luck involved in getting these published, but here are tips others have given me, as well as lessons I have learned on how to do this:
|
Archives
December 2023
Categories
All
Posts
All
|