Saturday, December 28, 2013

How can a health care system lead not to ruin but, actually, to health?

After a year of reporting for a New York Times series on the crisis in health care (several installments of which I have commented on in this blog), Elisabeth Rosenthal summarized her conclusions in a Times piece on December 22, 2013, “Health Care’s Road to Ruin”. As the title makes clear, those conclusions are not positive. She summarizes highlights from her investigations: the extremely high cost of health care in the US compared to other countries, the extreme variability in pricing depending upon where in the US you are, the opaque and incomprehensible methods of setting prices, and the regulatory incentives that are continually gamed by providers. On the other end of the spectrum, she summarizes both the poor health outcomes at a population level in the US compared to other countries, and the more personal, poignant and dispiriting stories of individuals who die, are bankrupted, or both by our health “system”.

As Ms. Rosenthal notes, the stories that she tells could be “Extreme anecdotes, perhaps. But the series has prompted more than 10,000 comments of outrage and frustration — from patients, doctors, politicians, even hospital and insurance executives.” She goes on to discuss the potential solutions that those commenters, and others, have suggested, including regulating prices, making medical school cheaper or free, and moving away from fee-for-service payment that rewards volume rather than quality. But, she says, “the nation is fundamentally handicapped in its quest for cheaper health care: All other developed countries rely on a large degree of direct government intervention, negotiation or rate-setting to achieve lower-priced medical treatment for all citizens. That is not politically acceptable here.”

In reality, however, the idea that the health industry is somehow, before or after the ACA (“Obamacare”), an exemplar of the free market and of the success (or not) of private enterprise is entirely a myth, a facile construct used by those making lots of money on the current system to block change. Medicare, as I have discussed (e.g., Outing the RUC: Medicare reimbursement and Primary Care, February 2, 2011), sets the rates that it will pay for Medicare patients, and private insurers pay multiples of Medicare rates. Services are mostly paid fee-for-service, except in HMOs and integrated health systems (such as Kaiser), and for Medicare inpatient admissions, which are paid at set fees based on the diagnosis (through a system called Diagnosis-Related Groups, or DRGs). The entire calculus of what is profitable for a health care provider (meaning a hospital or other health care facility, or a doctor or group) is based on this policy; it is profitable to provide cancer care because Medicare (and thus other health insurers) pays an enormous amount to administer chemotherapy drugs. Cardiac care, orthopedics and neurosurgical interventions are also very profitable. (Oh! Is that why my hospital chooses to focus on these areas instead of psychiatry, obstetrics and pediatrics?!) The doctors who do all these things want you to think (and think themselves) that it is because what they do is so hard or because they work so hard; it is in fact an artifact of regulatory policy. In addition, a majority of the money spent on “health care” is public funds, not private, if you add up Medicare, Medicaid, coverage for federal, state and local government employees and retirees, and the tax break for employer contributions to health insurance (i.e., taxes forgone because this employee compensation is not counted as regular income).
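To make the payment distinction concrete, here is a minimal sketch of fee-for-service versus DRG-style payment. The dollar figures, service names, and DRG rate are hypothetical, invented purely for illustration; they are not actual Medicare rates.

```python
# Hypothetical illustration of fee-for-service vs. DRG-style payment.
# All dollar amounts and service names are invented; they are not real Medicare rates.

def fee_for_service_total(services, fee_schedule):
    """Pay separately for each billed service: more volume means more revenue."""
    return sum(fee_schedule[s] for s in services)

def drg_payment(diagnosis, drg_rates):
    """Pay one fixed amount per admission, set by the diagnosis-related group."""
    return drg_rates[diagnosis]

fee_schedule = {"chemo_infusion": 1200, "imaging": 800, "office_visit": 150}
drg_rates = {"heart_failure": 7000}

# Under fee-for-service, three infusions bill three times.
print(fee_for_service_total(["chemo_infusion"] * 3 + ["imaging"], fee_schedule))  # 4400

# Under a DRG, the hospital receives the same fixed amount regardless of services provided.
print(drg_payment("heart_failure", drg_rates))  # 7000
```

The only point of the sketch is that the incentives differ: under fee-for-service, revenue scales with volume; under a DRG, it is fixed per admission.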

So the majority of the money being spent on health care is public money, and the system is already heavily shaped by government policies that set reimbursement rates. The only thing “private” about it is the ownership and the profit, both for providers and for insurance companies. In other words, it is a parallel to our financial services industry: private enterprise is given a license to make money from everyone, and the government finances it. The only difference is that for financial services the government steps in to bail them out only after they have already stolen all our money, while in health services the profit margin is built in from the start. Hence Rosenthal’s comments, and quotations from others such as Dr. Steven Schroeder of the University of California at San Francisco: “People in fee-for-service are very clever — they stay one step ahead of the formulas to maximize revenue.” But, of course, we the people, through our elected representatives and regulators, allow them to do so. And hence, too, the arcane network of incentives and disincentives built into the ACA to try to get reasonable results at reasonable cost – while still ensuring that insurance companies make lots of profit.

The solution is very simple: emulate one or another of the systems in place in every other Western democracy. The simplest, and the closest to us, is Canada’s single-payer system, essentially putting everyone into Medicare. Voilà! We are all covered by the same system, providers can provide care to people based upon their disease, not their insurance status, and rates can be set at the level that we as a people are able to tolerate, or willing to pay, for the health care we want and need. The clout of the empowered will bring along benefit for everyone. There will be no more gaming the system, trying to attract certain patients with certain insurance rather than others. Or, in a more complex fashion, we could follow the example of other countries; Switzerland, for example, has multiple private insurance companies rather than a single payer, but they are highly regulated and non-profit; the government tells them what they can charge and what they must cover.

The argument that Americans will not accept major government involvement and regulation is pretty flawed, both because the government already heavily regulates the health system (mostly to ensure profit for providers and insurers) and because regular people see the advantage of universal health care. Rosenthal writes that “All other developed countries rely on a large degree of direct government intervention, negotiation or rate-setting to achieve lower-priced medical treatment for all citizens. That is not politically acceptable here.” Study after study has shown strong support for a universal health care system from the American people; however, certain very powerful vested interests would likely lose out: “‘A lot of the complexity of the Affordable Care Act arises from the political need in the U.S. to rely on the private market to provide health care access,’ said Dr. David Blumenthal, a former adviser to President Obama and president of the Commonwealth Fund, a New York-based foundation that focuses on health care.”

The political need is the need of the wealthy and powerful. This is why the ACA ensured that insurance companies would get their cut. Elisabeth Rosenthal does not say so in so many words, but she does say that “…after a year spent hearing from hundreds of patients like Mr. Abrahams, Mr. Landman and Mr. Miller, I know, too, that reforming the nation’s $2.9 trillion health system is urgent, and will not be accomplished with delicate maneuvers at the margins. There are many further interventions that we know will help contain costs and rein in prices. And we’d better start making choices fast.”

A universal health care program, Medicare for all, in which everyone was automatically enrolled just as current Medicare recipients are now, would be just fine.

Saturday, December 21, 2013

Roosevelt University: A commitment to diversity and social justice

On December 13, 2013, I attended the winter Commencement ceremonies at Roosevelt University in Chicago. As a new member of the University’s Board of Trustees, I was attending my first such event at Roosevelt; the Board had met the day before. I have, of course, been to other graduations. Some have been of family and friends, but most have been as a faculty member in medical school. I have sat on the stage looking out at the assembled graduates and families before, but never in the role of a Trustee, and never at Roosevelt.

Graduations are pretty special events. At the medical school graduation ceremony, we look on as our future colleagues march across the stage, many of them people we know and have taught, while their families watch and clap and sometimes cheer. We take pride in them, and also wonder at how fast the time has gone, remembering when they were just starting a few short years earlier. But the Roosevelt graduation was different, and not just because it was not a medical school and not just because I was there as a Trustee.


For starters, it was in Chicago’s beautiful Auditorium Theater, in the Auditorium Building designed by Louis Sullivan, opened in 1889 and about to celebrate its 125th anniversary next year. I have been there before, but only in the audience; sitting on the stage looking out at this gorgeous auditorium, whose balconies soar 6 or 7 stories, filled with 4,000 people, was amazing. Roosevelt owns the Auditorium, and the building has long been its home, but recently the 40-story Wabash Building was built next to it, its top 27 floors dorms with priceless views, its own architectural splendor complementing in a very different way that of Sullivan.

There were also some special events during the graduation. The honorary degree recipient was Joe Segal, a Roosevelt alumnus who for 60 years has run Chicago’s Jazz Showcase, bringing all of the great jazz artists of those years to perform at a series of venues; I began attending his shows in the 1970s. Danielle Smith, graduating with a bachelor’s degree in Special Education (and a minor in Spanish) was the first-ever current student to be commencement speaker. She was joined on the stage by Sheree Williams, receiving a Master’s in Early Childhood Education, who was the 85,000th graduate of the school (it took 60 years to get to 65,000 and only 6 for the next 20,000).

Those of you who read my last post, Suicide: What can we say?, know that the date, December 13, was also the 11th anniversary of my son Matt’s suicide. While the two facts are coincidental, they are not unrelated; my presence on the Board, and thus at the graduation, was entirely about Matt. A few years after leaving his first (quite elite) college and then obtaining an associate’s degree, Matt moved back to Chicago and enrolled at Roosevelt. He loved it. It was, and is, a school originally established to serve returning GIs and people of color, one that both educates young (and older) people from all backgrounds and prides itself on its diversity and its explicit commitment to social justice. This resonated with Matt, and does with me. I met President Charles Middleton through the annual Matthew Freeman Lecture in Social Justice that Matt’s mother and I sponsor (see, most recently, Matthew Freeman Lecture and Awards, 2013, April 26, 2013), and again when he hosted my group of American Council on Education fellows at the university. Dr. Middleton calls Roosevelt the “most diverse private university in the Midwest”, and sitting there as the graduates crossed the stage it was not hard to believe. Virtually every race and ethnicity was represented among the graduating students, many obviously first-generation Americans, and the pride in their faces was unmistakable.
 
In her speech, Ms. Smith spoke about coming to Roosevelt from an all-white, middle-class suburb, in large part to play tennis – which she did. She also, however, learned about diversity, met fellow students from all races, religions, ethnicities, and socioeconomic groups, and made them her friends. She talked about a concept that she had never heard of before but that was omnipresent at Roosevelt, social justice, which she says will guide the rest of her life. Ms. Williams’ presence on the stage, as the 85,000th graduate, may seem like a quirk, but she also is “typical” of Roosevelt: an African-American woman who received her bachelor’s in education there and now her Master’s, and who will be teaching second grade in Chicago before, she plans, going on to get her doctorate. Wow.

President Middleton, in his closing address, asked several groups to stand. They included the international students, who had to learn English in addition to their studies, and the families, friends and other supporters who jammed the Auditorium. Most impressive to me, however, was when he asked all the graduates who were the first members of their families to get a degree at their level to stand. Some were getting doctorates and master’s degrees, but the large majority of the graduates were receiving bachelor’s degrees. Two-thirds of the graduates stood, to rousing cheers.

There are plenty of colleges that offer the opportunity for students from working-class and poorer backgrounds to get an education, for first-generation students to learn. They include our community colleges (I still remember a talk at the 2008 ACE Conference by the president of LaGuardia Community College in NYC, in which she said -- as I remember it -- “there are two kinds of colleges: those that try to select the students who will be the best fit at their institutions, and community colleges, which welcome students”), and our state universities. And some are private schools, like Roosevelt. And others may have the explicit commitment to social justice that Roosevelt does.

But I am proud to be associated with one that so overtly and clearly demonstrates it.

--------------------------------------------------------------------------


Thursday, December 12, 2013

Suicide: What can we say?

On Sunday, September 8, 2013, we participated in the annual Suicide Remembrance Walk in Kansas City’s Loose Park, organized by Suicide Awareness Survivor Support of Missouri and Kansas (SASS/Mo-Kan). An article previewing the walk and interviewing Bonnie and Mickey Swade, our friends who established SASS/Mo-Kan, ran in the Kansas City Star on September 7: “What to say, and not, to those left behind by suicide”. Bonnie and Mickey became our friends because we are members of a club none of us would wish to be in: suicide survivors. Their son Brett completed suicide about a year after our son Matt did, and we were in a support group together before the Swades started their own. Matt’s suicide was on December 13, 2002, which I never thought of as “Friday the 13th” until I realized that, because of the vagaries of leap years, this year, 11 years later, is the first Friday, December 13th since then. Hence this post, several months after the walk.

The Remembrance Walk around Loose Park in Kansas City was well attended on a hot morning, and culminated in all of us standing in a very large circle holding long-stemmed flowers as a distressingly long list of names was read. We counted 7 times when two (and in one case 3) last names were repeated; the list was not in alphabetical order, so this was not coincidence. As much pain as it is to have one person you love complete suicide, two or more is unfathomable. Finally, white doves were released, and the ceremony ended to the strains of “Somewhere Over the Rainbow”.

I have written about suicide before (“Prevention and the ‘Trap of Meaning’”, July 29, 2009), discussing an article that had recently appeared in JAMA by Constantine Lyketsos and Margaret Chisholm titled “The trap of meaning: a public health tragedy”[1]. The thrust of that piece was that people -- families, lay persons, psychiatrists, psychologists, philosophers, and others -- search for “meaning”, for “reasons” for suicide, and that this is, essentially, pointless at best and devastating at worst. Suicide is the fatal result of the disease of depression, a disease which is very common and not usually fatal, but can be. It may often be precipitated by a specific event or set of events (as the final episode of chronic heart or lung disease is often preceded by a viral infection), but those are not the cause. The strongest prima facie evidence is that most people in the same circumstances (whether victims or perpetrators of bad things) do not kill themselves. But enough do to have made a long list to be read at the ceremony in Loose Park.

Like everyone else, each person who kills themselves is unique, and their histories differ. Some have made previous attempts, often many times; others gave no clue. Some have been hospitalized, often many times; others never. Some have family who were sitting on the edge, awaiting the suicidal act, trying their best to help prevent it but helpless to really do so. The families and friends of others had no idea it might happen. While those who attempt or complete suicide are depressed, some manifest that depression very overtly and some not so much. While many people who have depression never attempt suicide, some complete suicide when things are looking, to others, good. Overall, access to effective weapons increases the probability of “success”; the “lethality” (the probability that you will die from an attempt) is about 95% for guns, and only 3% for pills. Therefore, easy access to guns is associated with a higher completed-suicide rate; in young men aged 16-24, the rate is nearly 10 times higher in low gun-control states than in high gun-control states. I doubt these young men are more depressed, but they have quick and effective methods of turning what may have been relatively transient suicidal thoughts into permanent death. Of course, not all suicides are classified as such; while it is often obvious, sometimes it is not: how many one-car accidents, for example, are really suicides? And, because “unsuccessful” suicide attempts are grossly under-reported, the lack of an accurate denominator makes “success” rates very hard to pin down.

On one hand, the fact that most people who attempt suicide are not hospitalized and given intensive treatment seems to me to be a mistake. Since the greatest predictor of a suicide attempt is a previous suicide attempt, if there is any likelihood that a suicide can be prevented, it would be best to intervene at that time and try to treat the depression. On the other hand, I am not sure that there is any good evidence that treatment is terribly effective in preventing suicide. Yes, there are many people who have attempted suicide once and never again, but this may be a result of treatment or of the natural history of their disease. There are people who are under intensive treatment when they complete suicide, often when least expected. Indeed, there is evidence that treatment of depression may sometimes paradoxically increase the risk of suicide, by getting a person whose depression was so severe that they were unable to act just well enough that they can. And, conversely, there is no way of knowing how many times, before a suicide is completed, a planned attempt was put off by an intervention that may not even have been intended as one -- by demonstrating love and letting the person know they were needed.

It doesn’t always work. If the person is unwilling to share their symptoms and is determined to complete suicide, there is no prevention that is effective. My son was 24, deeply loved, lived in a state with strict gun control laws, and had probably never held a gun before. But he was able to drive to a low gun-control state, buy a carbine and bullets, and complete his suicide. He took his time and planned it, and it is unlikely that it could have been prevented. But many suicide attempts are not as well planned, are more impulsive, and efforts to prevent these might be successful in many cases. In a classic 1975 article in the Western Journal of Medicine[2], David Rosen interviewed 6 survivors of jumps from the Golden Gate Bridge. The emphasis in these interviews is on transcendence and “spiritual rebirth”, but all agreed that putting a “suicide fence” in place might have deterred them and might deter others.

For all of us who wish mightily to prevent disease and death, suicide may be seen as the greatest affront because the death is seen as “unnecessary” and often involves people who were “healthy” (except for their depression), young, and had a future before them – sometimes (as I like to think of Matt’s) a truly promising future. But too often we, in our desire to prevent death and disease, choose to focus on the least effective interventions to do so. We will take unproven drugs (especially if they are “natural” or non-prescription), and clamor for our “right” to have marginally useful or even ineffective screening tests, but there is a vocal movement against immunizations, one of the few preventive interventions that are known to be effective. We decry mass murders in school after school, and bemoan the loss of our young people to both suicide and homicide, but resist regulation of the most effective instruments of death, guns. We all take our shoes off each time we fly because of one failed “shoe-bomber”, but ignore the thousands of deaths on our city streets.

I wish my son had not killed himself. I wish I knew how to have prevented it. I wish I could tell those of you who worry about a loved one how you can prevent it. I wish even more that I could tell those of you who don’t suspect it that, in the absence of definite warning signs, you can feel safe. I can’t do that. When there are warning signs, take whatever action you can, but the reality is that it may not be effective. When there are no signs, hope that it is because there is no risk.

As individuals, we hope and do what we can. As a society, we should decide on our priorities, and we should be guided by the evidence, not by our fantasies, hopes, or magical thinking.



[1] Lyketsos CG, Chisholm MS, “The Trap of Meaning: A Public Health Tragedy”, JAMA. 2009;302(4):432-433. doi:10.1001/jama.2009.1059.
[2] Rosen DH, “Suicide Survivors: A Follow-up Study of Persons Who Survived Jumping from the Golden Gate and San Francisco-Oakland Bay Bridges”, West J Med. 1975 April;122(4):289–294. PMCID: PMC1129714.


Wednesday, December 4, 2013

Medicaid expansion or not: everyone needs coverage

In an echo of my blog post of November 17, 2013, “Dead Man Walking: People still die from lack of health insurance”, the New York Times’ lead article on November 29, 2013 was “Medicaid growth could aggravate doctor shortage”. The main point of my post was that, to the degree that there is a doctor shortage exacerbated by increasing the number of people who have health insurance (from Medicaid expansion, the insurance exchanges, or any other reason), the shortage was already there. That it was not felt earlier because people without health insurance did not seek care does not change the fact that these people were here and were just as sick then as they are now. That they were not getting health care because they were uninsured is a scandal. If anything, that people will now have coverage and thus seek care is an unmasking of an extant but unmet need.

The Times article looks particularly at Medicaid because many doctors will not see Medicaid patients since the payments do not cover their costs (or, in many cases, because they can fill their schedules with people who have better-paying health insurance). Those physicians who do accept Medicaid often feel that they will not be able to take more Medicaid patients for the same reason, and it is unlikely that those who are already not accepting Medicaid will begin to. The problem is significant for primary care, even for institutions like Los Angeles’ White Memorial Hospital that already care for large numbers of Medicaid patients. In the NY Times article, my friend Dr. Hector Flores, Chair of the Family Medicine Department at White Memorial, notes that his group’s practice already has 26,000 Medicaid patients and simply does not have capacity to absorb a potential 10,000 more that they anticipate will obtain coverage in their area.

The problem for access to specialists may be even greater. There are already limited numbers of specialists caring for Medicaid patients in California and elsewhere, for the reasons described above: they have enough well-insured patients, and Medicaid (Medi-Cal in California) pays poorly. It is also possible that some specialists have less of a sense of social responsibility (even to care for a small proportion of patients who have Medicaid or are uninsured), and their expectations for income may be higher. The San Diego ENT physician featured at the start of the Times article, Dr. Ted Mazer, is one of the relatively small number of subspecialists who do take Medicaid, but he indicates that he will not be able to take more because of the low reimbursement.

Clearly, Dr. Mazer and Dr. Flores’ group are not the problem, although it is likely that they will bear a great deal of the pressure under Medicaid expansion; because their practices have accepted Medicaid up until now, they are likely to get more people coming. The Beverly Hills subspecialists (see: ads in any airline magazine!) who have never seen Medicaid, uninsured, or poor people up until now are unlikely to find them walking into their offices. And, if those patients call, their offices will not schedule them. So what, in fact, is the real problem?

That depends a bit upon where you sit and how narrow or holistic your viewpoint is. From the point of view of doctors, or the health systems in which they work, the problem is inadequate reimbursement. As the director of a family medicine practice, I know that you have to pay the physicians and the staff. For providers working for salaries, it is the system they work for that needs to make money to pay them. The article notes that community clinics may be able to provide primary care, but does not note that many of them are Federally Qualified Health Centers (FQHCs), which receive much higher reimbursement for Medicaid and Medicare patients than do other providers. The Affordable Care Act (ACA) will reimburse primary care providers an enhanced amount for Medicaid for two years, through 2014, yet not only is there no assurance that this will continue, but in many cases the increase has yet to be put into place. And the specialists are not receiving this enhanced reimbursement (although the truth is that many of them already receive significantly higher reimbursement for their work than primary care physicians do).

From a larger system point of view, Medicaid pays poorly because the federal and state governments that pay for it (although the federal government will pay 100% of the expansion in its first years, phasing down to 90% after that) want to spend less. However, they do not want to be perceived as allowing lower quality of care for the patients covered by Medicaid, so they often put in requirements for quality that increase costs to providers, which increases the resistance of those already reluctant to accept it. Another factor to be considered is that Medicaid has historically not covered all poor people; rather, it mainly covers young children and their mothers, a generally low-risk group. (It also covers nursing home expenses for poor people, which consumes a high percentage of the budget.) Expansion of Medicaid to everyone who makes up to 133% of the poverty level means that childless adults, including middle-aged people under 65 who have chronic diseases but have been uninsured, will now have coverage.

While the main impact of Medicaid expansion is in states like California that actually have expanded the program, even in states like mine (Kansas), which have not, Medicaid enrollment has gone up because of all the publicity, which has led people already eligible but not enrolled to become aware of their eligibility (called, by experts, the “woodwork effect”). The Kansas Hospital Association has lobbied very hard for Medicaid expansion, but this has not occurred because the state has prioritized its political opposition to “Obamacare”. The problem for hospitals is that the structure of the ACA relies on the concurrent implementation of a number of different programs. Medicare reimbursements have been cut, as have “disproportionate share” (DSH) payments to hospitals providing a larger than average portion of unreimbursed care. This was supposed to have been made up for because formerly uninsured people would now be covered by Medicaid (that is, hospitals would get something); however, with that piece no longer required (thanks to the Supreme Court decision and the political beliefs of governors and state legislatures), the whole arrangement is unstable. That is, the Medicare and DSH payments are down without corresponding increases in Medicaid.

From a larger point of view, of course, the problem is that the whole system is flawed, and while the ACA will help a lot more people, it is incomplete and is dependent on a lot of parts working correctly and complementarily – and this does not always happen, as with the lack of Medicaid expansion. A rational system would be one in which everyone was covered, and at the same rates, so that lower reimbursement for some patients did not discourage their being seen. These are not innovative ideas; such systems exist, in one form or another, in every developed country (single payer in Canada, the National Health Service in Britain, multi-payer insurance with set costs and benefits provided by private non-profit insurance companies in Switzerland, and a variety of others in France, Germany, Taiwan, Scandinavia, etc.). If payment were the same for everyone, empowered people would ensure that it was adequate. Payment should be either averaged over the population or tied to the complexity of disease and treatment (rather than to what you could do, helpful or not). We would have doctors putting most of their work into the people whose needs were greatest, rather than those whose reimbursement-to-difficulty-of-care ratio was highest. There are other alternatives coming from what is often called “the right”, but as summarized in a recent blog post (“You think Obamacare is bad…”) by my colleague Dr. Allen Perkins, they are mostly, on their face, absurd.

Our country can act nobly, and often has. The ACA was a nice start, but now we need to move to a system that treats people, not “insurees”.


Saturday, November 23, 2013

Outliers, Hotspotting, and the Social Determinants of Health

We all have, or will have, our personal health problems, and the health problems that confront those close to us in our family and among our friends. Some are relatively minor, like colds, or are temporary, like injuries from which we will heal. Some are big but acute and will eventually get all better, like emergency surgery for appendicitis, and others are big and may kill us or leave us debilitated and suffering from chronic disease. Some of us have more resources to help us deal with these problems, and others fewer. Those resources obviously include things like how much money we have and how good our health insurance is, but also a variety of other things that have a great impact on our ability to cope with illness, survive when survival is possible, and make the most of our lives even when afflicted with chronic disease.

These other things are often grouped under the heading of “social determinants of health”. They include factors clearly related to money, such as having safe, stable and warm housing, having enough to eat, and otherwise having our basic needs met. They also include support systems – having family and friends who are supportive and helpful, or alternatively not having them, or having family and friends whose influence is destructive. They also include having a community that is safe and livable, one that nurtures and protects us and insulates us from some potential harm. This concept, “social capital”, is most well described in Robert Putnam’s “Bowling Alone”[1], and its health consequences in Eric Klinenberg’s “Heat Wave”[2], discussed in my post “Capability: understanding why people may not adopt healthful behaviors”, September 14, 2010.

How this affects communities is a focus of the work of Dr. Jeffrey Brenner, a family physician who practiced in one of the nation’s poorest, sickest, and most dangerous cities, Camden, NJ, and who is a founder of the “Camden Coalition”. I have written about him and his work before (“Camden and you: the cost of health care to communities”, February 18, 2012); his work drew national attention in the New Yorker article by Dr. Atul Gawande in January 2011, “The Hot Spotters”. Brenner and his colleagues have taken on that name to describe the work that they do, and have collaborated with the Association of American Medical Colleges (AAMC) to focus on “hotspotting” (www.aamc.org/hotspotter) and produce a downloadable guide to help health professionals become “hot spotters” in their own communities in ten not-easy steps. The focus of this work is on identifying outliers, people who stand out by their exceptionally high use of health care services, and on developing systems for intervening by identifying the causes of their high use and addressing them to the extent possible – activities for which traditional medical providers are often ill-suited and health care systems are ill-designed.

The essential starting point in this process, emphasized by Brenner in two talks that he gave at the recent Annual Meeting of the AAMC in Philadelphia (his home town) in early November 2013, is identifying “outliers”. The concept of recognizing outliers was the topic of a major best seller by Malcolm Gladwell a few years ago (called “Outliers”[3]), and Brenner notes that they are the “gems” that help us figure out where the flaws, and the costs, in our system are. As described in Gawande’s article, Brenner was stimulated by work done by the NYC Police Department to identify which communities, which street corners, and which individuals were centers of crime; rather than developing a police presence (and, hopefully, pro-active community intervention) for the “average” community, they were able to concentrate their work on “hot spots”. Moving out of a crime-prevention and policing model, Brenner and his colleagues linked hospital admissions data tied to individual people and performed a “utilization EKG” of their community, looking at who had the highest rates of admissions, ER visits, and 911 calls, and seeking to determine what the reasons were.
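As a concrete (and much simplified) sketch of what identifying utilization “outliers” can look like, the snippet below flags the highest-cost patients in a hypothetical encounter file. The file name, column names, and the top-1% cutoff are assumptions made for illustration, not the Camden Coalition’s actual data or method.

```python
# A minimal hotspotting sketch: find the patients with the highest utilization and cost.
# "encounters.csv" and its columns (patient_id, encounter_type, cost) are hypothetical.
import pandas as pd

encounters = pd.read_csv("encounters.csv")  # one row per admission, ER visit, or 911 call

per_patient = (
    encounters.groupby("patient_id")
    .agg(visits=("encounter_type", "count"), total_cost=("cost", "sum"))
    .sort_values("total_cost", ascending=False)
)

# Flag the "outliers": here, the top 1% of patients by total cost.
cutoff = per_patient["total_cost"].quantile(0.99)
hot_spots = per_patient[per_patient["total_cost"] >= cutoff]

share = hot_spots["total_cost"].sum() / per_patient["total_cost"].sum()
print(f"{len(hot_spots)} patients account for {share:.0%} of total spending")
```

The real work, of course, begins after the list is generated: reviewing those patients’ histories to understand why their use is so high.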

Unsurprisingly, the individuals identified most often had the combination of multiple chronic diseases, poverty, and a lack of social supports – pictures of the impact of poor social determinants of health. Sometimes there were individual, specific issues – like the person who called 911 multiple times a day and was found both to live alone and to have early Alzheimer’s, so that he couldn’t remember that he had already called. Often there were predictable community- and poverty-related issues: inadequate housing, food, and transportation, and poor understanding of the instructions given by the health care providers they had seen.

One example of such an effort is “medication reconciliation”, in which (usually) pharmacists review the medications that a patient entering the hospital, clinic or ER is supposed to be on (per their records) against what they say they are taking. It sounds like a good idea, and it has received a great deal of emphasis in the last several years, but it is one that Brenner calls a “fantasy” because it doesn’t involve going into people’s homes and (with them) searching through their medicine cabinets and drawers to find the piles of medications they have, and often have no idea of how to take -- which ones are expired, which ones have been replaced by others, which ones are duplicated (maybe brand vs. generic names, or from samples). He showed a slide of a kitchen table piled high with medicines found in one house, and says that his group has collected $50,000 worth of medicines found in people’s houses that their current providers either did not know they were taking or did not want them to take.

Brenner notes that continuous, ongoing stress weakens the body and the immune system, enhancing production of cortisol (a stress hormone), which has effects like those of taking long-term steroids, increasing the probability of developing “metabolic syndrome” and a variety of other physical conditions. He also cites the work of Vincent Felitti[4] and his colleagues, who have identified Adverse Childhood Experiences (ACEs), such as abuse and neglect, as being associated with becoming a sick, high-utilizing person in middle age (and, if they reach it, old age). This, he indicates, is exactly what they have found doing life histories of these “outliers”. It suggests that interventions at the time a person is identified as a high utilizer can be helpful for the individual patient, for the cost to the health system, and even for the community; but it also reinforces what we should already know – that interventions need to occur much earlier and be community-wide, ensuring safe housing and streets, effective education, and adequate nurturance for our children and their families.

We need, Brenner says, half as many doctors, twice as many nurses, and three times as many health coaches, the intensively trained community-based workers who do go out and visit and work with people at home. I do not know if those numbers are true, but it is clear that we need to have comprehensive interventions, both to meet the needs of those who are sickest now and to prevent them from developing in the future. We are not doing it now; Brenner says “Like any market system, if you pay too much for something you’ll get too much of it, and if you pay too little you’ll get too little.”

We need to have a system that pays the right amount for what it is that we need.





[1] Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster, New York, NY. 2000.
[2] Klinenberg, Eric. Heat Wave: A Social Autopsy of Disaster in Chicago. University of Chicago Press, Chicago. 2002.
[3] Gladwell, Malcolm. Outliers: The Story of Success. Little, Brown. New York. 2008.
[4] Felitti V, et al., “Relationship of Childhood Abuse and Household Dysfunction to Many of the Leading Causes of Death in Adults: The Adverse Childhood Experiences (ACE) Study”, Am J Prev Med 1998;14(4) (and many subsequent publications).

Sunday, November 17, 2013

Dead Man Walking: People still die from lack of health insurance

At the recent Annual Meeting of the Association of American Medical Colleges (AAMC) in Philadelphia, Clese Erikson, Senior Director of the organization’s Center for Workforce Studies, gave the annual State of the Workforce address. It had a great deal of information, and information is helpful, even if all of it is not good. She reported on a study that asked people whether they had always, sometimes or never seen a doctor when they felt they needed to within the last year. On a positive note, 85% said “always”. Of course, that means 15% -- a lot of people! -- said “sometimes” (12%) or “never” (3%). Of those 15%, over half (56%) indicated the obstacle was financial: not having the money (or insurance). There are limitations to such a survey (it is self-report, so maybe people could have gone somewhere, like the ER; or maybe they asked your Uncle George, who would have said “always” because he never wants to see a doctor even though you think he should for his high blood pressure, diabetes, and arthritis!), but it is not good news.
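Putting those figures together (a back-of-the-envelope calculation using only the percentages quoted above, nothing more):

```python
# Back-of-the-envelope arithmetic using only the survey figures quoted above.
sometimes_or_never = 0.12 + 0.03   # 15% could not always see a doctor when needed
financial_share = 0.56             # of those, 56% cited money or lack of insurance

blocked_by_finances = sometimes_or_never * financial_share
print(f"{blocked_by_finances:.1%} of all respondents")   # about 8.4% of everyone surveyed
```

That is, roughly one in twelve respondents reported going without needed care for financial reasons in a single year.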

Of course, as former President George W. Bush famously said in July 2007, "I mean, people have access to health care in America. After all, you just go to an emergency room." Many of us do not think that this is a very good solution for a regular source of care in terms of quality. Also, if you have had to use the ER regularly for your care and already have a huge unpaid stack of bills from them, it can make you reluctant to return. This likely contributes to the “sometimes” responses, probably often meaning “sometimes I can ride it out, but sometimes I am so sick that I have to go even though I dread the financial result.” Following this ER theme, another leading Republican, Mitt Romney, declared repeatedly during the 2012 Presidential campaign that “No one dies for lack of health insurance,” despite many studies to the contrary. And despite the fact that, as Governor of Massachusetts, he presumably thought it was a big enough issue that he championed the passage of a model for the federal Affordable Care Act in his state.

People do, in fact, die for lack of health insurance. They may be able to go to the ER when they have symptoms, but the ER is for acute problems. Sometimes a person’s health problem is so far advanced by the time they have symptoms severe enough to drive them to the ER that they will die, even though the problem might have been successfully treated if they had presented earlier. Or the ER makes a diagnosis of a life-threatening problem, but the person’s lack of insurance means that they will not be able to find follow-up care, particularly if that care is going to cost a lot of money (say, the diagnosis and treatment of cancer). If you doubt this still, read “Dead Man Walking”[1], a Perspective published in the New England Journal of Medicine in October 2013 by Michael Stillman and Monalisa Tailor (grab a tissue first).

We met Tommy Davis in our hospital's clinic for indigent persons in March 2013 (the name and date have been changed to protect the patient's privacy). He and his wife had been chronically uninsured despite working full-time jobs and were now facing disastrous consequences.

The week before this appointment, Mr. Davis had come to our emergency department with abdominal pain and obstipation. His examination, laboratory tests, and CT scan had cost him $10,000 (his entire life savings), and at evening's end he'd been sent home with a diagnosis of metastatic colon cancer.

Mr. Davis had had an inkling that something was awry, but he'd been unable to pay for an evaluation...“If we'd found it sooner,” he contended, “it would have made a difference. But now I'm just a dead man walking.”

The story gets worse. And it is only one story. And there are many, many others, just in the experience of these two physicians. “Seventy percent of our clinic patients have no health insurance, and they are all frighteningly vulnerable; their care is erratic.” And the authors are just two doctors, in one state, a state which (like mine) starts with a “K”, which (like mine) is taking advantage of the Supreme Court decision on the ACA to not expand Medicaid, and which (like mine) has two senators who are strong opponents of the ACA – which means, de facto, that they are opposed to ensuring that fewer people are uninsured. I cannot fathom their thinking, but it really doesn’t matter, because it is ideology and they have no plan to improve health care coverage or access. So people like Mr. Davis will continue to die. This same theme is reflected in a front-page piece in the New York Times on November 9, 2013, “Cuts in hospital subsidies threaten safety-net care” by Sabrina Tavernise:

Late last month, Donna Atkins, a waitress at a barbecue restaurant, learned from Dr. Guy Petruzzelli, a surgeon here, that she has throat cancer. She does not have insurance and had a sore throat for a year before going to a doctor. She was advised to get a specialized image of her neck, but it would have cost $2,300, more than she makes in a month. ‘I didn’t have the money even to walk in the door of that office,’ said Ms. Atkins.

In a recent blog post about the duration of medical education, I included a graphic from the Robert Graham Center which shows the increased number of physicians that the US will need going forward, mostly as a result of population growth but also from the aging of that population, along with a one-time jump because of the increased number of people who will be insured as a result of the ACA (this will, I guess, have to be adjusted down because of the states that start with “K” and others that are not expanding Medicaid). Ms. Erikson included this graphic in her talk at the AAMC, with numbers attached. Just from population growth and aging, we will require about 64,000 more physicians by 2025 (out of 250,000-270,000 total physicians). The one-time jump because of the ACA is about 27,000, bringing the number to 91,000.
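Restated as a simple sum (these are just the figures quoted from the talk; nothing new is added, and they are projections, not certainties):

```python
# Simple restatement of the workforce projection figures quoted above.
growth_and_aging = 64_000    # additional physicians projected to be needed by 2025
aca_one_time_bump = 27_000   # one-time addition attributed to newly insured people
print(growth_and_aging + aca_one_time_bump)  # 91,000 additional physicians in total
```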

But, of course, there is a big problem here. The projection that we will need more doctors because we have more people, or because our population is aging and older people need more medical care, is one thing. But the need for more doctors because more people will be insured? What is that about? Those people are here now, and they get sick, and they need care now, no less than they will when they are covered in the future. I do not mean to be critical of the Graham Center or Ms. Erikson for presenting those data. I do, however, think that we should emphasize how offensive the idea is that we will need more doctors just because more people will have coverage. They didn’t need doctors before, when they didn't have insurance?

If there are people who cannot access care, we need to be able to provide that care. We will need more health care providers, including more doctors, especially more primary care doctors. We need health care teams, because there will not be enough doctors, especially primary care doctors. We need the skills of health workers who can go to people’s homes and identify their real needs (see the work of Jeffrey Brenner and others, discussed in Camden and you: the cost of health care to communities, February 18, 2012). We need to ensure that people have housing, and food, and heat, and education – to address the social determinants of health.

Decades ago, I heard from someone who visited Cuba a few years after the revolution. He said he mentioned to a cab driver the dearth of consumer goods, such as shoes, in the stores. The cab driver said, “We used to have more shoes in the stores, but now we first make sure that they are on children’s feet before we put them in store windows.” There was enough before the revolution, enough shoes and enough milk, as long as a lot of people were not getting any. The parallel is that now, in the US, if we seem to have enough health clinicians, it is because there are lots of people not getting health care.

This is not OK. It isn’t OK with the ACA, and it isn’t OK without it.






[1] Stillman M, Tailor M, “Dead Man Walking”, N Engl J Med, October 23, 2013. DOI: 10.1056/NEJMp1312793.

Sunday, November 10, 2013

Does quality of care vary by insurance status? Even Medicare? Is that OK?

While the Affordable Care Act will not lead to health insurance coverage for everyone in the US (notably poor people in the states that do not expand Medicaid, as well as those who are undocumented), it will significantly improve the situation for many of those who are uninsured (see What can we really expect from ObamaCare? A lot, actually, September 29, 2013). The hope, of course, is that health insurance will lead to increased access to medical care and that this access will improve people’s health, both through prevention and early detection of disease, and through increased access to treatment when it is needed, including treatment that requires hospitalization. Implicit in this expectation is the assumption that the quality of care received by people will be adequate, and that the source of their insurance will not affect that care.

This may not be true. I spent a large portion of my career working in public hospitals. I absolutely do not think that the care provided by physicians and other staff in those hospitals was different for people with different types of insurance coverage (many or most patients were uninsured), and indeed for many conditions the care was better. But the facilities were often substandard, since they depended upon the vagaries of public funding rather than the profit generated from caring for insured patients. The physical plants were older and not as well maintained, staffing levels were lower, and availability of high-tech procedures was often less. There are changes; the Cook County Hospital I worked in through the late 1990s, with antiquated facilities including open wards and no air-conditioning, has been replaced by the very nice (if overcrowded) John H. Stroger, Jr. Hospital of Cook County. University Hospital in San Antonio, where I worked in the late 1990s, may have been seen by the more well-to-do as a poor people’s hospital, but in many areas, including nurse turnover and state-of-the-art imaging facilities, it outdid other hospitals in town. Still, the existence of public hospitals suggests two classes of care, and as we know, separate is usually unequal.

But what about the quality of care given to people with different insurance status in the same hospital? Surely we would expect there not to be differences: differences based on age, yes; on illness, yes; on patient preference, yes. But based on who their insurer is? Sadly, Spencer and colleagues, in the October 2013 issue of Health Affairs, call this assumption into question. In “The quality of care delivered to patients within the same hospital varies by insurance type”[1], they demonstrate that quality-of-care measures for a variety of medical and surgical conditions are lower for patients covered by Medicare than for those with private insurance. Because Medicare patients are obviously older, and thus probably at higher risk, the authors controlled for a variety of factors, including disease severity. The most blatant finding was that the “risk-adjusted” mortality rate was significantly higher in Medicare patients than in privately insured patients.
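For readers unfamiliar with the term, “risk adjustment” generally means comparing outcomes only after statistically accounting for how sick patients were to begin with. Here is a minimal sketch of that general idea using a logistic regression on made-up variables; the file name, column names, and model are illustrative assumptions, not the authors’ actual data or methods.

```python
# Minimal illustration of risk adjustment, assuming a hypothetical analytic file
# "admissions.csv" with made-up columns: died (0/1), medicare (1 = Medicare,
# 0 = privately insured), age, and severity_score. This is generic logistic-regression
# adjustment, not the Health Affairs authors' actual model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("admissions.csv")

# Unadjusted comparison: raw in-hospital mortality by insurance type.
print(df.groupby("medicare")["died"].mean())

# Adjusted comparison: the coefficient on 'medicare' reflects the difference in the
# odds of death after accounting for age and disease severity.
model = smf.logit("died ~ medicare + age + severity_score", data=df).fit()
print(model.summary())
```

If the Medicare coefficient remains significantly positive after adjustment, the difference in mortality cannot be explained by the severity measures included in the model, which is essentially the disturbing kind of finding the authors report.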

This is Medicare. Not Medicaid, the insurance for poor people, famous for low reimbursement rates. It is Medicare, the insurance for older people, for our parents, for us as we age. For everyone. Medicare, the single-payer system that works so well at covering everyone (at least those over 65). (One of the reasons the authors did this study was the existing perception -- and some evidence -- that Medicaid and uninsured patients, as a whole, received lower quality care, but that was related to their care often being delivered at different hospitals.) The increase in mortality rates for Medicare patients compared to others with the same diagnosis was often substantial. But why?

Our hospital has clearly demonstrated that, essentially, Medicare is its poorest payer, and that, on the whole, it loses money on Medicare patients. This may well be true at other hospitals, but in itself it should not account for lower quality of care, just lower profit. I strongly doubt that either our hospital or the physicians caring for these patients believe that they deliver lower quality care to Medicare patients, or that they are more reluctant to do expensive tests or provide expensive treatments when they are indicated. And yet, at the group of hospitals studied (if not mine, perhaps), it is true. The authors speculate as to what the reasons might be. One thought is that Medicare patients (and other less-well-insured patients) might have worse physicians (“slower, less competent surgeons”); in some teaching hospitals, perhaps they are more likely to be cared for by residents than by attending physicians. However, I do not believe, and have not seen good evidence, that this is the case. Another possibility is that newer, more expensive technologies are provided for those with better insurance. There is not good evidence for this, either, nor for another theory: that more diagnoses (“co-morbidities”) are listed on the bills of privately insured patients to justify higher reimbursements. I think that there is an increasing trend to do this (not necessarily inappropriately), and that, as the authors indicate, the trend is greater among for-profit than teaching hospitals, but in itself this does not suggest a significant difference for privately insured patients compared to those covered by Medicare.

What, then, is the reason? Frankly, I don’t know. It could be simply a coding issue; that is, hospitals may list more intercurrent (co-morbid) conditions for private patients in hopes of greater reimbursement, which makes those patients appear sicker compared to Medicare patients when the latter are actually sicker. Or it may be that less experienced physicians and surgeons care for Medicare patients. Or it may be that, despite the willingness of physicians, hospitals are less likely to provide expensive care for patients who, like those covered by Medicare, are reimbursed by diagnosis rather than by the cost of treatment. Indeed, there may be other patient characteristics that lead to inequities in care and that confound this study, but the idea that it may be because they are insured by Medicare is pretty disturbing.

Actually, in any case it is disturbing. It is already disturbing enough that a large portion of the US population is uninsured or underinsured, and that even with full implementation of the ACA there will still be many, if fewer, of us in that boat. It is disturbing to think that those who are poor and uninsured or poorly insured receive lower quality of care, possibly from less-skilled or less-experienced physicians, than those with private insurance. It is understandable (if not acceptable) that hospitals, physicians, and rehabilitation facilities might prefer to care for relatively young, straightforward patients with a single diagnosis, low likelihood of complications, and clean reimbursement. But if people are receiving poorer-quality care because they are our seniors, that is neither understandable nor acceptable.

It is another strong argument for everyone being covered by the same insurance, by a single-payer plan. Then, whatever differences in quality might be discovered, it would not be by insurance status.



[1] Spencer CS, Gaskin DJ, Roberts ET, “The quality of care delivered to patients within the same hospital varies by insurance type”, Health Affairs, Oct 2013;32(10):1731-39.

Saturday, November 2, 2013

Should Medical School last 3 years? If so, which 3?


As we look at how to increase the number, and percent, of students entering primary care residency programs, it is interesting to see how some schools have creatively tried to address the problem. Texas Tech University Medical School and Mercer University Medical School’s Savannah campus have begun to offer MD degrees in 3 years to a select group of students who are both high performers and planning on Family Medicine careers, thus decreasing their indebtedness (one less year of school to pay for) and getting them into family medicine residencies sooner; several other schools are considering the same. They do this by essentially eliminating the fourth year of medical school. This is the subject of a piece by surgeon Pauline Chen, “Should medical school last just 3 years?”, in the New York Times. She discusses different perspectives on the fourth year, previous experiences with reducing the length of medical school training, and two ‘point-counterpoint’ essays on the topic in the New England Journal of Medicine.

Chen addresses prior efforts to shorten medical school, including the most recent precursor of this current one. Specifically aimed at increasing the number of highly qualified students entering Family Medicine residencies, it was implemented at several schools in the 1990s, and allowed students to effectively combine their 4th year of medical school with their first year of family medicine residency, thus completing both in 6 years. The programs were successful by all criteria. Students did well on exams and were able to save a year of tuition money, and medical schools were able to retain some of their best students in family medicine. Of course, therefore, the programs were stopped. In this case the villain was the Accreditation Council for Graduate Medical Education, which decreed that, because students did not have their MD when they started residency training (it was granted after the first year, which combined the 4th year of medical school with internship), they were ineligible for residency training. Thus this newest iteration offers the MD degree after three years.

An older effort to shorten medical school is also mentioned, one with which I have personal experience. In the 1970s, “as many as 33 medical schools began offering a three-year M.D. option to address the impending physicians shortages of the time.” One of those was the Loyola-Stritch School of Medicine, where the only curriculum was 3 years. In 1973, I was in the second class entering that program. We spent 12 months in ‘basic science’, pretty much just in classes in the mornings, and then two full years in clinical training. Chen writes that “While the three-year students did as well or better on tests as their four-year counterparts, the vast majority, if offered a choice, would have chosen the traditional four-year route instead.” I have no idea where she gets this impression; it is certainly not at all my memory. Our friends across town at the University of Illinois went to school for two years of basic science, 8 hours a day to our 4. We did not envy that. As Chen notes, we did just as well on our exams, and saved a year’s tuition, and I daresay no one could tell the difference in the quality of the physicians graduating from the two schools, either when they entered residency in 1976 or today, after 37 years of practice. Again, it was all good.

And, again, it was stopped. Why? Of course, the experiment only produced one additional class of physicians (after that, it was still one class per year), so that benefit expired, but what about the other benefits I have cited? Why wasn’t the program continued? Chen hits the nail on the head in her next paragraph: “The most vocal critics were the faculty who, under enormous constraints themselves to compress their lessons, found their students under too much pressure to understand fully all the requisite materials or to make thoughtful career decisions.” In particular, the basic science faculty, who taught the first two years of school now compressed into one. The fact that students did just fine on USMLE Step 1 and became good doctors was apparently insufficient to convince them. They made arguments like the one above, shifting the problem onto the students (“they” were under too much pressure) rather than acknowledging that it was the faculty who felt the pressure. I can’t remember anyone wishing they had another year to spend in basic science lectures.

The truth is that there is no magic amount of basic science educational time needed to become a doctor. The amount of time needed is the amount necessary to either: (1) learn enough to pass USMLE Step 1, a fine utilitarian standard, or (2) learn the key pieces of basic science information that every physician needs to know in order to practice quality medicine. While some basic science faculty might bridle at the idea of #1 (“Teach to the test? Moi?”), trying to identify what comprises #2 is a lot of work. It is easier to teach what we have always taught, what the instructors know about. If the reason for more time were the amount of basic science knowledge, then what required two years to teach 35 years ago would require 10 or more years now, because so much more is known. That is not feasible. The right answer is #2, but getting folks to do it is hard.

Chen quotes Dr. Stanley Goldfarb, lead author of the perspective piece against three-year programs, as saying “You can’t pretend to have a great educational experience without spending time on the educational experience,” which is of course true but leaves open the question of what those experiences should be. If we are going to decrease the length of time students spend in medical school, it makes much more sense to reduce the time spent learning basic science factoids that most will forget after USMLE Step 1 (reasonably enough, since they will never need most of that information again) and to focus on adult learning by teaching the information that all physicians do need to know. This effort requires that clinicians have major involvement in deciding what that is. It makes much less sense to remove one of the years of clinical training; rather, that training should be augmented, becoming less about vacations and “audition clerkships” and more about learning. Why this is unlikely to happen, of course, has nothing to do with educational theory or the quality of physicians produced and everything to do with medical school politics. There is no constituency on the faculty for the fourth year, and there is a strong basic science faculty constituency for the first two.

Yes, we need more primary care doctors, lots of them, and we may need more doctors altogether, to help meet the health needs of the American people, and we need them soon. Data from the Robert Graham Center of the American Academy of Family Physicians (AAFP)[1] (attached figure) show the projected increase in need, including the one-time bump from the ACA, which will bring a large number of people who have not had access into care, and the longer-term need from population growth and aging. Programs that increase the number of primary care doctors (like the 6-year family medicine programs of the 1990s) are good. Programs that decrease the number of years by reducing basic science coursework rather than clinical time obviously make more sense from the point of view of producing well-trained doctors. (Programs like the 3-year option at NYU, which is not even geared to training more primary care physicians, are, from this point of view, irrelevant.) These should not remain pilots; they should be scaled up to produce more clinically well-trained primary care doctors.

And we need to do it soon. Medical school turf battles should not be the determinant of America’s health.







[1] Petterson SM, et al., “Projecting US Primary Care Physician Workforce Needs: 2010-2025”, Ann Fam Med Nov/Dec 2012;10(6):503-509. doi:10.1370/afm.1431.

Saturday, October 26, 2013

Why do students not choose primary care?


We need more primary care physicians. I have written about this often, and cited extensive references that support this contention, most recently in The role of Primary Care in improving health: In the US and around the world, October 13, 2013. Yet, although most studies from the US and around the world suggest that the optimal percentage of primary care doctors is 40-60%, the ratio in the US is under 30% and falling. A clear reason for this is the relative lack of interest of US medical students in entering primary care at the rates needed to maintain, not to mention increase, our current primary care ratio. In addition, the ratio of primary care to other specialty residency positions is too low. Here we confront the fact that the large majority of medical students completing Internal Medicine residencies enter subspecialty fellowships rather than practicing General Internal Medicine. At the Graduate Medical Education level, a simple way of estimating the future production of primary care doctors would be to add the number of residency positions in Internal Medicine (IM), Pediatrics (PD), Family Medicine (FM), and combined Internal Medicine-Pediatrics (IMPD) and subtract the number of fellowship positions their graduates might enter. This still overestimates the number of general internists, however, since it does not account for doctors who practice as “hospitalists” after completing their residency, because such a role does not currently require a fellowship (as does, say, cardiology). Estimates are now that 50% or more of IM graduates who do not pursue fellowship training become hospitalists.
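To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch of the estimate described above; it is not from the study or any official source, and every number in the example call is a hypothetical placeholder, not real GME data. The 50% hospitalist fraction is the estimate cited in the paragraph.

```python
# Back-of-envelope estimate of annual primary care physician production.
# All counts passed in the example below are hypothetical placeholders.

def estimate_primary_care_output(residency_positions, fellowship_positions,
                                 im_non_fellowship_grads,
                                 hospitalist_fraction=0.5):
    """residency_positions: total first-year IM + PD + FM + IMPD slots.
    fellowship_positions: subspecialty fellowship slots those graduates fill.
    im_non_fellowship_grads: IM graduates who skip fellowship training.
    hospitalist_fraction: share of those IM graduates who practice as
        hospitalists rather than as general internists (~50% per estimates)."""
    hospitalists = im_non_fellowship_grads * hospitalist_fraction
    return residency_positions - fellowship_positions - hospitalists

# Hypothetical illustration only:
print(estimate_primary_care_output(residency_positions=13000,
                                   fellowship_positions=5000,
                                   im_non_fellowship_grads=2000))
# -> 7000.0 projected primary care entrants under these made-up numbers
```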

Thus, we welcome the research report from the Association of American Medical Colleges (AAMC), “The role of medical school culture in primary care career choice”[1], by Erikson et al., which appears in the December 2013 issue of AAMC’s journal Academic Medicine. The authors surveyed all 4th-year medical students from a random sample of 20 medical schools to assess both student-level and school-level characteristics associated with a greater likelihood of entering primary care. The first, and arguably most important, finding was that only 13% of these final-year medical students were planning on primary care careers. This is despite the fact that 40% were planning to enter the “primary care” residencies of IM, PD, FM, and IMPD, with most of the fall-off in internal medicine and the least in family medicine. This finding strongly supports my assertions above, and makes clear that the historically AAMC-encouraged practice of medical schools reporting “primary care” rates by entry into residencies in those fields is not valid. It also, even more importantly, shows the extent of our problem: a 13% production rate will not get us from 30% to 40% or 50% primary care no matter how long we wait; obviously it will take us in the other direction.

The primary outcome variable of the study was entry into primary care, and it specifically looked at two school-level characteristics (as perceived by students and reported in the survey): badmouthing of primary care (faculty, residents or other students saying it is a fallback or a “waste of a mind”) and having greater than the average number of positive primary care experiences. It turns out that both were associated with primary care choice (in the case of badmouthing, students from schools with higher than average reported rates were less likely to be planning primary care careers, while students who were planning such careers reported higher rates of badmouthing), but, after controlling for individual student and school characteristics, these accounted for only 8% of the difference in primary care choice. Characteristics of the student (demographics such as sex, minority status, or rural origin; academic performance, defined as the score on Step 1 of the USMLE; and expectation of income and a feeling of personal “fit” with primary care) and of the school (research emphasis, private vs. public, selectivity) accounted for the rest. Interestingly, debt was not a significant factor in this study.

I would argue that many of these individual and school characteristics are highly correlated. A school that prides itself on being selective (taking students with high scores) and on producing subspecialists and research scientists does not have to badmouth primary care; the institutional culture intrinsically marginalizes it. On the other side, the students selected at those schools are more likely to have the characteristics (particularly high socioeconomic status and urban or suburban origin) not associated with primary care choice. It is worth noting that the measure of academic performance in this study was USMLE Step 1, usually taken after the first 2 years and focused more on the basic science material covered in those years, rather than USMLE Step 2, which covers more clinical material (perhaps because not all of the 4th-year students studied had taken Step 2 yet). This biases the assessment of academic qualification; many studies have demonstrated strong associations of pre-medical grades and scores on the Medical College Admission Test (MCAT) with pre-clinical medical school course grades and USMLE Step 1 scores, but not with performance in any clinical activity, not to mention primary care. Most students probably improve their scores from Step 1 to Step 2, but the improvement seems particularly large for those entering FM and primary care; a quick look at our KU students applying to our family medicine program shows an average increase of nearly 30 points.

So the problem is in the overall culture of medical schools, in their self-perception of their role (creating research scientists vs. clinicians, subspecialists vs. primary care doctors) and in their belief that taking the students with the highest grades is equivalent to taking the best students. This culture, simply put, is bad, defined as “it has undesirable outcomes for the production of the doctors America needs”, and must change. Erikson and colleagues acknowledge that schools could do a better job of taking rural students, offering more opportunities to engage in public health and community outreach activities, and providing more experiences in primary care, all of which were somewhat associated with primary care career choice. These are tepid recommendations but, coming from the AAMC, reasonably significant ones. I say we need an immediate change in every single medical school: recruit at least half of every class from students whose demographic and personal characteristics are strongly associated with primary care choice, and present a curriculum that has much less emphasis on “basic science” and more on clinical material, especially public health, community health, and primary care. One of the primary bases for assessing the quality of a medical school should be its rate of primary care production, and this is going to require a major qualitative shift in the practices and beliefs of many of their faculty and leaders.

I am NOT saying that we don’t need subspecialists or research scientists. We do. I AM saying that the emphasis on producing these doctors compared to primary care doctors is out of whack, not just a little but tremendously so, and can only be addressed by a major sea change in attitudes and practices in all of our medical schools. I do not expect all schools to produce the same percentage of primary care physicians. Some might be at 70%, while others are “only” at 30%, but ALL need a huge increase, by whatever means it takes. Even if we produce 50% primary care physicians on average from all schools, it will be a generation before they are 50% of the workforce. At less than that it will take longer, and at less than 30% we will not even maintain where we are.
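To see why it takes a generation, here is a minimal cohort-replacement sketch. The 35-year career length, the 30% starting share, and the equal cohort sizes are simplifying assumptions for illustration, not data from any source.

```python
# Toy model: the physician workforce is treated as 35 equal-sized annual
# cohorts. Each year the oldest cohort retires and a new graduating cohort
# enters. Starting share and career length are illustrative assumptions.

def workforce_share_over_time(start_share=0.30, new_grad_share=0.50,
                              career_years=35, years=40):
    cohorts = [start_share] * career_years   # existing cohorts at 30% primary care
    shares = []
    for _ in range(years):
        cohorts.pop(0)                       # oldest cohort retires
        cohorts.append(new_grad_share)       # new graduates enter at 50% primary care
        shares.append(sum(cohorts) / career_years)
    return shares

shares = workforce_share_over_time()
for year in (10, 20, 30, 35):
    print(f"after {year} years: {shares[year - 1]:.1%} primary care")
# Under these assumptions the workforce reaches 50% primary care only after
# about 35 years, i.e., roughly a full career length: a generation.
```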

13% is not just “insufficient”, it is a scandalous abrogation of the responsibility of medical schools to provide for the health care of the American people. They should be ashamed, should be shamed, and must change.






[1] Erikson CE, Danish S, Jones KC, Sandberg SF, Carle AC, “The role of medical school culture in primary care career choice”, Acad Med December 2013;88(12), published online ahead of print.
