Saturday, September 29, 2012

Primary care, specialty care: what about health?


Three “Perspectives” in the September 6 issue of the New England Journal of Medicine address different, but clearly related, aspects of the transformation of health care. I have previously discussed one of them, “Becoming a physician: the developing vision of primary care” by Barnes, Kroening-Roche, and Comfort, in Social determinants key to the future of Primary Care (September 23, 2012). The other two are “What business are we in? The emergence of health as the business of health care”[1], by Asch and Volpp of the Wharton School of the University of Pennsylvania, and “From sick care to health care -- re-engineering prevention into the US health system” by Marvasti and Stafford.[2] Taken as a whole, the three provide significant insight into the current US healthcare delivery system, the changes that need to be made in it, and the way we will get there.

Asch and Volpp discuss the need to shift from “what can we produce?” (health care) to “what do people want?” (health). They draw a parallel with the failure of major industries (railroads in the last century; Eastman Kodak in this one) to make this distinction. Each of those industries made the mistake of confusing what it produced with what people wanted when they bought its products. Thus, railroad companies provided railroads when customers wanted transportation of goods and people; when alternatives (trucking, air) became available, they were unable to adjust (although in Europe they did a pretty good job). Similarly, Kodak made film when what people wanted was to store images of their lives. Asch and Volpp do not mention that the ad agencies got this right (“we create memories”), but the corporation did not move into the digital age early enough and, last year, went into Chapter 11 bankruptcy.

The “health care” industry provides, at best, health care, but more often just medical care, and most especially disease care. People seek it out because it is what is available; what they want is health – to not be sick, in pain, or disabled. This is of course why “mainstream” medicine is not the only source of treatment people seek. It explains the allure, and extensive use, of the products of the “alternative care” industry, which ranges from degreed practitioners like chiropractors and naturopaths, to long-used herbal and other cultural treatments dispensed by various means (botánicas and the Internet), to Eastern medicine such as acupuncture, to religious rituals including American Indian healing, Catholic exorcisms, and the rituals of Santeria, Candomblé, and Voodoo derived from Africa, to straightforward quackery. “Integrative medicine” is an effort by traditional western medicine to employ many of these techniques and traditions. While we probably would actually prefer the diagnostic and treatment magic of Star Trek’s doctors, we’d certainly like the magic pill, elixir, injection, herb, or prayer that would heal all our ills – preferably with no real or sustained effort on our part, and without side effects. Interestingly, while “alternative” medicine is seen as more “holistic”, it is in fact often more biologically reductionistic, using interventions (e.g., enemas, diet changes, supplements) to cure social and psychological problems (see my 2005 piece “Towards a definition of holism” in the British Journal of General Practice).[3]
Perhaps all of this is “quackery”; or perhaps only some of it is, and the rest is not. In mainstream American medicine, something doesn’t become a real disease until there is a test for it – or, better yet, a drug for it – thus the medicalization of many things that people have experienced as part of the mortal coil for thousands of years. Some things that were accepted as “part of life” (and death) are now diagnosable and treatable; others might be in the future. But those of us in the “health care” industry need to understand that folks will only buy what we are selling if it is, on the whole, the most effective way for them to get what they want: health.

Marvasti and Stafford discuss the need to change from a system designed to treat acute conditions and acute exacerbations of chronic diseases to one built on Fries’ model of “morbidity compression”, “in which the disease-free life span is extended through the prevention of disease complications and the symptom burden is compressed into a limited period preceding death.” This dovetails well with what I have discussed above: people want to be healthy. They recognize that they are going to die, but they want to do so quickly, painlessly, and at the end of a long and healthy life. As a loved one of mine who is closer to this than many puts it, “someday I just won’t wake up”. Morbidity compression. If he is lucky, if we are all lucky, that is how it will happen.

But right now, our health care system is in fact designed to treat acute conditions and acute exacerbations of chronic disease, not to maintain the care and health of people who have not yet developed chronic disease or whose disease is stable. More to the point, our system does this because this is what is paid for; we are just scratching the surface of the ideas of “chronic disease management”. In fact, we pay so well for acute interventions that hospitals are hiring acute “interventionalists” at extraordinary salaries compared to their colleagues in the same specialties who actually manage people over time. For example, “stroke neurologists”, or sometimes interventional neuroradiologists, who can inject clot-busters into the arteries of people with acute strokes make perhaps 2-3 times what a neurologist who manages chronic neurologic diseases does (and not that much more than regular radiologists, who are overall much more highly paid); this is because the hospitals that employ them are so highly reimbursed for these procedures.

The need to manage actual people, especially when they have chronic disease, not just an acute episode, is obvious to most of us. It may even be obvious to the insurance companies and other payers, and to the hospitals that support the acute interventionalists, but so far they haven’t changed what they pay for – what is financially incented. I write a lot about primary care, but this is not only about primary care. Commenting on my last piece, Social determinants key to the future of Primary Care, a neurologist colleague who cares for people with Amyotrophic Lateral Sclerosis (ALS) – Lou Gehrig’s Disease, a terrible and always fatal degenerative condition that satisfies almost none of the criteria for “morbidity compression” – wrote:

But the group health care model is also the best for chronic rare diseases managed by subspecialists....this is what we do in ALS clinic on Monday mornings. You should come visit, Josh, at 8 when all the folks (speech therapy, physical therapy, occupational therapy, social workers, dieticians, equipment providers, respiratory therapy) meet with the neuromuscular neurologists to discuss each case. Then we see them and all weigh in and we give patient a printout with advice from each. But, alas, there is no way to pay for this without support from local and national foundations....Medicare doesn't cover it by far.

I wrote back:
Of course. In this sense, you are sharing the same issues as primary care doctors -- you are managing patients, not just a single episode of disease. Certainly ALS is a disease, but it is a chronic one that takes over people's -- and their family's -- lives, and requires not only complex and interdisciplinary, but long-term, management. Indeed, the concept of the medical home was developed in the 1960s by the specialty pediatricians managing kids with chronic diseases such as cystic fibrosis, juvenile diabetes, and sickle cell, which share with ALS the fact that there is one disease that dominates the lives of the patient and their family; to care for it requires managing not just the disease but working with the whole person and their family. It can also (but is not always in practice) be true of HIV clinics.

This is less true of many other adult diseases, which often co-exist – diabetes, hypertension, congestive heart failure, chronic lung disease, depression, arthritis – so that one specialist is not interested in (or perhaps capable of) managing them all; thus primary care for adults, geriatricians, etc.

It is hardly true at all of those who do radiology, anesthesiology, single-time consults, or one-shot surgery, or of the one-shot-into-the-cerebral-artery neuro-interventionalists. Or, in short, of any of the folks making a lot of money for single things, while the stuff that you do in ALS clinic is not paid for.

This is insane. We do not have a health system, and we do not even have a health care system. We have a medical care system, with the emphasis on the medical. It is fine to pay for an episode of care, but it is much more important to reward care.


[1] Asch DA, Volpp KG, “What business are we in? The emergence of health as the business of health care”, NEJM 2012;367(10):887-89. DOI: 10.1056/NEJMp1206862
[2] Marvasti FF, Stafford RS, “From sick care to health care -- re-engineering prevention into the US health system”, NEJM 2012;367(10):889-91. DOI: 10.1056/NEJMp1206230
[3] Freeman J, “Towards a definition of holism”, Br J Gen Pract. 2005 Feb;55(511):154-5. PMC1463203

Sunday, September 23, 2012

Social determinants key to the future of Primary Care



A "Perspective" in the September 6 issue of the New England Journal of Medicine, "Becoming a physician: the developing vision of primary care"[1] by Kathleen A. Barnes, Jason C. Kroening-Roche, and Branden W. Comfort*, addresses the change in the practice of primary care enabled by changes in payment and structure and how this is more attractive to medical students. All three are medical students (although Kroening-Roche already has both his MD and MPH) from schools in different parts of the country (Harvard, Oregon, and Kansas); they met at the Harvard School of Public Health, and all of whom seem to be interested in being primary care physicians. They describe a model – or, more accurately, as they say, a vision – of primary care practice in which they see themselves in the future, and about which they are enthusiastic. By extension, one would hope that this is also true of many other medical students.

The practice that they describe is quite detailed in many ways:
 "…a day in a primary care office would begin with a team huddle….The team would discuss the day's patients and their concerns. They would review quality metrics, emphasize their quality-improvement cycle for the week, and celebrate the team's progress in caring for its community of patients…The RN would manage his or her own panel of patients with stable chronic disease, calling them with personal reminders and using physician-directed protocols…The social worker, nutritionist, and behavioral therapist would work with the physician to address the layers of complexity involved in keeping patients healthy. Clinic visits would ideally be nearly twice as long as they are now…"

It sounds great. As the authors note, there are practices that are working toward, and in some cases have begun to achieve, this "new model" of care; these three did not originate these ideas. Practitioners and thinkers such as Tom Bodenheimer, Joe Scherger, Bob Phillips, and Kevin Grumbach have written about this, and many practices, particularly integrated groups such as Kaiser Permanente, Intermountain Healthcare, and Geisinger, have implemented many of these characteristics. But will it be the future of all health care? Will, importantly, these changes – or ones like them – both provide the functionality that the health system needs from primary care and satisfy the physicians entering into this kind of practice?

In many articles, including Transforming primary care: from past practice to the practice of the future[2], Bodenheimer has emphasized the need for teams from a practical standpoint – there are more people needing care and not enough primary care physicians to provide it. Phillips ("O Brother Where Art Thou: An Odyssey for Generalism", presented at the Society of Teachers of Family Medicine Annual Conference in May 2011) shows data indicating that, even including "mid-level providers" such as advanced practice nurses and physician assistants, there are far too few primary care providers, and the trajectory of production is in the wrong direction. Our own data[3] show the marked decrease in the number of medical students entering family medicine (and other primary care specialties) over the last dozen years. So it is profoundly to be hoped that the model of care described by these authors develops, that they are able to help develop it, and that it will attract more future physicians.

While practice change is hard, and culture change is harder, there are issues that these authors mention but that do not seem to worry them overly. They note the importance of the Affordable Care Act, and how it "…emphasizes population health and primary care services, and establishes accountable care organizations that require strong primary care foundations," but they do not, in my opinion, adequately address two key challenges that will present profound obstacles to the achievement of their vision.

The first is payment – reimbursement, the allocation of health care dollars. They assume that, "…thanks to a restructured reimbursement system," medical assistants will "…have protected time to provide health coaching for behavior change and to ensure that the patients on their panel were current with their preventive care." Because reimbursement would be "…through global payments linking hospitals to primary care practices, the physician, too, would have a financial incentive to keep patients healthy…." It is a great model, and one that I agree with, but it hasn't happened in most places. Because it is more costly and requires significant investment in prevention and primary care, and because there are unlikely to be additional dollars in the health system, it will mean lower reimbursement for hospitalizations, for procedures, and for the specialists who are currently the most highly paid. This, I would argue, would not be a bad thing, but it will not happen easily. Those who are doing well under the current system are going to fight to hold on to it, and the reimbursement structure is not changing quickly enough to push such change outside of integrated health systems – and even within many of them.

The second is what can be summarized as the "social determinants of health". Good public health students, they observe that "…the health care system must strive to affect more than the 10% of premature mortality that is influenced by medical treatment," and note correctly that "Primary care cannot be primary without the recognition that it is communities that experience health and sickness. Providing better health care is imperative but insufficient." 

This is true, but there is more to it. Health care in itself – even well organized, with adequate numbers of primary care practices working in teams, collaborating with public health workers, going out into the community, and employing culturally competent health navigators/guides/case managers/promotoras – is not going to do it alone. The social determinants of health have to be addressed by the entire society.

Poverty, unstable housing, food insecurity, cold, and the social threats that are often prevalent in the same communities (violence, drug use, abuse, etc.) will continue to create situations in which people are not healthy and need medical care. Even in the larger society, in the part where people are not living at the edge, there are many anti-health forces: stress (including the stress of working harder and at more jobs to keep away from the edge), the ubiquity and ease of access of poor-quality, high-calorie food, and the shredding of the social safety net, which is almost gone at the bottom and fraying at the sides (Social Security, Medicare). These are not harbingers of a happier, healthier society.

I am thrilled by the enthusiasm of these young physicians and physicians-to-be, and by their commitment to primary care and a new kind of practice. They begin by observing – echoing Bob Dylan from 50 years ago and, more important, the movement that was growing then – that "times are changing", but I fear we are not yet clear on what that change will be; there is tremendous energy – and even more money – behind a change that would be for the worse for everyone except the most privileged.

They end by saying that "We are here to engage in and advance the movement." They are talking about transforming primary care, but I hope that they and their colleagues recognize that it will not be enough unless they are willing to engage in and advance the movement to transform society.


*In full disclosure, one of the authors, Branden Comfort, is a student at the KU School of Medicine. Although he has spent his clinical years at our Wichita campus, I know him well because we worked together in the student-run free clinic (and he was my advisee) during his first two years here in Kansas City.





[1] Barnes KA, Kroening-Roche JC, Comfort BW, "Becoming a physician: the developing vision of primary care", NEJM Sept 6, 2012;367(10):891-4.
[2] Margolius D, Bodenheimer T, "Transforming primary care: from past practice to the practice of the future", Health Aff (Millwood). 2010 May;29(5):779-84.
[3] Freeman J, Delzell J, "Medical School Graduates Entering Family Medicine: Increasing the Overall Number", Family Medicine, October 2012, in press.

Sunday, September 9, 2012

Research basic and applied: we need them both


 “Not every mystery has to be solved, and not every problem has to be addressed. That’s hard to get your brain around.”

This statement was the coda of a very good article, “Overtreatment is taking a harmful toll”, by Tara Parker-Pope in the NY Times, August 28, 2012. The topic of the article, and the implication of the speaker – who was talking about her own family’s health care and unnecessary testing – is one that I have written about several times recently, in terms of both screening tests (“The "Annual Physical": Screening, equity, and evidence”, July 4, 2012) and the investigation and treatment of disease (“Rationing, Waste, and Useless Interventions”, June 21, 2012). Thus, I certainly agree that there is too much testing and too much intervention, and that this has a high cost both in dollars and in potential risk to people (the English word for what the health system calls “patients”). So why do I feel a little uncomfortable with the quotation above?

I think it is because I very strongly believe that the decision about what tests to do and what interventions to undertake should be informed, as much as possible, by the evidence. That evidence, I have also argued, should come from research, from well-designed studies, from science. This is also costly, but it is necessary. Your treatment should be based on evidence and probability gathered from studies of large populations. Without it, doctors and other health professionals are flying blind, with treatments based on their own experience or, worse yet, on “what makes sense”. Sometimes the doctor’s own experience is a good guide, if they see a lot of patients with the same problem and have reason to know what works. It is even better when they can bring in knowledge of the local community (e.g., what antibiotics are the common bugs here resistant to? What are the common belief systems of the people that I care for?), and better yet if they actually know you: what you value, what your medical history is, what your belief system is, and what is most likely to engage your effort in the interest of your health.

But it is better if the set of options from which they choose are all based in evidence. That something makes sense, I have often pointed out to medical students and residents, makes it a research question, not an answer. If something makes sense, based on what we already know, it is likely to be a more valuable thing to study than something that does not make sense. However, until the study, or more likely several studies, are done we won’t know if it is, in fact, true. Human beings, both in terms of their biology and behavior, are too complex, and have too many different systems interacting with each other, to predict accurately how something that “makes sense” based on one of those dimensions is likely to turn out.

The thing is that not all research is immediately clinically relevant. Sometimes it is; the “Ottawa rules”, developed by research done in Canada, provide physicians with evidence-based guidelines about when it is appropriate to do x-rays for injured ankles, knees, and feet – common problems. Other studies investigate whether particular drugs may provide real benefit to people with more uncommon problems. This is particularly satisfying when the drug is not some new, expensive blockbuster but something cheap and common like aspirin or folic acid. Or when an old drug, all but abandoned for its original purpose, turns out to be very effective for another condition entirely. (One of my colleagues just demonstrated this for an old heart drug that works for a rare neuromuscular condition – coming soon to your local JAMA!) But much research is at a very basic level. Before those drugs can be tested on particular conditions, they have to be developed. Before they can be developed, the biological and biochemical mechanisms upon which they have an effect have to be identified. Just as, before we can send rockets to the moon, we need to understand physics. Science, what in medicine we call “basic science”, has to continually move forward, and this requires focusing not solely on what might be of practical use tomorrow, but also on what is still a mystery to be solved.

I find it almost ironic that I am writing this defense of basic science research. Just recently, I was in NYC and went to brunch at the Riverpark restaurant. On the block leading to it is a big vegetable garden where they grow many of their own ingredients, much of it surrounded by a big wooden fence. And, since it is right there at Bellevue Hospital, NYU Medical Center, and Rockefeller University, that fence is decorated with pictures and biographies of Nobel Prize winners in Medicine who had ties to NYC. My reaction was that all of these people (even those with MD degrees) were doing laboratory, basic science research, not clinical research, even though the prize is for “Medicine”. Of course, having won Nobel Prizes, their research led to important practical breakthroughs; but for every Nobel Prize winner who discovers something that will make a major difference in health, there are thousands and thousands of others working in laboratories everywhere, and this work is necessary.

Personally, I don’t think it necessarily has to be done in medical schools, whether NYU or the University of Kansas, rather than in research institutes like Rockefeller or Kansas City’s Stowers Institute (or the Karolinska in Sweden or the Pasteur Institute in France). I find, as a family doctor, that the fact that much basic research in human biology is done at medical schools leads to what I think are negative “side effects”. I believe that there is an over-emphasis on teaching medical students the biological sciences in great detail (often at the level of minutiae) and an under-emphasis on the social sciences. I think that the latter are just as important – maybe more important for the practicing physician – but they are usually not considered “core” to medical student teaching.

In part this is because those working in the social sciences are most often “there”, at the main campus, not “here”, at the medical school. I am proud that the research conducted by faculty in my department is mostly community-based, looking at determinants of health and health disparities. But, whether biomedical research should be as important a part of medical schools as it usually is, or not, it is absolutely clear that it needs to occur, and that scientists need to solve mysteries.

Every mystery? Well, of course, that will never happen. And even for the ones they solve, the results are not always beneficial for folks. We can map the human genome! We can tell you if you and your family members are at increased risk for a terrible disease! Of course, often we cannot do anything about it, but knowing can make you depressed and pessimistic, and maybe you’ll lose your health insurance. So maybe we don’t need to tell your insurer, or even tell you; but getting to the point of being able to do something about it first requires doing the science.

And of course there is a big difference between uncovering the mysteries of the universe, or even finding evidence for what is appropriate diagnosis and treatment in populations, and having to investigate everything in you. The father of another person quoted in the article developed delirium – caused by overtreatment with drugs – that was mistaken for dementia. “I don’t know if we have too many specialists and every one is trying to practice their specialty, but it should not have happened.” I agree; too many mistakes, too many errors (see Medical errors: to err may be human, but we need systems to decrease them, August 10, 2012) can come from having too many specialists combined with too little communication.

The quote at the top of this piece notes that not everything has to be addressed, and that this is hard to wrap your brain around; but it shouldn’t be. All that research in the basic and clinical sciences should help us to understand when we need to investigate (say, a CT scan for a black eye, in another example from the article) and when we don’t.

Often we should leave well enough alone. 

Sunday, September 2, 2012

Financial Incentives, maybe; corporate profit, no!


If we truly wish to move toward a healthcare system that delivers high quality in a reliable manner, one of the great flaws of our current system is that incentives are not always aligned to achieve that goal. Indeed, we could make a strong argument that incentives, particularly financial incentives, often lead healthcare providers (sometimes individuals, but certainly large organizations such as hospitals, nursing home and hospital chains, pharmaceutical companies, and device manufacturers) in the wrong direction. That is, they pursue financial profitability rather than the highest quality of care for our people.

Sometimes these two run together, and sometimes they do not. If the disease you have is one that is well-provided for and you have the money or insurance to pay for it, you are in luck. If you don’t have financial access to care, or your disease’s “product line” was not one deemed financially profitable enough for your hospital or health system to invest in, you are not. Similarly, if you have a disease that lots of others share, pharma is always ready to provide a drug for it – particularly if it is still patented; if you have an “orphan” disease, you may not be able to get treatment, or it can cost (truly) more than $100,000 a year.

The federal government, through Medicare, has sought to use financial incentives to (most often) control costs and (sometimes) encourage quality; this has occurred under both Republican and Democratic administrations, and it is also a big feature of the Affordable Care Act (ACA). One example, going back to the 1980s, is the reimbursement of hospitals based on what Medicare has determined to be the appropriate cost of care for a particular set of diagnoses, rather than on whatever the hospital charges. Under the ACA, and in incentive plans already in place from private insurers, doctors get more money if they do more of the “right” things and fewer of the “wrong” things. Financial incentives can also be used by organizations to encourage certain types of performance in their employees or contractors. Examples include incentive payments for generating more revenue, or financial penalties written into a contract for poor performance. Financial incentives are not unique to health care and, in fact, have been used and studied in many other industries. Their use in health care is not unique to the US. The question is, however, “do they work?”

This is the question that a group of Australian scholars led by Paul Glasziou sought to answer in an “Analysis” published in the British Medical Journal (subscription required), “When financial incentives do more good than harm: a checklist.”[1] Glasziou and his colleagues review the data on the effectiveness of financial incentives in both health care and other industries, focusing on a meta-analysis by Jenkins et al.[2] and two Cochrane studies: an analysis of four systematic reviews (a meta-meta-analysis?) by Flodgren et al.[3], and a review of primary care by Scott et al.[4] Basically, the results were mixed; sometimes incentives worked (achieved the desired ends) and sometimes they didn’t. Glasziou observes: “While incentives for individuals have been extensively examined, group rewards are less well understood….Finally, and most crucially, most studies gathered few data on potential unintended consequences, such as attention shift, gaming, and loss of motivation.” (Again, see Daniel Pink, “Drive”, on motivation.)

In an effort to help identify what works, Glasziou and colleagues have developed a nine-item “checklist” for financial incentives in healthcare, which is the centerpiece of the article. Six items relate to the question “Is there a remediable problem in routine clinical care?”, and three relate to design and implementation. The first six are:
1. Does the desired clinical action improve patient outcomes?
2. Will undesirable clinical behavior persist without intervention?
3. Are there valid, reliable, and practical measures of the desired clinical behavior?
4. Have the barriers and enablers to improving clinical behavior been assessed?
5. Will financial incentives work, and work better than other interventions to change behavior – and why?
6. Will benefits clearly outweigh any unintended harmful effects, and at an acceptable cost?

And the three regarding design and implementation are:
7. Are systems and structures needed for the change in place?
8. How much should be paid, to whom, and for how long?
9. How will the financial incentives be delivered?

They provide explanations of each of these items and include a useful table with real-life positive and negative examples to illustrate their points. For example, regarding #1 they note that the UK has provided financial incentives to get the glycated hemoglobin level in people with type 2 diabetes below 7%, despite several studies showing no patient benefit. (This is an example of “expert opinion” continuing to govern practice even after being contradicted by good research.) #2 recognizes that some behaviors will emerge or extinguish on their own if effective processes are put in place, without financial incentives. #3 is important because of the cost of implementation (“We found no studies on the cost of collecting clinical indicators.”); one of the great complaints of providers is that they spend so much time providing information to various oversight bodies that they do not have sufficient time to provide good patient care.

Criterion #5 relates to the question of what, in fact, motivates people. Criterion #6, I believe, relates to the greatest flaws in most of our financial-incentive (often called “pay for performance”) systems. The four behaviors that most often create harmful effects have all been discussed in this blog:
Attention shift (focusing on the area being rewarded distracts attention from other areas);
Gaming (a huge negative, especially for large organizations). This specifically refers to manipulating data to look “good” on the measurement, but it also includes upcoding and what might be called intentional attention shift, in which the organization focuses, on purpose, on the areas that make it the most money and neglects others;
Harm to the patient-clinician relationship, when the patient, often correctly, feels that it is not her/his benefit but some external target that is motivating providers;
Reduction in equity. This is extremely important. I have written extensively about health disparities; this point is meant to drive home the reality that these inequities, or disparities, can persist even when there is an overall improvement in the areas being measured.

Most of these issues, and several others, derive from the simplistic application of financial rewards to complex interdependent systems. As the authors put it: “Financial incentives assume that paying more for a service will lead to better quality or additional capacity, or both. However, because money is only one of many internal and external influences on clinical behavior, many factors will moderate the size and direction of any response. The evidence on whether financial incentives are more effective than other interventions is often weak and poorly reported.”


These authors are from Australia, which, like most developed countries, has a national health insurance system (see the map[5]). The data they cite are worldwide, but come largely from their own country, from the UK (which also has a national health system), and from the US (which does not). The real problems of health disparities and inequity are enormous in our country. They would not be eliminated merely by the presence of a national health system, which reduces many of the financial barriers to health care; indeed, they are exacerbated by a make-money, business-success psychology of providers that may be worse in the for-profit sector but essentially drives the non-profit sector as well.
 
The application of a simplistic corporate psychology to health care delivery can lead to poorer quality and greater inequity in any country. In combination with an entire system built on making money, gaming the system, excluding the poor, and generating corporate profit (see graph), it is a disaster. Our disaster.



[1] Glasziou P, et al., “When financial incentives do more good than harm: a checklist”, BMJ 2012;345:e5047, doi: 10.1136/bmj.e5047, published August 20, 2012.
[2] Jenkins GD, Mitra A, Gupta N, Shaw JD. Are financial incentives related to performance? A meta-analytic review of empirical research. J Appl Psychol 1998;83:777-87.
[3] Flodgren G, Eccles MP, Shepperd S, Scott A, Parmelli E, Beyer FR. An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database Syst Rev 2011;7:CD009255.
[4] Scott A, Sivey P, Ait Ouakrim D, Willenberg L, Naccarella L, Furler J, et al. The effect of financial incentives on the quality of health care provided by primary care physicians. Cochrane Database Syst Rev 2011;9:CD008451.

[5] Interestingly, this map, from the Atlantic, may make the United States’ aloneness in not having national health care seem less serious than it is. Most adults are used to seeing map projections that inflate the size of Europe and North America. This map is geographically more accurate; if it were redrawn in our “accustomed” projections, it would show even more green.
