Sunday, January 26, 2014

Doctors' incomes and patient coverage: both need to be more equal

On January 18, 2014, the New York Times ran another stellar front-page piece by Elisabeth Rosenthal addressing the fact that, as the title states clearly, Patients’ Costs Skyrocket; Specialists’ Incomes Soar. It continues, with detailed documentation, her explanation of how providers -- in this case doctors, and in particular the most highly paid subspecialists -- game the system of insurance reimbursement to maximize their income, and how patients pay the price. She focuses particularly on dermatologists, whose combination of high income and low workload makes them the exemplar of what medical students call the “ROAD” specialties -- radiology, ophthalmology, anesthesiology, and dermatology -- all of which are known for their high income-to-work-hours ratios.

Rosenthal addresses in particular a kind of dermatologic surgery called Mohs surgery, which commands a high price. Mohs surgery is valued for its ability to identify the margins of a skin cancer and leave a smaller scar, but it can be, and often is, overused at high cost. She cites the case of a woman who had a small basal cell cancer (the kind that almost never metastasizes and is often simply excised) removed from over her cheekbone and received a bill of over $25,000. “Her bills included $1,833 for the Mohs surgery, $14,407 for the plastic surgeon, $1,000 for the anesthesiologist, and $8,774 for the hospital charges.” The plastic surgeon, by the way, was called in -- along with the anesthesiologist -- to close the small wound from the excision, which the dermatologist was unwilling to do.
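As a quick check of the quoted figures (a minimal sketch; the labels below are just shorthand for the charges listed in the article), the itemized bills do indeed total more than $25,000:

# Itemized charges quoted in Rosenthal's article (labels are shorthand)
charges = {
    "Mohs surgery": 1833,
    "plastic surgeon": 14407,
    "anesthesiologist": 1000,
    "hospital charges": 8774,
}

total = sum(charges.values())
print(f"Total billed: ${total:,}")  # Total billed: $26,014

Over $26,000, that is, for removal of a small, rarely dangerous skin cancer.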

The cases that Rosenthal documents are typical enough that they cannot be called “abuses,” because they are the norm; it is, of course, the entire system that is the abuse. It would be absurd if it were not so real, if it didn’t skew the entire health care system away from primary care and toward specialties where enormous incomes are made by billing -- and collecting -- for each single activity. Rosenthal describes the RUC, the AMA-convened body that makes “recommendations” to Medicare about the relative value (and thus payment) of procedures, as well as of other forms of patient care -- such as listening to you, examining you, thinking about your problem, making a diagnosis, and recommending treatment -- which are grossly undervalued compared to procedures. I discussed the RUC, and what I consider its outrageous behavior, in Changes in the RUC: None. How come we let a bunch of self-interested doctors decide what they get paid?, July 21, 2013, and earlier in Outing the RUC: Medicare reimbursement and Primary Care, February 2, 2011, but Rosenthal does an excellent job of describing its perverse incentives. Given that Medicare takes the RUC’s recommendations 95% of the time, and that most insurers base their payments on Medicare’s, the RUC, which is heavily stacked against primary care, essentially sets doctors’ reimbursement. And this is not to the benefit of patients, either financially or medically.

In another article, Rules for Equal Coverage by Employers Remain Elusive Under Health Law, buried much farther inside the paper but also very important, Robert Pear describes the fact that the Obama administration has chosen not (yet) to enforce rules that bar companies from offering better coverage to some employees than to others; generally, “better” coverage to executives than to line workers. There was already a ban on such discrimination in companies that are self-insured, and the Affordable Care Act (ACA) extended it to those who purchase coverage from insurance companies. The possibilities for discrimination are illustrated by the things that are forbidden: for example, covering only executives and not others, paying the full cost of executives’ premiums while making lower-paid workers pay for a portion of their benefits, or offering different terms for coverage of the dependents of executives and other workers.

Of course, the companies (read: the executives who run the companies and stand to benefit from discriminatory practices) disagree. Pear quotes Kathryn Wilber, from the American Benefits Council, “which represents many Fortune 500 companies,” as saying “Employers should be permitted to provide lower-cost coverage to employees who may not be able to afford the comprehensive coverage being provided to other employee groups,” which, of course, would not be an issue if the company paid for comprehensive coverage for all employees. The one “benefit” that may be excluded from the non-discrimination rules is “certain types of executive physicals,” which is ironic because there are no data showing that these benefit most people, including executives; rather, they increase both the cost of care and the risk (from follow-up tests for false positives) to the patient. Certainly there are some occupations where the consequences of something going wrong are serious enough to change the harm/benefit ratio -- airplane pilots, for example, or possibly those who drive buses full of school children. But virtually no corporate executives are in this group.

The reason companies want differential benefits is primarily to save money by not offering good coverage to the majority of their employees, and also to have a “perk” they can offer to their executives. It is presented as parallel to other market goods, the difference between “serviceable” and “excellent.” This is even carried over in the metaphor used to describe the low-copay plans that the ACA was going to tax, “Cadillac” plans, when everyone knows that a Chevy is just as good at getting you where you want to go, just not in such luxurious circumstances. But this is a lousy metaphor for health care, and it conflates two different benefits. One, which is the intent of the “Cadillac policy” tax, is whether individuals have to make co-pays, carry co-insurance, or face limits on their benefits, or not. This is financial, and it is very important. The other, however, is whether some people have coverage that gets them better health care. This is not OK. Obviously, the two come together at some point, since health care that is unaffordable to a person is unavailable to them, even when it is necessary. Conversely, for those executive physicals, providing a “benefit” that the individual does not have to pay for encourages them to seek unnecessary, and sometimes potentially harmful, care.

It may be that there are certain kinds of “health care” that are in fact reasonable to treat as elective consumer goods which a company might offer to some employees and not others; cosmetic surgery is the classic example (or non-medically-necessary contact lenses or laser vision correction [Lasik®]; see Rand Paul on health policy: small brain and no heart, September 1, 2013). There also may be some employees for whom the harm/benefit ratio makes certain services of value when it does not for others (the comprehensive exams, “physicals,” for pilots, or Pap smears for women but not men). But, overall, coverage that does not include all necessary care for everyone is inappropriate, and so is coverage of unnecessary care. It is not Cadillacs vs. Chevies, or Volkswagens vs. Mercedes; it is making sure that everyone is covered for their health care needs. Like every other OECD country does. Like we could do if everyone were in Medicare.

Then instead of buying their executives “Cadillac health plans” to demonstrate how important they are, these companies can just buy them Cadillacs. 



Sunday, January 19, 2014

More guns and less education is a prescription for poor health

Within the span of one week, my state of Kansas was headlined in two pieces in the New York Times, unusual for a small state. Unfortunately, neither was meant to be complimentary. “What’s the matter with Kansas Schools?” by David Sciarra and Wade Henderson appeared as an op-ed on January 8, 2014, and “Keeping Public Buildings Free of Guns Proves Too Costly for Kansas Towns,” by Steve Yaccino, was a news article (middle of the main section but top of the web page!) on January 12. Both address political and social issues; for example, the thrust of the “guns” article is that Kansas municipalities (like Wichita) that want to keep guns out of public buildings (like the library) are financially stymied by the cost of the security requirements the legislature has put in place for areas where carrying guns is not permitted. Like abortion (and neither of these pieces addresses Kansas’ virulent anti-abortion laws), guns are a hot-button issue that inflames deep-seated passions in places like Kansas, and so (sometimes) is education. I will, however, focus my comments on the health impacts of these laws.

First, guns. Guns are, very simply, bad for people’s health. (Obviously, even when used as “intended,” for hunting, they are bad for some animals’ health, but this is not my focus.) Having guns around increases the risk of death or injury from them. Storing guns intended for hunting locked and unloaded is the safest practice, but this doesn’t work for guns intended for self-defense, since it renders them less available for that purpose. Carrying guns on your person, in your car, in public, on the street, and into businesses, public buildings, schools, and health care settings increases the risk further. This is not what gun advocates, and concealed-carry advocates in particular, believe. Their idea is that there are bad guys out there carrying guns, either criminals who might want to rob you or crazy people who might want to shoot up your school or post office, and that carrying a gun allows one to protect oneself, and possibly others, by shooting down the perpetrator before more damage can be done. Thus, it protects your health, and that of others.

Nice idea, but completely unsupported by the facts. Guns kill lots of people, injure many more, and virtually never save lives. This is the case even when use of guns by police officers is included, and even more true when it is excluded. It is true despite the widely publicized, often repeated on the internet, and frequently invented stories about a virtuous homeowner shooting an armed robber. I have no doubt that such cases occur, but with such rarity as to be smaller than rounding error on the number of deaths and serious injuries inflicted by guns. Suicides and homicides are among the leading causes of death in the US, most are caused by guns, and almost none of the homicides are “justifiable homicide” by a person protecting him/herself from an armed invader. The mere presence of easy-to-access guns in the environment dramatically increases the risk of completed suicide (see my blog, Suicide: What can we say?, December 12, 2013, with data from David Hemenway’s “Private Guns, Public Health”[1]). In addition, the number of “accidental” gun deaths (where someone other than the intended victim was shot, or someone was shot when the intent was “just” to threaten or show off, or by complete accident, sometimes when an unintended user -- say a child -- gets hold of a loaded gun) is way ahead of deaths from any other method of harm (knives, bats, etc.).

When we go beyond having guns to carrying guns in public places, the data are less well collected. However, the trope of the heroic, law-abiding, gun-carrying citizen drawing down on the evildoer in a public place, like, say, a movie theater or the waiting room of your clinic, is a terrifying thought. First of all, almost none of them are Bat Masterson or Wyatt Earp or Annie Oakley (except maybe in their own minds), and the idea that they will hit whom they are aiming at is wishful thinking; the rest of the folks are caught in a gunfight. It is scary enough when this involves police officers, but if half the waiting room pulls out pieces, the results will be, um, chaotic. Harmful. Not to mention what happens when the police show up and don’t know whom to shoot at (maybe if you are a gun-toting good guy you can wear a white hat…).

So having guns around is absolutely harmful to the health of the population, and generally to you as an individual, and the more easily available the guns are, the greater the harm. If people, including legislators, and Kansas legislators in particular, want to encourage gun carrying for other reasons, they should at least be aware of and acknowledge the health risks. But what about education? The cuts in state education funding will, quite likely, harm the education of children (or, if, as the article notes, the state Supreme Court forces the legislature to fund K-12, the education of young adults, since the money will likely come from higher education), but what about health?

There is a remarkable relationship. More education leads to better health. Better-educated people are healthier. The relationship is undoubtedly complex, because better-educated people also have better jobs and higher incomes, which are also associated with better health. This is addressed with great force in a recent policy brief, “Education: It Matters More to Health than Ever Before,” by the Virginia Commonwealth University Center on Society and Health, sponsored by the Robert Wood Johnson Foundation; for example, while lifespan overall in the US continues to increase, for white women with less than 12 years of education it is currently decreasing! The RWJ site also includes an important interview with Steven Woolf, MD MPH, Director of the Center. “I don’t think most Americans know that children with less education are destined to live sicker and die sooner,” Dr. Woolf says. He discusses both the “downstream” benefits of education -- “getting good jobs, jobs that have better benefits including health insurance coverage, and higher earnings that allow people to afford a healthier lifestyle and to live in healthier neighborhoods” -- and the “upstream” issues, “factors before children ever reach school age, which may be important root causes for the relationship between education and health. Imagine a child growing up in a stressful environment,” factors that increase the risk of unhealthy habits, poor coping skills, and violent injuries.

In several previous blogs I have cited earlier work by Dr. Woolf, one of the nation’s most important researchers on society and health, notably in "Health in All" policies to eliminate health disparities are a real answer, August 18, 2011. I included this graph, in which the small blue bars indicate the deaths averted by medical advances (liberally interpreted) and the purple bars represent the potential deaths that could be averted if all Americans had the death rates of the most educated. I also included a link to the incredible County Health Calculator (http://chc.humanneeds.vcu.edu), which allows you to look at any state or county, find out how its education or income level compares to others, and use an interactive slider to find out how mortality and other health indicators would change if the income or education level were higher or lower.

In the US, the quality of one’s education is very much tied to the neighborhood you live in, since much of school funding comes from local tax districts and wealthier communities have, simply, better schools. (This last is completely obvious to Americans, but not necessarily to foreigners. A friend from Taiwan was looking at houses and was told by the realtor that a particular house was a good value because it was in a good school district. She called us and asked what that meant; “In Taiwan, all schools are the same; they are funded by the government. No one would choose where to live based on the school.”) This difference could be partially compensated for by state funding for education, which is why cuts in this area are particularly harmful, including to our people’s health. In fact, the most effective investment that a society can make in the health of its people is in the education of its young.

An educated population is healthier. Wide availability and carrying of guns decreases a population’s health. Unfortunately, the public’s health seems to carry little weight in these political decisions.





[1] Hemenway, David. Private Guns, Public Health. Ann Arbor: University of Michigan Press, 2007.

Sunday, January 12, 2014

Changing the structure of health care delivery systems: to benefit the patient, the providers, or the insurers?

In an important series of three articles beginning on the Sunday before the New Year, “Doctors Inc.,” Alan Bavley of the Kansas City Star looked at the increasing acquisition of physician practices by hospitals, and the impact this has on access to, quality of, and cost of health care for patients. The first article, “Medicine goes corporate as more physicians join hospital payrolls,” describes the “what”:
“Since 2000, the number of doctors on hospital payrolls nationwide has risen by one-third, according to the American Hospital Association. In the Kansas City area, fully 55 percent of physicians are now employed by hospitals, Blue Cross and Blue Shield of Kansas City estimates. That includes virtually all cardiologists and most cancer specialists.”

These changes are not limited to the KC area; he cites both national data and data from disparate regions such as Spartanburg, SC, and Phoenix, AZ. Part of the reason, the "financial model" described in this first article, is that such “integrated” practices generate internal referrals, keeping patients within the system, as well as generating lucrative procedures. Physicians get a piece of the action: they get guaranteed salaries paid in part by the hospital or health system, which in turn gets downstream revenue from their referrals.

And it makes these hospitals and health systems a lot of money, because they can now charge a lot more. Bavley quotes “Robert Zirkelbach, vice president of America’s Health Insurance Plans, the industry’s trade association. ‘When a hospital buys a practice, its rates will increase in the following year’s contract. Increases of 20, 30 or 40 percent are not uncommon. It’s not 3 or 4 percent, that is for sure.’”

It is also not always good for patients, as Bavley illustrates with examples of people who were referred internally and had delayed diagnoses. (One story discusses a woman discouraged from going to the academic medical center at which I work -- full disclosure -- for a second opinion regarding her lung cancer; the "reasons" given were both that she “didn’t have time” and that she would see “young doctors still in training.”) Sometimes it is fine to see doctors within the system, and certainly this can be, and is, encouraged, but discouraging people from seeking outside referrals can also be hazardous to their health.

The Affordable Care Act (ACA) encourages the creation of “Accountable Care Organizations” (ACOs), which would be responsible (at least hopefully, in the best of scenarios) for the health of a population. At a minimum, they would seek to decrease the degree to which the delivery of health care is a series of episodic events paid for individually, instead taking on a global responsibility that includes inpatient, outpatient, and long-term care. This would, in theory, change the usual patient experience of seeing one (or many) doctors or having one (or many) ER visits, each charged and paid separately, culminating in a hospitalization, then discharge to one (or many) doctors or a long-term care facility (paid separately), and then failure of care resulting in readmission to the hospital (paid again). The idea is that all levels would be coordinated to provide the best care at the most appropriate level (inpatient, outpatient, long-term, home based). In some settings, particularly for fully integrated plans such as Kaiser (where the providers of care are also the insurers), this works relatively well.

However, as Bavley makes clear in his series, written as part of a yearlong Reporting Fellowship on Health Care Performance sponsored by the Association of Health Care Journalists and supported by The Commonwealth Fund, and particularly in the second article, “’Facility fees’ add billions to medical bills,” there is often a great cost to those who are paying: the patient and their insurer (including Medicare and Medicaid). This is because Medicare (and, following its lead, private insurers) pays an additional fee (the "facility fee") for services, and especially procedures, done in a hospital outpatient facility, beyond what it would pay for the same service done in a doctor’s office. (This is also addressed in the series by Elisabeth Rosenthal in the New York Times, "The $2.7 Trillion Medical Bill.")

Why? The original intent (as is often the case) was good: to save money and improve care by having many procedures done in outpatient rather than inpatient settings, where the cost would be even higher. And (as is also so often the case) providers realized that this system could be gamed as well. The physician fee for a visit or procedure done in an office is greater than that for one done in a hospital clinic, but it is expected to cover all the overhead. In a hospital-based clinic (which merely has to be owned by the hospital or health system; it doesn’t have to be on the campus, and can be the same doctor’s office that used to be separate) there is a somewhat lower doctor's fee, but there is also a facility fee that, together with the doctor’s fee, is much higher in total than the office-based reimbursement; indeed, the facility fee can be far higher than the physician fee. Thus the hospital makes money, and can share some of it with the physician, allowing the physician to make a lot more money without the overhead and risk. Voilà! Physicians are incentivized to become employed by hospitals!
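To make the arithmetic concrete, here is a minimal sketch with purely hypothetical dollar amounts (the figures below are invented for illustration and are not drawn from any actual fee schedule):

# Hypothetical numbers only -- illustrating the facility-fee structure described above,
# not actual Medicare fee-schedule amounts.
office_fee = 200                 # physician fee in a private office; must cover all overhead
hospital_physician_fee = 150     # somewhat lower physician fee in a hospital-owned clinic
facility_fee = 250               # separate fee paid to the hospital for the same visit

office_total = office_fee
hospital_total = hospital_physician_fee + facility_fee

print(f"Office-based total reimbursement:   ${office_total}")    # $200
print(f"Hospital-based total reimbursement: ${hospital_total}")  # $400

The same visit, with the same doctor, in what may be the very same office, brings in roughly twice as much once the practice is hospital-owned; note that in this sketch the facility fee alone exceeds the physician fee, as described above.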

I am a doctor and work in a medical center, so I understand the impetus for this from the point of view of providers, both doctors and hospitals. Medicare is ratcheting down its reimbursement, a particular form of support for hospitals caring for a disproportionate share of the uninsured ("disproportionate share hospital," or DSH, payments) is being cut back, and operating margins for many hospitals are getting thin or negative. Doctors are making less money (arguably some of them were making far too much, but they still don’t like making less) and will thus endorse efforts to have hospitals support them and maintain their incomes. The problem is that the cost to consumers goes up, especially as the co-pays and co-insurance that come out of patients’ pockets, even when they are insured, rise ever higher.

Medicare is, sadly, responsible for much of this situation, as illustrated by the following: seeking to reduce costs from unnecessary admissions, Medicare has empowered bounty hunters (Recovery Audit Contractors, or “RACs”) to go after Medicare “fraud” by reviewing hospital admissions of patients who could have been cared for in the hospital on “observation” status, which saves Medicare money. Hospitals are thus very careful to officially “admit” only people who meet very strict criteria. However, because “observation” status is officially “outpatient,” it is billed under Medicare Part B rather than the Part A that covers hospitalization; Medicare saves money, but the patient pays more out of pocket. Complicated, but what it comes down to is that what is financially good for Medicare is financially bad for the patient. Is this what we want?
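As a rough, purely hypothetical sketch of how that plays out (the dollar amounts below are invented for illustration; actual deductibles, coinsurance, and covered services vary):

# Hypothetical illustration only -- invented numbers, not actual Medicare amounts.
hospital_bill = 8000                 # charges for the same two-day stay

# If formally admitted: Part A, patient owes the inpatient deductible
part_a_deductible = 1200
inpatient_patient_cost = part_a_deductible

# If kept on "observation": billed as outpatient under Part B, patient owes 20% coinsurance
part_b_coinsurance_rate = 0.20
observation_patient_cost = hospital_bill * part_b_coinsurance_rate

print(f"Patient cost if admitted (Part A):       ${inpatient_patient_cost:,.0f}")    # $1,200
print(f"Patient cost under observation (Part B): ${observation_patient_cost:,.0f}")  # $1,600

In this invented example Medicare pays less for the observation stay, but the patient pays more; that is exactly the trade-off described above.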

I hope not. Yes, some of the fault is Medicare’s, and some of the fault belongs to providers (hospitals and doctors) seeking to maximize profit (even if “not-for-profit”) by manipulating the rules of the system. The fault is that we have Rube Goldberg-type complex constructs put in place to encourage certain behavior by providers, and providers are figuring out ways to work the system to their benefit. The real problem is that we do not have a straightforward system to deliver the highest-quality, necessary health care to all people, but rather a mess of conflicting incentives in which a gain to one component (e.g., insurers) is a loss to another (e.g., providers), each takes the actions that benefit it, and the overall loser is the patient. Bavley quotes an email from a board member of a hospital system to its Chief Financial Officer that said “Let’s be realistic. Employing physicians is not achieving better cost, it’s achieving better profit.”

That is not what our national health policy should be doing. A health system that did not permit gaming, but straightforwardly paid for health care and eliminated the profit motive, would solve these problems. The answer is to put everyone in Medicare, in a single-payer system, so that some patients are not “more desirable” than others. And to have Medicare, which would then cover everyone, pay for the appropriate level of care for every patient, so that doctors and hospitals have no incentive to label a person’s hospitalization as “admission” or “observation,” or an outpatient visit as “hospital based” or “office based,” just because there is a difference in the reimbursement.


It can be done. It is done in Canada. It is done in some fashion in every other developed country. If we decide that the health of our people is more important than the profit of the health care industry, we can do it also. 

Sunday, January 5, 2014

Medical schools are no place to train physicians

Doctors have to go to medical school. That makes sense. They have to learn their craft, master skills, and gain an enormous amount of knowledge. They also, and this is at least as important, need to learn how to think and how to solve problems. And they need to learn how to be life-long learners because new knowledge is constantly being discovered, and old truths are being debunked. Therefore, they must learn to un-learn, and not to stay attached to what they once knew to be true but no longer is. They also need, in the face of drinking from this fire-hose of new information and new skills, to retain their core humanity and their caring, the reasons that (hopefully) most of them went into medicine.

Medical students struggle to acculturate to the profession, to learn the new language replete with eponyms, abbreviations, and long abstruse names for diseases (many are from Latin, and while they are impressive and complicated, they are also sometimes trite in translation, e.g., “itchy red rash”). They have to learn to speak “medical” as a way to be accepted into the guild by their seniors, but must be careful that it does not block their ability to communicate with their patients; they also need to continue to speak English (or whatever the language is that their patients speak). “Medical” may also offer a convenient way of obscuring and temporizing and avoiding difficult conversations (“the biopsy indicates a malignant neoplasm” instead of “you have cancer”).  But there needs to be a place for them to learn.

So what is wrong with the places where we are teaching them now? Most often, allopathic (i.e., “MD”) medical schools are part of an “academic health center” (AHC), combined with a teaching hospital. They have large biomedical research enterprises, with many PhD faculty who, if they are good and lucky, are externally funded by the National Institutes of Health (NIH). Some or many of them spend some of their time teaching the “basic science” material (biochemistry, anatomy, physiology, microbiology, pharmacology, pathology) that medical students need to learn. By “need to learn” we usually mean “what we have always taught them” or “what they need to pass the national examination (USMLE Step 1) that covers that material.” This history goes back 100 years, to the Flexner Report of 1910. Commissioned by the Carnegie Foundation at the urging of the AMA, educator Abraham Flexner evaluated the multitude of medical schools, recommended closing many that were little more than apprenticeship programs without a scientific basis, and recommended that medical schools be based upon the model of Johns Hopkins: part of a university (from the German tradition), grounded in science, and built around a core curriculum of the sciences. This has been the model ever since.

However, 100 years later, these medical schools and the AHCs of which they are a part have grown to enormous size, concentrating huge basic research facilities (Johns Hopkins alone receives over $300 million a year in NIH grants) and tertiary and quaternary medical services -- high-tech, high-complexity treatment for rare diseases or complex manifestations of more common ones. They have often lost their focus on the health of the actual community of which they are a part. This was a reason for two rounds of creating “community-based” medical schools, which use non-university, or “community,” hospitals: the first in the 1970s and the second in the 2000s. Some of these schools have maintained a focus on community health, to a greater or lesser degree, but many have largely abandoned those missions as they have sought to replicate the Hopkins model and become major research centers. The move of many schools away from community was the impetus for the “Beyond Flexner” conference held in Tulsa in 2012 (see Beyond Flexner: Taking the Social Mission of Medical Schools to the next level, June 16, 2012) and for a number of research studies focused on the “social mission” of medical schools.

The fact is that most doctors who graduate from medical school will not practice in a tertiary AHC, but rather in the community, although the other fact is that a disproportionate number of them will choose specialties that are of little or no use in many of the communities that need doctors. They will, if they can (i.e., if their grades are high enough), often choose subspecialties that can only be practiced in the high-tech setting of the AHC or the relatively small number of other very large metropolitan hospitals, often with large residency training programs. As they look around at the institution in which they are being educated, they see an enormously skewed mix of specialties. For example, 10% of doctors may be anesthesiologists, and there may well be more cardiologists than primary care physicians. While this is not the mix in the world of practice, and still less the mix that we need for an effectively functioning health system, it is the world in which they are being trained.

The extremely atypical mix of medical specialties in the AHC is not “wrong”; it reflects the atypical mix of patients who are hospitalized there. It is time for another look at the studies that have been done on the “ecology of medical care,” first by Kerr White in 1961 and replicated by the Robert Graham Center of the American Academy of Family Physicians in 2003 (see The role of Primary Care in improving health: In the US and around the world, October 13, 2013), and represented by the graphic reproduced here. The biggest box (1,000) is a community of adults at risk, the second biggest (800) is those who have symptoms in a given month, and the tiny one, representing less than 0.1%, is those hospitalized at an academic teaching hospital. Thus, the population that students mostly learn on is atypical, heavily skewed toward the uncommon; it is not representative even of all hospitalized people, not to mention the non-hospitalized ill (and still less the healthy-but-needing-preventive-care) in the community.

Another aspect of educating students in the AHC is that much of the medical curriculum is determined by those non-physician scientists who are primarily researchers. They not only teach medical students, they (or their colleagues at other institutions) write the questions for USMLE Step 1. They are often working at the cutting edge of scientific discovery, but the knowledge that medical students need in their education is much more basic, much more about understanding the scientific method and what constitutes valid evidence. There is relatively little need, at this stage, for students to learn about the current research that these scientists are doing. Even the traditional memorization of lots of details about basic cell structure and function is probably unnecessary; after 5 years of non-use, students likely retain only 10% of what they learn, and even if they do need 10% -- or more -- in their future careers, there is no likelihood that it will be the same 10%. We have to do a better job of determining what portion of the information currently taught in the “basic sciences” is crucial for all future doctors to know and memorize, and we also need to broaden the definition of “basic science” to include the key social sciences of anthropology, sociology, psychology, and communication, and even many areas of the humanities such as ethics. This is not likely to happen in a curriculum controlled by molecular biologists.

Medical students need a clinical education in which the most common clinical conditions are the ones they most commonly see, the most common presentations of those conditions are the ones they most commonly see, and the most common treatments are the ones they see implemented. They need to work with doctors who are representative, in skills and focus, of the doctors they will be (and need to be) in practice. Clinical medical education seems to work on the implicit belief that the ability to take care of patients in an intensive care unit necessarily means one is competent to take care of those in the hospital, or that the ability to care for people in the hospital means one can care for ambulatory patients, when in fact these are dramatically different skill sets.

This is not to say that we do not need hospitals and health centers that can care for people with rare, complicated, end-stage, tertiary and quaternary disease. We do, and they should have the mix of specialists appropriate to them, more or less the mix we currently have in AHCs. And it is certainly not to say that we do not need basic research that may someday come up with better treatments for disease. We do, and those research centers should be generously supported. But their existence need not be tied to the teaching of medical students. The basic science, social science, and humanities that every future doctor needs to learn can be taught by a small number of faculty members focused on teaching, and do not need to be tied to a major biomedical research enterprise. Our current system is not working; we produce too many doctors who do narrow rescue care, and not enough who provide general care. We spend too much money on high-tech care and not enough on addressing the core causes of disease.

If we trained doctors in the right way in the right place we might have a better shot at getting the health system, and even the health, our country needs.
