• 4/25/2005
  • as reported by www.pccoaltion.com
  • Consumer Reports on Health

Following the back-and-forth of medical news is enough to give you whiplash. Supplemental estrogen, portrayed in the media for decades as a veritable fountain of youth, ends up being anything but when definitive studies show it can increase the risk of breast cancer, heart attack, and stroke. Initially heralded as safer than existing drugs, the pain relievers rofecoxib (Vioxx) and celecoxib (Celebrex) make the front pages again when later studies show they increase heart-attack risk. After riding a long wave of good press, the reputation of the antidepressant paroxetine (Paxil) crashes amid reports that the drug may make some teenagers suicidal.

Indeed, medical news often seems to follow an all-too-familiar pattern: New drugs or therapies are introduced with glowing reports, followed a few years later by headlines blaring their dangers. “That pattern leaves many people confused or even angry,” says Steven Woloshin, M.D., a professor at the Dartmouth Medical School’s Center for Evaluative Clinical Sciences.

Some people react to that uncertainty by dismissing all medical news, while others overreact by adopting, or abandoning, medicines too soon. For example, in the 1990s many people stopped taking certain blood-pressure medications after a pair of studies linked them to increased heart-attack risk; subsequent research refuted that evidence, but only after some patients suffered adverse events because they stopped taking their medication.

While some of the confusion stems from the natural unfolding of scientific knowledge, some comes from shortcomings in the way medical research is published and the way the mass media present medical news. Woloshin and others have identified crucial shortcomings in medical news reports about everything from dietary and exercise habits to new drugs and surgical procedures. This report suggests questions you should keep in mind when reading medical news with a critical eye.

HOW GOOD IS THE RESEARCH?

* Has the study been published? News reports often trumpet tantalizing results of preliminary research presented at medical meetings, relying only on the investigator’s description of ongoing studies. A 2002 review of such reports found that only half ever get published in respected journals.

For example, after studies suggested that celecoxib (Celebrex) increased heart-attack risk, a Pfizer-funded scientist cited unpublished research that linked similar risks to the related drug naproxen (Aleve). While that comment received much attention in the press, subsequent data from the study, released in February 2005, cleared naproxen of the supposed heart risk.

* Who funded, and promoted, the study? The vast commercial machinery behind the publication of scientific information can sometimes overcome even the vigilance of peer-reviewed medical journals, which are supposed to screen for biases.

For example, The New York Times and other papers published reassuring articles about the safety of the diet-drug combination known as fen-phen based on a study and an editorial published in the Journal of the American College of Cardiology in 1999. It turned out, however, that both the study and editorial were written by paid consultants to Wyeth, a maker of one-half of the fen-phen combination.

Because pharmaceutical companies help fund much medical research, it’s unreasonable to dismiss all industry-related studies. It is vital, however, to know about the potential conflict of interest of the researchers involved. Be leery of any news account that omits that crucial information.

* How good is the study? The gold standard in medical research is the double-blind, controlled clinical trial, in which subjects are randomly assigned to a control (placebo) or an experimental (the real thing) group. Neither the subjects nor the researchers know who is in which group until the study is over and the code is broken.

Observational studies, on the other hand, compare people who independently chose a particular health intervention against others who didn’t. These studies can suggest a probable link but can’t prove a causal effect. Other factors, such as lifestyle habits, can influence results. For example, estrogen’s reputation took a nosedive when clinical trials failed to confirm the apparent heart and other benefits suggested by numerous previously published observational studies.
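A rough simulation makes the pitfall concrete. In this hypothetical sketch (the numbers are invented, not drawn from any study), a therapy does nothing at all, but health-conscious people are more likely to choose it; the observational comparison makes the useless therapy look protective, while random assignment tells the truth:

```python
import random

random.seed(1)
N = 100_000  # simulated people per comparison

def event_rates(randomized):
    """Return (treated, untreated) event rates for a do-nothing therapy."""
    t_events = t_people = u_events = u_people = 0
    for _ in range(N):
        healthy = random.random() < 0.5           # hidden lifestyle factor
        if randomized:
            on_therapy = random.random() < 0.5    # coin-flip assignment
        else:
            # Observational: health-conscious people choose the therapy more often.
            on_therapy = random.random() < (0.8 if healthy else 0.2)
        risk = 0.05 if healthy else 0.15          # risk depends on lifestyle only
        had_event = random.random() < risk
        if on_therapy:
            t_events += had_event
            t_people += 1
        else:
            u_events += had_event
            u_people += 1
    return t_events / t_people, u_events / u_people

print("observational:", event_rates(randomized=False))  # therapy looks protective
print("randomized:   ", event_rates(randomized=True))   # no real difference
```

Run as written, the observational comparison shows roughly 7 percent events among therapy users versus 13 percent among nonusers, a spurious benefit; the randomized version shows about 10 percent in both groups.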

PUT IT IN CONTEXT

Despite the “Eureka!” enthusiasm often expressed in headlines, the process of gaining scientific knowledge more often resembles the creation of a pointillist painting: If you concentrate too much on individual dots, you’ll miss the big picture. That means seeing not only how the research fits in with what was known before, but also how relevant it is to you as an individual.

* What’s the supporting evidence? A single study seldom constitutes strong evidence of anything. Look for descriptions of previous research that pointed in the same direction or at least provided a plausible biological explanation for the finding.

* What do others have to say? Don’t rely on a single news report. Check whether other sources give additional details or perspectives that provide a fuller picture. Also, look for responses of governmental agencies and reputable organizations, which can often help you gauge how seriously to take the news. For example, the fact that the National Institutes of Health, the American Cancer Society, and the American Heart Association all chimed in with concerns about estrogen was a strong sign that the news was unusually significant.

* Is the study relevant to you? Many drugs that show promise in the early test-tube stage or in animal research don’t work safely or effectively in humans. In human trials, some treatments are tested only in men or women, others only in young, healthy, or sick people. The less you resemble the subjects, the more reason to temper your enthusiasm.

* What does it cost? New drugs often cost more, though their benefits are often described as if money were no object. For example, the new drug omalizumab (Xolair) may indeed revolutionize treatment of asthma and rhinitis by targeting the immune-system flaw that often underlies both disorders. But it costs $1,000 or more a month, a significant limitation.

* What do you and your doctor think? Your individual health needs may make even relevant-seeming research moot. So talk with your doctor before rushing to judgment.

NEW . . . BUT IMPROVED?

News reports are far more likely to describe the possible benefits of a new therapy than its potential risks. And research into new drugs or treatments is likely to underestimate the risks they pose and overestimate the potential benefits.

Only the good survive. Positive studies are far more likely than negative ones to make it to print, and thus far more likely to get media attention. For example, the makers of paroxetine (Paxil) withheld from publication two studies that showed the drug was ineffective in children. (Consumers Union, publisher of this newsletter, believes drugmakers should be mandated to publish results of all clinical trials; see the “Prescription for Change” initiative at www.prescriptionforchange.org.)

Stacked studies. Studies testing new treatments are often conducted only by the best doctors in the best hospitals, skewing the results. For example, complication rates for prostate-cancer surgery initially seemed reasonably low because studies testing the procedure were performed by experts; follow-up studies that measured complication rates in the real world found much higher rates.

Also, study groups are often filled with patients most likely to benefit from the treatment and least likely to be harmed. For example, only 1 in every 200 to 300 potential patients is accepted into clinical trials of breast-cancer treatments or carotid endarterectomy, according to some research. Studies need to target ideal candidates to make expensive clinical trials cost-efficient. But as a result, such studies might not reflect the real risks and benefits of that therapy in the general population.

No long-term data. Finally, even the best research on new therapies can’t provide evidence of their long-term safety and efficacy. Indeed, one study found that over a five-year period the Food and Drug Administration had to pull roughly 5 percent of newly approved medications off the shelves because of unexpected risks. According to another study, the FDA later issued more stringent warnings on about 10 percent of new drugs.

“50 PERCENT DECREASE!” DRAMATIC STATISTICS CAN BE MISLEADING

The statistics used to report the findings of medical studies can often distort, rather than clarify, health benefits and risks. That’s because such reports often rely on something called “relative difference,” a statistical formulation that can lead to dramatic headlines but misrepresent the real-life importance of the research. To judge that importance, you’re better off knowing the “absolute difference” instead.

Consider, for example, news stories about the increased heart risks of celecoxib (Celebrex), which announced that the drug “more than tripled” the risk of heart attack and stroke. That does sound scary. But does it accurately convey the threat posed by the drug?

Let’s look a little closer. In the celecoxib study, 1 percent of people taking a placebo pill over a three-year period suffered a heart attack or stroke, compared with 3.4 percent of those taking 400 milligrams of the drug twice a day. If the results are presented as a relative difference, or a ratio, those on the drug were indeed 240 percent more likely to suffer a heart attack or stroke (more than triple the risk) than people on a placebo.

But what was the actual risk of taking the drug? To find that out, subtract the placebo result from the experimental result to get the absolute difference: in this example, an extra 2.4 percent of patients suffered a heart attack or stroke because of the medication. That’s still important news, but not nearly as dramatic a headline.
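Here is a minimal sketch of that arithmetic (Python is used purely for illustration; the rates are the ones cited above):

```python
# Event rates from the celecoxib example above.
placebo_rate = 0.010   # 1.0 percent of placebo patients had an event
drug_rate = 0.034      # 3.4 percent on 400 mg of the drug twice a day

relative_increase = (drug_rate - placebo_rate) / placebo_rate  # ratio-based
absolute_increase = drug_rate - placebo_rate                   # plain difference

print(f"relative: {relative_increase:.0%} more likely")            # 240%, over triple
print(f"absolute: {absolute_increase:.1%} extra patients harmed")  # 2.4%
```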

Just as relative risk can make some problems seem bigger than they are, it can also overstate the benefits of drugs. For example, in a landmark trial of the cholesterol-lowering drug simvastatin (Zocor), 11.5 percent of patients on a placebo died of a heart attack compared with 8.2 percent on the drug. That’s an absolute difference of 3.3 percent, but a much more impressive-sounding 29 percent decline in relative mortality.
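The same two lines of arithmetic, applied to the simvastatin figures above, show how a modest absolute benefit becomes an impressive relative one:

```python
# Mortality rates from the simvastatin (Zocor) trial above.
placebo_rate = 0.115   # 11.5 percent of placebo patients died
drug_rate = 0.082      # 8.2 percent of patients on the drug died

relative_reduction = (placebo_rate - drug_rate) / placebo_rate
absolute_reduction = placebo_rate - drug_rate

print(f"relative: {relative_reduction:.0%} decline in mortality")  # ~29%
print(f"absolute: {absolute_reduction:.1%} fewer deaths")          # 3.3%
```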

Finally, relative risk can be used to make rare problems seem much more important than they are. For example, one study found that a vigorously active person is 2.4 times more likely to have a heart attack during exercise than at rest, a finding that seems to suggest you should hang up your gym bag.

But those attacks are extremely rare; the absolute risk of having a heart attack while you’re working out is only about 1 in 2 million, so the fact that exercise multiplies the risk to that level is nearly meaningless. Moreover, knowing that regular exercise can cut the risk of heart attack the rest of the time roughly in half should be enough to get you confidently back onto the treadmill.
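A quick back-of-the-envelope calculation makes the point concrete. (The at-rest risk below is inferred from the 2.4 ratio; the article cites only the 1-in-2-million figure.)

```python
# Risk figures from the exercise example above.
exercise_risk = 1 / 2_000_000      # chance of a heart attack during a workout
rest_risk = exercise_risk / 2.4    # implied chance at rest (derived, not cited)

extra_risk = exercise_risk - rest_risk
print(f"extra risk per workout: about 1 in {1 / extra_risk:,.0f}")
# -> about 1 in 3,428,571: far less than one added heart attack
#    per 3 million workouts
```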