unpacking a statistic

September 1, 2009

A new study in the Archives of General Psychiatry reports that the percentage of Americans on antidepressants almost doubled from 5.84% in 1996 to 10.12% in 2005. The first thing that probably hits you after reading that is, “Wow, that’s a big jump in ten years.” The next thing might be something like, “I wonder what caused that big of a jump?”

To an educated observer (though one generally ignorant of psychiatry), a few initial explanations seem plausible:
1) The number of depressed people has simply increased, bringing with it the number of people on medication.
2) Cultural acceptance of depression as a legitimate physiological illness has grown, allowing more people to come out into the open and seek treatment they had previously avoided.
3) Our ability to diagnose the illness has improved, allowing us to catch and treat more cases.
4) Our acceptance of (and desire for) drugs to address our problems has increased, to what many would say (though I’ll withhold judgment) is an unhealthy level.

The authors of the study mainly think it’s 1). The rate of depression in the US was 3.3% in 1991-92 and rose to 7.1% in 2001-02. This increase in depression itself is an interesting nugget: if we fundamentally think of depression as a physiological disorder, I find it hard to believe that our brain chemistry has really changed that much in ten years. Another explanation is that people are seeking out their doctors more for mental health problems, but then we start spilling into hypothesis 2) and possibly 4).
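
Just to put the two sets of numbers side by side, here is a quick back-of-the-envelope comparison. It uses only the figures quoted above; the variable names and the little script are my own sketch, not anything from the study.

```python
# Figures quoted above (percent of the US population)
antidepressant_use = {"1996": 5.84, "2005": 10.12}
depression_rate = {"1991-92": 3.3, "2001-02": 7.1}

use_ratio = antidepressant_use["2005"] / antidepressant_use["1996"]          # ~1.73x
depression_ratio = depression_rate["2001-02"] / depression_rate["1991-92"]   # ~2.15x

print(f"Antidepressant use rose {use_ratio:.2f}x (i.e. 'almost doubled')")
print(f"Reported depression rose {depression_ratio:.2f}x over a similar stretch")
```

By that crude comparison, reported depression rose at least as fast as antidepressant use, which is at least consistent with the authors leaning on hypothesis 1), though the arithmetic alone can’t separate 1) from 2) or 3).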

The authors also suggest an additional hypothesis:

5) Between 1996 and 2005, four new antidepressants (mirtazapine, citalopram, fluvoxamine, and escitalopram) were approved by the FDA for treating depression and anxiety. Furthermore, while total promotional spending stayed the same, the share of it aimed directly at consumers (rather than physicians) increased from 3.3% to 12.0%. So another explanation of the increase is simply that there are more, and more aggressively marketed, drugs out there.

The point I’m trying to make here is that a statistic like “The percentage of Americans using antidepressants doubled between 1996 and 2005” (from SciAm), and the way it gets handled in casual print or conversation, often confuses or obscures the issues really at work. Most of us, myself included, don’t put a lot of (or enough) thought into everything going on behind a statistic when we hear or read about one. We’re not critical, skeptical observers, and that’s dangerous.

It’s nice to get out the magnifying glass and do a little digging every once in a while to really understand what all these numbers actually mean.

A vitamin C product from http://www.myaloe-vera-health.com

I woke up this morning with a sore throat and knew that my annual end-of-summer/beginning-of-fall cold had arrived. What preventative measures did I take this morning? Vitamin C, zinc lozenges, and lots of water. While good hydration is always a good idea, especially when getting sick, the efficacy of zinc lozenges is debatable (supporters here and here, controversy discussion here), and vitamin C (in the customary large amounts) has roundly been shown not to have any significant impact on the severity or length of cold symptoms.

I know all this, and yet I still down vitamin C tablets like there’s no tomorrow. I used to tell my students that I wouldn’t believe anything unless there was a peer-reviewed, double-blind paper published about it. Well, those have been done and a verdict rendered (at least about vitamin C). But my anecdotal life evidence says otherwise. In previous years when I’ve felt the onset of a cold, I’ve downed tons of orange juice and vitamin C tablets and felt like my cold went away quite quickly.

What seems to be at work here are two things: my selective memory and the placebo effect. My selective memory, of course, doesn’t bring back those times when I took vitamin C and didn’t get better right away, or when my sore throat turned into a three week hacking cough. Clearly, I’m remembering what I want to remember to support the remedy I’m naturally inclined to take.

Radiolab has a wonderful show about the placebo effect. If you have an hour to listen to a podcast, I’d highly recommend it. We all know what the placebo effect is and that it achieves positive results in many cases. (A particularly illustrative example in the show is one where a man with Parkinson’s has a stimulating electrode implanted in his brain that the researchers can remotely turn on and off. They tell him they’re turning it on, although they’re not, and his tremors remarkably vanish, at least temporarily.)

So here’s my case for taking vitamin C this cold season. 1) It will make you feel proactive about your cold. No one likes sitting around and having a cold hit them. We want to feel like we’re fighting it somehow! 2) The placebo effect might just trick your body into defeating (or thinking it’s defeating, but does the difference really matter?) that cold a little bit earlier. Either way, there’s very little harm and some potential psychological, if not physiological, benefit.

Of course, the placebo effect doesn’t really work if you know you’re taking a placebo. So try hard to forget those dry scientific papers and remember your mom making you drink orange juice and take vitamin C when you had a cold. Self-deception is a wonderful thing.

Darrell Issa (R-CA) meddles in the NIH

ScienceInsider recently reported that Representative Darrell Issa (R-CA) succeeded in stripping funding for HIV/AIDS studies involving prostitution from an NIH funding bill. The overseas studies are aimed at understanding how the disease spreads (and how to halt it). Apparently Issa thought it was a waste for the researchers to fly over to Thailand when they could just take a $3.10 train across town. And rather than argue the point, the bill’s manager, David Obey (D-WI), accepted the amendment and moved on.

The stripped funding for the three specific grants totaled $5 million. The entire NIH funding bill was $31 billion. As I see it, this is a great example of politicians trying to score points. Not only did the studies go through the scientific peer-review process, but they are incredibly important for understanding the inextricable relationship between drugs, disease, and prostitution. Either you fund the NIH at a certain level or you don’t, and you let it decide how to apportion that money. Congress doesn’t tell the CIA or FBI what to spend money on and what not to, does it?
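
For a sense of scale, here’s the same kind of quick arithmetic, using only the dollar figures above (the variable names are just my own shorthand):

```python
# Dollar figures quoted above
stripped_grants = 5_000_000        # funding stripped from the three grants
nih_bill_total = 31_000_000_000    # the entire NIH funding bill

share = stripped_grants / nih_bill_total
print(f"The stripped grants were {share:.3%} of the bill")  # roughly 0.016%
```

In other words, the fight was over roughly one-sixtieth of one percent of the bill, which is part of why it reads more like point-scoring than fiscal restraint.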

If we’re going to get past the dogmatic aversion to drugs and prostitution (controversial, I know; perhaps I’ll post on those later), we need to understand how they interact with sexually transmitted diseases. Even if nothing changes legally here, we can certainly develop better policies for reducing the number of drug-addicted and disease-infected prostitutes.

Non-scientists deciding not to fund certain research (like human cloning) is one thing. After all, it’s taxpayer money (and thus, in the politicians’ minds, theirs). But it is not their place to decide how that research is carried out. That’s the job of the grant review committees, which exist precisely to decide which proposals meet the aims of the funding.

a free lunch?

July 28, 2009

These two mice were fed the same high-fat diet. The top mouse's liver cells were engineered to metabolize fat directly into carbon dioxide. (credit: Jason Dean, University of California, Los Angeles)

Don’t you wish your body could just get rid of that extra fat by itself, without the pesky exercise or dieting? It may not be as far off as you think.

In the June issue of Cell Metabolism, James Liao’s group reports that it succeeded in reducing diet-induced obesity in mice fed a high-fat diet (Technology Review also has a nice article on it). They did this by splicing a pathway from E. coli called the glyoxylate shunt into the mice’s DNA.

When our body wants to use fat, it breaks it down and often converts it to carbohydrates (mostly glucose), whose excess can have all sorts of pernicious effects (e.g. diabetes). With this new glyoxylate shunt pathway, the mouse liver cells metabolize the fat directly into CO2, which is absorbed into the bloodstream and simply exhaled.

While employing this technique in humans is still far, far away, it’s a nice reminder of the fruits that genetic engineering and synthetic biology will eventually bear.

What’s interesting, though, is that short of genetically engineering humans (which I doubt we’ll be doing any time soon, for both technological and ethical reasons), we’re a good ways off from being able to change our own DNA and thus make use of what we’ve discovered in other animal models. Almost all current cell-level treatment involves getting various molecules into cells rather than actually changing their DNA (although see gene therapy).

Adderall (credit: http://www.michaelshouse.com)

If you aren’t familiar with the debate about neuroenhancers, you should definitely read up on it; it’s one of the most interesting contemporary debates going in the public health/education/pharmacology realm. Here’s a quick primer: the use of neuroenhancers like Ritalin, Adderall, Modafinil, and others has become more prevalent among people (often students, but increasingly professionals too) wishing to squeeze a bit more productivity out of their lives. (This discussion does not include the legal and appropriate use of these drugs for clinical learning disabilities like ADHD.) If you have a half hour, I’d highly suggest Margaret Talbot’s excellent New Yorker article on the issue.

Those of you not familiar with the debate may have an immediate and understandable reaction against the use of neuroenhancers. If that’s you, I urge you to consider the difference between taking a stimulant in pill form (as these come) and in drink form, as our beloved coffee comes.

Unlike steroids, these drugs don’t yet have well-documented health consequences for human use, which makes their use harder to damn. After all, we’ve been prescribing these stimulants (n.b.: Modafinil, a drug for sleep disorders, works differently from Ritalin and Adderall and is not an amphetamine) for years, seemingly without negative consequences.

And yet, Edmund Higgins, a professor of family medicine and psychiatry, has an interesting piece in Scientific American that discusses some recent (mostly animal) studies indicating that the health consequences of stimulants like Adderall are more complicated than previously thought. Animals subjected to regimens of similar stimulants displayed some signs of anxiety, depression, and long-term “cognitive defects.” While it’s important to distinguish between animal studies and clinical studies, the results of the latter often follow the former.

While the article is directed more at the over-prescription of ADHD drugs for kids, its contents certainly bear on (what I think is) a more interesting discussion. Once the health consequences of neuroenhancers are established, the case for their recreational (academic or professional) use becomes more difficult to make.

What’s still unclear is how (if at all) intermittent use affects our long-term mental and cognitive health. Still, these new studies color a discussion that will only become more important in the future.

credit: daninz.files.wordpress.com

I’ve always been one of those people who claims that our nation’s high drinking age has caused increased binge drinking. “Look at Europe,” I’d say, “they have lower drinking ages and less crazy binge drinking.” Correlation equals causation, right? The argument goes that entirely blocking someone’s access to a drug makes them all the more likely to abuse it when they do eventually get access to it.

Well, I’m sad to admit, this study shows that I am wrong. The researchers analyzed data between 1979 and 2006 from over 500,000 subjects in the National Survey on Drug Use and Health. As it turns out, except in college kids (and that’s a big except), the incidence of binge drinking declined significantly as the drinking age increased from 18 to 21. Binge drinking in college students, where access to alcohol is still quite easy through students of legal age, remained roughly the same.

I used to always tell my students that I would only believe a non-intuitive claim if it was published in a peer-reviewed academic journal. Well, it’s been done, so I have to change my opinion about the drinking age and its influence on binge drinking.

That’s not to say that someone reading the methods section of this paper in the Journal of the American Academy of Child and Adolescent Psychiatry (where it’s being published this month) won’t say, “That’s not right because of x, y, and z.” If that does happen, as happens in all good science, a civil, measured discussion will ensue in the academic journals. That discussion is healthy and important if we’re eventually to understand the complexity of the issue. But for now, this article shifts the burden of proof to the other side.

Too often, and I think scientists are prone to this behavior as well, we read convincing, reliable evidence contrary to our own opinions and immediately write it off for one reason or another. Do not confuse this inclination with skepticism, which challenges claims and evidence, probing them for weaknesses. Ultimately, though, the skeptic can be won over if the argument and evidence are convincing enough. The dogmatic cannot. Dogma is dangerous in that it leads to conformity, which another recent study found to be bad for a civilization.

Dogma is the antithesis of scientific inquiry, and we must be wary that it does not seep into our thinking. Ask yourself: what would it take for you to change your mind on your most fundamental principles? If the answer is nothing, you’re in trouble.