David Souter has famously described the Supreme Court term from October to June as a period during which he undergoes an “intellectual lobotomy.” I had hoped that entering grad school and being constantly surrounded by the bleeding edge of science would stimulate my curiosity. Unfortunately, after nine to ten hours a day of reading and thinking about science, I come home to find the well dry.

It’s not that I don’t want to write; it’s that I don’t have nearly as much to say as I did before. Instead, I now find myself wanting to come home and read fiction or watch good shows like Arrested Development.

These findings lead me to surmise that I have a limited capacity for thinking about science, whether in focused thought or the unfocused kind this blog represents. When I was teaching science, I actually rarely thought about the science itself. I thought more about grading, making exams, and setting up labs. And during the summer, I wasn’t really forced to think about anything.

Perhaps (and I hope) this is an adjustment period: my brain is undergoing a large intellectual transformation, akin to the burn of a hard workout when you’re out of shape, and over time my most powerful muscle (to use the term loosely) will strengthen and begin to crave the outlet of curiosity that this blog has been.

I’m stepping away because I have no obligation to stay, and writing as an “assignment” can sometimes produce a resentment that I do not wish to have. I will certainly keep the blog up, writing whenever the thought strikes me.

So thanks for reading. The purpose of my writing (other than simply a selfish exercise) is to try to convey the power, excitement, and controversy that science brings to our lives. All of these are important and need to be acknowledged. I hope I’ve succeeded at one time or another in getting that message across.

Remember, in the words of good old Descartes, “de omnibus dubitandum” (“everything is to be doubted”).

I came across an interesting article in Nature News about the controversy surrounding a paper published in PNAS (Proceedings of the National Academy of Sciences, a big-deal journal) about genetically modified (GM) corn and its toxicity in streams. The Nature News article discusses the controversy this PNAS paper ignited among other scientists in the GM field (those who tend to support its use and development). The “attackers” wrote an open letter rebutting the methods used in the original study, and the study’s author said she received a large amount of inflammatory feedback that went well beyond the usual objective disagreement.

While I always think controversy and rebuttal letters are valuable to the scientific project of distilling true principles out of our otherwise vague world, I was interested to hear about the more emotional character of some of the response against the original author’s methods (and thus the validity of the paper). What I see in these “pro-GM” scientists are people fighting for their field. They know that all it takes is one or two papers elucidating the negative effects of GM crops to sway public support away from them, and away from their research (cf. monarch butterflies, Europe).

Scientific research both benefits and suffers from the fact that when something is published in a respectable, peer-reviewed journal, people believe it. Scientists read a lot of papers within their field (or subfield). Each paper is usually not a definitive statement on something; it’s a bit of evidence for or against something. Just because one paper is published claiming X does not mean a scientist believes X. It takes a number of papers claiming X from a variety of angles, using a variety of methods. For sure, a paper, especially one in a well-known journal, lends a good deal of credence to claim X, but by no means proves it.
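To make that accumulation concrete, here’s a minimal sketch in Python of how a skeptical reader might weigh a series of papers. The prior, the likelihood ratios, and the whole Bayesian framing are illustrative assumptions on my part, not anything from the articles in question:

```python
# A toy Bayesian sketch of how a scientist might weigh papers: each
# paper multiplies the odds of claim X by a likelihood ratio (how much
# more probable that result is if X is true than if it's false).
# All numbers here are illustrative, not empirical.

def update_odds(prior_prob, likelihood_ratios):
    """Return the posterior probability of X after a series of papers."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Start skeptical: 10% prior that claim X is true.
# One moderately convincing paper (likelihood ratio 3):
print(update_odds(0.10, [3]))      # ~0.25 -- interesting, not yet belief
# Five independent papers, different methods, each with ratio 3:
print(update_odds(0.10, [3] * 5))  # ~0.96 -- now X looks solid
```

One paper nudges you; five independent ones, from different methods, actually move you.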

And yet the public, which includes scientists outside their own subfields, gives far more credence to individual papers. All it takes is one paper to influence important policy decisions that affect a field. The public does not read Methods sections, where most of the internal debate originates.

I have been guilty myself, many times probably on this blog, of drawing conclusions from science-news headlines without thoroughly scouring the papers with the skepticism they perhaps deserve. After all, it’s impossible to be an expert in every field. What’s important for us all to remember is that each paper is just a small bit (however convincing) of evidence for or against something. Very few papers can single-handedly take down or erect a scientific belief.

So the next time you find yourself citing a paper (from reading it directly or indirectly through the news) as evidence for something, remember to take it with a grain of salt. The world could do with a bit more scientific skepticism.

in vitro meat

September 3, 2009

This is where most of us get our ground beef. (credit: http://www.marlerblog.com)

Seed Magazine (my favorite science publication) has an interview with Jason Matheny of New Harvest, a non-profit organization aimed at developing scalable and tasty meat grown in (basically) a petri dish. Growing meat in a controlled environment like a petri dish is more environmentally efficient in energy, water, and land usage. And as developing nations like China and India become wealthier, the demand for meat only increases.

Of course, the best scenario is that we all become vegetarians, both because it’s healthier and more environmentally friendly. However, I’m not quite there yet myself (my mind can’t quite overpower my evolved lust for high-protein, high-energy flesh), and so I can’t demand that the rest of the world get there either. If we assume that people are going to eat meat, why not have it come from a controlled environment, where fat content and every other chemical variable can be controlled?

Interestingly, the technology used to grow little bits of meat is hijacked from the field of tissue engineering, which aims to grow everything from skin to muscle to organ tissue. The limiting factor for tissue (be it for eating or healing) is getting blood vessels built so the tissue can grow sufficiently large. Vascularization of artificial tissue is a very tricky thing that we haven’t quite mastered yet.

Thus, for the present, the applications for this technology would be limited to ground meat, where the small amounts grown in vitro (meaning in the lab, not in a real animal) could be put together in the form we’re used to. But ground meat comprises a very large part (Matheny claims roughly half, but I don’t have an independent number) of the world’s meat consumption, so the impact could still be significant.

You might react to this idea of eating meat grown in a test tube as just another part of our over-commercialized, over-scientificized (my own word), over-supply-chain-ized agricultural industry; we should be eating our meat from local, grass-based, holistic farms. I agree that the nice local farm alternative feels better (and IS better at the moment), but if we can produce meat that’s healthier, cheaper, significantly more environmentally friendly, and perhaps even tastier than our local farm’s, how long are you going to hold out just on principle?

Imagine this scenario: you’re coming back through customs from a business trip, and the official suddenly asks for your briefcase, which you hand over, expecting the routine open…rifle through…close, only to have someone (perhaps a higher-up) explain that the Department of Homeland Security needs to take your briefcase and thoroughly examine its contents. The official is probably fairly polite about it, but firm. He explains that if you just write down your address here, they’ll be sure to mail it back to you within 30 days. Sound strange, or vaguely illegal? Well, get used to it, because DHS can now do exactly that, just with your electronic devices rather than your paper documents.

From PC World:

The U.S. Department of Homeland Security has released new rules for border agents searching travelers’ laptops and other electronic devices, but the revised guidelines won’t quiet complaints from the American Civil Liberties Union.

The new guidelines, unveiled Thursday, continue to allow U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE) to search electronic devices during border crossings without suspicion of wrongdoing. Both CBP and ICE are part of DHS.

Note the part about “without suspicion of wrongdoing.” I think many of us are at least used to (if not totally in agreement with) the idea that officials can search your car, your bag, your briefcase given probable cause. But introducing a certain level of arbitrariness seems beyond our (or at least my) civic understanding of what’s reasonable.

DHS has lots of language in their statement about protecting civil liberties and such, but merely saying that you’re going to do something in a directive doesn’t actually mean you’re going to do it.

The new directive came a day after the ACLU filed a Freedom of Information Act request for more information regarding the policy, so DHS at least seems sensitive to public concerns.

And to that extent, I’d really like to hear more information about why DHS thinks it necessary to search people’s documents without probable cause.

unpacking a statistic

September 1, 2009

A new study in the Archives of General Psychiatry reports that the percentage of Americans on antidepressants almost doubled from 5.84% in 1996 to 10.12% in 2005. The first thing that probably hits you after reading that is, “Wow, that’s a big jump in ten years.” The next thing might be something like, “I wonder what caused that big of a jump?”

To an educated (but generally ignorant in psychiatry) observer, a few initial explanations seem plausible:
1) The number of depressed people has simply increased, bringing with it the number of people on medication.
2) Cultural acceptance of depression as a legitimate physiological illness has grown, allowing more people to come out into the open and seek treatment they had previously avoided.
3) Our ability to diagnose the illness has improved, allowing us to catch and treat more cases.
4) Our acceptance of (and desire for) drugs to address our problems has increased (to, what many would say — although I’ll withhold judgment — is an unhealthy level).

The authors of the study mainly think it’s 1). In 91/92, the rate of depression in the US was 3.3%; by 01/02 it had risen to 7.1%. This increase in depression itself is an interesting nugget. If we fundamentally think of depression as a physiological disorder, I find it hard to believe that our brain chemistry has really changed that much in ten years. Another explanation is that people are seeking out their doctors more for mental health problems, but then we start spilling into hypotheses 2) and possibly 4).
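As a quick back-of-the-envelope check, here’s the arithmetic on the numbers quoted above (and only those numbers; note that the two time windows don’t line up exactly, so this is rough):

```python
# Rough comparison of the two growth rates quoted above. Caveat: the
# antidepressant figures span 1996-2005 while the depression figures
# span 91/92-01/02, so this is only a ballpark sanity check.
antidepressant_growth = 10.12 / 5.84  # ~1.73x more Americans on antidepressants
depression_growth = 7.1 / 3.3         # ~2.15x more Americans meeting criteria

print(f"{antidepressant_growth:.2f}x vs {depression_growth:.2f}x")
# Diagnosed depression grew proportionally *faster* than antidepressant
# use, which is at least consistent with the authors' favored hypothesis 1).
```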

The authors also suggest an additional hypothesis:

5) Between 1996 and 2005, four new antidepressants (mirtazapine, citalopram, fluvoxamine, and escitalopram) were approved by the FDA for treating depression and anxiety. Furthermore, while total promotional spending stayed the same, the percentage aimed directly at consumers (rather than physicians) increased from 3.3% to 12.0%. So another explanation of the increase is simply that there are more, and more aggressively marketed, drugs out there.

The point I’m trying to make here is that statistics like “The percentage of Americans using antidepressants doubled between 1996 and 2005” (from SciAm), as they’re handled in casual print or conversation, often confuse or obfuscate the issues really at work. Most of us, myself included, don’t put enough thought into everything going on behind a statistic when we hear or read one. We’re not critical, skeptical observers, and that’s dangerous.

It’s nice to get out the magnifying glass and do a little digging every once in a while to really understand what all these numbers actually mean.

The PBS show Reading Rainbow, hosted by LeVar Burton, ran its last show on Friday after 26 years on the air, NPR reports. This show was a serious staple of my childhood, as I imagine it was of many of yours as well. My favorite episode was the one where he cleans out his garage, followed closely by the one where he visits the beekeeper. I don’t think I need to go into the details about why this show was awesome. You already know.

According to NPR, neither WNED (the show’s home station), PBS, nor the Corporation for Public Broadcasting is willing to put up the money for another season. The angle that NPR takes, which I think is most interesting, is that federal priorities for reading-focused TV programming (initiated by the Bush Administration’s Department of Education) have moved toward a more skills-centered area, like phonics and spelling. The available funding is being directed toward programming that emphasizes these new priorities, which today’s research apparently finds more effective at the challenge of getting kids to read.

This shift represents yet another reallocation of our education resources away from creative, inspiring, fun content towards a focus on skills, skills, skills. The specter of state and national testing (and the obsession with it) for elementary school kids suddenly looms in my head. I know that research says that our kids need more of a foundation in the basic skills, that not having them prevents kids from succeeding later, etc, etc.

But (at the risk of hyperbole) what about the soul of children’s education? Have “reading is fun” and “learning is fun” really gone out of style? Is it now “do this — it’s good for you”? That sounds like taking a bitter medicine to me. Everyone’s wringing their hands these days about how kids don’t read enough on their own. Something tells me that spelling and phonics (what is phonics, anyway?) aren’t the best tools for teaching a love of reading. To extrapolate from an N=1 study: I was (and still am) terrible at spelling and still unsure about some phonics, and I love to read. I loved it in elementary school (starting with the Redwall books), and I love it now. I also watched Reading Rainbow, which teaches kids about interesting things and shows them that other kids their own age like to read too, and can even articulate quite impressively why they liked a book.

Ergo, Reading Rainbow = love of reading. Sad that it’s gone now.

A vitamin C product from http://www.myaloe-vera-health.com

I woke up this morning with a sore throat and knew that my annual end-of-summer/beginning-of-fall cold had arrived again. What preventative measures did I take this morning? Vitamin C, zinc lozenges, and lots of water. While good hydration is always a good idea, especially when getting sick, the efficacy of zinc lozenges is contested (supporters here and here, controversy discussion here), and vitamin C (in the customary large amounts) has roundly been shown to have no significant impact on the severity or length of cold symptoms.

I know all this, and yet I still down vitamin C tablets like there’s no tomorrow. I used to tell my students that I wouldn’t believe anything unless there was a peer-reviewed, double-blind study published about it. Well, those have been done and a verdict rendered (at least about vitamin C). But my anecdotal life evidence says otherwise. In previous years when I’ve felt the onset of a cold, I’ve downed tons of orange juice and vitamin C tablets and felt like my cold went away quite quickly.

Two things seem to be at work here: my selective memory and the placebo effect. My selective memory, of course, doesn’t bring back those times when I took vitamin C and didn’t get better right away, or when my sore throat turned into a three-week hacking cough. Clearly, I’m remembering what I want to remember to support the remedy I’m naturally inclined to take.

Radiolab has a wonderful show about the placebo effect. If you have an hour to listen to a podcast, I’d highly recommend it. We all know what the placebo effect is and that it achieves positive results in many cases. (A particularly illustrative example in the show is where a man with Parkinson’s has a stimulating electrode implanted in his brain that the researchers can remotely turn on and off. They tell him they’re turning it on, although they’re not, and his tremors remarkably vanish, at least temporarily).

So here’s my case for taking vitamin C this cold season. 1) It will make you feel proactive about your cold. No one likes sitting around and letting a cold hit them; we want to feel like we’re fighting it somehow! 2) The placebo effect might just trick your body into defeating (or thinking it’s defeating, but does the difference really matter?) that cold a little bit earlier. Either way, there’s very little harm and some potential psychological, if not physiological, benefit.

Of course, the placebo effect doesn’t really work if you know you’re taking a placebo. So try hard to forget those dry scientific papers and remember your mom making you drink orange juice and take vitamin C when you had a cold. Self-deception is a wonderful thing.

Hmmm, a major motion picture now, eh? I'm glad he was doing it for the right reasons.

The New Yorker’s Elizabeth Kolbert has a wonderful takedown of so-called eco-stunts (living an extremely environmentally friendly lifestyle for show) and the publicity surrounding them. Her primary targets are bloggers-turned-book-writers Colin Beavan (No Impact Man) and Vanessa Farquharson (Sleeping Naked Is Green: How an Eco-Cynic Unplugged Her Fridge, Sold Her Car, and Found Love in 366 Days), both of whom adopted extreme eco-lifestyles for a period of time, blogged about it, and then got a book deal. After she describes all the measures they take, like going without a refrigerator, toilet paper, and cars/trains/buses/subways, her closing paragraph sums it all up particularly well:

The real work of “saving the world” goes way beyond the sorts of action that “No Impact Man” is all about.

What’s required is perhaps a sequel. In one chapter, Beavan could take the elevator to visit other families in his apartment building. He could talk to them about how they all need to work together to install a more efficient heating system. In another, he could ride the subway to Penn Station and then get on a train to Albany. Once there, he could lobby state lawmakers for better mass transit. In a third chapter, Beavan could devote his blog to pushing for a carbon tax. Here’s a possible title for the book: “Impact Man.”

Seriously, take the fifteen minutes to read this article. It’s good.

I’m going to take her criticism of such publicity stunts (she reminds us that Colin Beavan’s idea of living a no impact life arose from a lunch conversation with his agent about his next book project) one step further: not only are they inane and self-promoting, but they trivialize environmental measures the rest of us — in the real world — take.

Notice that once the book deal is finished, these authors mostly revert to their original, planet-killing lifestyles. If I were to read such a book, I would probably feel some mixture of shame (that my own life was not more environmentally efficient) and humor (because the life described by the authors sounds so crappy).

Such accounts of “extreme” lifestyles seem to trivialize the things the rest of us do to help out, like switching to CFL bulbs, recycling, and turning the lights off. Compared to the “extreme” lifestyles, these small measures might not seem to have as significant an impact and thus might not seem worth being as diligent about. What a shame, because the small measures we all work into our lives are by far the best sources of energy savings.

Someone needs to write a book about reinsulating his house, riding his bike more, installing a smart electricity meter, going to the farmer’s market, and putting solar panels on his roof. The book needs to span a year, and then another, and then another, and another. That stuff doesn’t make us feel bad about what we do. It makes us think, “Hmm, that sounds pretty doable.” The key part is that there’s no end point to the “experiment.” It’s a sustainable (ahh…wordplay) lifestyle.

Of course, such a book would be quite boring. But that’s the point, isn’t it? It’s not about selling copies. It’s about getting us to change our lifestyles — for good.

over-softwaring our lives

August 26, 2009

How many times have you seen a program, a desktop widget, or a web application and thought, “Wow, that would really make my life easier!”? I know I have. Many times. And yet so many of them, while theoretically quite useful, never actually end up being used much, because they demand a significant investment of time to learn and often take longer than whatever we’d otherwise do to address the need.

Here’s an example, from Lifehacker.com, which (while I love them) often features such programs: Kitchen Monki. Kitchen Monki is a recipe planner/shopping list organizer that allows you to plan meals and then load the ingredients into a shopping list. (Note: the following criticisms aren’t meant at all as a hit against the programmers. The application actually looks very well thought out and easy to use. It’s the idea that we even need such a program that I’m criticizing.)

Who really sits at their computer and plans a meal? And even if you do, man, what steps you must go through: finding a recipe, adding it to your meal, checking to see what you already have, making your shopping list, exporting it to your phone or printing it out, etc. Whatever happened to pen and paper? I’m guessing that almost anyone can do all this much faster just by standing in the kitchen with a cookbook and writing a few notes on a scrap of paper.

Contrast that program with something like Mint.com, a personal finance web app that automatically categorizes your credit card and banking transactions for the purpose of keeping a budget. That’s a really useful tool, and I can say that from using it for 16 months or so. Another recent software favorite: Dropbox.

My point is not that all software (or technology in general, if you like) is bad. Some of it is quite useful and improves our lives, but much of it seems like a good idea at the time and is never used after about the first day. I’m also not trying to stifle developers’ creativity: I write programs that I think will be useful but that others may not. Indeed, I imagine many of the programs we encounter are successful personal projects of developers that have been taken into the public sphere. So it’s really up to the consumer to decide whether to use them or not.

It’s almost ingrained in our psyche that when we see something new and seemingly useful, we think, “Ohh, I want to try that.” This applies to everything from organizational stuff from stores like Organized Living, to electronics, to video games (like those of you who’ve downloaded every arcade game ever invented onto your PC or Xbox), to software. We’re programmed (getting into a bit of social theory here) to want more stuff. And so we acquire it if it seems even remotely fun or useful, without much thought about whether we’re actually going to use it or whether a simpler alternative would do. This point holds especially true for software, most of which is free these days.

As consumers of everything, we need to be mindful of what we need (or want) and what we don’t. And for the things we do need, what’s the balance between their advantages and their cost (in time, energy, and money)? Otherwise, we’ll go through life trailing a mess of physical and digital detritus.

The Singularity, when AI overtakes human intelligence (credit: Jamais Cascio on flickr.com)

A new study (as reported in the ArXiv blog) by Fermín Moscoso Del Prado Martín of the Université de Provence shows that the human data-processing rate is not much more than 60 bits per second. The result is based on measuring the time it takes a subject to complete what’s called a lexical decision task, like determining whether a collection of letters is a word or not. The complexity of such a task can be quantified as a number of bits (each bit represents a binary state, like on or off). Thus, if we know the complexity of a given task (called its entropy) and the average time it takes a human to complete it, we can determine the decision-processing rate of the brain, which is where the 60 bps figure comes from.
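The arithmetic is just a division. Here’s a minimal sketch; the specific entropy and timing numbers below are illustrative assumptions of mine, not values from the paper:

```python
# Decision-processing rate = task entropy / mean response time.
# Both numbers below are made up to land on the reported order of
# magnitude; they are not taken from the paper itself.
entropy_bits = 45.0      # hypothetical information content of the decision
mean_response_s = 0.75   # hypothetical average time to answer "word or non-word?"

rate_bps = entropy_bits / mean_response_s
print(f"{rate_bps:.0f} bits per second")  # -> 60, the figure reported
```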

This speed is really, really, really slow compared to today’s technology. The internet connection you’re using to read this post is likely at least 3,000,000 bps (although I should be careful to distinguish the data transfer rate of your internet connection from the data processing rate of your brain). The computer you’re reading this on probably has a processor running at 2,000,000,000 Hz (cycles per second) or more, which is the more apt comparison to the brain’s decision-processing rate of 60 bps. I think you get the message.

So here’s my question: if we would ultimately like computers that can think like humans (ultimately being a long time from now), does it make sense to limit the speed at which they can operate? Hardware (or software) that can process data at blazing speeds can let us approach a problem the wrong way. The best example is the use of brute-force computational churning to make a simple decision (like using a computer to test every possible game of checkers, over almost 20 years, to figure out the best move; don’t laugh, it’s been done).
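To give the flavor of that kind of churning at toy scale, here’s a sketch that picks a move by exhaustively evaluating every possible continuation of a game. I’m using tic-tac-toe rather than checkers (checkers took years of computation), but the brute-force enumeration is the same idea:

```python
# Brute-force game playing in miniature: choose a tic-tac-toe move by
# enumerating every possible continuation of the game. Checkers needed
# years of this kind of churning; tic-tac-toe needs only moments, but
# the "test everything" approach is identical.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board, to_move):
    """Game value for X (+1 win, 0 draw, -1 loss) under perfect play,
    computed by recursively trying every legal move."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    empties = [i for i, cell in enumerate(board) if cell is None]
    if not empties:
        return 0  # board full: draw
    results = []
    for i in empties:
        board[i] = to_move
        results.append(value(board, "O" if to_move == "X" else "X"))
        board[i] = None  # undo the trial move
    return max(results) if to_move == "X" else min(results)

def best_move(board, to_move="X"):
    """Pick the move with the best exhaustively computed value."""
    def score(i):
        board[i] = to_move
        v = value(board, "O" if to_move == "X" else "X")
        board[i] = None
        return v
    empties = [i for i, cell in enumerate(board) if cell is None]
    return (max if to_move == "X" else min)(empties, key=score)

print(best_move([None] * 9))  # searches all ~255,000 possible games
```

It works, and for tic-tac-toe it even finishes quickly. But nothing in it resembles how a person actually picks a move, which is exactly the worry about leaning on raw speed.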

The power we have available can blind us when we try to create things that actually mimic the way our brain works. It lets us go far, for sure, but far down a dead end that ultimately will not lead to the AI sophistication we’d like. Would artificially limiting the transfer and processing rates of our hardware force us to approach decision-making in machines in a way more similar to our brain’s?

I’m not a sophisticated AI developer by any means, but this idea seems at least worth considering. Many people, perhaps most, don’t even think we’ll ever be able to approach the functionality of the brain; but for the true believers like me out there, it’s a thought worth entertaining.