The PBS show Reading Rainbow, hosted by LeVar Burton, aired its final episode on Friday after 26 years on the air, reports NPR. This show was a serious staple of my childhood, as I imagine it was of many of yours as well. My favorite episode was the one where he cleans out his garage, followed closely by the one where he visits the beekeeper. I don’t think I need to go into the details about why this show was awesome. You already know.

According to NPR, neither WNED (the show’s host station), PBS, nor the Corporation for Public Broadcasting is willing to put up the money for another season. The angle NPR takes, which I think is most interesting, is that federal priorities for reading-focused TV programming (initiated by the Bush Administration’s Department of Education) have moved toward a more skills-centered area, like phonics and spelling. The funding out there is being directed toward programming that emphasizes these new priorities, which today’s research apparently finds more effective at meeting the challenge of getting kids to read.

This shift represents yet another reallocation of our education resources away from creative, inspiring, fun content towards a focus on skills, skills, skills. The specter of state and national testing (and the obsession with it) for elementary school kids suddenly looms in my head. I know that research says that our kids need more of a foundation in the basic skills, that not having them prevents kids from succeeding later, etc, etc.

But (at the risk of hyperbole) what about the soul of children’s education? Have “reading is fun” and “learning is fun” really gone out of style now? Is it now “do this — it’s good for you”? That sounds like taking a bitter medicine to me. Everyone’s wringing their hands these days about how kids don’t read enough on their own. Something tells me that spelling and phonics (what is phonics, anyway?) aren’t the best tools for teaching a love of reading. To extrapolate from an N=1 study: I was (and still am) terrible at spelling and still unsure about some phonics, and I love to read. I loved it in elementary school (starting with the Redwall books), and I love it now. I also watched Reading Rainbow, which teaches kids about interesting things and shows them that other kids their own age like to read too, and can even articulate quite impressively why they liked a book.

Ergo, Reading Rainbow = love of reading. Sad that it’s gone now.

Darrell Issa (R-CA) meddles in the NIH

ScienceInsider recently reported that Representative Darrell Issa (R-CA) succeeded in stripping funding for studies of HIV/AIDS among prostitutes from an NIH funding bill. The overseas studies are aimed at understanding how the disease spreads (and how to halt it). Apparently Issa thought it was a waste for the researchers to fly to Thailand when they could just take a $3.10 train across town. And rather than argue the point, the bill’s manager, David Obey (D–WI), accepted the amendment and moved on.

The stripped funding for the three specific grants totaled $5 million. The entire NIH funding bill was $31 billion. As I see it, this is a great example of politicians trying to score points. Not only were the studies a part of the scientific peer-review process, but they are actually incredibly important for us to understand the inextricable relationship between drugs, disease, and prostitution. You either fund the NIH for a certain amount or you don’t, and let it decide how to apportion that money. Congress doesn’t tell the CIA or FBI what to spend money on and what not to, do they?

If we’re going to get past the dogmatic aversion to drugs and prostitution (controversial, I know; perhaps I’ll post on those later), we need to understand how they interact with sexually transmitted diseases. Even if nothing legally changes here, we can certainly develop better policies for reducing the number of drug-addicted and disease-infected prostitutes.

Non-scientists deciding not to fund certain research (like human cloning) is one thing. After all, it’s taxpayer money (and thus, in the politicians’ minds, theirs), but it is not their place to decide how that research is carried out. That’s the job of the grant reviewing committee. That’s why we have those committees, to decide what proposals meet the aims of the grant.

scientific skepticism

August 3, 2009


(I realize I may be preaching to the choir here, but I still feel this topic deserves some text.)

One of the most useful aspects of the scientific method (I mean in everyday life) is the tradition of reasoned skepticism it teaches us.

If you live in the country, there’s a good chance that you get your water from a well. The person who dug your well had to decide where to place it, and it’s likely that he or she decided that location using dowsing (sometimes called witching or divining).

Dowsing is the practice of using metal rods or a willow switch to tell the user where something (usually water) is as the user walks around. Many, many people swear by it for finding water and try to explain it with (science-esque) “electric fields” and “vibrations,” but really it’s no more than well-ingrained superstition. (For a more favorable explanation, go here.)

Dowsers understandably believe the technique works because the place the metal rods point them to, lo and behold, usually has water. But this fact by itself is not a proper test, nor is the vast, vast body of anecdotal evidence that accompanies old superstitions like this one. To properly test the hypothesis, what do we need? A control. We need some sort of benchmark to ascertain that the dowsers’ results are something more than random chance.
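To make the idea of a control concrete, here is a minimal sketch (mine, not from any of the studies cited below) of how you might simulate a dowsing trial in Python. The setup is hypothetical: one of ten locations hides water on each attempt, and we ask how many hits pure guessing racks up, so we know what score a real dowser would have to beat.

```python
import random

def chance_distribution(n_trials, n_targets, n_sims=10_000, seed=0):
    """Simulate many runs of pure guessing: in each run, make n_trials
    attempts to pick the one water-bearing spot out of n_targets,
    and record how many hits guessing alone produces."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        hits = sum(1 for _ in range(n_trials) if rng.randrange(n_targets) == 0)
        results.append(hits)
    return results

def p_value(observed_hits, dist):
    """Fraction of pure-chance runs that scored at least as well
    as the dowser did."""
    return sum(1 for h in dist if h >= observed_hits) / len(dist)

# With 30 attempts and a 1-in-10 baseline, guessing averages about 3 hits.
dist = chance_distribution(n_trials=30, n_targets=10)
print(p_value(4, dist))  # a score of 4/30 is unremarkable against chance
```

The point is not the particular numbers but the shape of the test: without the simulated (or real) control group, “the rods usually find water” tells you nothing, because in well-watered ground almost any spot “usually finds water.”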

Such is dowsing’s prevalence that a number of studies have actually been done to ascertain its effectiveness. R. A. Foulkes published a study in Nature (subscription or other means needed for the full article), performed by the British Army and Ministry of Defence, showing that dowsing yields no better results than random chance. Others have published to the same effect:
– M. Martin (1983-1984). “A new controlled dowsing experiment.” Skeptical Inquirer, 8(2), 138-140.
– D. Smith (1982). “Two tests of divining in Australia.” Skeptical Inquirer, 4(4), 34-37.

A small number of studies were performed that seem to confirm the efficacy of dowsing, the most comprehensive of which was done by Hans-Dieter Betz (part 1 and part 2). Unfortunately, though, J.T. Enright has pretty thoroughly discredited that study.

The scientific literature seems pretty clear.

I don’t mean to mock or pillory dowsers. Instead, I hope to show how very common practices, with wide bodies of anecdotal evidence, can persist even today without actually being true. So the next time someone makes a claim that intuitively strikes you as odd, use the tools science gives you to unpack the truth.



I’ve always been one of those people who claims that our nation’s high drinking age has caused increased binge drinking. “Look at Europe,” I’d say, “they have lower drinking ages and less crazy binge drinking.” Correlation equals causation. The argument goes that entirely blocking someone’s access to a drug makes them all the more likely to abuse it when they do eventually get access to it.

Well, I’m sad to admit, this study shows that I am wrong. The researchers analyzed data between 1979 and 2006 from over 500,000 subjects in the National Survey on Drug Use and Health. As it turns out, except in college kids (and that’s a big except), the incidence of binge drinking declined significantly as the drinking age increased from 18 to 21. Binge drinking in college students, where access to alcohol is still quite easy through students of legal age, remained roughly the same.

I used to tell my students that I would only believe a non-intuitive claim if it was published in a peer-reviewed academic journal. Well, it’s been done, so I have to change my opinion about the drinking age and its influence on binge drinking.

That’s not to say that someone reading the methods section of this paper in the Journal of the American Academy of Child and Adolescent Psychiatry (where it’s being published this month) won’t say, “That’s not right because of x, y, and z.” If that happens, as happens in all good science, a civil, measured discussion will ensue in the academic journals. That discussion is healthy and important if we are ever to understand the complexity of the issue. But this article now shifts the burden of proof to the other side.

Too often (and I think scientists are prone to this behavior as well) we read convincing, reliable evidence contrary to our own opinions and immediately write it off for one reason or another. Do not confuse this inclination with skepticism, which challenges claims and evidence, probing them for weaknesses. Ultimately, though, the skeptic can be won over if the argument and evidence are convincing enough. The dogmatic cannot. Dogma is dangerous in that it leads to conformity, which another recent study found to be bad for a civilization.

Dogma is the antithesis of scientific inquiry, and we must be wary that it does not seep into our thinking. Ask yourself: what would it take for you to change your mind about your most fundamental principles? If the answer is nothing, you’re in trouble.

true randomness

July 1, 2009

We’ve all rolled dice in board games, confident that those rolls are truly random, i.e. not predictable from any measurable forces. We can’t recreate or predict any given roll. You’ve probably also flipped a coin, which might seem easier to fix, but you’ve probably met a good deal of frustration if you ever tried to do so.

We take this kind of randomness for granted, but what’s interesting is that for all of our computational prowess, we are unable to create random numbers in computers. Of course, we can create pseudorandom numbers, but not the real deal.

Computers often generate pseudorandom numbers using a starting number, or seed, and then complicated functions to get the next random number. If you supply the same seed number to the randomizing function, you’ll get the same (infinite) stream of “random” numbers out. This property of pseudorandom numbers is actually quite useful when building and debugging computer programs because it allows the programmer to recreate seemingly random scenarios for testing.
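A quick sketch of that property, using Python’s standard library generator (any seeded pseudorandom generator behaves the same way): two generators given the same seed emit the same stream, which is exactly what makes “random” test scenarios reproducible for debugging.

```python
import random

# Two generators seeded identically produce identical "random" streams.
a = random.Random(42)
b = random.Random(42)
assert [a.randint(1, 6) for _ in range(5)] == [b.randint(1, 6) for _ in range(5)]

# A different seed yields a different (but equally repeatable) stream.
c = random.Random(7)
print([c.randint(1, 6) for _ in range(5)])
```

Re-running the program reproduces the same rolls every time, so a bug triggered by one particular “random” sequence can be replayed at will.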

When computers actually try to generate random numbers, they often use the clock’s timestamp as a seed, since it’s never the same twice. While this technique does generate an unpredictable-looking stream of numbers, it’s still, in a way, determined: given the seed value at a specific time, we can predict exactly what each “random” number will be.

In order to achieve true randomness, programmers have had to turn to the real world. Sites that generate random numbers from atmospheric noise, and (more recently) the Dice-O-Matic hopper, which physically performs millions of dice rolls a day (follow the link for a cool video of it in action), serve up genuine random numbers.

What’s interesting is that, in theory, these numbers aren’t really random either. Sure, they’re products of chaos, but they each have predictable forces acting upon them that cause them to behave in a predictable way (remember, quantum uncertainty operates on much, much smaller scales than dice rolling). What makes them effectively random, though, is that there are so many interacting forces that the computation is impossible with today’s computing power. But Laplace’s demon could figure it out, which makes me wonder whether some day in the future we will be able to predict the outcome of a dice roll, taking us one small step closer to predicting the future itself.

(I’m currently reading a very interesting book by Daniel Dennett entitled Freedom Evolves, about how to have free will in a determined universe. If this kind of stuff interests you, I’d certainly recommend it.)

anticipating wrongdoing

June 27, 2009

Those of you who remember the movie Minority Report, with Tom Cruise, are familiar with the idea of anticipating someone’s future wrongdoing and then taking preventative action against it. It’s an interesting idea, but when the movie came out in 2002, it was still science fiction. Now, it seems we could be getting closer to something like that, judging from the preliminary (unpublished) results that Vincent Clark of the University of New Mexico at Albuquerque presented in a talk at the Organization for Human Brain Mapping conference.

Clark claims that he can predict which drug addicts will relapse after treatment with 89% accuracy, using both traditional psychiatric techniques and fMRI brain imaging. He used 400 subjects in his decade-long study. What’s interesting about this approach is that it involves a more serious level of quantitative analysis (from the fMRI) than most psychiatric evaluations, and thus would be a more rigorous metric by which to measure patients against a standard.

While determining whether patients in treatment will relapse (and thus might need more treatment) is a beneficial evaluation for both society and the patient, it’s not hard to extend this type of test to a more ethically difficult scenario. Suppose someone develops a test that, with 90% accuracy, determines (via MRI or some other such technique) whether a violent offender in prison will commit a repeat act of violence after being paroled. I think we’re a long way off (if it’s even possible) from such a test, but still, the thought experiment is interesting.

How would our criminal justice system handle such a test? Since the ostensible goal of our penitentiaries is to “reform” those who’ve done wrong, could such a test be used to determine at what point someone’s been “reformed”? How do we balance the idea of reform with the idea of penance, a similarly old but quite different justification for imprisoning someone? How much validation would such a test need before we actually implemented it, since an incorrect diagnosis could lead either to additional harm to citizens or to wrongful confinement? Is there any (non-100%) level of efficacy that would be acceptable?
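One reason “90% accurate” is less reassuring than it sounds is the base-rate problem, which a few lines of arithmetic make vivid. The sketch below is mine, not anything from Clark’s study, and the 20% reoffense rate is an assumed illustrative figure, not real data.

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(person actually reoffends | test says they will), by Bayes' rule:
    true positives divided by all positives."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# A test that is 90% sensitive and 90% specific, applied to a population
# where (hypothetically) 20% would actually reoffend:
ppv = positive_predictive_value(0.9, 0.9, 0.2)
print(round(ppv, 2))  # 0.69
```

In other words, under these assumptions roughly three of every ten people the test flags for continued confinement would never have reoffended, which sharpens the “wrongful confinement” worry above considerably.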

It strikes me that implementing a test like this in our criminal justice system would force us to rework a good deal of the philosophy behind locking people up (which I don’t think would be a bad thing). It’s an interesting thought experiment now, but perhaps in a few decades it will become a reality.

faith in AI

June 22, 2009

Namit Arora at 3quarksdaily has a very interesting and thoughtful post about the future (or lack thereof) of true artificial intelligence.

He does a good job of tracing the major phases of AI design, from essentially large databases to the more modern neural networks. He points out that while AIs have become more and more capable of solving well-defined problems (although one could argue we’ve been able to expand the set of solvable well-defined problems a great deal over the years), ultimately they will fail to reach the truly human je ne sais quoi because they cannot become completely immersed in the human experience of emotions, relationships, and even the simple connections between objects and things in our world. (Arora borrows many of these ideas, which I am only briefly paraphrasing, from Hubert L. Dreyfus, who borrows from Heidegger.)

While I agree that we are nowhere near the singularity, whatever Ray Kurzweil would have you believe, I disagree that we are no nearer than we were in the early days of artificial intelligence (that is, the ’60s and ’70s).

A big shift in the development of AI, in my opinion, was moving away from the teleological view of intelligence, away from “This is how we think the mind works, so this is how we’re going to program our AI.” The transition from symbolic (brute-force) AI to neural networks marks a large shift in that it’s basically an acknowledgement that we programmers don’t know how to solve every problem. What we still know how to do (and must do, for now at least) is define our problems. Thus, if I make an AI to solve a certain problem, I may run it through millions of machine-learning iterations so it can figure out the best way to solve that problem, but I’m still defining the parameters, the heuristics by which the program judges whether the technique it’s currently testing is any good.
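That division of labor can be sketched in a few lines. This toy hill-climbing search is my illustration, not any particular AI system: the machine does the searching, but the programmer-written fitness function still defines what counts as “good.”

```python
import random

def fitness(x):
    """The programmer-defined heuristic. The search never decides what
    'good' means; it only climbs toward it. Here, the (arbitrary) goal
    is maximized at x = 3."""
    return -(x - 3.0) ** 2

def hill_climb(iterations=10_000, step=0.1, seed=0):
    """Randomly perturb a candidate solution, keeping only changes
    that improve the programmer's fitness function."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    for _ in range(iterations):
        candidate = x + rng.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

best = hill_climb()
print(round(best, 2))  # converges near 3.0
```

Swap in a different `fitness` and the same loop “learns” something entirely different, which is exactly the point: the open-ended part of intelligence still lives in the function we wrote, not the search.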

I agree that this approach, while yielding many powerful problem-solving applications, is ultimately doomed. But in pursuing it, we have bootstrapped ourselves into a less well-defined area of AI. If you believe (as I do, although I don’t like the religious aspects of the word “believe”) that the brain is simply a collection of interconnected cells and nothing else, then in theory we can recreate it in silicon. The problem arises in determining how the cells (nodes, in comp-sci language) are interconnected. How can we even know where to start?

And here’s where the faith aspect comes in; I’ll call it what it is. As our understanding of the functional aspects of the brain improves (thanks to all the tools of modern technology), and as our computational processing and storage capabilities grow, I find it hard to think that we will not ultimately get there.

Yes, we will probably need a more philosophical view of what it means to be human and sentient. Yes, it will probably take us a long, long time from now, perhaps even after my lifetime, but remember, the field is incredibly new. I’m heartened by work done by Jeff Krichmar’s group at UC Irvine with neurobots in approaching the idea of intelligence from a non-bounded perspective.

As our technology and understanding of intelligence improves, I simply cannot believe (and here, perhaps, I am using a more religious flavor) that our quest to understand ourselves would allow us to abandon this project.