The PBS show Reading Rainbow, hosted by LeVar Burton, ran its last episode on Friday after 26 years on the air, reports NPR. This show was a serious staple of my childhood, as I imagine it was of many of yours as well. My favorite episode was the one where he cleans out his garage, followed closely by the one where he visits the beekeeper. I don’t think I need to go into the details about why this show was awesome. You already know.

According to NPR, neither WNED (the show’s host station), PBS, nor the Corporation for Public Broadcasting is willing to put up the money for another season. The angle that NPR takes, which I think is most interesting, is that federal priorities (initiated by the Bush administration’s Department of Education) for reading-focused TV programming have shifted toward skills like phonics and spelling. The funding out there is being directed towards programming that emphasizes these new priorities, which today’s research apparently finds more effective at getting kids to read.

This shift represents yet another reallocation of our education resources away from creative, inspiring, fun content towards a focus on skills, skills, skills. The specter of state and national testing (and the obsession with it) for elementary school kids suddenly looms in my head. I know that research says that our kids need more of a foundation in the basic skills, that not having them prevents kids from succeeding later, etc, etc.

But (at the risk of hyperbole) what about the soul of children’s education? Have “reading is fun” and “learning is fun” really gone out of style now? Is it now “do this — it’s good for you”? That sounds like taking a bitter medicine to me. Everyone’s wringing their hands these days about how kids don’t read enough on their own. Something tells me that spelling and phonics (what is phonics, anyway?) aren’t the best tools for teaching a love of reading. To extrapolate from an N=1 study: I was (and still am) terrible at spelling and still unsure about some phonics, and I love to read. I loved it in elementary school (starting with the Redwall books), and I love it now. Also, I watched Reading Rainbow, which teaches kids about interesting things and shows them that other kids their own age like to read too, and can even articulate quite impressively why they liked a book.

Ergo, Reading Rainbow = love of reading. Sad that it’s gone now.

Darrell Issa (R-CA) meddles in the NIH

ScienceInsider recently reported that Representative Darrell Issa (R-CA) succeeded in stripping funding for studies of HIV/AIDS among prostitutes from an NIH funding bill. The overseas studies are aimed at understanding how the disease spreads (and how to halt it). Apparently Issa thought it was a waste for the researchers to fly to Thailand when they could just take a $3.10 train across town. And rather than argue the point, the bill’s manager, David Obey (D-WI), accepted the amendment and moved on.

The stripped funding for the three specific grants totaled $5 million. The entire NIH funding bill was $31 billion. As I see it, this is a great example of politicians trying to score points. Not only were the studies a part of the scientific peer-review process, but they are actually incredibly important for us to understand the inextricable relationship between drugs, disease, and prostitution. You either fund the NIH for a certain amount or you don’t, and let it decide how to apportion that money. Congress doesn’t tell the CIA or FBI what to spend money on and what not to, do they?

If we’re going to get past the dogmatic aversion to drugs and prostitution (controversial, I know; perhaps I’ll post on those later), we need to understand how they interact with sexually transmitted diseases. Even if nothing legally changes here, we can certainly develop better policies for reducing the number of drug-addicted and disease-infected prostitutes.

Non-scientists deciding not to fund certain research (like human cloning) is one thing. After all, it’s taxpayer money (and thus, in the politicians’ minds, theirs). But it is not their place to decide how that research is carried out. That’s the job of the grant-reviewing committee. That’s why we have those committees: to decide which proposals meet the aims of the grant.

scientific skepticism

August 3, 2009

Dowsing (credit: skepdic.com)

(I realize I may be preaching to the choir here, but I still feel this topic deserves some text.)

One of the most useful (I mean in everyday life) aspects of the scientific method is the tradition of reasoned skepticism that it teaches us.

If you live in the country, there’s a good chance that you get your water from a well. The person who dug your well had to decide where to place it, and it’s likely that he or she decided that location using dowsing (sometimes called witching or divining).

Dowsing is the practice of using metal rods or a willow switch to tell the user where something (usually water) is as the user walks around. Many, many people swear by it for finding water and try to explain it with (science-esque) “electric fields” and “vibrations,” but really it’s no more than well-ingrained superstition. (For a more favorable explanation, go here.)

Dowsers understandably believe that the technique works because the place the metal rods point them to, lo and behold, usually has water. This fact in itself is not a proper test, nor is the vast, vast body of anecdotal evidence that accompanies old superstitions like this. To properly test this hypothesis, what do we need? A control. We need some sort of benchmark to ascertain that the dowsers’ results are something more than random chance.
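Here's a minimal sketch of what such a controlled test looks like, in Python; the setup and all the numbers are hypothetical, but it shows how an exact binomial calculation separates dowsing hits from lucky guessing:

```python
from math import comb

def p_value(hits: int, trials: int, p_chance: float) -> float:
    """Chance of scoring at least `hits` successes in `trials` attempts
    if the dowser is really just guessing at rate `p_chance`."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical setup: on each of 20 trials the dowser picks one of 10
# covered spots, only one of which hides a water pipe.
trials, spots = 20, 10
chance = 1 / spots  # a pure guesser hits about 2 of 20

print(p_value(2, trials, chance))   # large: 2 hits is what guessing looks like
print(p_value(8, trials, chance))   # tiny: 8 hits would be real evidence
```

If the dowsers’ hit rate can’t beat the guessing baseline by a margin like this, the anecdotes are telling us about luck, not water.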

Dowsing is prevalent enough that a number of controlled studies have actually been done to ascertain its effectiveness. R. A. Foulkes published a study in Nature (subscription or other means needed for the full article), performed by the British Army and Ministry of Defence, showing that dowsing yields no better results than random chance. Others have published to the same effect:
– M. Martin (1983-1984). “A new controlled dowsing experiment.” Skeptical Inquirer, 8(2), 138-140.
– D. Smith (1982). “Two tests of divining in Australia.” Skeptical Inquirer, 4(4), 34-37.
(thanks to http://www.skepdic.com/dowsing.html for the links)

A small number of studies were performed that seem to confirm the efficacy of dowsing, the most comprehensive of which was done by Hans-Dieter Betz (part 1 and part 2). Unfortunately, though, J.T. Enright has pretty thoroughly discredited that study.

The scientific literature seems pretty clear.

I don’t mean to mock or pillory dowsers. Instead, I hope to show how very common practices, with wide bodies of anecdotal evidence, can persist even today without actually being true. So the next time someone makes a claim that intuitively strikes you as odd, use the tools science gives you to unpack the truth.

credit: daninz.files.wordpress.com

I’ve always been one of those people who claims that our nation’s high drinking age has caused increased binge drinking. “Look at Europe,” I’d say, “they have lower drinking ages and less crazy binge drinking.” Correlation equals causation. The argument goes that entirely blocking someone’s access to a drug makes them all the more likely to abuse it when they do eventually get access to it.

Well, I’m sad to admit, this study shows that I am wrong. The researchers analyzed data between 1979 and 2006 from over 500,000 subjects in the National Survey on Drug Use and Health. As it turns out, except in college kids (and that’s a big except), the incidence of binge drinking declined significantly as the drinking age increased from 18 to 21. Binge drinking in college students, where access to alcohol is still quite easy through students of legal age, remained roughly the same.

I always used to tell my students that I would only believe a non-intuitive claim if it was published in a peer-reviewed academic journal. Well, it’s been done, so I have to change my opinion about the drinking age and its influence on binge drinking.

That’s not to say that someone reading the methods section of this paper in the Journal of the American Academy of Child and Adolescent Psychiatry (where it’s being published this month) won’t say, “That’s not right because of x, y, and z.” If that does happen, as happens in all good science, a civil, measured discussion will ensue in the academic journals. That discussion is healthy and important for us to eventually understand the complexity of the issue. But this article now shifts the burden of proof to the other side.

Too often, and I think scientists are prone to this behavior as well, we read convincing, reliable evidence contrary to our own opinions and immediately write it off for one reason or another. Do not confuse this inclination with skepticism, which challenges claims and evidence, probing them for weaknesses. Ultimately, though, the skeptic can be won over if the argument and evidence are convincing enough. The dogmatic cannot. Dogma is dangerous in that it leads to conformity, which another recent study found to be bad for a civilization.

Dogma is the antithesis of scientific inquiry, and we must be wary that it does not seep into our thinking. Ask yourself: what would it take for you to change your mind on your most fundamental principles? If the answer is nothing, you’re in trouble.

true randomness

July 1, 2009

We’ve all rolled dice in board games and are confident that those rolls are truly random, i.e. not predictable from any measurable forces. We can’t recreate or predict any particular roll. You’ve probably also flipped a coin, which might seem easier to fix, but you’ve probably met a good deal of frustration if you tried to do so.

We take this kind of randomness for granted, but what’s interesting is that for all of our computational prowess, we are unable to create random numbers in computers. Of course, we can create pseudorandom numbers, but not the real deal.

Computers often generate pseudorandom numbers by taking a starting number, or seed, and then applying complicated functions to it to produce each successive number. If you supply the same seed to the randomizing function, you’ll get the same (effectively infinite) stream of “random” numbers out. This property of pseudorandom numbers is actually quite useful when building and debugging computer programs because it allows the programmer to recreate seemingly random scenarios for testing.
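A quick sketch in Python (any language with a seedable generator behaves the same way) shows the property:

```python
import random

# Two generators started from the same seed emit the identical stream.
a = random.Random(42)
b = random.Random(42)
assert [a.randint(1, 6) for _ in range(10)] == [b.randint(1, 6) for _ in range(10)]

# A different seed gives a different, but equally repeatable, stream.
c = random.Random(7)
d = random.Random(7)
assert [c.random() for _ in range(5)] == [d.random() for _ in range(5)]
```

This is exactly why a failing test involving “random” data can be replayed deterministically: log the seed, then rerun with it.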

When computers actually try to generate random numbers, they often use the clock’s timestamp as a seed, since it’s never the same twice. While this technique does generate an unpredictable-looking stream of numbers, it’s still, in a way, determined: given the seed value at a specific time, we can predict exactly what each “random” number will be.
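That determinism has teeth. In this hypothetical sketch, anyone who knows (or can brute-force) the timestamp a program used as its seed can replay the entire “random” stream:

```python
import random

# Suppose a program seeded its generator with the epoch second at launch.
launch_time = 1_246_406_400  # hypothetical: July 1, 2009, 00:00:00 UTC

victim = random.Random(launch_time)
secret_pin = [victim.randint(0, 9) for _ in range(6)]

# Knowing the timestamp is knowing the seed -- and the seed is the stream.
attacker = random.Random(launch_time)
assert [attacker.randint(0, 9) for _ in range(6)] == secret_pin
```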

In order to achieve true randomness, programmers have had to turn to the real world. Sites like random.org, which generates random numbers from atmospheric noise, and (more recently) the Dice-O-Matic hopper, which physically performs millions of dice rolls a day (follow the link for a cool video of it in action), serve up genuine random numbers.

What’s interesting is that, in theory, these numbers aren’t really random either. Sure, they’re products of chaos, but they each have predictable forces acting upon them that cause them to behave in a predictable way (remember, quantum uncertainty operates on much, much smaller scales than dice rolling). What makes them effectively random, though, is that there are so many interacting forces that it’s impossible for us to compute them with today’s computing power. But Laplace’s demon could figure it out, which makes me wonder whether some day in the future we will be able to predict the outcome of a dice roll, taking us one small step closer to predicting the future itself.

(I’m currently reading a very interesting book by Daniel Dennett entitled Freedom Evolves, about how to have free will in a determined universe. If this kind of stuff interests you, I’d certainly recommend it.)

anticipating wrongdoing

June 27, 2009

Those of you who remember the movie Minority Report, with Tom Cruise, are familiar with the idea of anticipating someone’s future wrongdoing and then taking preventative action against it. It’s an interesting idea, but when the movie came out in 2002, it was still science fiction. Now, it seems we could be getting closer to something like it, judging from the preliminary (unpublished) results that Vincent Clark of the University of New Mexico at Albuquerque presented in a talk at the Organization for Human Brain Mapping conference.

Clark claims that he can predict which drug addicts will relapse after treatment with 89% accuracy, using both traditional psychiatric techniques and fMRI brain imaging. He used 400 subjects in his decade-long study. What’s interesting about this approach is that it involves a more serious level of quantitative analysis (from the fMRI) than most psychiatric evaluations and thus would be a more rigorous metric by which to measure patients against a standard.

While determining whether patients in treatment will relapse (and thus might need more treatment) is a beneficial evaluation for both society and the patient, it’s not hard to extend this type of test to a more ethically difficult scenario. Suppose someone develops a test that, with 90% accuracy, determines (via MRI or some other such technique) whether a violent offender in prison will commit a repeat act of violence after being paroled. I think we’re a ways off (if it’s even possible) from such a test, but still, the thought experiment is interesting.

How would our criminal justice system handle such a test? Since the ostensible goal of our penitentiaries is to “reform” those who’ve done wrong, could such a test be used to determine at what point someone’s been “reformed”? How do we balance the idea of reform with the idea of penance, a similarly old but quite different justification for imprisoning someone? How much validation of such a test would we need before actually implementing it, since an incorrect diagnosis could lead either to additional harm to citizens or to wrongful confinement? Is there any (non-100%) level of accuracy that would be acceptable?

It strikes me that implementing a test like this in our criminal justice system would force us to rework a good deal of the philosophy behind locking people up (which I don’t think would be a bad thing). It’s an interesting thought experiment now, but perhaps in a few decades it will become a reality.

faith in AI

June 22, 2009


Namit Arora at 3quarksdaily has a very interesting and thoughtful post about the future (or lack thereof) of true artificial intelligence.

He does a good job of tracing the major phases of AI design, from essentially large databases to the more modern neural networks. He points out that while AIs have become more and more capable of solving well-defined problems (although one could argue we’ve expanded the set of solvable well-defined problems a great deal over the years), ultimately they will fail to reach the truly human je ne sais quoi because they are unable to become completely immersed in the human experience of emotions, relationships, and even the simple interactions between objects and things in our world. (Arora borrows many of these ideas, which I am only briefly paraphrasing, from Hubert L. Dreyfus, who borrows from Heidegger.)

While I agree that we are nowhere near the singularity, whatever Ray Kurzweil would have you believe, I disagree that we are no nearer than when we started in the early days of artificial intelligence (that is, the 60s and 70s).

A big shift in the development of AI, in my opinion, was moving away from the teleological view of intelligence, away from “This is how we think the mind works, so this is how we’re going to program our AI.” The transition from symbolic (brute force) AI to neural networks marks a large shift in that it’s basically an acknowledgement that we programmers don’t know how to solve every problem. Now, what we still know how to do (and must do, for now at least) is define our problems. Thus, if I make an AI to solve a certain problem, I may run it through millions of machine-learning iterations so that it can figure out the best way to solve that problem, but I’m still defining the parameters, the heuristics that let the program determine whether the current technique it’s testing is good or not.
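That division of labor can be sketched in a toy example (entirely hypothetical, not any particular AI system): the programmer supplies only the fitness heuristic, and a blind mutate-and-keep loop discovers the solution on its own.

```python
import random

# The "problem definition" the programmer supplies: a hidden target bit
# string and a heuristic scoring how close a candidate is to it.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(candidate):
    """Programmer-supplied heuristic: how many bits match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

rng = random.Random(0)                        # seeded for reproducibility
best = [rng.randint(0, 1) for _ in TARGET]    # start from random guess
while fitness(best) < len(TARGET):
    mutant = best[:]
    mutant[rng.randrange(len(mutant))] ^= 1   # flip one random bit
    if fitness(mutant) >= fitness(best):      # keep it only if no worse
        best = mutant

assert best == TARGET
```

The loop never “knows” the solution technique; it only knows the score the programmer gave it, which is exactly the point: we define the problem and the heuristic, and the search does the rest.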

I agree that this approach, while yielding many powerful problem-solving applications, is ultimately doomed. But in pursuing it, we have bootstrapped ourselves into a less well-defined area of AI. If you believe (as I do, although I don’t like the religious aspects of the word “believe”) that the brain is simply a collection of interconnected cells and nothing else, then in theory we can recreate it in silicon. The problem arises in determining how the cells (nodes, in comp sci language) are interconnected. How can we even know where to start?

And here’s where the faith aspect comes in; I’ll call it what it is. As our understanding of the functional aspects of the brain improves (thanks to all the tools of modern technology), and as our computational processing and storage capabilities grow, I find it hard to think that we will not ultimately get there.

Yes, we will probably need a more philosophical view of what it means to be human and sentient. Yes, it will probably take us a long, long time from now, perhaps even after my lifetime, but remember, the field is incredibly new. I’m heartened by work done by Jeff Krichmar’s group at UC Irvine with neurobots in approaching the idea of intelligence from a non-bounded perspective.

As our technology and understanding of intelligence improves, I simply cannot believe (and here, perhaps, I am using a more religious flavor) that our quest to understand ourselves would allow us to abandon this project.

21st century shop class

June 17, 2009


A report (summary, handy comparison tool) put out today by the American Institutes for Research describes the state of American math proficiency in the fourth and eighth grades compared to many other countries.

The gist, surprise surprise, is that the U.S. lags a good bit behind Asian countries like Japan, Taiwan, South Korea, Singapore, and Hong Kong. Many have called for the U.S. to reinvigorate its math and science education so that we can churn out more scientists and engineers to keep up with everyone else in the market of ideas, and ideas abound for how to do this.

I won’t advocate for or against any of these proposals, but instead will throw one of my own ideas into the ring.

If we’re going to throw money at the dearth of kids going into science and engineering careers, let’s start by making science and engineering more than just intellectually interesting. I think the percentage of kids who like building stuff and tinkering is far larger than the percentage who eventually decide they like the strictly academic aspects of these fields. Unfortunately, many fall away before the end of high school because “they’re not good at math,” or they find it “boring” or “not relevant to their lives.” Ultimately, to make it into science and engineering fields, you must overcome these obstacles, but we can give kids a good bit of the kinetic energy required to get over those hills. Simply put: make them like the actual practice of science and engineering, and worry about the theory later.

(You’ll notice that I’ve seemingly forgotten about math, but while the mathematicians usually have a different philosophy than scientists and engineers, I think we can also overcome the barriers that middle and high school math poses in a similar way as the science barriers through my proposal).

Whatever happened to shop class, where you build birdhouses out of wood and maybe even get to do some metalworking? I haven’t heard of its surviving in many schools (public or private) these days, and its disappearance makes sense given ever-tightening budgets and ever-escalating safety concerns (how many people trust an eighth grader with a bandsaw?). While making birdhouses and the like can be quite enjoyable, we should extend the class to encompass a much broader spectrum of the sciences.

We need a new kind of shop class (or, even better, middle-through-high-school curriculum) where the students work on well-defined projects that give them latitude to be creative and to take initiative. These projects can be things like customizing bicycles, making trebuchets, creating robots to do small tasks, developing simple wind turbines and fuel cells and solar arrays, taking apart a car engine, or building an electric motor. The possibilities are many and quite scalable in complexity and knowledge prerequisites. In sixth grade, students can build and program simple robots using Lego Mindstorms kits, and in twelfth grade they can build them using breadboards, servos, and real programming. Students could construct a composting chamber where they introduce different types of organisms, test the chemical properties of the chamber, and monitor its progress over time. Sites like Make Magazine and Instructables are full of projects like this.

Projects can run the gamut across biology, chemistry, physics, and environmental science. The key, though, is that they must be self-directed and involve actually building things. Students can learn the practical knowledge necessary along the way. Self-directed learning — another important trait of scientists and engineers. The teacher would take a very hands-off approach, walking around providing little bits of guidance here and there but mostly staying out of the way.

I’ve seen classes set up like this. They’re art classes and are often incredibly effective. Students come in on their free periods because it’s relaxing, almost, to work without direct teacher involvement and produce something they’re proud of.

Of course, in order to run a class like this, it takes a teacher who is energetic and motivated. Anyone can teach science from a textbook.

A class like this could be an elective, a supplement, because I doubt many schools will want to give up on the “hard sciences.” It should be an elective, for the main goal is to give kids the motivation they need to make it through the more academic parts of the disciplines.

I don’t fool myself that many schools will adopt a class like this, or that it will become any kind of national initiative, but when I’m back teaching in a high school, I’ll do whatever it takes to get a class like this underway.

teaching argument

June 15, 2009

Aristotle (credit: Wikimedia)

I recently came across this article by Jay Heinrichs about teaching young kids to argue. I highly recommend reading it (it’s short), but I’ll summarize and perhaps add a bit of my own thoughts here.

While the thought of teaching kids to argue more seems crazy, the point of the article is that learning the formal framework of arguing makes kids much more civil and far more persuasive. It teaches them that acting out of emotion (i.e. a temper tantrum) is not an effective way of convincing another person of something.

Heinrichs explains what I think is one of the most important concepts of polemical discourse: an argument is not a fight. Unfortunately, many people think arguing is by nature a negative activity, like a fight is. Arguments are much more subtle and necessarily involve winning the other side over, which prevents the negative emotions like revenge and bitterness that often come from losing a fight.

Too many people, at least by my anecdotal evidence, are afraid of any type of conflict, including argument. Arguments get ideas on the table, where they can be aired, exercised, and dismissed if found weak. Furthermore, arguing builds social bonds, because in arguing with someone you’re acknowledging that they have a point at least worth listening to and considering. And, as Heinrichs explains, argument helps eliminate two banes of social and professional relationships: passive aggressiveness and groupthink, respectively.

Heinrichs approaches the framework of argument from the classical, Aristotelian perspective, by breaking an argument down into its three constituent parts: logos, ethos, and pathos. Simplistically, logos is the logical part of an argument (“Having a later curfew would allow me to bond more with my peers”). Ethos is the part of the argument that appeals to the character of a person (“I’m a good guy and thus would never stay out after curfew”). And pathos is the part of the argument that appeals to the emotions of the other side (“Don’t you trust me?”).

I don’t know much about formal argument or rhetoric, but I like the didactic nature that this approach implies. When I have kids and teach them about arguing, I’ll have to study up.

Not only does teaching kids to argue make them better at getting their way (which most of us can acknowledge is in sum a good thing for a person in life), it also makes them far less susceptible to the rhetorical tricks of others (“drill here, drill now!”). In short, it teaches them not to accept what anyone says at face value. It teaches them to dissect an argument aimed at them into its parts, thus (often) rendering it impotent. It teaches them to be skeptics, and we need more skeptics in our world.

Heinrichs also mentions the very useful effects this process has when his kids encounter advertising. Advertising is simply an argument to buy a product, and the ability to deconstruct that argument renders most advertising useless.

We need to get back to a more civil way of disagreeing about politics and everything else, and the best way to get there is to teach people — especially children — the art of argument.

A depiction of entanglement (credit: Nature Physics cover, Volume 2, No 10)

The most recent issue of Nature reports a new study involving entanglement. For those not familiar with or a bit hazy on entanglement, here’s my best description:

The basic idea is that we can entangle two quantum particles, like electrons, such that one or more of their properties are inextricably related. This means that if I have two entangled electrons and I measure the spin (think of it like planetary rotation on an axis) of one electron, the spin of the other electron necessarily becomes the opposite direction. So if I measure the spin of one electron here on Earth to be “up,” immediately (I’m not sure if it’s truly instantaneous or limited to the speed of light; I’m asking a particle physics Ph.D., who’s going to get back to me) an entangled electron on Mars would have a spin of “down.”

What’s interesting about this new research is that it involves the mechanical oscillation of ions (beryllium). Previous entanglement experiments have only been done with electrons and particles of that scale (ions, i.e. atoms, are tens of thousands of times more massive than electrons), and none have dealt with a mechanical property like moving back and forth.

But what’s really interesting about this whole phenomenon is that it looks like a new way of communicating without radio waves or any other type of electromagnetic radiation (whether usable information can actually be sent this way is a real question; the correlations alone don’t obviously carry a message). I can imagine cool sci-fi scenarios in which we have entangled communication devices that let us talk efficiently with people and their devices in other galaxies (although, again, the whole instantaneous-or-speed-of-light question comes back into play over large distances).

The process of entanglement also raises much more profound metaphysical questions. Most of us (I believe) think of objects — an apple, my computer, a person — as distinctly different things, all objects in the world, yes, but distinct objects nonetheless. An idea called monism holds that (very simplistically) everything is really part of one thing. Just as individual waves in the ocean are really just wrinkles in one body of water, the apple, computer, and person are wrinkles of one large thing that contains every seemingly (but not really) individual thing in the universe.

Now, I recognize that this idea seems kind of crazy, but the existence of entanglement makes it seem much less crazy. If things in our universe are really completely distinct, why should they be able to influence each other from millions of miles away? Keep in mind that there’s no electromagnetic radiation or other normal “communication” between the two particles. If seemingly separate things are actually little wrinkles of one thing, this relationship between entangled particles makes much more sense.

Monism is still a bit too far out there for me to accept it at the moment, but more evidence of increasingly complex entanglement really makes me consider monism seriously as a way of understanding our universe.