The sarcomere is the functional unit of skeletal muscles (credit: Wikimedia)

I was out chopping wood yesterday (which, let me tell you, is incredibly good exercise) and today have distinct muscle soreness in my lower back. I’m sure most of you have experienced this post-exercise soreness at some point or another, so I decided to learn what I could about what’s called Delayed Onset Muscle Soreness (DOMS). (Most of my information came from this solid review of DOMS by Priscilla Clarkson.)

The functional unit of your skeletal muscle (as opposed to smooth muscle) is the sarcomere. Much of the damage that results in DOMS involves injury to parts of the sarcomere. Specifically, the Z-lines and the protein that holds them together, desmin, are disrupted. As the individual muscle fibers are strained, the extracellular matrix (the scaffold of proteins outside the cell that supports your cells, including muscle fibers) is pulled away from the fibers. Furthermore, there’s evidence that small capillaries are broken in this process as well, which helps trigger an inflammatory response by your body.

One of the hallmarks of muscle injury and its subsequent DOMS is a loss of strength in that muscle. The most prevalent explanation for this loss is what’s known as the “popping sarcomere hypothesis,” which posits that the irregular lengthening of individual sarcomeres (that is, some get stretched out a lot, others not as much, rather than all of them an equal amount) causes the muscle fiber to be weaker (since the overlap between the thick myosin and thin actin filaments is smaller in a stretched-out sarcomere).
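To see why that overlap matters, here’s a toy sketch of the classic sarcomere length-tension curve. The breakpoints are rough approximations from the textbook frog-muscle experiments, not from Clarkson’s review, so treat the numbers as illustrative:

```python
# Toy piecewise-linear length-tension curve for a single sarcomere.
# Breakpoint lengths (in micrometers) are rough textbook values,
# not exact physiology.

def relative_tension(length_um: float) -> float:
    """Fraction of maximum tension a sarcomere can produce at a given length."""
    if length_um <= 1.27 or length_um >= 3.65:
        return 0.0                                # no useful filament overlap
    if length_um < 2.0:                           # ascending limb: filaments jam
        return (length_um - 1.27) / (2.0 - 1.27)
    if length_um <= 2.2:                          # plateau: optimal overlap
        return 1.0
    return (3.65 - length_um) / (3.65 - 2.2)      # descending limb: overlap shrinks

# A "popped" sarcomere stretched to 3.0 um is much weaker than its
# neighbors sitting near the 2.1 um optimum:
print(relative_tension(2.1))              # 1.0
print(round(relative_tension(3.0), 2))    # ~0.45
```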

Finally, what causes the thing we care most about: the soreness? It comes in part from the tissue swelling and fluid pressure in the muscle. (For those of you who doubt swelling hurts, I can tell you that my forearm, which is the size of Popeye’s thanks to a wasp sting yesterday, hurts a good deal even though the pain of the sting is gone.) It’s also suspected (although not definitively proven) that the pain comes from the release of molecules (histamines, bradykinins, and prostaglandins) from damaged cells that activate the type III and IV afferent nerves, which basically carry pain signals to the brain.

So the next time you feel that soreness the day after some hard working out, think of your out-of-whack Z-lines and painful histamines, but know that your body’s smart enough to replace that damaged muscle with more, so you won’t have the same reaction a second time (unless, of course, you damage it more).

scientific skepticism

August 3, 2009

Dowsing (credit: skepdic.com)

(I realize I may be preaching to the choir here, but I still feel this topic deserves some text.)

One of the most useful (I mean in everyday life) aspects of the scientific method is the tradition of reasoned skepticism that it teaches us.

If you live in the country, there’s a good chance that you get your water from a well. The person who dug your well had to decide where to place it, and it’s likely that he or she chose that location using dowsing (sometimes called witching or divining).

Dowsing is the practice of walking around with metal rods or a willow switch that supposedly tell the user where something (usually water) is. Many, many people swear by it for finding water and try to explain it with (science-esque) “electric fields” and “vibrations,” but really it’s no more than well-ingrained superstition. (For a more favorable explanation, go here.)

Dowsers understandably believe that the technique works because the spot where the rods say there’s water, lo and behold, usually has water. But this fact in itself is not a proper test, nor is the vast, vast body of anecdotal evidence that accompanies old superstitions like this. To properly test the hypothesis, what do we need? A control. We need some sort of benchmark to show that the dowsers’ results are something more than random chance.
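For the curious, here’s a minimal sketch of what such a controlled test might look like, under a protocol I’ve made up for illustration (water hidden at one of 10 stations, 30 trials): simulate a dowser guessing at random and ask how often luck alone would produce a given number of hits.

```python
import random
from math import comb

def run_trials(n_trials: int = 30, n_stations: int = 10) -> int:
    """Simulate a dowser with no real ability: the water's location and
    the dowser's pick are independent random choices each trial."""
    hits = 0
    for _ in range(n_trials):
        water_at = random.randrange(n_stations)
        dowser_pick = random.randrange(n_stations)  # swap in real picks here
        hits += (dowser_pick == water_at)
    return hits

def p_value(hits: int, n_trials: int, p_chance: float) -> float:
    """One-sided binomial test: chance of scoring >= hits by luck alone."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(hits, n_trials + 1))

hits = run_trials()
print(hits, p_value(hits, 30, 1 / 10))
# A large p-value means the "dowser" is indistinguishable from guessing.
```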

Such is dowsing’s prevalence that a number of studies have actually been done to ascertain its effectiveness. R. A. Foulkes published a study in Nature (subscription or other means needed for the full article), performed by the British Army and Ministry of Defence, showing that dowsing yields no better results than random chance. Others have published to the same effect:
– M. Martin (1983-1984). “A new controlled dowsing experiment.” Skeptical Inquirer, 8(2), 138-140.
– D. Smith (1982). “Two tests of divining in Australia.” Skeptical Inquirer, 4(4), 34-37.
(thanks to http://www.skepdic.com/dowsing.html for the links)

A small number of studies were performed that seem to confirm the efficacy of dowsing, the most comprehensive of which was done by Hans-Dieter Betz (part 1 and part 2). Unfortunately, though, J.T. Enright has pretty thoroughly discredited that study.

The scientific literature seems pretty clear.

I don’t mean to mock or pillory dowsers. Instead, I hope to show how very common practices, backed by wide bodies of anecdotal evidence, can persist even today without actually being true. So the next time someone makes a claim that intuitively strikes you as odd, use the tools science gives you to unpack the truth.

I’m usually not one for hemming in research in any field or direction, even when those directions hold potential ethical pitfalls (human cloning, for example). However, the attempt to develop autonomous and ethical robots for use in any wartime situation completely crosses the line.

I should distinguish between autonomous and remote-controlled robots. Autonomous robots receive no human input for their direct actions; they are capable of making decisions for themselves and then acting on them. The military already uses remote-controlled robots for handling IEDs and scouting. Such technology is merely another extension of a human controller’s will and (in my opinion) completely ethical.

In an interview with h+ magazine, though, Ronald Arkin of Georgia Tech discusses creating an “ethical governor” to ensure that future autonomous robots don’t break the “rules of war.” I can see the allure of having robots on the battlefield: they’re expendable, entirely rational, and have faster reaction times than humans. Here are the “rules” that Arkin suggests:

1. Engage and neutralize targets as combatants according to the ROE.
2. Return fire with fire proportionately.
3. Minimize collateral damage — intentionally minimize harm to noncombatants.
4. If uncertain, invoke tactical maneuvers to reassess combatant status.
5. Recognize surrender and hold POW until captured by human forces.

We are so, so far from having any kind of autonomous robot that can intelligently follow these rules that it’s not even worth spending the time on them. We would need a solid model of human intelligence up and running before we could even think about creating a robot that can discern and apply these rules. If current soldiers have trouble distinguishing between “enemy combatants” and civilians, how in the world will a robot be able to do it?
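To make that concrete, here’s a deliberately naive sketch of what encoding Arkin’s rules might look like (the names and threshold are my own illustration, not his actual governor). Notice that the if/else logic is the trivial part; all the difficulty hides inside inputs like is_combatant and is_surrendering, which presuppose exactly the perception and judgment we don’t know how to build.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool     # in reality: an unsolved perception problem
    is_surrendering: bool  # ditto
    confidence: float      # classifier certainty, 0..1 (hypothetical)

def governor(target: Target, incoming_fire: bool) -> str:
    """Naive encoding of the five rules. Writing this is easy;
    producing trustworthy inputs for it is the unsolved part."""
    if target.confidence < 0.95:
        return "maneuver-and-reassess"       # rule 4: uncertain -> don't shoot
    if target.is_surrendering:
        return "hold-as-pow"                 # rule 5
    if not target.is_combatant:
        return "do-not-engage"               # rule 3: spare noncombatants
    if incoming_fire:
        return "return-proportionate-fire"   # rule 2
    return "engage-per-roe"                  # rule 1

print(governor(Target(True, False, 0.6), incoming_fire=True))
# -> "maneuver-and-reassess", because the classifier isn't sure enough
```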

This type of research falls right in line with Reagan’s Star Wars: dubious (and that’s probably too generous) effectiveness and absurd cost. The problem is that too many of us have naive fantasies of robots fighting our wars. Let’s grow up and spend our money more wisely than that, eh?

When we reach the singularity and finally develop a robust artificial intelligence that parallels our own, which, I guarantee, will not be in any of our lifetimes (although Ray Kurzweil would have you believe otherwise), then we can start thinking about the rules for our warrior bots.

Fair weather cumulus clouds (credit: Wikimedia)

Even though it’s been around for over fifty years, the idea of controlling the amount of precipitation in an area with chemicals still seems quite futuristic to me. I know many ski resorts seed their environs for more fresh powder and had heard stories of China preventing rain during its opening Olympic ceremony last year, but this new report about China’s efforts to again ensure dry skies for next year’s Asian Games got me wondering just how cloud seeding (as it’s called) works. Here’s a brief discussion from my research:

The entire process revolves around the phases of water in clouds. Such weather control can be used either to promote precipitation (rain or snow) or to inhibit it (rain or, often, hail).

The water in clouds is very, very cold due to its height in the atmosphere (liquid water there often sits well below its freezing point, a state called supercooling). The problem is that in order for the vapor to turn into liquid or solid droplets, it usually needs a seed or starting particle (natural dust particles usually serve this role). One of the most common seeding chemicals, silver iodide, has a crystalline structure very similar to that of ice and is used to start these water (or ice) particles forming in the cloud.
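If you want to see why a starter particle matters, the Kelvin equation gives the critical radius a droplet must reach before it grows rather than evaporates. Here’s a quick sketch with standard constants; the temperature and supersaturation are illustrative choices of mine:

```python
from math import log

# Kelvin equation: r* = 2 * sigma * V_m / (R * T * ln(S)).
# Droplets smaller than r* evaporate; a seed particle lets a droplet
# start life already larger than r*.

SIGMA = 0.072  # N/m, surface tension of water
V_M = 1.8e-5   # m^3/mol, molar volume of liquid water
R = 8.314      # J/(mol*K), gas constant

def critical_radius(T: float, S: float) -> float:
    """Smallest droplet radius (m) that survives at supersaturation S = p/p_sat."""
    return 2 * SIGMA * V_M / (R * T * log(S))

# At a modest 1% supersaturation, r* is ~0.12 um, far larger than any
# random cluster of water molecules -- hence the need for a seed:
print(critical_radius(T=263.0, S=1.01))  # ~1.2e-07 m
```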

Other chemicals, like dry ice (solid CO2), liquid nitrogen, or liquid propane, can be used to cool the water vapor so much that it spontaneously forms small droplets without the need for a starter particle, so to speak.

When enough of these little droplets form in a cloud, they can start clumping together, and eventually the droplets become so large that the air currents can no longer support them, and they fall to earth as rain, snow, or hail.
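To put a rough number on “so large the air can no longer support them,” here’s a back-of-the-envelope sketch using Stokes’ law. It’s only valid for small droplets (low Reynolds number), so treat the largest-drop figure as order-of-magnitude only:

```python
# Stokes' law terminal velocity of a water droplet in air
# (buoyancy of air neglected; valid only for small, slow droplets).

G = 9.81            # m/s^2, gravitational acceleration
RHO_WATER = 1000.0  # kg/m^3, density of water
MU_AIR = 1.8e-5     # Pa*s, dynamic viscosity of air

def terminal_velocity(radius_m: float) -> float:
    """Settling speed (m/s) at which drag balances the droplet's weight."""
    return 2 * radius_m**2 * G * RHO_WATER / (9 * MU_AIR)

for r_um in (1, 10, 100):
    v = terminal_velocity(r_um * 1e-6)
    print(f"{r_um:>4} um droplet falls at ~{v:.4f} m/s")
# A 1 um cloud droplet falls at ~0.0001 m/s, so the slightest updraft holds
# it aloft; a 100 um drizzle drop falls around 1 m/s and rains out.
```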

However, if you’re like China and want to avoid precipitation, you can seed the clouds just a little bit; the ice particles produced then form at the expense of the natural water droplets, reducing their size and their likelihood of falling to earth.

It’s somewhat hard to measure exactly how efficacious cloud seeding actually is, since there’s no way to run a counterfactual weather experiment, but it’s been successful enough that a number of countries, including the U.S., China, Russia, and Australia, have used it at one point or another (China’s the most aggressive about it).

For those of you who are a bit wary of dropping chemicals like silver iodide into the sky, it seems that in the amounts they’re currently being used, the health and environmental impacts are negligible.

As many of you may know, there’s talk of creating more clouds in the atmosphere, either by spraying water droplets up from the sea with vast fleets of autonomous sailboats or by dropping seeding chemicals from the sky, as previously discussed. It’ll be interesting to see how this last-ditch option changes (or doesn’t) as our understanding of weather control improves.

Resources:
http://en.wikipedia.org/wiki/Cloud_seeding
http://www.lightwatcher.com/chemtrails/cloud_seeding.html
http://theblanketeffect.blogspot.com/2008/02/note-we-conclude-our-series-featuring.html

a free lunch?

July 28, 2009

These two mice were fed the same high fat diet. The top mouse's liver cells were engineered to metabolize fat directly into carbon dioxide. (credit: Jason Dean, University of California, Los Angeles)

Don’t you wish your body could just get rid of that extra fat by itself, without the pesky exercise or dieting? It may not be as far off as you think.

In the June edition of Cell Metabolism, James Liao’s group reports that it succeeded in reducing diet-induced obesity in mice fed a high-fat diet (Technology Review also has a nice article on it). They did this by splicing something called the glyoxylate shunt from E. coli into the mice’s DNA.

When our body wants to use fat, it breaks the fat down and often converts it to carbohydrates (mostly glucose), whose excess can have all sorts of pernicious effects (e.g., diabetes). With this new glyoxylate shunt pathway, the mouse liver cells metabolize the fat directly into CO2, which is absorbed into the bloodstream and simply exhaled.

While employing this technique in humans is still far, far away, it’s a nice reminder of the fruits that genetic engineering and synthetic biology will eventually bear.

What’s interesting, though, is that short of genetically engineering humans (which I doubt we’re close to doing any time soon, for both technological and ethical reasons), we’re a good ways off from being able to change our own DNA and thus use what we’ve discovered in other animal models. Almost all current cell-level treatment involves getting various molecules into cells rather than actually changing their DNA (although see gene therapy).

Adderall (credit: http://www.michaelshouse.com)

If you aren’t familiar with the debate about neuroenhancers, you should definitely read up on it. It’s one of the most interesting contemporary debates going in the public health/education/pharmacology realm. For those of you who aren’t familiar with it, here’s a quick primer: the use of neuroenhancers like Ritalin, Adderall, Modafinil, and others has become more prevalent among people (often students but increasingly professionals too) wishing to squeeze a bit more productivity out of their lives. (This discussion does not include the legal and appropriate use of these drugs for clinical learning disabilities like ADHD.) If you have a half hour, I’d highly suggest Margaret Talbot’s excellent New Yorker article on the issue.

Those of you not familiar with the debate may have an immediate and understandable reaction against the use of neuroenhancers. To those people, I urge you to consider the difference between taking a stimulant in pill form (as these come) and one in drink form, as our beloved coffee comes.

Unlike steroids, these drugs don’t yet have well-documented health consequences for human use, which makes their use harder to damn. After all, we’ve been prescribing these stimulants (n.b., Modafinil, a sleep-disorder drug, works differently than Ritalin and Adderall and is not an amphetamine) for years, seemingly without negative consequences.

And yet, Edmund Higgins, a professor of family medicine and psychiatry, has an interesting piece in Scientific American that discusses some recent (mostly animal) studies indicating that the health consequences of stimulants like Adderall are more complicated than previously thought. Animals subjected to regimens of similar stimulants displayed signs of anxiety, depression, and long-term “cognitive defects.” While it’s important to distinguish between animal studies and clinical studies, the results of the latter often follow the former.

While the article is directed more at the over-prescription of ADHD drugs for kids, its contents certainly bear on (what I think is) a more interesting discussion. Once the health consequences of neuroenhancers become apparent, the case for their recreational (academic or professional) use becomes much harder to make.

What’s still unclear is how (if at all) intermittent use affects our long-term mental and cognitive health. Still, these new studies color a discussion that will only become more important in the future.

The photo taken (and debris identified) by amateur astronomer Anthony Wesley

Science News recently reported that amateur astronomer Anthony Wesley (home page here) has documented that something big slammed into Jupiter, causing what is apparently called an impact scar. This is only the second impact scar recorded on a large gaseous planet. While I imagine this occurrence is interesting (to astronomers), what’s far more interesting to me is that the discovery was made by an amateur scientist.

In our current world of NIH/NSF-funded research, where it takes at least 3 grad students, 2 postdocs, and a PI (principal investigator, the person who runs the lab) to make any scientific discovery, it’s incredibly refreshing to know that the realm of science is still open to the amateur. The divisions between “professional” scientists and everyone else compartmentalize the field(s), which not only reduces the number of future scientists but also erodes general scientific literacy.

Scientific illiteracy not only deprives people of the amazing insights that knowledge can bring (nerd alert, I know) but also fails to inoculate them against charlatan-speak (“there is a healthy debate about global warming among scientists”).

So score one for the everyman. Remember, all the old scientists — Bacon, Boyle, Descartes, Charles, Kepler, Leibniz, Einstein (before he struck it big) — were amateurs in their fields.