July 31, 2009
I’m usually not one for hemming in research in any field or direction, even when that direction holds potential ethical pitfalls (human cloning, for example). However, the attempt to develop autonomous, ethical robots for use in any wartime situation completely crosses the line.
I should distinguish between autonomous and remote-controlled robots. Autonomous robots receive no human input for their direct actions. They are capable of making decisions for themselves and then acting on them. The military already uses remote-controlled robots for handling IEDs and scouting. Such technology is merely another extension of a human controller’s will and (in my opinion) completely ethical.
In an interview with h+ magazine, though, Ronald Arkin of Georgia Tech discusses creating an “ethical governor” to ensure that future autonomous robots don’t break the “rules of war.” I can see the allure of having robots on the battlefield: they’re expendable, entirely rational, and have faster reaction times than humans. Here are the “rules” that Arkin suggests:
1. Engage and neutralize targets as combatants according to the ROE.
2. Return fire with fire proportionately.
3. Minimize collateral damage — intentionally minimize harm to noncombatants.
4. If uncertain, invoke tactical maneuvers to reassess combatant status.
5. Recognize surrender and hold POW until captured by human forces.
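To make the thought experiment concrete, Arkin’s five rules amount to a priority-ordered decision procedure. Here is a purely hypothetical toy sketch in Python; every name and field is invented for illustration, and the hard part (actually perceiving combatant status, proportionality, and surrender) is precisely what no current system can do:

```python
# Toy sketch of a rule-based "ethical governor" gate. All names are
# hypothetical; real perception of these fields is the unsolved problem.

def governor_decision(target):
    """Return an action string for a proposed engagement."""
    if target.get("surrendering"):
        return "hold-as-pow"          # rule 5: recognize surrender
    if target.get("combatant") is None:
        return "reassess"             # rule 4: uncertain status
    if not target["combatant"]:
        return "do-not-engage"        # rule 3: spare noncombatants
    if target.get("incoming_fire", 0) < target.get("planned_response", 0):
        return "reduce-force"         # rule 2: proportional response
    return "engage-per-roe"           # rule 1: engage per the ROE

print(governor_decision({"surrendering": True}))   # hold-as-pow
print(governor_decision({"combatant": None}))      # reassess
```

Even this trivial gate only works because the inputs are handed to it as clean booleans; the entire difficulty lies in producing those booleans from messy reality.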
We are so, so far from having any kind of autonomous robot that can intelligently follow these rules that it’s not even worth spending time on them. We would need a solid model of human intelligence up and running before we could even think about creating a robot that can discern and apply these rules. If current soldiers have trouble distinguishing between “enemy combatants” and civilians, how in the world will a robot be able to do it?
This type of research falls right in line with Reagan’s Star Wars: dubious effectiveness (“dubious” is probably too generous) and absurd cost. The problem is that too many of us have naive fantasies of robots fighting our wars. Let’s grow up and spend our money more wisely than that, eh?
When we reach the singularity and finally develop a robust artificial intelligence that parallels our own (which, I guarantee, will not happen in any of our lifetimes, although Ray Kurzweil would have you believe otherwise), then we can start thinking about the rules for our warrior bots.
July 29, 2009
Even though it’s been around for over fifty years, the idea of controlling the amount of precipitation in an area with chemicals still seems quite futuristic to me. I know many ski resorts seed their environs for more fresh powder, and I had heard stories of China preventing rain during its opening Olympic ceremony last year, but this new report about China’s efforts to again ensure dry skies for next year’s Asian Games got me wondering just how cloud seeding (as it’s called) works. Here’s a brief discussion from my research:
The entire process revolves around the phases of water in clouds. Seeding can be used either to promote precipitation (rain or snow) or to inhibit it (rain or, often, hail).
The water vapor in clouds is very, very cold (well below its freezing point, a state called supercooling) due to its height in the atmosphere. The catch is that in order for the vapor to turn into liquid or solid droplets, it usually needs a seed or starting particle (natural dust usually serves this role). Silver iodide, one of the most common seeding chemicals, has a crystalline structure very similar to that of ice, which lets it kick-start the formation of water (or ice) particles in the cloud.
Other coolants, like dry ice (solid CO2), liquid nitrogen, or liquid propane, can chill the water vapor so much that it spontaneously forms small droplets without the need for a starter particle, so to speak.
When enough of these little droplets in a cloud form, they can start clumping together, and eventually the droplets become so large that the air currents can no longer support them, and they fall to earth as either rain, snow, or hail.
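The “air currents can no longer support them” step can be made quantitative with a back-of-envelope calculation. The sketch below (my numbers, not from the report) uses Stokes’ law, which gives the settling speed of a small sphere as v = 2r²g(ρ_water − ρ_air)/(9μ); it’s only valid for tiny droplets at low Reynolds number, but it shows why droplet size matters so much:

```python
# Back-of-envelope Stokes-law settling speed for small water droplets.
# Valid only for tiny droplets (low Reynolds number); real raindrops
# fall slower than this law predicts because of turbulent drag.

G = 9.81         # gravitational acceleration, m/s^2
RHO_W = 1000.0   # density of water, kg/m^3
RHO_AIR = 1.2    # density of air, kg/m^3
MU = 1.8e-5      # dynamic viscosity of air, Pa*s

def terminal_velocity(radius_m):
    """Stokes-law terminal velocity of a spherical droplet, in m/s."""
    return 2 * radius_m**2 * G * (RHO_W - RHO_AIR) / (9 * MU)

for r_um in (1, 10, 100):
    v = terminal_velocity(r_um * 1e-6)
    print(f"{r_um:>4} um droplet settles at ~{v:.1e} m/s")
```

A 1 µm cloud droplet settles at a fraction of a millimeter per second, so the mildest updraft keeps it aloft; grow it a hundredfold and its settling speed grows ten-thousand-fold, and it falls out of the cloud.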
However, if, like China, you want to avoid precipitation, you can seed the clouds just a little: the ice particles produced form at the expense of the natural water particles, reducing their size and their likelihood of falling to earth.
It’s hard to measure exactly how efficacious cloud seeding is, since there’s no way to run a counterfactual weather experiment, but it’s been successful enough that a number of countries, including the U.S., China, Russia, and Australia, have used it at one point or another (China is the most aggressive about it).
For those of you who are a bit wary of dropping chemicals like silver iodide into the sky, it seems that in the amounts they’re currently being used, the health and environmental impacts are negligible.
As many of you may know, there’s talk of creating more clouds in the atmosphere by either spraying water droplets up from the sea with vast fleets of autonomous sailboats or dropping seeding chemicals from the sky, as previously discussed. It’ll be interesting to see how this last-ditch option changes (or doesn’t) as our understanding of weather control improves.
July 28, 2009
Don’t you wish your body could just get rid of that extra fat by itself, without the pesky exercise or dieting? It may not be as far off as you think.
In the June edition of Cell Metabolism, James Liao’s group reports that it succeeded in reducing diet-induced obesity in mice fed a high-fat diet (Technology Review also has a nice article on it). They did this by splicing a pathway from E. coli called the glyoxylate shunt into the mice’s DNA.
When our body wants to use fat, it breaks it down and often converts it to carbohydrates (mostly glucose), whose excess can have all sorts of pernicious effects (e.g. diabetes). With this new glyoxylate shunt pathway, the mouse liver cells metabolize the fat directly into CO2, which is absorbed into the bloodstream and simply exhaled.
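“Exhaling” fat sounds strange, but the mass balance checks out. A quick back-of-envelope calculation (my numbers, not from the paper) using palmitic acid, a typical fatty acid, shows how much CO2 mass leaves the body per gram of fat fully oxidized:

```python
# Back-of-envelope stoichiometry: fully oxidizing palmitic acid
# (C16H32O2 + 23 O2 -> 16 CO2 + 16 H2O) yields 16 CO2 per molecule.
# How many grams of CO2 are exhaled per gram of fat burned?

M_PALMITIC = 16 * 12.011 + 32 * 1.008 + 2 * 15.999  # g/mol, ~256.4
M_CO2 = 12.011 + 2 * 15.999                         # g/mol, ~44.0

co2_per_gram_fat = 16 * M_CO2 / M_PALMITIC
print(f"~{co2_per_gram_fat:.2f} g of CO2 exhaled per g of fat oxidized")
```

Each gram of fat leaves as nearly three grams of CO2 (the extra mass comes from the oxygen we breathe in), so the lungs really are the main exit route for burned fat.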
While employing this technique in humans is still far, far away, it’s a nice reminder of the fruits that genetic engineering and synthetic biology will eventually bear.
What’s interesting, though, is that short of genetically engineering humans (which I doubt we’re close to doing any time soon, for both technological and ethical reasons), we’re a good ways off from being able to change our own DNA and thus apply what we’ve discovered in other animal models. Almost all current cell-level treatment involves getting various molecules into cells rather than actually changing their DNA (although see gene therapy).
July 25, 2009
If you aren’t familiar with the debate about neuroenhancers, you should definitely read up on it. It’s one of the most interesting contemporary debates going in the public health/education/pharmacology realm. Here’s a quick primer: the use of neuroenhancers like Ritalin, Adderall, and Modafinil has become more prevalent among people (often students, but increasingly professionals too) wishing to squeeze a bit more productivity out of their lives. (This discussion does not include the legal and appropriate use of these drugs for clinical conditions like ADHD.) If you have a half hour, I’d highly suggest Margaret Talbot’s excellent New Yorker article on the issue.
If your immediate and understandable reaction is against the use of neuroenhancers, I urge you to consider the difference between taking a stimulant in pill form (as these come) and in drink form, as our beloved coffee comes.
Unlike steroids, these drugs don’t yet have well-documented health consequences for human use, which makes their use harder to damn. After all, we’ve been prescribing these stimulants for years without apparent negative consequences (n.b. Modafinil, a sleep-disorder drug, works differently from Ritalin and Adderall and is not an amphetamine).
And yet, Edmund Higgens, a professor of family medicine and psychiatry, has an interesting piece in Scientific American discussing some recent (mostly animal) studies that indicate the health consequences of stimulants like Adderall are more complicated than previously thought. Animals subjected to regimens of similar stimulants displayed signs of anxiety, depression, and long-term “cognitive defects.” While it’s important to distinguish between animal studies and clinical studies, the results of the latter often follow the former.
While the article is more directed at the over-prescription of ADHD drugs for kids, its contents certainly bear on (what I think is) a more interesting discussion. Once the health consequences of neuroenhancers become apparent, the case for their recreational (academic or professional) use becomes harder to make.
What’s still unclear is how (if at all) intermittent use affects our long-term mental and cognitive health. Still, these new studies color a discussion that will only become more important in the future.
July 22, 2009
Science News recently reported that amateur astronomer Anthony Wesley (home page here) has documented that something big slammed into Jupiter, causing what apparently is called an impact scar. This is only the second impact scar recorded on a large gaseous planet. While I imagine this occurrence is interesting (to astronomers), what’s far more interesting to me is that the discovery was made by an amateur scientist.
In our current world of NIH/NSF-funded research, where it takes at least 3 grad students, 2 postdocs, and a PI (principal investigator, the person who runs the lab) to make any scientific discovery, it’s incredibly refreshing to know that the realm of science is still open to the amateur. The divisions between “professional” scientists and everyone else compartmentalize the field(s), which not only reduces the number of future scientists but also general scientific literacy.
Scientific illiteracy not only deprives people of the amazing insights that knowledge can bring (nerd alert, I know) but also fails to inoculate them against charlatan-speak (“there is a healthy debate about global warming among scientists”).
So score one for the everyman. Remember, all the old scientists — Bacon, Boyle, Descartes, Charles, Kepler, Leibniz, Einstein (before he struck it big) — were amateurs in their fields.
July 19, 2009
I had the pleasure to attend a black tie optional wedding last night and was dismayed to see how many of the men (mostly of younger age) were sporting fake (pre-tied with a clip) bow ties. I felt sorry for them.
Not knowing how to properly tie a bow tie is somewhat shameful, in my opinion, akin to not knowing how to build a fire. It’s not that hard to learn. Here’s what I think is the best video tutorial:
If you prefer, I think these step-by-step instructions are pretty good.
So man up, take 30 minutes, practice tying your bow tie in the mirror, and regain your dignity. Then you too can rock the stylish untied bow tie look at the end of the night.
Some may accuse me of being stodgily old school, but the correct collar for black tie is the turn down collar. The winged collar is the domain of white tie and tails.
July 13, 2009
If I asked you what makes water form into droplets, you might say surface tension, or perhaps (for the more scientifically inclined) intermolecular forces like dipole interactions and hydrogen bonding. Most of us are comfortable with these strange little forces acting at the tiny, molecular level, but then how can we explain these clips:
The above clip is a high-speed video of falling sand, where the camera falls at the same speed as the sand and thus can capture the “drops” of sand that form from the thin stream. The clip below shows an iron ball falling into sand.
A recent study in Nature by the Jaeger group at U. Chicago (which Mark Trodden of Cosmic Variance summarizes) investigates the interactions of sand particles. Jaeger’s group demonstrated surface tension forces in the sand roughly 100,000 times weaker than those of normal liquids, and nanonewton forces between the particles.
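Those two numbers are consistent with each other, which is a nice sanity check. Here’s a rough order-of-magnitude sketch (my assumed numbers, not from the paper) using the standard capillary-bridge force scale F ~ 2πγr between two grains:

```python
import math

# Order-of-magnitude check: if sand's effective surface tension is
# ~100,000x weaker than water's, the capillary-style force between
# two ~100 um grains (F ~ 2*pi*gamma*r) should land near a nanonewton.

GAMMA_WATER = 0.072              # N/m, surface tension of water
gamma_sand = GAMMA_WATER / 1e5   # assumed effective value for sand
grain_radius = 100e-6            # m, a typical sand grain radius

force = 2 * math.pi * gamma_sand * grain_radius
print(f"effective gamma ~ {gamma_sand:.1e} N/m")
print(f"grain-grain force ~ {force:.1e} N")
```

The result comes out at a fraction of a nanonewton, right in the regime the study reports, so the two measurements hang together.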
Not only is this research just plain cool, but it also illustrates how we’re still learning about seemingly everyday things like sand. Most often people see current scientific developments as incredibly specialized and unapproachable. Research like this reminds us of the science that we interact with even when we’re not looking for it.