I’m usually not one for hemming in research in any field or direction, even when that direction holds potential ethical pitfalls (human cloning, for example). However, the attempt to develop autonomous and ethical robots for use in any wartime situation completely crosses the line.

I should distinguish between autonomous and remote-controlled robots. Autonomous robots receive no human input for their direct actions. They are capable of making decisions for themselves and then acting on them. The military already uses remote-controlled robots for handling IEDs and scouting. Such technology is merely another extension of a human controller’s will and (in my opinion) completely ethical.

In an interview with h+ magazine, though, Ronald Arkin of Georgia Tech discusses creating an “ethical governor” to ensure that future autonomous robots don’t break the “rules of war.” I can see the allure of having robots on the battlefield: they’re expendable, entirely rational, and have faster reaction times than humans. Here are the “rules” that Arkin suggests (a rough sketch of how a governor might encode them follows the list):

1. Engage and neutralize targets as combatants according to the ROE.
2. Return fire with fire proportionately.
3. Minimize collateral damage — intentionally minimize harm to noncombatants.
4. If uncertain, invoke tactical maneuvers to reassess combatant status.
5. Recognize surrender and hold POW until captured by human forces.
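
Just to make concrete what an “ethical governor” would actually have to do, here is a minimal, entirely hypothetical sketch of the decision loop those rules imply. None of this is Arkin’s actual design; the names, fields, and thresholds are invented for illustration, and every input the function consumes hides an unsolved perception and judgment problem.

```python
# Hypothetical sketch of the decision loop an "ethical governor" implies.
# Not Arkin's design: the structure and names are invented for illustration.
# Every field below (combatant confidence, surrender detection, collateral
# estimates) assumes a perception system that does not exist.

from dataclasses import dataclass

@dataclass
class Target:
    combatant_confidence: float      # 0.0-1.0, from some perception system
    is_surrendering: bool            # surrender somehow recognized
    expected_collateral_harm: float  # estimated harm to noncombatants
    incoming_fire: float             # magnitude of fire received

def governor_decision(target: Target, roe_threshold: float = 0.95,
                      collateral_limit: float = 0.0) -> str:
    """Apply the five rules, in order, and return an action."""
    # Rule 5: recognize surrender and hold rather than engage.
    if target.is_surrendering:
        return "hold_as_pow"
    # Rule 4: if combatant status is uncertain, maneuver and reassess.
    if target.combatant_confidence < roe_threshold:
        return "maneuver_and_reassess"
    # Rule 3: minimize collateral damage; refuse if noncombatant harm expected.
    if target.expected_collateral_harm > collateral_limit:
        return "do_not_engage"
    # Rules 1 and 2: engage per ROE, proportionate to fire received.
    return f"engage_with_force_proportional_to({target.incoming_fire})"

if __name__ == "__main__":
    print(governor_decision(Target(0.50, False, 0.0, 1.0)))  # maneuver_and_reassess
    print(governor_decision(Target(0.99, False, 0.3, 1.0)))  # do_not_engage
```

The branching itself is trivial; producing a trustworthy combatant-confidence estimate is the entire problem.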

We are so, so far from having any kind of autonomous robot that can intelligently follow these rules that it’s not even worth spending time on them. We would need a solid model of human intelligence up and running before we could even think about creating a robot that can discern and apply these rules. If current soldiers have trouble distinguishing between “enemy combatants” and civilians, how in the world will a robot be able to do it?

This type of research falls right in line with Reagan’s Star Wars: dubious effectiveness (and “dubious” is probably too generous) at absurd cost. The problem is that too many of us have naive fantasies of robots fighting our wars. Let’s grow up and spend our resources more wisely than that, eh?

When we reach the singularity and finally develop a robust artificial intelligence that parallels our own, which — I guarantee — will not be in any of our lifetimes (although Ray Kurzweil would have you believe otherwise), then we can start thinking about the rules for our warrior bots.

Fair-weather cumulus clouds (credit: Wikimedia)

Even though it’s been around for over fifty years, the idea of controlling the amount of precipitation in an area with chemicals still seems quite futuristic to me. I know many ski resorts seed their environs for more fresh powder, and I had heard stories of China keeping rain away from its Olympic opening ceremony last year, but this new report about China’s efforts to again ensure dry skies for next year’s Asian Games got me wondering just how cloud seeding (as it’s called) works. Here’s a brief discussion from my research:

The entire process revolves around the phases of water in clouds. Such weather control can be used either to promote precipitation (like rain or snow) or to inhibit it (most often rain or hail).

The water vapor in clouds is very, very cold (well below its freezing point, a state called supercooling) due to its height in the atmosphere. The problem is that in order for the vapor to turn into liquid or solid droplets, it usually needs a seed or starting particle (natural dust particles usually serve this role). One of the most common seeding chemicals, silver iodide, has a crystalline structure very similar to that of ice and is used to start these water (or ice) particles forming in the cloud.

Other chemicals, like dry ice (solid CO2), liquid nitrogen, or liquid propane, can be used to cool the water vapor so much that it spontaneously forms small droplets without the need for a starter particle, so to speak.

When enough of these little droplets form in a cloud, they start clumping together, and eventually the droplets become so large that the air currents can no longer support them, and they fall to earth as rain, snow, or hail.
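
To get a feel for that “too large for the air to support” threshold, here is a back-of-envelope sketch using Stokes’ law for the terminal velocity of a small falling sphere. The droplet sizes and the gentle 0.1 m/s updraft are illustrative assumptions of mine, and Stokes’ law itself is only a decent approximation for droplets up to a few tens of microns:

```python
# Back-of-envelope: when does a growing cloud droplet fall out of the cloud?
# Stokes' law terminal velocity: v = 2 r^2 g (rho_p - rho_f) / (9 mu).
# The droplet radii and the 0.1 m/s updraft are illustrative assumptions;
# Stokes' law only holds well for radii up to a few tens of microns.

G = 9.81            # gravity, m/s^2
RHO_WATER = 1000.0  # density of water, kg/m^3
RHO_AIR = 1.2       # density of air, kg/m^3
MU_AIR = 1.8e-5     # dynamic viscosity of air, Pa*s
UPDRAFT = 0.1       # assumed gentle updraft holding droplets aloft, m/s

def stokes_terminal_velocity(radius_m: float) -> float:
    return 2 * radius_m**2 * G * (RHO_WATER - RHO_AIR) / (9 * MU_AIR)

for radius_um in (1, 10, 20, 40):
    v = stokes_terminal_velocity(radius_um * 1e-6)
    status = "falls out" if v > UPDRAFT else "stays aloft"
    print(f"r = {radius_um:3d} um -> terminal velocity ~ {v*100:6.2f} cm/s ({status})")
```

The point is just the scaling: terminal velocity grows with the square of the radius, so a droplet only has to grow modestly before the air can no longer hold it up.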

However, if you’re like China and want to avoid precipitation, you can seed the clouds just a little bit, and the ice particles produced form at the expense of the natural water particles, reducing their size and their likelihood of falling to earth.

It’s hard to measure exactly how efficacious cloud seeding is, since there’s no way to run a counterfactual weather experiment, but it’s been successful enough that a number of countries, including the U.S., China, Russia, and Australia, have used it at one point or another (China is the most aggressive about it).

For those of you who are a bit wary of dropping chemicals like silver iodide into the sky, it seems that in the amounts they’re currently being used, the health and environmental impacts are negligible.

As many of you may know, there’s talk of creating more clouds in the atmosphere, either by spraying water droplets up from the sea with vast fleets of autonomous sailboats or by dropping seeding chemicals from the sky as previously discussed. It’ll be interesting to see how this last-ditch option changes (or doesn’t) as our understanding of weather control improves.

Resources:
http://en.wikipedia.org/wiki/Cloud_seeding
http://www.lightwatcher.com/chemtrails/cloud_seeding.html
http://theblanketeffect.blogspot.com/2008/02/note-we-conclude-our-series-featuring.html

a free lunch?

July 28, 2009

These two mice were fed the same high fat diet. The top mouse's liver cells were engineered to metabolize fat directly into carbon dioxide. (credit: Jason Dean, University of California, Los Angeles)

Don’t you wish your body could just get rid of that extra fat by itself, without the pesky exercise or dieting? It may not be as far off as you think.

In the June issue of Cell Metabolism, James Liao’s group reports that it succeeded in reducing diet-induced obesity in mice fed a high fat diet (Technology Review also has a nice article on it). They did this by splicing a pathway from E. coli called the glyoxylate shunt into the mice’s DNA.

When our body wants to use fat, it breaks it down and often converts it to carbohydrates (mostly glucose), and excess glucose can have all sorts of pernicious effects (e.g., diabetes). With this new glyoxylate shunt pathway, the mouse liver cells metabolize the fat directly into CO2, which is carried by the bloodstream and simply exhaled.

While employing this technique in humans is still far, far away, it’s a nice reminder of the fruits that genetic engineering and synthetic biology will eventually bear.

What’s interesting, though, is that short of genetically engineering humans (which I doubt we’re close to doing any time soon, for both technological and ethical reasons), we’re a good ways off from being able to change our own DNA and thus use what we’ve discovered in other animal models. Almost all current cell-level treatments involve getting various molecules into cells rather than actually changing their DNA (although see gene therapy).

Adderall (credit: http://www.michaelshouse.com)

If you aren’t familiar with the debate about neuroenhancers, you should definitely read up on it. It’s one of the most interesting contemporary debates in the public health/education/pharmacology realm. Here’s a quick primer: the use of neuroenhancers like Ritalin, Adderall, Modafinil, and others has become more prevalent among people (often students, but increasingly professionals too) wishing to squeeze a bit more productivity out of their lives. (This discussion does not include the legal and appropriate use of these drugs for clinical learning disabilities like ADHD.) If you have a half hour, I’d highly suggest Margaret Talbot’s excellent New Yorker article on the issue.

Those of you not familiar with the debate may have an immediate and understandable reaction against the use of neuroenhancers. To those people, I urge you to consider the difference between taking a stimulant in pill form (as these come) and in drink form, as our beloved coffee comes.

Unlike steroids, these drugs don’t yet have well-documented health consequences for human use, which makes their use harder to damn. After all, we’ve been prescribing these stimulants (N.B.: Modafinil, a drug for sleep disorders, works differently than Ritalin and Adderall and is not an amphetamine) for years, seemingly without negative consequences.

And yet Edmund Higgins, a professor of family medicine and psychiatry, has an interesting piece in Scientific American that discusses some recent (mostly animal) studies indicating that the health consequences of stimulants like Adderall are more complicated than previously thought. Animals subjected to regimens of similar stimulants displayed signs of anxiety, depression, and long-term “cognitive defects.” While it’s important to distinguish between animal studies and clinical studies, the results of the latter often follow the former.

While the article is directed more at the over-prescription of ADHD drugs for kids, its contents certainly bear on (what I think is) a more interesting discussion. Once the health consequences of neuroenhancers are established, the case for their recreational (academic or professional) use becomes harder to make.

What’s still unclear is how (if at all) intermittent use affects our long-term mental and cognitive health. Still, these new studies color a discussion that will only become more important in the future.

The photo taken (and debris identified) by amateur astronomer Anthony Wesley

Science News recently reported that amateur astronomer Anthony Wesley (home page here) has documented that something big slammed into Jupiter, causing what is apparently called an impact scar. This is only the second impact scar recorded on a large gaseous planet. While I imagine this occurrence is interesting (to astronomers), what’s far more interesting to me is that the discovery was made by an amateur scientist.

In our current world of NIH/NSF-funded research, where it takes at least 3 grad students, 2 postdocs, and a PI (principal investigator, the person who runs the lab) to make any scientific discovery, it’s incredibly refreshing to know that the realm of science is still open to the amateur. The divisions between “professional” scientists and everyone else compartmentalize the field(s), which not only reduces the number of future scientists but also general scientific literacy.

Scientific illiteracy not only deprives people of the amazing insights that knowledge can bring (nerd alert, I know) but also fails to inoculate them against charlatan-speak (“there is a healthy debate about global warming among scientists”).

So score one for the everyman. Remember, all the old scientists — Bacon, Boyle, Descartes, Charles, Kepler, Leibniz, Einstein (before he struck it big) — were amateurs in their fields.

a digression on style

July 19, 2009

I had the pleasure of attending a black-tie-optional wedding last night and was dismayed to see how many of the men (mostly the younger ones) were sporting fake (pre-tied, with a clip) bow ties. I felt sorry for them.

Not knowing how to properly tie a bow tie is somewhat shameful, in my opinion, akin to not knowing how to build a fire. It’s not that hard to learn. Here’s what I think is the best video tutorial:

If you prefer, I think these step-by-step instructions are pretty good.

So man up, take 30 minutes, practice tying your bow tie in the mirror, and regain your dignity. Then you too can rock the stylish untied bow tie look at the end of the night.

Some may accuse me of being stodgily old school, but the correct collar for black tie is the turn down collar. The winged collar is the domain of white tie and tails.

If I asked you what makes water form into droplets, you might say surface tension, or perhaps (for the more sciency among you) intermolecular forces like dipoles and hydrogen bonding. Most of us are comfortable with these strange little forces acting on the tiny, molecular level, but then how do we explain these clips:


The clip above is a high-speed video of falling sand, where the camera falls at the same speed as the sand and can thus capture the “drops” of sand that form from the thin stream. The clip below shows an iron ball falling into sand.

A recent study in Nature by the Jaeger group at the University of Chicago (which Mark Trodden of Cosmic Variance summarizes) investigates the interactions of sand particles. Jaeger’s group demonstrated that the falling sand behaves as if it has a surface tension roughly 100,000 times weaker than that of normal liquids, with nanonewton-scale cohesive forces between the particles.
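
Just to put those two numbers side by side, here is a rough order-of-magnitude check. The grain radius is an assumed typical value and the 2πRγ formula is a generic textbook capillary-adhesion scaling, not the paper’s actual analysis:

```python
# Order-of-magnitude check: what does "100,000 times weaker than a normal
# liquid" mean for the force between two grains? The grain radius is an
# assumed typical value; F ~ 2*pi*R*gamma is a generic capillary-adhesion
# scaling, not the analysis from the Nature paper itself.

import math

GAMMA_WATER = 0.072              # surface tension of water, N/m
GAMMA_SAND = GAMMA_WATER / 1e5   # ~100,000x weaker, per the study
GRAIN_RADIUS = 100e-6            # assumed grain radius, 100 microns

force = 2 * math.pi * GRAIN_RADIUS * GAMMA_SAND
print(f"effective surface tension ~ {GAMMA_SAND:.1e} N/m")
print(f"grain-grain cohesive force ~ {force * 1e9:.2f} nN")
```

That comes out at a fraction of a nanonewton, the same ballpark as the forces quoted above.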

Not only is this research just plain cool, but it also illustrates how we’re still learning about seemingly everyday things like sand. Most often people see current scientific developments as incredibly specialized and unapproachable. Research like this reminds us of the science that we interact with even when we’re not looking for it.

Britain's controversial "Mox" nuclear recycling plant (credit: Tehran Times)

As Nature News recently reported, the Obama administration has made the very disappointing decision to cancel plans for a domestic nuclear-fuel-recycling facility that was part of the Global Nuclear Energy Partnership (GNEP), started under the Bush administration in 2006. The program is meant to give countries we don’t want enriching their own nuclear fuel a place to get it, and then a place to send back their nuclear waste, which can be reprocessed and in part used again domestically (see the Nature article for more details).

So, not only does the GNEP help other countries obtain nuclear power, a very clean (despite what people say about the waste), efficient, high-output source of power, but it also helps them obtain it in a way that’s not threatening to us (Iran, anyone?).

Without real US backing, it’s hard to imagine the partnership getting off the ground. Congress has still appropriated $145 million for research into the nuclear fuel cycle and nuclear waste, but given the scale, cost, and political messiness of domestic nuclear power, it’s hard to imagine that R&D leading to any significant breakthroughs in the coming decades.

The most soft-and-fuzzy clean energies — wind, solar, geothermal, hydroelectric, etc — will not meet our global energy demands. We MUST include nuclear power in the pantheon of alternatives to coal, oil, and natural gas. Yes, it’s expensive. Yes, the waste issue is tricky. But if we’re going to have any shot at actually changing how we generate power, we need to get nuclear projects moving, since they take a good while to get up and running.

And if we’re going to insist that countries like Iran don’t enrich their own fuel, we need to supply them with an alternative like the GNEP. Otherwise, we’re simply being hypocritical.

credit: daninz.files.wordpress.com

I’ve always been one of those people who claim that our nation’s high drinking age has caused increased binge drinking. “Look at Europe,” I’d say, “they have lower drinking ages and less crazy binge drinking.” Correlation equals causation, right? The argument goes that entirely blocking someone’s access to a drug makes them all the more likely to abuse it when they do eventually get access to it.

Well, I’m sad to admit, this study shows that I was wrong. The researchers analyzed data collected between 1979 and 2006 from over 500,000 subjects in the National Survey on Drug Use and Health. As it turns out, except in college kids (and that’s a big exception), the incidence of binge drinking declined significantly as the drinking age increased from 18 to 21. Binge drinking in college students, where access to alcohol is still quite easy through students of legal age, remained roughly the same.

I always used to tell my students that I would only believe a non-intuitive claim if it was published in a peer-reviewed academic journal. Well, it’s been done, so I have to change my opinion about the drinking age and its influence on binge drinking.

That’s not to say that someone reading the methods section of this paper in the Journal of the American Academy of Child and Adolescent Psychiatry (where it’s being published this month) won’t say, “That’s not right because of x, y, and z.” If that does happen, as happens in all good science, a civil, measured discussion will ensue in the academic journals. That discussion is healthy and important if we are ever to understand the complexity of the issue. But this article now shifts the burden of proof to the other side.

Too often, and I think scientists are prone to this behavior as well, we read convincing, reliable evidence contrary to our own opinions and immediately write it off for one reason or another. Do not confuse this inclination with skepticism, which challenges claims and evidence, probing them for weaknesses. Ultimately, though, the skeptic can be won over if the argument and evidence are convincing enough. The dogmatic cannot. Dogma is dangerous in that it leads to conformity, which another recent study found to be bad for a civilization.

Dogma is the antithesis of scientific inquiry, and we must be wary that it does not seep into our thinking. Ask yourself: what would it take for you to change your mind about your most fundamental principles? If the answer is nothing, you’re in trouble.

How many of you have found yourselves pressing or saying various numbers to work your way through an automated customer service menu on the phone? I certainly have. I think we all recognize how annoying this is, especially if we then have to wait for a while until the next available service representative can assist us. I recently had a very positive customer service interaction with the good people at Newegg.com (a site that sells all sorts of electronic wares). I exclusively use Newegg whenever I buy electronic stuff (from new computer parts to an mp3 player to a flash drive) because of their stellar service.

In that vein, I’ve put together what I’ll call Drausin’s Recipe for Success in Customer Relations. (Take note, big cable and phone companies. This is directed at you.)

1) Put as much of the information and processing online as possible. Most of us are comfortable navigating menus and such online and prefer doing that to talking it through with a person if we can. Newegg has a very sophisticated return (RMA) process that makes it incredibly easy to return things to them. The whole process takes about 25 seconds. They then email you a (free) shipping label that you can print and put right on the box you’re sending back.

2) Sometimes problems are too complicated to handle exclusively online, though, so we should work to make the phone experience faster and more efficient. Companies should create an online directory of help-topic queues (like Billing, Tech Support, Returns, etc.) and their extensions. If you don’t have access to the internet, the recorded menus and submenus are a necessary evil to finally get into the appropriate waiting queue. But we often do have web access, and being able to look in a directory and see that questions about billing should dial extension 4567 (or whatever) would let us skip all of those annoying phone menus just to get into a queue to talk with someone. When we call, we could be prompted either to go into the menus or to enter our queue extension.

3) When we’re waiting in a queue, listening to the smooth jazz or cheesy elevator music, they should tell us our position in the queue at least every thirty seconds. Being stuck in hold purgatory is often enraging because we have no idea how long we’ll have to wait. I find that waiting a long time is much easier if I know roughly how long it will be. (A toy sketch of what I mean follows below.)
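
None of this is technically hard. Here is a minimal sketch combining points 2 and 3: a published extension-to-topic directory plus a hold queue that can announce your position on a fixed interval. The topics, extensions, and thirty-second interval are just my own examples, not anything Newegg or any phone company actually runs.

```python
# Toy sketch of the phone-queue behavior described above: a published directory
# of help-topic extensions, and a periodic announcement of your position in line.
# The topics, extensions, and 30-second interval are illustrative examples only.

from collections import deque

DIRECTORY = {            # what the company would publish on its website
    4567: "Billing",
    4568: "Tech Support",
    4569: "Returns",
}

ANNOUNCE_EVERY_SECONDS = 30

class HoldQueue:
    def __init__(self, topic: str):
        self.topic = topic
        self.callers = deque()

    def join(self, caller_id: str) -> int:
        self.callers.append(caller_id)
        return len(self.callers)          # position in line, 1-based

    def announce(self, caller_id: str) -> str:
        position = list(self.callers).index(caller_id) + 1
        return (f"You are number {position} in line for {self.topic}. "
                f"Next update in {ANNOUNCE_EVERY_SECONDS} seconds.")

    def serve_next(self) -> str:
        return self.callers.popleft()

# A caller looks up "Billing" online, dials extension 4567, and skips the menus.
billing = HoldQueue(DIRECTORY[4567])
billing.join("caller-a")
billing.join("caller-b")
print(billing.announce("caller-b"))   # "You are number 2 in line for Billing. ..."
billing.serve_next()
print(billing.announce("caller-b"))   # "You are number 1 in line for Billing. ..."
```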

That’s not so bad, eh, phone and cable companies? Put as much information and processing on the web as possible. List a directory of help topic extensions so we can skip all the menus, and tell us how many people are in front of us on a very regular basis.

I feel like this kind of stuff is so obvious to most of us, and yet we’re often still burdened with terrible customer service.