Hmmm, a major motion picture now, eh? I'm glad he was doing it for the right reasons.

The New Yorker’s Elizabeth Kolbert has a wonderful takedown of so-called eco-stunts (living an extremely environmentally friendly lifestyle for show) and the publicity surrounding them. Her primary targets are bloggers-turned-book-writers Colin Beavan (No Impact Man) and Vanessa Farquharson (Sleeping Naked Is Green: How an Eco-Cynic Unplugged Her Fridge, Sold Her Car, and Found Love in 366 Days), both of whom adopted extreme eco-lifestyles for a period of time, blogged about it, and then got a book deal. After describing all the measures they take, like going without a refrigerator, toilet paper, and cars/trains/buses/subways, her closing paragraph sums it all up particularly well:

The real work of “saving the world” goes way beyond the sorts of action that “No Impact Man” is all about.

What’s required is perhaps a sequel. In one chapter, Beavan could take the elevator to visit other families in his apartment building. He could talk to them about how they all need to work together to install a more efficient heating system. In another, he could ride the subway to Penn Station and then get on a train to Albany. Once there, he could lobby state lawmakers for better mass transit. In a third chapter, Beavan could devote his blog to pushing for a carbon tax. Here’s a possible title for the book: “Impact Man.”

Seriously, take the fifteen minutes to read this article. It’s good.

I’m going to take her criticism of such publicity stunts (she reminds us that Colin Beavan’s idea of living a no-impact life arose from a lunch conversation with his agent about his next book project) one step further: not only are they inane and self-promoting, but they trivialize the environmental measures the rest of us, in the real world, actually take.

Notice that once the book deal is finished, these authors mostly revert to their original, planet-killing lifestyles. If I were to read such a book, I would probably feel some mixture of shame (that my own life is not more environmentally efficient) and amusement (because the life the authors describe sounds so crappy).

Such accounts of “extreme” lifestyles seem to trivialize the things the rest of us do to help out, like switching to CFL bulbs, recycling, and turning the lights off. Compared to the “extreme” lifestyles, some might conclude that these small measures don’t have a significant enough impact to be worth much diligence. What a shame, because the small measures we all work into our lives are by far the biggest sources of energy savings.

Someone needs to write a book about reinsulating his house, riding his bike more, installing a smart electricity meter, going to the farmers’ market, and putting solar panels on his roof. The book needs to span a year, and then another, and then another, and another. That stuff doesn’t make us feel bad about what we do. It makes us think, “Hmm, that sounds pretty doable.” The key part is that there’s no end point to the “experiment.” It’s a sustainable (ahh…wordplay) lifestyle.

Of course, such a book would be quite boring. But that’s the point, isn’t it? It’s not about selling copies. It’s about getting us to change our lifestyles — for good.

over-softwaring our lives

August 26, 2009

How many times have you seen a program, a desktop widget, or a web application and thought, “Wow, that would really make my life easier!”? I know I have. Many times. And yet so many of them — while theoretically quite useful — never actually get used much, because they take a significant investment of time to learn and often take longer than whatever we’d otherwise do to address the need.

Here’s an example, from Lifehacker.com, which (while I love them) often features such programs: Kitchen Monki. Kitchen Monki is a recipe planner/shopping list organizer that allows you to plan meals and then load the ingredients into a shopping list. (Note: the following criticisms aren’t meant at all as a hit against the programmers. The application actually looks very well thought out and easy to use. It’s the idea that we even need such a program that I’m criticizing.)

Who really sits at their computer and plans a meal? And even if you do, man, what steps you must go through: finding a recipe, adding it to your meal, checking what you already have, making your shopping list, exporting it to your phone or printing it out, etc. Whatever happened to pen and paper? I’m guessing almost anyone can do all of this much faster just by standing in the kitchen with a cookbook and jotting a few notes on a scrap of paper.

Contrast that program with something like Mint.com, a personal finance web app that automatically categorizes your credit card and banking transactions for the purpose of keeping a budget. That’s a really useful tool, and I can say that from using it for 16 months or so. Another recent software favorite: Dropbox.

My point is not that all software (or technology in general, if you like) is bad. Some of it is quite useful and improves our lives, but much of it seems like a good idea at the time and then is never used after about the first day. I’m also not trying to stifle developers’ creativity: I write programs that I think will be useful but others may not. Indeed, I imagine many of the programs we encounter are the successful personal projects of developers that have been taken into the public sphere. So it’s really up to the consumer to decide whether to use them or not.

It’s almost ingrained in our psyche: when we see something new and seemingly useful, we think, “Ohh, I want to try that.” This applies to everything from organizational gear at stores like Organized Living, to electronics, to video games (like those of you who’ve downloaded every arcade game ever invented onto your PC or Xbox), to software. We’re programmed (getting into a bit of social theory here) to want more stuff. And so we acquire it if it seems even remotely fun or useful, without much thought about whether we’ll actually use it or whether a simpler alternative would do. This holds especially true for software, most of which is free these days.

As consumers of everything, we need to be mindful of what we need (or want) and what we don’t. And for those things we do need, what’s the balance between their advantages and their costs (in time, energy, and money)? Otherwise, we’ll go through life trailing a mess of physical and digital detritus.

The Singularity, when AI overtakes human intelligence (credit: Jamais Cascio on flickr.com)

A new study (as reported in the arXiv blog) by Fermín Moscoso Del Prado Martín of the Université de Provence shows that the human data-processing rate is not much more than 60 bits per second. The result is based on measuring the time it takes a subject to complete what’s called a lexical decision task, like determining whether a string of letters is a word or not. The complexity of such a task can be quantified as a number of bits (each bit representing a binary state like on or off). Thus, if we know the complexity of a given task (called its entropy) and the average time it takes a human to complete that task, we can determine the brain’s decision-processing rate, which is where the 60 bps comes from.
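To make that arithmetic concrete, here’s a minimal sketch of the entropy-divided-by-time calculation. The stimulus count and response time below are made-up illustrative numbers chosen to land near 60 bps; they are not figures from the paper.

```python
import math

# Hypothetical setup (illustrative numbers only, not the study's): a stimulus
# drawn uniformly from ~3.5e13 alternatives carries log2(3.5e13) ~= 45 bits of
# entropy, and subjects average 0.75 s per decision.
stimulus_alternatives = 3.5e13
task_entropy_bits = math.log2(stimulus_alternatives)   # ~45 bits per trial
mean_response_time_s = 0.75                            # average decision time

processing_rate_bps = task_entropy_bits / mean_response_time_s
print(f"Estimated decision-processing rate: {processing_rate_bps:.0f} bits/s")  # ~60
```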

This speed is really, really slow compared to today’s technology. The internet connection you’re using to read this post likely runs at 3,000,000 bps or more (though I should be careful to distinguish the data-transfer rate of your connection from the data-processing rate of your brain). The computer you’re reading this on probably has a processor running at 2,000,000,000 Hz (cycles per second) or more, which is the more apt comparison to the brain’s decision-processing rate of 60 bps. I think you get the message.

So here’s my question: if we would ultimately like computers that can think like humans (“ultimately” being a long time from now), does it make sense to limit the speed at which they can operate? Hardware (or software) that can process data at blazing speeds can tempt us into approaching a problem the wrong way. The best example is the use of brute-force computational churning to make a simple decision (like using a computer to test every possible game of checkers, taking almost 20 years, to figure out the next best move; don’t laugh, it’s been done).
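For a toy sense of what that brute-force churning looks like in code, here’s a sketch that picks a tic-tac-toe move by exhaustively evaluating every possible continuation (tic-tac-toe standing in for checkers, whose roughly 10^20 positions are why the real thing took years of computing).

```python
from functools import lru_cache

# Exhaustive (brute-force) move selection for tic-tac-toe: no heuristics, no
# pattern knowledge -- just enumerate every continuation and keep the best.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """+1 if `player` (to move) can force a win, -1 if forced to lose, 0 if draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if "." not in board:
        return 0
    opponent = "O" if player == "X" else "X"
    return max(-value(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ".")

def best_move(board: str, player: str) -> int:
    """Churn through every possible game to pick a square (0-8)."""
    opponent = "O" if player == "X" else "X"
    moves = [i for i, cell in enumerate(board) if cell == "."]
    return max(moves, key=lambda i: -value(board[:i] + player + board[i + 1:], opponent))

# Every opening move draws under perfect play, so the search returns index 0.
print(best_move("." * 9, "X"))
```

Checkers is only quantitatively different; the point is that raw speed makes this kind of enumeration feel like a reasonable substitute for actual reasoning.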

The power available to us can blind us when it comes to creating things that actually mimic the way our brain works. It lets us go far, for sure, but far down a dead end that ultimately will not lead to the AI sophistication we’d like. Would artificially limiting the transfer and processing rates of our hardware force us to approach decision making in machines in a way more similar to our brain’s?

I’m not a sophisticated AI developer by any means, but this idea seems at least worth considering. Many people, perhaps most, don’t even think we’ll ever be able to approach the functionality of the brain, but for the true believers like me out there, it’s a thought worth entertaining.

(credit: polyvore.com)

Anyone who’s thought about the environmental cost of the various products we consume, from plastic spoons to produce to toys, has thought about the costs involved in shipping these items. If you’re like me, you might think about the fossil fuels required to move a teddy bear from China to Pennsylvania: the gas for the truck from the factory to the shipping yard in China, the diesel for the ocean liner from China to Los Angeles, and then more gas for the truck from LA to Philadelphia. Usually, the longer the journey, the more environmentally expensive it is.

What many of us (or at least I) don’t consider are the demands some products, especially food, place on that transportation. Take ice cream: it must be made, stored at the factory, shipped, and then stored at our grocery store, all at freezer temperatures. As you know, keeping food that cold requires a hefty amount of energy on top of what’s needed simply to transport it. Thus, as the Times of London and Scientific American report, Unilever (which owns Ben & Jerry’s) is embarking on the crazy-sounding idea of ice cream that’s made, shipped, and stored at room temperature, only to be frozen once you put it in your own freezer.

While the chemistry of fat, sugar, and consistency has been studied for a good while, this still sounds crazy. I’m quite skeptical that they’ll be able to do it and still have it taste as good as the real deal. But never underestimate those food scientists, who’ve already created ice cream that doesn’t melt (although, again, who knows how it tastes).

Although food science hasn’t really improved the quality of our food that much, it certainly has its advantages (like decently ripe fruit 12 months out of the year), many of which we’re willing to accept a small compromise in taste to get. And once (as I hope) we start having to confront the calories (or, if you prefer metric, joules) of energy the products we buy cost (as the Brits have begun to do), we may be more willing to sacrifice some flavor for the good of the planet, just as we often do for the good of our waistlines and arteries.

Have you ever been walking in the woods and come upon a snake (startling both you and it), only to see it slither away with incredible speed? I know I have. How is it possible for the massive bulk of a whale to travel thousands of miles underwater without eating? As is often the case, the efficiency (and beauty) of nature’s solutions to common problems far surpasses anything we’ve developed ourselves.

A recent review by Netta Cohen and Jordan Boyle of the University of Leeds (UK), to be published in Contemporary Physics, has a nice discussion of the fluid mechanics involved in the different modes of undulatory locomotion that various organisms use. What becomes clear to someone (me) not in the field is that, for something seemingly as simple as getting around in a fluid, we know pretty much exactly how the most efficient organisms do it but are a good way from being able to replicate it well ourselves.

Towards the end of the paper, the authors discuss the emerging technologies of undulatory robotics, on both the meter scale (robotic snakes for searching for people in building rubble) and on the micrometer scale (robotic worms to swim through an artery to image tissue injury or healing progress). These applications are an interesting glimpse at an area of research ripe for development.

The propeller (which itself is of biological origin) on the back of a boat has gotten us a good, long way, but it has a number of limitations. For one, it’s quite inefficient compared to biological undulation, although it’s significantly simpler to implement mechanically. As our materials science and our coordination of many mechanical movements (think how many independent muscles a fish must fire to flex its body once) continue to improve, so will our ability to implement this form of locomotion. (Perhaps in 100 years I’ll be able to take a ride in a flagella-powered boat.)

A tangent
At the risk of being cliché, I’m again struck by the resourcefulness of evolution in using the tools it has available to perform a task rather than reinventing the wheel every time. So the cells in your bronchial tubes would like a way to move mucus and dirt up and out of the lungs? Well, why not just use the oar-like cilia that many paramecia use? A less practical builder (us, perhaps) would expensively go about designing an entirely new apparatus. In fact, many of the tools used by evolution (if random chance can be given some agency) are imperfect (for example, the skeletal structure of bat wings vs. bird wings), but they work well enough. This imperfect-but-good-enough reuse of biological tools, by the way, is one of the best arguments (if you entertain the argument at all) against so-called intelligent design.

(credit: Wikimedia)

Last night a huge thunderstorm woke me up in the middle of the night. For some reason I realized then that I had no good idea of why rain almost always accompanies lightning (and thunder). What about the two processes makes them work together as they do? A bit of internet research yielded a logical if not entirely complete picture.

HowStuffWorks.com has a very thorough discussion of lightning. We all know that lightning is the discharge of an electric potential built up between either two clouds or a cloud and the ground. Interestingly enough, though, we don’t completely understand how those clouds get charged in the first place. The current (and best supported) hypothesis is that, within the cloud, rising and condensing water vapor collides with falling ice crystals and loses a few electrons in the process. Those electrons fall with the ice to the bottom of the cloud, leaving the lower region negatively charged and the upper region positively charged. The negatively charged cloud base, in turn, induces a positive charge on the ground below. At a certain point, these electric potentials become so large that they discharge in the form of lightning.
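As a deliberately naive back-of-envelope sketch of why that threshold is so dramatic, one can multiply the textbook breakdown field of dry air (roughly 3 million volts per meter at sea level) by a typical cloud-base height. Real lightning actually initiates at considerably lower fields, so treat the result as an illustrative upper bound; the cloud-base height is an assumption picked for the example.

```python
# Naive upper-bound estimate of the cloud-to-ground potential needed to break
# down the air column outright. Real strikes initiate at much lower fields via
# stepped leaders, so this is illustration, not a model.
breakdown_field_v_per_m = 3.0e6   # approximate dielectric strength of dry air at sea level
cloud_base_height_m = 1.5e3       # assumed cloud-base height (~1.5 km)

naive_potential_v = breakdown_field_v_per_m * cloud_base_height_m
print(f"Naive breakdown potential across the gap: {naive_potential_v:.1e} V")  # ~4.5e9 V
```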

Through this collision theory of cloud charging, the relationship between precipitation and lightning becomes clearer. The more precipitation moving around in a cloud, the more charge separation occurs. Thus, thunderheads that produce a lot of lightning must have had a lot of precipitation in them to create that electric potential, and that much precipitation rarely stays up in the cloud.

A number of climate scientists have actually tried to correlate lightning strikes with rainfall in storms. Vladimir Rakov and Martin Uman, in their book Lightning: Physics and Effects, discuss some of these efforts. While individual studies show some internal consistency, the data Rakov and Uman compile (the numbers are in kilograms of rainfall per ground flash) range over four orders of magnitude across studies. It seems that specific types of storms (especially in the same area) yield far more consistent results than storms in general.
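To show what that metric actually is, here’s a small sketch that computes kilograms of rain per cloud-to-ground flash from a storm’s rain depth, area, and flash count. The two storms below are invented purely for illustration; they are not data from Rakov and Uman or anywhere else.

```python
# Invented example storms: (rain depth in m, rained-on area in m^2, ground flashes).
storms = {
    "sea-breeze cell": (0.010, 2.0e7, 150),
    "frontal squall":  (0.025, 1.5e8, 900),
}
WATER_DENSITY_KG_PER_M3 = 1000.0

for name, (depth_m, area_m2, flashes) in storms.items():
    rain_mass_kg = depth_m * area_m2 * WATER_DENSITY_KG_PER_M3
    print(f"{name}: {rain_mass_kg / flashes:.1e} kg of rain per ground flash")
```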

Again, I return to a familiar theme of mine. So many commonplace things operate in ways we don’t entirely understand. The next time you see lightning, think of those colliding water and ice particles.

Darrell Issa (R-CA) meddles in the NIH

ScienceInsider recently reported that Representative Darrell Issa (R-CA) succeeded in stripping funding for studies of HIV/AIDS among prostitutes from an NIH funding bill. The overseas studies are aimed at understanding how the disease spreads (and how to halt it). Apparently Issa thought it was a waste for the researchers to fly to Thailand when they could just take a $3.10 train across town. And rather than argue the point, the bill’s manager, David Obey (D-WI), accepted the amendment and moved on.

The stripped funding for the three specific grants totaled $5 million; the entire NIH funding bill was $31 billion. As I see it, this is a prime example of politicians trying to score points. Not only had the studies already passed scientific peer review, but they are actually incredibly important for understanding the inextricable relationship between drugs, disease, and prostitution. You either fund the NIH at a certain level or you don’t, and then you let it decide how to apportion that money. Congress doesn’t tell the CIA or FBI what to spend money on and what not to, does it?

If we’re going to get past the dogmatic aversion to drugs and prostitution (controversial, I know; perhaps I’ll post on those later), we need to understand how they interact with sexually transmitted diseases. Even if nothing changes legally here, we can certainly develop better policies for reducing the number of drug-addicted and disease-infected prostitutes.

Non-scientists deciding not to fund certain research (like human cloning) is one thing; after all, it’s taxpayer money (and thus, in the politicians’ minds, theirs). But it is not their place to decide how that research is carried out. That’s the job of the grant review committees, which exist precisely to decide which proposals meet the aims of the grant.