The Singularity when AI overtakes human intelligence (credit: Jamais Cascio on flickr.com)

A new study (as reported in the ArXiv blog) by Fermín Moscoso Del Prado Martín of the Université de Provence shows that the human data-processing rate is not much more than 60 bits per second (bps). The results are based on measuring the time it takes a subject to complete what’s called a lexical decision task, like determining whether a collection of letters is a word or not. The complexity of such a task can be quantified as a certain number of bits (each bit represents a binary state, like on or off). Thus, if we know the complexity of a given task (called its entropy) and we know the average time it takes a human to complete that task, we can determine the decision-processing rate of the brain, which is where we get 60 bps.
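Just to make the arithmetic concrete (with invented numbers of my own, not figures from the study), the rate is simply the task’s entropy divided by the average time to complete it:

```python
# Toy illustration with invented numbers (not the study's data):
# the decision-processing rate is just task entropy over response time.

def processing_rate_bps(task_entropy_bits, avg_response_time_s):
    """Bits of decision made per second."""
    return task_entropy_bits / avg_response_time_s

# A hypothetical 30-bit decision completed in half a second:
print(processing_rate_bps(30, 0.5))  # -> 60.0 bps
```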

This speed is really, really, really slow compared to today’s technology. It’s likely that the internet connection you’re using to read this post runs at 3,000,000 bps or more (although I should be careful to distinguish the data-transfer rate of your internet connection from the data-processing rate of your brain). The computer you’re reading this on probably has a processor clocked at 2,000,000,000 Hz, or cycles per second, or more, which is the more apt comparison to the brain’s data-processing rate of 60 bps. I think you get the message.

So here’s my question: if we would ultimately like to get computers that can think like humans (ultimately being a long time from now), does it make sense to limit the speed at which they can operate? Hardware (or software) that can process data at blazing speeds can allow us to approach a problem the wrong way. The best example is the use of brute-force computational churning to make a simple decision (like using a computer to test every possible game of checkers, taking almost 20 years, to figure out the next best move; don’t laugh, it’s been done).
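To make that “brute-force churning” concrete, here’s a toy sketch of my own (nothing to do with the actual checkers computation): picking a move in a trivial take-away game by exhaustively evaluating every possible continuation.

```python
# Toy brute-force move selection (my own illustration, not the checkers solver):
# a take-away game where players remove 1-3 stones and whoever takes the
# last stone wins. We pick a move by churning through every continuation.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player just took the last stone and won
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Try every legal move and keep the first that leaves the opponent losing."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists; take one stone and hope

print(best_move(10))  # -> 2, leaving the opponent a losing position (a multiple of 4)
```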

The power we have available can blind us when it comes to creating things that actually mimic the way our brain works. It allows us to go far, for sure, but far down a dead end that ultimately will not lead us to the AI sophistication we’d like. Would artificially limiting the various transfer and processing rates of our hardware force us to approach decision making in machines in a way more similar to how our brain does it?

I’m not a sophisticated AI developer by any means, but this idea seems at least worth considering. Many people, perhaps most, don’t even think we’ll ever be able to approach the functionality of the brain, but for the true believers like me out there, it’s a thought worth entertaining.


Have you ever been walking in the woods and come upon a snake (startling both you and it), only to see it slither away with incredible speed? I know I have. How is it possible for the massive bulk of a whale to travel thousands of miles underwater without eating? As is often the case, the efficiency (and beauty) of nature’s solutions to common problems far surpasses anything we’ve developed ourselves.

A recent review by Netta Cohen and Jordan Boyle of the University of Leeds (UK), to be published in Contemporary Physics, has a nice discussion of the fluid mechanics involved in different models of undulatory locomotion, as presented by various organisms. What becomes clear to someone (me) not in the field is that for something seemingly as simple as getting around in a fluid, we know pretty much exactly how the most efficient organisms do it but are a good ways from being able to replicate it well ourselves.
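To give a flavor of what such models look like (this is a generic, textbook-style idealization of my own, not one of the specific models from the review), undulatory swimmers are often described as a wave of lateral displacement traveling from head to tail:

```python
# Generic traveling-wave idealization of an undulating body (my own sketch,
# not a model taken from the review): each point x along the body (0 = head,
# 1 = tail) is displaced laterally by a sine wave that travels tailward.

import numpy as np

def lateral_displacement(x, t, amplitude=0.1, wavelength=1.0, beat_hz=2.0):
    """Lateral offset of body point x (in body lengths) at time t (seconds)."""
    k = 2 * np.pi / wavelength              # spatial wavenumber along the body
    omega = 2 * np.pi * beat_hz             # temporal frequency of the undulation
    envelope = amplitude * (0.3 + 0.7 * x)  # amplitude grows toward the tail
    return envelope * np.sin(k * x - omega * t)

body = np.linspace(0.0, 1.0, 11)  # 11 points from head to tail
print(np.round(lateral_displacement(body, t=0.0), 3))  # snapshot of the body shape
```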

Towards the end of the paper, the authors discuss the emerging technologies of undulatory robotics, on both the meter scale (robotic snakes for searching for people in building rubble) and on the micrometer scale (robotic worms to swim through an artery to image tissue injury or healing progress). These applications are an interesting glimpse at an area of research ripe for development.

The propeller (which itself is of biological origin) on the back of a boat has gotten us a good, long way, but it has a number of limitations. For one, it’s quite inefficient compared to biological undulation, although it’s significantly simpler to implement mechanically. As our materials science and our coordination of many mechanical movements (think how many independent muscles a fish must move to flap its body once) continue to improve, our ability to implement this form of locomotion will improve too. (Perhaps in 100 years I’ll be able to take a ride in a flagella-powered boat.)

A tangent
At the risk of being cliché, I’m again struck by the resourcefulness of evolution in using the tools it has available to perform a task, rather than trying to reinvent the wheel every time. So, the cells in your bronchiole tubes would like a way to move mucus and dirt up and out of the lungs? Well, why not just use the oar-like cilia that many paramecia use? A less practical builder (us, perhaps) would expensively go about designing an entirely new apparatus. In fact, many of the tools used by evolution (if random chance can be given some agency) are imperfect (for example, the skeletal structure of bat wings vs. bird wings), but they work well enough. This imperfect-but-good-enough usage of biological tools, by the way, is one of the best arguments (if you entertain the argument at all) against so-called intelligent design.

The sarcomere is the functional unit of skeletal muscles (credit: Wikimedia)

I was out chopping wood yesterday (which, let me tell you, is incredibly good exercise) and today have distinct muscle soreness in my lower back. I’m sure most of you have experienced this post-exercise soreness at some point or another, so I decided to learn what I could about what’s called Delayed Onset Muscle Soreness (DOMS). (Most of my information came from this solid review of DOMS by Priscilla Clarkson.)

The functional unit of your skeletal muscle (as opposed to smooth muscle) is the sarcomere. Much of the damage associated with DOMS involves injury to parts of the sarcomere. Specifically, the Z-lines and the protein that holds them together, desmin, are disrupted. As the individual muscle fibers are strained, the extracellular matrix (a scaffold for your cells and other proteins outside the cell, including muscle fibers) is pulled apart from the fibers. Furthermore, there’s evidence that small capillaries are broken in this process as well, which also helps to trigger an inflammation response by your body.

One of the hallmarks of muscle injury and its subsequent DOMS is a loss of strength in that muscle. The most prevalent explanation for this loss is what’s known as the “popping sarcomere hypothesis,” which posits that the irregular lengthening of individual sarcomeres (that is, some get stretched out a lot, others not as much, rather than all of them an equal amount) causes the muscle fiber to be weaker (since the overlap between the thick myosin and thin actin filaments is smaller in a stretched-out sarcomere).

Finally, what causes the thing we care most about: the soreness? It comes in part from the tissue swelling and fluid pressure in the muscle. (For those of you who doubt that swelling hurts, I can tell you that my forearm, which is the size of Popeye’s thanks to a wasp sting yesterday, hurts a good deal even though the pain of the sting is gone.) It’s also suspected (although not distinctly proven) that the pain comes from the release of molecules (histamines, bradykinins, and prostaglandins) from damaged cells that activate the type III and IV afferent nerves, which basically carry pain signals to the brain.

So the next time you feel that soreness the day after some hard working out, think of your out-of-whack Z-lines and painful histamines, but know that your body’s smart enough to replace that damaged muscle with more, so you won’t have the same reaction a second time (unless, of course, you damage it more).

a free lunch?

July 28, 2009

These two mice were fed the same high fat diet. The top mouse's liver cells were engineered to metabolize fat directly into carbon dioxide. (credit: Jason Dean, University of California, Los Angeles)

Don’t you wish your body could just get rid of that extra fat by itself, without the pesky exercise or dieting? It may not be as far off as you think.

In the June edition of Cell Metabolism, James Liao’s group reports that it succeeded in reducing diet-induced obesity in mice fed a high-fat diet (Technology Review also has a nice article on it). They did this by splicing a pathway from E. coli called the glyoxylate shunt into the mice’s DNA.

When our body wants to use fat, it breaks it down and often converts it to carbohydrates (mostly glucose), whose excess can have all sorts of pernicious effects (e.g., diabetes). With this new glyoxylate shunt pathway, the mouse liver cells metabolize the fat directly into CO2, which is absorbed into the bloodstream and simply exhaled.

While employing this technique in humans is still far, far away, it’s a nice reminder of the fruits that genetic engineering and synthetic biology will eventually bear.

What’s interesting, though, is that short of genetically engineering humans (which I doubt we’re close to doing any time soon, for both technological and ethical reasons), we’re a good ways off from being able to change our own DNA and thus use what we’ve discovered in other animal models. Almost all current cell-level treatment involves getting various molecules into cells rather than actually changing their DNA (although see gene therapy).

Adderall (credit: http://www.michaelshouse.com)

If you aren’t familiar with the debate about neuroenhancers, you should definitely read up on it. It’s one of the most interesting contemporary debates in the public health/education/pharmacology realm. For those of you who aren’t familiar with it, here’s a quick primer: the use of neuroenhancers like Ritalin, Adderall, Modafinil, and others has become more prevalent among people (often students, but increasingly professionals too) wishing to squeeze a bit more productivity out of their lives. (This discussion does not include the legal and appropriate use of these drugs for clinical learning disabilities like ADHD.) If you have a half hour, I’d highly suggest Margaret Talbot’s excellent New Yorker article on the issue.

Those of you not familiar with the debate may have an immediate and understandable reaction against the use of neuroenhancers. To those people, I urge you to consider the difference between taking a stimulant in pill form (as these come) and in drink form, as our beloved coffee comes.

Unlike steroids, these drugs don’t yet have well-documented health consequences for human use, which makes their use harder to damn. After all, we’ve been prescribing these stimulants (n.b., Modafinil, a sleep-disorder drug, works differently from Ritalin and Adderall and is not an amphetamine) for years, seemingly without negative consequences.

And yet, Edmund Higgins, a professor of family medicine and psychiatry, has an interesting piece in Scientific American that discusses some recent (mostly animal) studies indicating that the health consequences of stimulants like Adderall are more complicated than previously thought. Animals subjected to regimens of similar stimulants displayed some signs of anxiety, depression, and long-term “cognitive defects.” While it’s important to distinguish between animal studies and clinical studies, the results of the latter often follow the former.

While the article is aimed more at the over-prescription of ADHD drugs for kids, its contents certainly bear on (what I think is) a more interesting discussion. Once health consequences of neuroenhancers become apparent, the case for their recreational (academic or professional) use becomes harder to make.

What’s still unclear is how (if at all) intermittent use affects our long-term mental and cognitive health. Still, these new studies color a discussion that will only become more important in the future.

anticipating wrongdoing

June 27, 2009

Those of you who remember the movie Minority Report, with Tom Cruise, are familiar with the idea of anticipating someone’s future wrongdoing and then taking preventative action against it. It’s an interesting idea, but when the movie came out in 2002, it was still science fiction. Now, it seems we could be getting closer to something like that, judging from the preliminary (unpublished) results that Vincent Clark of the University of New Mexico at Albuquerque presented in a talk at the Organization for Human Brain Mapping conference.

Clark claims that he can predict which drug addicts will relapse after treatment with 89% accuracy using both traditional psychiatric techniques and fMRI brain imaging. He used 400 subjects in his decade-long study. What’s interesting about this approach is that it involves a more serious level of quantitative analysis (from the fMRI) than most psychiatric evaluations and thus would be a more rigorous metric by which to measure patients against a standard.
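The talk doesn’t spell out the actual analysis, but to give a sense of the general shape such an approach might take (combining questionnaire scores with imaging-derived features in a statistical classifier), here’s a minimal sketch. Every variable and feature in it is hypothetical, and it is not Clark’s method.

```python
# Hypothetical sketch of relapse prediction from mixed features; this is NOT
# Clark's actual analysis, just the general shape such an approach could take.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 400

# Stand-in data: questionnaire scores plus regional fMRI activation measures
psych_scores = rng.normal(size=(n_subjects, 5))
fmri_features = rng.normal(size=(n_subjects, 20))
X = np.hstack([psych_scores, fmri_features])
relapsed = rng.integers(0, 2, size=n_subjects)  # 0 = stayed clean, 1 = relapsed

model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, relapsed, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")  # ~0.5 here, since the data are random
```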

While determining whether patients in treatment will relapse (and thus might need more treatment) is a beneficial evaluation for both society and the patient, it’s not hard to extend this type of test to a more ethically difficult scenario. Suppose someone develops a test that, with 90% accuracy, determines (via MRI or some other such technique) whether a violent offender in prison will commit a repeat act of violence after being paroled. I think we’re a ways off (if it’s even possible) from such a test, but still, the thought experiment is interesting.

How would our criminal justice system handle such a test? Since the ostensible goal of our penitentiaries is to “reform” those who’ve done wrong, could such a test be used to determine at what point someone’s been “reformed?” How do we balance the idea of reform with the idea of penance, a similarly old but quite different justification for imprisoning someone? How much testing of such a test would we need before actually implementing it, since an incorrect diagnosis could lead either to additional harm to citizens or to wrongful confinement? Is there any (non-100%) level of efficacy that would be acceptable?
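One way to sharpen that last question: even a “90% accurate” test makes a lot of wrong calls once you account for how common repeat violence actually is among parole candidates. Here’s a quick back-of-the-envelope calculation, with every number invented purely for illustration:

```python
# Back-of-the-envelope false-positive arithmetic; all numbers are invented.
# Suppose 30% of parole candidates would actually reoffend, and the test is
# 90% sensitive (flags true reoffenders) and 90% specific (clears the rest).

def flagged_and_actually_dangerous(base_rate, sensitivity, specificity):
    """Of the people the test flags, what fraction would truly reoffend?"""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

print(flagged_and_actually_dangerous(0.30, 0.90, 0.90))  # ~0.79
```

Under those made-up numbers, roughly one in five people the test flags would be held for violence they would never have committed, which is exactly why the acceptable-efficacy question is so thorny.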

It strikes me that implementing a test like this in our criminal justice system would force us to rework a good deal of the philosophy behind locking people up (which I don’t think would be a bad thing). It’s an interesting thought experiment now, but perhaps in a few decades it will become a reality.

Blue light causes the cells to become activated, sending out electrical signals. Yellow light causes the cells to become inactive, blocking any signal propagation through them. (credit: h+ Magazine)

Implanting electrodes in someone’s brain and then shocking them seems somewhat sci-fi to most people, but it’s a medical reality. We’ve found that by electrically stimulating parts of the brain and the vagus nerve, we can reduce the effects of epileptic seizures, Parkinson’s disease, and other disorders.

This whole process sounds incredibly complicated, and it is, but from a larger perspective it’s quite simplistic. Basically, we just stick wires in people’s heads, shock them, and see if their symptoms decrease. A slight divergence from this external electrical stimulation is the subfield of optogenetics, a term first coined by Karl Deisseroth at Stanford. I read a nice summary of his recent work in h+ Magazine and will give you the thumbnail sketch.

Instead of sticking an electrode down into your brain, Deisseroth’s group sticks in a fiber-optic cable that can deliver yellow and blue light. Normal neurons are not light-sensitive, but the targets of this optical stimulation are genetically modified to be so. Deisseroth’s group squirts a bit of genetically engineered virus into exactly the spot in the brain they want to stimulate. This virus carries two genes, culled from an alga and an archaeon, that reprogram the surrounding neurons to be sensitive to blue and yellow light.

These neurons then become inhibited (unable to produce an electrical signal) when subjected to the yellow light, and excited (producing an electrical signal) when subjected to the blue light.
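As a cartoon of the resulting control logic (nothing more than a sketch of the mapping just described, with made-up firing rates, not a biophysical model):

```python
# Cartoon of the optogenetic control logic described above; the numbers are
# made up and this is not a biophysical model of the modified neurons.

def firing_rate_hz(light, baseline_hz=5.0):
    """Rough firing rate of a light-sensitized neuron under each illumination."""
    if light == "blue":
        return baseline_hz * 10   # excited: driven to fire
    if light == "yellow":
        return 0.0                # inhibited: silenced
    return baseline_hz            # no light: ordinary activity

for light in (None, "blue", "yellow"):
    print(light, firing_rate_hz(light))
```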

This process, while a good bit more involved than electrical brain stimulation, achieves basically the same thing: getting neurons to fire when we want them to. But it also has the benefit of allowing us to directly block neurons from firing, which we could only somewhat achieve through electrical stimulation by shocking one part of the brain and hoping that it causes another part to go quiet in the way we want. Silencing areas of the brain directly could prove an incredible boon in getting the brain to behave the way we want.

I see this optogenetic technology as an additional tool, not necessarily a replacement, to electrical brain stimulation. As our understanding of how the neurons in specific parts of the brain are connected and our technology for controlling those neurons improve, our ability to mediate the many debilitating diseases and conditions that plague us will improve dramatically.

Some of you may be uncomfortable with the idea of messing around “under the hood” of the most complicated machine in the world, but I’d then ask you how direct stimulation (or inhibition) is really any different from the many drugs that target the brain. This technology is simply the next step in our ability to fine-tune ourselves.