The Singularity: when AI overtakes human intelligence (credit: Jamais Cascio on flickr.com)

A new study (as reported on the arXiv blog) by Fermín Moscoso del Prado Martín of the Université de Provence shows that the human decision-processing rate is not much more than 60 bits per second. The result is based on measuring how long subjects take to complete a lexical decision task, such as determining whether a string of letters is a real word. The complexity of such a task can be quantified as a number of bits (each bit representing a binary state, like on or off). So if we know the complexity of a given task (its entropy) and the average time it takes a human to complete it, we can calculate the brain's decision-processing rate, which is where the 60 bps figure comes from.
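
To make the arithmetic concrete, here's a back-of-the-envelope sketch in Python (the numbers are made up for illustration, not taken from the study):

```python
import math

# If a task requires choosing among N equally likely alternatives,
# its entropy is log2(N) bits. Dividing by the average response time
# gives a decision-processing rate in bits per second.
def processing_rate(n_alternatives: int, mean_response_s: float) -> float:
    entropy_bits = math.log2(n_alternatives)  # task complexity in bits
    return entropy_bits / mean_response_s     # bits per second

# e.g., a 32-alternative decision answered in half a second:
print(processing_rate(32, 0.5))  # 10.0 bits per second
```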

This speed is really, really, really slow compared to today's technology. The internet connection you're using to read this post likely runs at 3,000,000 bps or more (although I should be careful to distinguish the data-transfer rate of your connection from the data-processing rate of our brain). The computer you're reading this on probably has a processor running at 2,000,000,000 Hz (cycles per second) or faster, a figure more directly comparable to the brain's data-processing rate of 60 bps. I think you get the message.

So here's my question: if we would ultimately like computers that can think like humans (ultimately being a long time from now), does it make sense to limit the speed at which they can operate? Hardware (or software) that processes data at blazing speeds can let us approach a problem the wrong way. The best example is brute-force computational churning to make a simple decision, like using a computer to test every possible game of checkers, a calculation that took almost 20 years, just to figure out the next best move (don't laugh, it's been done).
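
To give a flavor of what brute-force churning looks like, here's a toy solver for tic-tac-toe, a vastly smaller game than checkers, that enumerates every possible continuation just to pick a move (this is my own sketch, not the checkers project's code):

```python
# Exhaustive game-tree search (negamax) over all of tic-tac-toe.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Return (score, move) for `player`, searching the full game tree."""
    opponent = 'O' if player == 'X' else 'X'
    w = winner(board)
    if w is not None:
        # The previous move ended the game; score from `player`'s view.
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best_score, best_m = -2, None
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = best_move(child, opponent)
        if -score > best_score:        # opponent's best is our worst
            best_score, best_m = -score, m
    return best_score, best_m

print(best_move(' ' * 9, 'X'))  # (0, 0): with perfect play, a draw
```

Even this tiny game involves enumerating over 250,000 complete games; scale that up to checkers and you can see where the 20 years went.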

The power we have available can blind us when we're trying to create things that actually mimic the way our brain works. It lets us go far, for sure, but far down a dead end that ultimately won't lead to the AI sophistication we'd like. Would artificially limiting the transfer and processing rates of our hardware force us to approach decision making in machines in a way more similar to our brain's?

I'm not a sophisticated AI developer by any means, but this idea seems at least worth considering. Many people, perhaps most, don't even think we'll ever be able to approach the functionality of the brain, but for those of us who are true believers, it's a thought worth entertaining.

Have you ever been walking in the woods and come upon a snake (startling both you and it), only to see it slither away with incredible speed? I know I have. How is it possible for the massive bulk of a whale to travel thousands of miles underwater without eating? As is often the case, the efficiency (and beauty) of nature's solutions to common problems far surpasses that of the solutions we've developed ourselves.

A recent review by Netta Cohen and Jordan Boyle of the University of Leeds (UK), to be published in Contemporary Physics, has a nice discussion of the fluid mechanics involved in different models of undulatory locomotion, as embodied by various organisms. What becomes clear to someone (me) not in the field is that for something seemingly as simple as getting around in a fluid, we know pretty much exactly how the most efficient organisms do it but are a good way from being able to replicate it well ourselves.
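
For the curious, the kinematics at the heart of many such models is simply a wave traveling down the body from head to tail. Here's a minimal sketch of that idea (the parameter values are illustrative guesses of mine, not numbers from the review):

```python
import numpy as np

# Traveling-wave model of undulatory swimming: lateral displacement is
# a sine wave moving head -> tail, with an amplitude envelope that
# grows toward the tail, as in many fish.
def body_midline(t, n_points=50, wavelength=1.0, frequency=2.0):
    x = np.linspace(0.0, 1.0, n_points)       # position along body (body lengths)
    envelope = 0.02 + 0.08 * x**2             # amplitude grows toward the tail
    k = 2 * np.pi / wavelength                # wavenumber
    omega = 2 * np.pi * frequency             # angular frequency
    y = envelope * np.sin(k * x - omega * t)  # lateral displacement
    return x, y

# Midline coordinates at one instant; plot x vs. y over t to animate.
x, y = body_midline(t=0.25)
```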

Towards the end of the paper, the authors discuss the emerging technologies of undulatory robotics, both on the meter scale (robotic snakes that search building rubble for survivors) and on the micrometer scale (robotic worms that swim through an artery to image tissue injury or the progress of healing). These applications are an interesting glimpse of an area of research ripe for development.

The propeller (which itself is of biological origin) on the back of a boat has gotten us a good, long way, but it has a number of limitations. For one, it's quite inefficient compared to biological undulation, although it's significantly simpler to implement mechanically. As our materials science and our coordination of many mechanical movements (think how many independent muscles a fish must fire to flap its body once) continue to improve, so will our ability to implement this form of locomotion. (Perhaps in 100 years I'll be able to take a ride in a flagellum-powered boat.)

A tangent
At the risk of being cliché, I'm again struck by the resourcefulness of evolution in using the tools it has available to perform a task rather than reinventing the wheel every time. So the cells in your bronchiole tubes would like a way to move mucus and dirt up and out of the lungs? Well, why not just use the oar-like cilia that paramecia use? A less practical builder (us, perhaps) would expensively set about designing an entirely new apparatus. In fact, many of the tools used by evolution (if random chance can be given some agency) are imperfect (the skeletal structure of bat wings vs. bird wings, for example), but they work well enough. This imperfect-but-good-enough reuse of biological tools, by the way, is one of the best arguments (if you entertain the argument at all) against so-called intelligent design.

I'm usually not one for hemming in research in any field or direction, even when a direction holds potential ethical pitfalls (human cloning, for example). However, the attempt to develop autonomous, ethical robots for use in wartime completely crosses the line.

I should distinguish between autonomous and remote-controlled robots. Autonomous robots receive no direct human input; they are capable of making decisions for themselves and then acting on them. The military already uses remote-controlled robots for handling IEDs and for scouting. Such technology is merely an extension of a human controller's will and (in my opinion) completely ethical.

In an interview with h+ magazine, though, Ronald Arkin of Georgia Tech discusses creating an "ethical governor" to ensure that future autonomous robots don't break the "rules of war." I can see the allure of having robots on the battlefield: they're expendable, entirely rational, and have faster reaction times than humans. Here are the "rules" that Arkin suggests:

1. Engage and neutralize targets as combatants according to the ROE.
2. Return fire with fire proportionately.
3. Minimize collateral damage — intentionally minimize harm to noncombatants.
4. If uncertain, invoke tactical maneuvers to reassess combatant status.
5. Recognize surrender and hold POW until captured by human forces.

We are so, so far from having any kind of autonomous robot that can intelligently follow these rules that it's not even worth spending time on them. We would need a solid model of human intelligence up and running before we could even think about creating a robot that can discern and apply these rules. If current soldiers have trouble distinguishing between "enemy combatants" and civilians, how in the world will a robot manage it?
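
To see why, consider what an implementation would even look like. Here's a toy sketch (mine, not Arkin's): the control flow is trivial, and every genuinely hard problem hides inside predicates that nobody knows how to implement.

```python
def is_combatant(target) -> bool:
    raise NotImplementedError("the genuinely hard problem lives here")

def has_surrendered(target) -> bool:
    raise NotImplementedError("...and here")

def collateral_risk_estimate(target) -> float:
    raise NotImplementedError("...and here too")

def governor_decide(target, max_collateral: float) -> str:
    # Encoding the rules is the easy part; everything difficult is
    # hidden in the three unimplemented predicates above, each of which
    # demands human-level judgment about a chaotic battlefield.
    if not is_combatant(target):
        return "hold fire"
    if has_surrendered(target):
        return "hold as POW until human forces arrive"
    if collateral_risk_estimate(target) > max_collateral:
        return "maneuver and reassess combatant status"
    return "engage proportionately, per the ROE"
```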

This type of research falls right in line with Reagan's Star Wars: dubious (and that's probably too generous) effectiveness at absurd cost. The problem is that too many of us have naive fantasies of robots fighting our wars. Let's grow up and spend our resources more wisely than that, eh?

When we reach the singularity and finally develop a robust artificial intelligence that parallels our own, which (I guarantee) will not happen in any of our lifetimes (although Ray Kurzweil would have you believe otherwise), then we can start thinking about rules for our warrior bots.

How many of you have found yourselves pressing or saying various numbers to work your way through an automated customer service menu on the phone? I certainly have. I think we all recognize how annoying this is, especially if we then have to wait for a while until the next available service representative can assist us. I recently had a very positive customer service interaction with the good people at Newegg.com (a site that sells all sorts of electronic wares). I exclusively use Newegg whenever I buy electronic stuff (from new computer parts to an mp3 player to a flash drive) because of their stellar service.

In that vein, I've put together what I'll call Drausin's Recipe for Success in Customer Relations. (Take note, big cable and phone companies: this is directed at you.)

1) Put as much of the information and processing online as possible. Most of us are comfortable navigating menus and such online and prefer that to talking things through with a person if we can. Newegg has a very sophisticated return (RMA) process that makes it incredibly easy to send things back. The whole process takes about 25 seconds, and they then email you a (free) shipping label that you can print and put right on the box you're returning.

2) Sometimes, though, problems are too complicated to handle exclusively online, so we should work to make the phone experience faster and more efficient. Companies should publish an online directory of help-topic queues (Billing, Tech Support, Returns, etc.) and their extensions. If you don't have web access, the recorded menus and submenus are a necessary evil for reaching the appropriate waiting queue. But we often do have web access, and being able to look up in a directory that billing questions should dial extension 4567 (or whatever) would let us skip all those annoying phone menus. When we call, we could be prompted either to work through the menus or to enter our queue extension directly.

3) When we're waiting in a queue, listening to the smooth jazz or cheesy elevator music, they should tell us our position in the queue at least every thirty seconds. Being stuck in hold purgatory is often enraging because we have no idea how long we'll have to wait. I find that waiting a long time is much easier if I know roughly how long it will be.

That’s not so bad, eh, phone and cable companies? Put as much information and processing on the web as possible. List a directory of help topic extensions so we can skip all the menus, and tell us how many people are in front of us on a very regular basis.

I feel like this kind of stuff is so obvious to most of us and yet we’re often still burdened with terrible customer service.

true randomness

July 1, 2009

We've all rolled dice in board games, confident that those rolls are truly random: we can't recreate or predict any given roll. You've probably also flipped a coin, which might seem easier to rig, but if you've tried, you've probably met a good deal of frustration.

We take this kind of randomness for granted, but what's interesting is that for all our computational prowess, we are unable to generate truly random numbers in computers. We can, of course, create pseudorandom numbers, but not the real deal.

Computers generate pseudorandom numbers by taking a starting number, or seed, and applying a complicated function to produce each successive number. If you supply the same seed to the randomizing function, you'll get the same (endless) stream of "random" numbers out. This property of pseudorandom numbers is actually quite useful when building and debugging computer programs, because it lets the programmer recreate seemingly random scenarios for testing.

When a program actually wants unpredictable numbers, it often uses the clock's timestamp as a seed, since that's never the same twice. While this technique produces a different stream each run, the stream is still, in a way, determined: given the seed value, we can predict exactly what each "random" number will be.
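
A few lines of Python make both points, reproducibility from a fixed seed and determinism even when seeding from the clock:

```python
import random
import time

# Seeding a pseudorandom generator makes its output reproducible:
random.seed(42)
first = [random.randint(1, 6) for _ in range(5)]
random.seed(42)
second = [random.randint(1, 6) for _ in range(5)]
assert first == second  # same seed, same "random" stream

# Seeding from the clock gives a different stream each run, but the
# stream is still fully determined by that seed value:
random.seed(time.time())
print([random.randint(1, 6) for _ in range(5)])
```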

To achieve true randomness, programmers have had to turn to the real world. Sites like random.org, which generates random numbers from atmospheric noise, and (more recently) the Dice-O-Matic hopper, which physically performs more than a million dice rolls a day (follow the link for a cool video of it in action), serve up genuine random numbers.
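
You don't even need random.org to see the principle: modern operating systems keep an entropy pool fed by hard-to-predict physical events (interrupt timings, device noise, and the like), which programs can tap directly. A minimal sketch:

```python
import os
import secrets

# os.urandom draws bytes from the OS entropy pool; the secrets module
# builds unpredictable values on top of the same source.
raw = os.urandom(8)              # 8 bytes of OS-supplied entropy
roll = secrets.randbelow(6) + 1  # an unpredictable die roll, 1..6
print(raw.hex(), roll)
```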

What's interesting is that, in theory, these numbers aren't really random either. Sure, the systems are chaotic, but deterministic forces act on each die and cause it to behave in a predictable way (remember, quantum uncertainty operates on much, much smaller scales than dice rolling). What makes them effectively random is that there are so many interacting forces that computing the outcome is beyond today's computing power. But Laplace's demon could figure it out, which makes me wonder whether, some day in the future, we'll be able to predict the outcome of a dice roll, taking us one small step closer to predicting the future itself.

(I'm currently reading a very interesting book by Daniel Dennett entitled Freedom Evolves, about how to have free will in a determined universe. If this kind of stuff interests you, I'd certainly recommend it.)

misguided spaceflight

June 24, 2009

NASA's nifty logo for Constellation: "Earth, Moon, Mars"

The Obama administration has ordered a review of NASA's human spaceflight program, the next iteration of which, called Constellation, is planned to take us back to the Moon in 2020 and to Mars around 2030. Budget woes may delay the program, but I question the strategy of returning to the Moon at all.

To properly approach this issue, I must first explain the context of manned spaceflight, which, in my opinion, is a PR campaign. In the Cold War 50s, 60s, and 70s, we used spaceflight to flex our technological muscle. Not only did landing on the Moon and the rest of those groundbreaking missions flaunt our scientific and engineering abilities to the world, they also inspired a generation of scientists and engineers here in the U.S. Hell, it still inspires me that we were able to land people on the Moon.

From another perspective, one could argue that exploring is simply what we humans do, from Marco Polo to Leif Ericson to Shackleton, and that exploring our planetary neighbors is just the next phase. I'm quite amenable to this idea, because I tie our desire to explore to our general quest to make sense of our surroundings.

Pursuing this desire is an important undertaking, but it shouldn't distract from the rest of space exploration, through probes, robots, and telescopes, a project with much more (in my opinion) scientific and philosophical promise. Given the massive cost of sending a human to the Moon again (the GAO estimates as high as $230 billion!), I can't help but think we'd get more bang for our buck from other cosmological projects.

The ultimate goal of NASA's Constellation program is putting humans on Mars (and returning them, of course). I think this goal is fruitful and the natural next step in our space exploration. Going to the Moon again simply because we can seems like a waste of time and resources.

When space exploration bleeds into politics and PR (and believe me, NASA's got plenty of both to go around), we must be all the more thoughtful about the direction we take in space.

faith in AI

June 22, 2009


Namit Arora at 3quarksdaily has a very interesting and thoughtful post about the future (or lack thereof) of true artificial intelligence.

He does a good job of tracing the major phases of AI design, from what were essentially large databases to the more modern neural networks. He points out that while AIs have become more and more capable of solving well-defined problems (though one could argue we've expanded the set of solvable well-defined problems a great deal over the years), they will ultimately fail to reach the truly human je ne sais quoi because they are unable to become fully immersed in the human experience of emotions, relationships, and even the simple connections between objects and things in our world. (Arora borrows many of these ideas, which I am only briefly paraphrasing, from Hubert L. Dreyfus, who borrows from Heidegger.)

While I agree that we are nowhere near the singularity (whatever Ray Kurzweil would have you believe), I disagree that we are no nearer than we were in the early days of artificial intelligence (that is, the 60s and 70s).

A big shift in the development of AI, in my opinion, was moving away from the teleological view of intelligence, away from "this is how we think the mind works, so this is how we're going to program our AI." The transition from symbolic (brute-force) AI to neural networks marked a large shift because it was, at bottom, an acknowledgement that we programmers don't know how to solve every problem. What we still know how to do (and must do, for now at least) is define our problems. Thus, if I build an AI to solve a certain problem, I may run it through millions of machine-learning iterations so it can figure out the best way to solve that problem, but I'm still defining the parameters: the heuristics the program uses to judge whether the technique it's currently testing is any good.
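
Here's a tiny sketch of what I mean (an arbitrary toy task of my own invention): the machine searches on its own, but the scoring heuristic that steers it is entirely hand-written.

```python
import random

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate: str) -> int:
    # The heuristic *we* chose: count characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def hill_climb(iterations: int = 100_000) -> str:
    # Start from a random string and keep any one-letter mutation that
    # scores at least as well; the search strategy is the machine's,
    # but the definition of "better" is entirely ours.
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(iterations):
        i = random.randrange(len(TARGET))
        mutant = best[:i] + random.choice(ALPHABET) + best[i + 1:]
        if score(mutant) >= score(best):
            best = mutant
    return best

print(hill_climb())  # converges on "hello world"
```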

I agree that this approach, while yielding many powerful problem-solving applications, is ultimately doomed. But in pursuing it, we have bootstrapped ourselves into a less well-defined area of AI. If you believe (as I do, although I don't like the religious connotations of the word "believe") that the brain is simply a collection of interconnected cells and nothing else, then in theory we can recreate it in silicon. The problem is determining how the cells (nodes, in comp-sci language) are interconnected. How can we even know where to start?
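
For what it's worth, building the nodes themselves is the easy part. A toy network with fixed wiring takes a dozen lines; the unanswerable question is what the wiring and weights should be for billions of neurons:

```python
import numpy as np

# A toy fixed-architecture network: three "cells" feeding two, feeding
# one. The weights here are arbitrary random numbers; knowing what they
# *should* be, at brain scale, is the part nobody knows how to start on.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # connections: 3 inputs -> 2 hidden nodes
W2 = rng.normal(size=(1, 2))  # connections: 2 hidden -> 1 output node

def forward(x):
    h = np.tanh(W1 @ x)       # each node sums its inputs and fires nonlinearly
    return np.tanh(W2 @ h)

print(forward(np.array([0.5, -1.0, 2.0])))
```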

And here's where the faith aspect comes in; I'll call it what it is. As our understanding of the functional aspects of the brain improves (thanks to all the tools of modern technology), and as our computational processing and storage capabilities grow, I find it hard to believe we will not ultimately get there.

Yes, we will probably need a more philosophical view of what it means to be human and sentient. Yes, it will probably take a long, long time, perhaps longer than my lifetime, but remember: the field is incredibly new. I'm heartened by the work of Jeff Krichmar's group at UC Irvine, whose neurobots approach the idea of intelligence from a non-bounded perspective.

As our technology and our understanding of intelligence improve, I simply cannot believe (and here, perhaps, I am using the word with a more religious flavor) that our quest to understand ourselves would let us abandon this project.