
Bloody computers: “what is cognitive science?” redux

Appropriately close to Halloween, IBM announced plans to develop supercomputers that run on what they’re calling “electronic blood” (see the BBC article about it here). The project is motivated by a desire to make computers more like the brain, which uses far less energy than traditional computers to perform complex tasks.

Researchers think blood-like liquid will make computers more energy-efficient because the brain’s circulatory system contributes so much to its productivity. Blood fuels and cools at the same time, pointing toward a way around traditional computing’s heat problems. (As quoted in the BBC article, an IBM researcher notes that 99% of a typical computer’s volume goes toward cooling and powering it, rather than processing information.) So IBM has begun to copy these properties of blood with an electrolyte liquid, one that contains free-floating charged particles and so can conduct electricity. Building on technology used in common batteries, the liquid is charged and pumped into a computer, where it delivers power while also carrying away heat. The ultimate goals include a standard one for the tech world: building a supercomputer that can fit “inside a sugarcube.”

This is a totally awesome idea, one that might gain power from — or turn out to depend on — other novel approaches to computing. But being largely ignorant when it comes to computer design, I can offer no more on the details, which relate little to my daily encounters with cognitive science. My encounters involve lots more theorizing about big concepts and then scrutinizing said concepts based on very specific experiments, none of which can explain anything on their own (bless you, academia). As I mentioned in a previous post, one of the biggest concepts on the cognitive science block is the extent to which minds and brains should be thought of as computers. The IBM “electronic blood” research asks this question in reverse: to what extent can or should we make computers look more like human brains?

Looking at the brain-computer relationship from different angles is not new. In fact, there’s a whole approach to building computational networks and studying the mind by modeling them on the brain’s own networks of neurons (artificial neural networks, often grouped under the label “connectionism”). And as I’ve spent a lot of time considering at Edinburgh, these approaches reflect broader divides among cognitive scientists. I wrote my how-on-earth-do-I-explain-my-degree post before I started classes, at which point I learned that the brain-as-computer metaphor is not taken for granted in cognitive science the way I’d thought. I’ve been exposed to a growing community that challenges that metaphor with another proposal: embodied cognition.
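To give a flavor of what “modeling the brain” looks like in practice, here is a minimal sketch of an artificial neural network, the kind of model the paragraph above gestures at. None of this comes from the IBM project or the BBC article; the layer sizes, the learning rate, and the toy XOR task are all illustrative assumptions on my part. Each “neuron” just sums up weighted inputs and squashes the result, and learning amounts to nudging the connection weights.

```python
# Illustrative sketch of a tiny artificial neural network (connectionist model).
# The task (XOR), layer sizes, and learning rate are assumptions for illustration,
# not anything described by IBM or in the post.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic toy problem a single "neuron" can't solve but a small network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A 2-input -> 4-hidden -> 1-output network of simple "neurons" with biases.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer sums its weighted inputs and squashes the result.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass (gradient descent): nudge the weights to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

The point isn’t the code itself but the design choice it embodies: instead of writing explicit rules, you wire up lots of simple brain-inspired units and let the behavior emerge from how the connections get tuned.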

The key idea behind embodied cognition is that the workings of our minds depend crucially on features of our bodies and environments, not just on the internal structure of our brains. This claim goes beyond making the (seemingly obvious) point that our thoughts and behaviors respond to knowledge picked up from our bodies and environments. According to embodied cognition, such knowledge isn’t just stuff our brains respond to: it shapes thought and may be part of thinking itself. Hence, we shouldn’t view thought as happening primarily in the brain as an isolated “machine.”

The debates between embodied cognition and more traditional approaches might seem abstract, but to me they have clear implications for research. An obvious example is in artificial intelligence. An embodied cognition person would say we’ll never build something that “thinks” with a human level of complexity unless we give it a body that’s like ours in certain ways, and make it sensitive to particular features in our environments. (The way I like to think of it, such a belief points in spirit more toward Battlestar Galactica-type Cylons than R2-D2-like droids.)

If robots don’t interest you (and fair warning: they’re only going to be around more), I think clinical problems also have a stake here. If we’re trying to resolve clinical issues relating to thought and behavior, it seems like it has to matter whether we should zero in on the brain for solutions or look more broadly at brain, body and environment.

As with most things, I doubt the most fruitful path lies entirely on one side of this debate. Lots of issues cross sides, and each view of cognition has internal disagreements. But I will say my first draft of what constitutes cognitive science requires an update, or at least a clarification. To do cognitive science, you don’t have to work from the notion of the mind and brain as a self-contained computer. And as illustrated by IBM’s work toward “bloody” computers, at least some computers might function better the more they resemble brains.
