Adam Ash

Your daily entertainment scout. Whatever is happening out there, you'll find the best writing about it in here.

Monday, May 14, 2007

Bookplanet: the "Gödel, Escher, Bach" author has a new book out on how consciousness arises out of mere matter

The Bookshelf talks with Douglas Hofstadter
Greg Ross/American Scientist


Douglas R. Hofstadter was only 35 when he received the Pulitzer Prize in 1980 for Gödel, Escher, Bach: An Eternal Golden Braid (Basic Books, 1979). Examining the creative themes of the logician, the artist and the composer, the book offers a witty interplay of analogy, recursion, paradox and metafiction that has invited comparisons to Lewis Carroll and Jorge Luis Borges.

For all the book's popular success, though, Hofstadter worried that its audience was overlooking its fundamental argument. "GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter," he wrote later. "What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?"

Now a cognitive scientist at Indiana University, Hofstadter takes up these ideas again in I Am a Strange Loop (Basic Books, March 2007). The new book partakes of some of the same playful metaphors and dialogues as GEB, but it addresses more directly Hofstadter's conception of the nature of self and consciousness. "It sort of hits everybody over the head with it," he says.

American Scientist Online Managing Editor Greg Ross interviewed Hofstadter by telephone in January 2007.

What led you to write the book?

I think the spark for this was that two philosophers [Ken Williford and Uriah Kriegel] asked me if I would write about my thoughts about what an "I" is. They said that they had appreciated what I had said about these ideas in Gödel, Escher, Bach many years ago, but that they knew that I felt that my message had not really been absorbed—that Gödel, Escher, Bach had become popular but that the driving force behind the book had not really been perceived by most readers, let alone absorbed by a large number of people—and that this frustrated me. I felt I had reached people, but not exactly as I had hoped. I had greater success with the book than I'd ever expected, but I didn't have the exact type of success that I wanted, and so they were giving me an opportunity to write an article in an anthology on the philosophy of mind, particularly on "I"—they called it Self-Representational Approaches to Consciousness [MIT Press, 2006].

I thought, "This is a good opportunity to at least address the world of philosophers of mind. It's a narrow world, but if I can say it well, at least they'll know what I intended to do in my book GEB almost 30 years ago." But as I wrote the article, it became so long that I started to see it could easily be turned into a book.

So actually there are two things. One is this article that appeared in their book as a chapter called "What Is It Like to Be a Strange Loop?", and then my forthcoming book, I Am a Strange Loop. My book is much more leisurely, let's put it that way. It fills out these ideas, I hope, very richly, with tons of metaphors and analogies and things so that it gets the ideas across, and I think, "Now, they may not like it, but at least they'll know what I'm trying to say." It sort of hits everybody over the head with it.

In recent years, have there been any developments in philosophy of mind and in computer models of consciousness that you find especially compelling?

That's hard to say. I think there's been a kind of shift in feeling over the years, but I wouldn't be able to say anything specific. You have to understand that I'm not professionally involved in the philosophy of mind in the sense of being in the thick of things. I do like to think that my ideas about the philosophy of mind will interest and have some effect on philosophers of mind, but I don't spend my time in their company. I don't go to their meetings; I don't read their books or articles very much, so I'm really out of it. I couldn't say. I went to a conference a few months ago in Tucson, and I could see that it was popular to talk about self-reference, and that might not have been popular when GEB came out 30 years ago. And in fact I think that's why the two philosophers invited me to contribute to their anthology—it's sort of like an idea whose time has come. I'm not saying that it's going to sweep the world; it might or might not. But it wasn't a very fashionable idea 30 years ago, and it's much more fashionable today. That means that I think the atmosphere for a reception for my ideas may be better, but I don't know whether there are any big developments that have actually changed things.
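Self-reference, the theme Hofstadter is pointing to here, has a tidy concrete counterpart in programming: a quine, a program whose output is exactly its own source. The sketch below is my own illustration (nothing like it appears in the interview), but it shows a minimal "strange loop" in a couple of lines of Python:

```python
# A quine: the two lines below print an exact copy of themselves --
# a tiny concrete instance of self-reference.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is that the string s serves double duty, as both the data being printed and a description of the program doing the printing—the same twist of level-crossing that GEB dwells on.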

And as far as computer models go, one of the big sea changes that took place after GEB came out in 1979 was the enormous swing in computer modeling away from what they now call symbolic AI [artificial intelligence] to connectionism or neural nets. That was a very big thing that took place in the 1980s. It didn't invalidate or stop the work in symbolic AI, but it simply opened up a whole new dimension, so to speak.

That's sort of a sea change, because people who at one time would never have thought about the architecture of the brain or neurons or anything like that became all involved in it, and in fact I think it had one bad consequence, which is that a lot of people got so deeply involved in the idea that "everything is neurons" that they sort of forgot about the idea that when you're trying to explain how, let's say, a heart works, you don't focus on the cells of the heart, but you focus on the overarching fact that it pumps, that a heart has a higher-level description. You can look at the compartments of the heart and the way in which they interact and so forth and see how it pumps without descending all the way to the level of its microscopic constituents.

And I think that the art of explaining the mind is going to be one of being able to find the right levels of description, being able to sometimes refer to things that are microscopic, but not always. Not always referring to things at the level of neurons, sometimes referring to things that are sort of symbolic—like words, concepts, analogies—and not always being able to descend below that. I think it's a very deep and subtle art, and I think that over the years we'll get there, and we're getting there. I can't say there's been any giant revolution yet in any of these fields, but I think we're making small, steady progress.
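Hofstadter's point about levels of description can be made concrete with a toy sketch. Everything below is my own illustration with made-up names and weights—nothing like it appears in the interview—but it shows the same trivial judgment written once in a connectionist style, as a weighted "neuron," and once in a symbolic style, as an explicit rule over a concept:

```python
# Toy contrast between the two modeling styles discussed above
# (illustrative only; the weights and names are invented).

# Connectionist level: a single artificial neuron with hand-picked weights.
def neural_is_bird(has_feathers: float, has_fur: float) -> bool:
    w_feathers, w_fur, bias = 2.0, -2.0, -1.0
    activation = w_feathers * has_feathers + w_fur * has_fur + bias
    return activation > 0

# Symbolic level: the same judgment as an explicit rule over a concept.
def symbolic_is_bird(has_feathers: bool) -> bool:
    return has_feathers  # "the birds are exactly the feathered animals"

print(neural_is_bird(1.0, 0.0), symbolic_is_bird(True))    # True True
print(neural_is_bird(0.0, 1.0), symbolic_is_bird(False))   # False False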

Your books often draw on your own personal experience for examples and for inspiration. I'm wondering if you've found that getting older itself has changed your thinking about consciousness.

Well, I'm sure it has. I think that as I watch people—my own children, my mother, my own self, my friends' parents who have Alzheimer's—as I watch people of every sort, and particularly as I watch my own intimate interactions with other people, people whom I love and know extremely well, and in some sense in whom I live and who live in me, I have had a growing conviction that selves are distributed entities.

In the book you mention losing your wife quite suddenly in 1993, and I was struck by how that affected your thinking and your work. It's a consoling idea that your wife's personality or point of view might persist somehow. Do you still feel that way?

Absolutely. I have to emphasize that the sad truth of the matter is, of course, that whatever persists in me is a very feeble copy of her. Whatever persists of her interiority is not her full self. It's reduced, a sort of low-resolution version, coarse-grained. Otherwise it would be a claim that "it's all fine, she didn't die, she lives on in me just as much as she ever did." And of course I don't believe that. I believe that there is a trace of her "I", her interiority, her inner light, however you want to phrase it, that remains inside me and inside some other people, people who really had internalized her viewpoint, people who really had interacted intimately with her over years, and that trace that remains is a valid trace of her self—her soul, if you wish. But it's diminished; it's very dilute relative to what existed in her own brain. So there are two sides to the coin. It's consoling on the one hand that there's something left, but of course it doesn't remove the sting of death. It doesn't say, "Oh, well, it didn't matter that she died because she lives on just fine in my brain." Would that it were. But, anyway, it is a bit of a consolation.

There's a popular idea currently that technology may be converging on some kind of culmination—some people refer to it as a singularity. It's not clear what form it might take, but some have suggested an explosion of artificial intelligence. Do you have any thoughts about that?

Oh, yeah, I've organized several symposia about it; I've written a long article about it; I've participated in a couple of events with Ray Kurzweil, Hans Moravec and many of these singularitarians, as they refer to themselves. I have wallowed in this mud very much. However, if you're asking for a clear judgment, I think it's very murky.

The reason I have injected myself into that world, unsavory though I find it in many ways, is that I think that it's a very confusing thing that they're suggesting. If you read Ray Kurzweil's books and Hans Moravec's, what I find is that it's a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid.

Ray Kurzweil says 2029 is the year that a computer will pass the Turing test [converse well enough to pass as human], and he has a big bet on it for $1,000 with [Lotus Software founder Mitch Kapor], who says it won't pass. Kurzweil is committed to this viewpoint, but that's only the beginning. He says within 10 or 15 years after that, a thousand dollars will buy you computational power that will be equivalent to all of humanity. What does it mean to talk about $1,000 when humanity has been superseded and the whole idea of humans is already down the drain?
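To see the arithmetic behind such claims, here is a back-of-envelope sketch. The figures are my own assumptions in the spirit of Kurzweil's published estimates—the interview itself quotes no numbers:

```python
# Back-of-envelope arithmetic for "computational power equivalent to all of
# humanity" (all figures are rough, illustrative assumptions).
BRAIN_CPS = 1e16    # often-cited rough estimate: calculations/sec per brain
POPULATION = 1e10   # order-of-magnitude count of human brains

print(f"All of humanity: ~{BRAIN_CPS * POPULATION:.0e} calculations/sec")  # ~1e+26

# Assumed Moore's-Law-style doubling of compute-per-dollar every 18 months:
def growth_factor(years: float, doubling_years: float = 1.5) -> float:
    """How much more compute the same $1,000 buys after `years`."""
    return 2 ** (years / doubling_years)

print(f"Growth over 15 years: ~{growth_factor(15):,.0f}x")  # ~1,024x
```

Under assumptions like these, compute-per-dollar grows roughly a thousandfold every 15 years, which is the kind of extrapolation the $1,000-versus-humanity claim rests on.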

This is one of the things that bother me about the current developments that you see in robotics. There's more and more emphasis on humanoid robots, and supposedly on robots with either genuine or fake emotions. You get the sense that a lot of people are doing it deliberately to be fake; they say, "Well, it's comforting to people to have a fake, artificial companion, so we'll make it look as human as possible, but we don't make any pretenses about it being real." With other people, you get the sense that they're saying, "No, these robots are going to have genuine emotions, they're going to be human." But the point is, there doesn't seem to be any discussion anywhere of "Is this good?" It's all "Let's go faster! Faster! Faster!" Well, where are you going? What are you trying to do? And I don't see any asking of these questions.

That's why I organized my symposia. I organized one right after the 1999 book by Kurzweil came out [The Age of Spiritual Machines] and a similar one by Moravec [Robot: Mere Machine to Transcendent Mind]. I organized my symposium here at Indiana University to confront these questions, and I found that nobody at my university even had any idea what to say—a three- or four-hour symposium, and basically people avoided the topic. I couldn't believe it. So a year later, when I was on sabbatical at Stanford, I organized a full-day symposium, and I invited Kurzweil and Moravec and [University of Michigan computer scientist] John Holland, [Sun Microsystems cofounder] Bill Joy, [SETI Institute director] Frank Drake and [Wired magazine founder] Kevin Kelly. I had a lot of these people together, and they talked about their views, but once again, it was as if Kurzweil and Moravec felt a little bit inhibited by the context and they didn't talk about their far-out views. They talked about rather conservative images of what was going to happen. And I had to go into their books and read out loud their most crazy quotes in order to say, "Look, you're not saying in front of this audience of a thousand people what you've said in your books. Here's what you've said in your books. What do you think of this?"

The symposia weren't satisfactory to me; the people didn't confront their own ideas. I feel as if there's an evasion in our culture. Ray goes around saying it's going to happen, and he says it's all going to be bliss. Our brain patterns will be uploaded into software, we're all going to become immortal, everything is going to go faster and faster, our personalities will all blend and merge in cyberspace—the craziest sort of dog excrement mixed with very good food. It's bizarre, and I don't have any easy way to say what's right or wrong.

Kelly said to me, "Doug, why did you not talk about the singularity and things like that in your book?" And I said, "Frankly, because it sort of disgusts me, but also because I just don't want to deal with science-fiction scenarios." I'm not talking about what's going to happen someday in the future; I'm not talking about decades or thousands of years in the future. I'm talking about "What is a human being? What is an 'I'?" This may be an outmoded question to ask 30 years from now. Maybe we'll all be floating blissfully in cyberspace, there won't be any human bodies left, maybe everything will be software living in virtual worlds, it may be science-fiction city. Maybe my questions will all be invalid at that point. But I'm not writing for people 30 years from now, I'm writing for people right now. We still have human bodies. We don't yet have artificial intelligence that is at this level. It doesn't seem on the horizon. So that's what I'm writing for and about.

And I don't have any real predictions as to when or if this is going to come about. I think there's some chance that some of what these people are saying is going to come about. When, I don't know. I wouldn't have predicted myself that the world chess champion would be defeated by a rather boring kind of chess program architecture, but it doesn't matter, it still did it. Nor would I have expected that a car would drive itself across the Nevada desert using laser rangefinders and television cameras and GPS and fancy computer programs. I wouldn't have guessed that that was going to happen when it happened. It's happening a little faster than I would have thought, and it does suggest that there may be some truth to the idea that Moore's Law [predicting a steady increase in computing power per unit cost] and all these other things are allowing us to develop things that have some things in common with our minds. I don't see anything yet that really resembles a human mind whatsoever. The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself than to a human mind, and certainly the computer program that plays chess doesn't have any intelligence or anything like human thoughts.

But as things develop, who knows? Ray Kurzweil and others are predicting that there's a tidal wave coming. But they say it's bliss—it's not bad, it's good, at least if you're surfing it in the right way. If you own the right kind of surfboard, it'll be fun.

Getting away from artificial intelligence—having spent so much time thinking about thinking, about the human mind, is it possible to extrapolate from that? Do you see an eventual outcome—will the mind one day understand itself?

Depends on what you mean by "understand itself." If you mean in broad-principle terms whether we will come to understand things, yeah, I don't see why not. For example, I like to look back at Freud. I don't know when it was that he first published his ideas about the ego, the id and the superego, and I don't know how much truth there is to those ideas, but it was a big leap even if it wasn't completely correct, because nobody had ever spoken of the abstract architecture of a human soul or a human self. It's as if he were saying that a self can be thought of in an abstract way, the way a government is thought of, with a legislative branch, a judicial, an executive, and he was making guesses at what the architecture of a human self is. And maybe they were all wrong, but it doesn't matter; the point is it was a first stab. Like the Bohr atom, it was a wonderful intuitive leap.

If you mean, will we understand the basic ideas of what it is that makes a human self, I think yes, I think we will. But if you mean will I, Doug Hofstadter, understand everything about my brain, exactly why I do every single thing I do, no, we will not. We will always remain mysteries to ourselves—if we were totally transparent to ourselves, then the whole idea of an "I" would vanish. But the broad scientific explanation of what a human self is, it does not seem to me to be something that fundamentally is going to be denied to us, because I think our minds are remarkably capable of making wonderful leaps, and going into areas that were murky and finding clarity. It doesn't mean that everybody finds clarity, but somebody illuminates something—Andrew Wiles illuminates something [Wiles proved Fermat's Last Theorem], Albert Einstein illuminates something, Sigmund Freud illuminates something, and we do make enormous progress. So I think it's possible.

What's next for you?

I'm working on a book with a French colleague, Emmanuel Sander, which is about how I see analogy as being the core of all of human thought. He's a young psychologist in Paris, about 39. He and I just see eye to eye; it's a wonderful thing when one finds somebody that one resonates with so well. I have quite a number of books that I'm hoping to get out, but each book is a struggle!
